Guidelines for optimizing AWS EC2 instances
This page provides a list of best practices when optimizing applications on AWS EC2 instances with Akamas.
Environment setup
Before planning an AWS study, first make sure you have all the permissions required to run it. This means you have to:
Check your policies
Check your security groups
Policies
Policies control instance access and manipulation; they can be attached to users, groups, or AWS resources.
To avoid impacting other critical environments, such as production, the suggested best practice is to create separate accounts that isolate the Akamas instance and the resources under test.
In either case, you are also required to comply with the AWS policies for instance manipulation. In the following we show a standard policy scoped with tag and resource-level conditions.
Tags are a robust way to scope your Akamas optimizations: they let you consistently refer to instances and sets of instances across generations, and you can combine them to express more elaborate conditions.
Tailoring your policies to your EC2 resources helps avoid side effects on other resources during your experiments.
AWS Policies
To enable the provisioning and manipulation of EC2 instances you have to set the correct AWS policies. Notice that AWS also offers several pre-made (managed) policies.
Here we provide some basic resource-level permissions required in our scope:
EC2 instance start
EC2 instance stop
EC2 instance reboot
EC2 instance termination
EC2 instance description
EC2 instance manipulation
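As a reference, here is a minimal sketch of a tag-scoped policy document, expressed as a Python dictionary and attached with boto3 as an inline policy. The resource ARN, the Project=akamas tag, and the akamas IAM user name are illustrative placeholders, not values mandated by Akamas. Note that ec2:DescribeInstances does not support resource-level conditions, so it gets its own statement on all resources.

```python
import json

import boto3

# Illustrative, tag-scoped policy: start/stop/reboot/terminate are allowed only
# on instances tagged Project=akamas, while DescribeInstances (which does not
# support resource-level conditions) is allowed on all resources.
POLICY_DOCUMENT = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/Project": "akamas"}},
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*",
        },
    ],
}

# Attach the policy as an inline policy of a hypothetical 'akamas' IAM user.
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="akamas",
    PolicyName="akamas-ec2-optimization",
    PolicyDocument=json.dumps(POLICY_DOCUMENT),
)
```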
Consider using tags in order to enforce finer access control for the automation tools, as shown in the example above.
Refer to the official AWS reference for the complete list of EC2 actions you can use in your policies.
Security Groups
Security groups control inbound and outbound traffic to and from your AWS instances. In a typical optimization scenario, the security group should allow inbound connections from the Akamas instance to the instances of the tuned system on SSH and on any other relevant port, including those used to gather telemetry.
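As a sketch of such a rule, the ingress could be opened with boto3 as follows; the security group ID, the Akamas instance address, and the use of the Node exporter on its default port 9100 are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (22) and Node exporter metrics (9100, the exporter's default port)
# only from the Akamas instance; the group ID and CIDR below are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "10.0.0.10/32", "Description": "Akamas instance"}],
        }
        for port in (22, 9100)
    ],
)
```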
Workflows setup
Akamas workflows managing EC2 instances usually either create throwaway instances or resize existing ones. In both cases, apply the FileConfigurator operator to make the instance type and instance size parameters tunable; a minimal resize sketch is shown after the list below.
Notice that:
The Sleep operator comes in handy after instance provisioning. Creating an instance, and even more so waiting for its DNS name to come up, may take a while, so waiting a few minutes is usually worth it. This operator is a viable option when you cannot enforce the wait through an automation tool.
The Executor operator is better suited to launching benchmarks and applications than to setting up your instance. Use automation tools or ready-to-use AMIs to install all required packages and dependencies, so that the workflow focuses on your actual study case.
The study environment has to be restored to its original configuration between experiments. This is quite simple to achieve when creating and terminating instances, but it may require additional steps in a resizing scenario.
Deleting and creating instances changes their ECDSA host keys. The SSH client on the Akamas instance may interpret this as DNS spoofing, so consider overriding the default SSH settings (for example, StrictHostKeyChecking and UserKnownHostsFile) in such contexts.
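As a reference, the sketch below shows, using boto3, the resize steps a workflow could run between experiments, for example from a script launched by the Executor operator. The instance ID, the target instance type, and the use of boto3 itself are illustrative assumptions rather than the only way to implement this step.

```python
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder for the tuned instance

def resize_instance(instance_type: str) -> str:
    """Stop the instance, apply the tuned instance type, and start it again.

    Returns the (possibly new) public DNS name once the instance is running.
    """
    # The instance must be stopped before its type can be changed.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    # Apply the instance type chosen for this experiment.
    ec2.modify_instance_attribute(
        InstanceId=INSTANCE_ID, InstanceType={"Value": instance_type}
    )

    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    # Waiting for the running state plays the same role as the Sleep operator
    # mentioned above.
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
    return reservations[0]["Instances"][0]["PublicDnsName"]

# Example: apply a candidate instance type for the next experiment.
print(resize_instance("m5.xlarge"))
```

In a throwaway-instance scenario, the equivalent steps are run_instances and terminate_instances calls, and the host key caveat above applies to every freshly created instance.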
Telemetry setup
While the CloudWatch Exporter is a natural choice for EC2 instance monitoring, EC2 instances often run Linux, so whenever you can directly access the instance it is useful to leverage the Linux optimization pack through the Prometheus provider paired with the Node exporter.
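As a quick sanity check, assuming a placeholder instance DNS name and the Node exporter listening on its default port 9100, you can verify from the Akamas instance that the metrics endpoint is reachable:

```python
from urllib.request import urlopen

# Placeholder DNS name of a tuned instance; 9100 is the Node exporter default port.
METRICS_URL = "http://ec2-203-0-113-10.compute-1.amazonaws.com:9100/metrics"

with urlopen(METRICS_URL, timeout=5) as response:
    sample = response.read(200).decode()
    print(sample)  # should print the first Prometheus metrics lines
```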