The following picture represents the high-level architecture of how Akamas operates in this scenario.
Here are the main elements:
The target system to be optimized is Konakart, a real-world e-commerce web application based on Java and running as a Docker container within a dedicated cloud instance - in this guide, the goal is to optimize Konakart throughput and response time
The optimization scope is the Konakart Java Virtual Machine (JVM), with the JVM parameters specified in a Docker configuration file on the Konakart cloud instance
The optimization experiments leverage JMeter as a load-testing tool
The optimization leverages Prometheus as a telemetry provider to collect load-testing, JVM, and OS-level metrics.
In this guide, you’ll learn how to optimize Konakart, a real-world Java-based e-commerce application, by leveraging the JMeter performance testing tool and Prometheus monitoring tool.
How to optimize a real-world Java application with Akamas in a realistic performance environment
How to integrate JMeter load testing tool with Akamas
How to integrate the Prometheus monitoring tool with Akamas
How to automate the configuration of the parameters in a containerized Java application
How to conduct an optimization with performance constraints
How to analyze and identify performance insights from study results
An Akamas-in-a-box instance installed with a valid license - see the Akamas In a Box guide.
The Konakart performance environment - see the Konakart setup guide.
Familiarity with Akamas concepts. If you're new to Akamas, please review the Java quickstart guide.
Akamas provides an out-of-the-box optimization pack called Web Application that comes in very handy for modeling typical web applications, as it includes metrics such as transactions_throughput and transactions_response_time, which you will use in this guide to define the optimization goal and analyze the optimization results. These metrics will be gathered from JMeter, thanks to the Akamas out-of-the-box Prometheus telemetry provider.
Let's create the system and its components.
The file system.yaml contains the following definition for our system:
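If you are recreating the files yourself, a minimal system.yaml could look like the following sketch (the description text is just an example):

```yaml
# system.yaml - minimal Akamas system definition (sketch)
name: konakart
description: The Konakart e-commerce application
```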
Run the command to create it:
Now, install the Web Application optimization pack from the UI:
You can now create the component modeling the Konakart web application.
The file comp_konakart.yaml defines the component as follows:
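As a rough sketch, the component definition could resemble the following - the componentType comes from the Web Application optimization pack, while the instance and job values are examples that must match the labels exposed by your JMeter Prometheus exporter:

```yaml
# comp_konakart.yaml - component definition (sketch, example label values)
name: konakart
description: The Konakart web application
componentType: Web Application
properties:
  prometheus:
    instance: target_host:9270   # instance label of the JMeter exporter (example)
    job: jmeter                  # job label in Prometheus (example)
```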
As you can see, this component contains some custom properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels in the Prometheus queries to collect metrics for the correct entities. You will configure the Prometheus integration in the next sections.
You can now run the command to create the component:
You can now explore the result of your system modeling in the UI. As you can see, your konakart component is now populated with all the typical metrics of a web application:
Next you will need to create a workflow that specifies how Akamas applies the parameters to be optimized, how to automate the launch of JMeter performance tests, and how to collect metrics from Prometheus telemetry. For now, you will create a simple automation workflow that executes a quick two-minute performance test to make sure everything is working properly.
The file workflow-baseline.yaml contains the definition of the steps to perform during the test:
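The workflow body could be sketched along these lines, using the Akamas Executor operator to launch JMeter remotely over SSH - the command line, test-plan path, and key path are illustrative placeholders, not the exact contents of the pre-configured file:

```yaml
# workflow-baseline.yaml - baseline test workflow (sketch with placeholder values)
name: workflow-baseline
tasks:
  - name: Run JMeter performance test
    operator: Executor
    arguments:
      # hypothetical JMeter invocation; TARGET_HOST is passed as a JMeter property
      command: jmeter -n -t /home/jsmith/test-plan.jmx -JTARGET_HOST=target_host
      host:
        hostname: target_host          # your Konakart instance
        username: ubuntu               # your instance user (example)
        key: /home/jsmith/.ssh/key.pem # your SSH private key file (example path)
```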
Please make sure to modify the workflow-baseline.yaml file, replacing the following placeholders with the correct references to your environment:
hostname should reference your Konakart instance in place of the placeholder target_host
username and key must reflect your Konakart instance user and SSH private key file (also check the path /home/jsmith)
TARGET_HOST in the JMeter command line should reference your Konakart instance in place of the placeholder target_host
Then create the workflow:
To execute this workflow we'll use a simple Akamas study that includes a single step of type baseline. This type of step simply executes one experiment without leveraging Akamas AI - you will add the AI-driven optimization step later.
The study-baseline.yaml file defines the study as follows:
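A baseline-only study could be sketched as follows - the goal formula matches the Web Application metrics used in this guide, while the exact field layout may differ slightly from the pre-configured file:

```yaml
# study-baseline.yaml - single baseline experiment, no AI optimization (sketch)
name: study-baseline
system: konakart
workflow: workflow-baseline
goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput
steps:
  - name: baseline
    type: baseline   # runs one experiment without the Akamas AI engine
```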
Create the study:
Now, you can run the study by clicking Start from the UI, or by executing the following command:
You should now see the baseline experiment running in the Progress tab of the Akamas UI.
Notice that you can also monitor JMeter performance tests live by accessing Grafana on port 3000 of your Konakart instance, then selecting the JMeter Exporter dashboard:
You can relaunch the baseline study at any time you want by pressing the Start button again. If you want, you can also adjust the JMeter scenario settings in the workflow - see the Konakart setup guide for more details on the JMeter plans and variables you can set.
You will notice that the baseline experiment will fail on the telemetry task - see Progress tab in the UI. This is expected, as you still have not configured the Akamas telemetry, i.e. how Akamas can collect metrics - you will do this in the next section.
It is time now to configure Akamas telemetry to collect the relevant JMeter performance metrics. You will use the out-of-the-box Prometheus provider for that purpose.
The Prometheus telemetry provider collects metrics for a variety of technologies, including JVM and Linux OS metrics. Moreover, you can easily extend it to import additional metrics via custom promQL queries. In this example, you are collecting JMeter performance test metrics that are exposed by the JMeter Prometheus exporter already configured in the Konakart performance environment.
The file tel_prometheus.yaml defines the telemetry instance as follows - make sure to replace the target_host placeholder with the address of your Konakart instance:
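A minimal telemetry-instance definition could look like this sketch - the port assumes Prometheus is listening on its default 9090; adjust it to your setup:

```yaml
# tel_prometheus.yaml - Prometheus telemetry instance (sketch)
provider: Prometheus
config:
  address: target_host   # replace with your Konakart instance address
  port: 9090             # default Prometheus port (assumption)
```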
Now create a telemetry instance associated with the konakart system:
Now you can test the Prometheus integration by running again the baseline study you have created before (you can simply press again the Start button in the Study page). At the end of the experiment, you should see JMeter performance metrics such as transactions_throughput and transactions_response_time displayed as time series in the Metrics tab, and as aggregated metrics in the Analysis tab:
At this point, you can launch your JMeter performance tests from Akamas and see the relevant performance metrics imported from Prometheus.
Before starting the optimization, you need to also add the JVM component to your system.
First of all, install the Java optimization pack:
The file comp_jvm.yaml defines the JVM as follows:
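The JVM component could be sketched as below - the componentType must match your actual JVM vendor and version from the Java optimization pack, and the instance and job values are examples that must match the labels of your JVM metrics exporter in Prometheus:

```yaml
# comp_jvm.yaml - JVM component definition (sketch, example values)
name: jvm
description: The JVM running the Konakart application
componentType: java-openjdk-11   # pick the type matching your JVM (assumption)
properties:
  prometheus:
    instance: target_host:5556   # instance label of the JVM exporter (example)
    job: jvm                     # job label in Prometheus (example)
```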
Notice that the jvm component has some additional properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels used in Prometheus queries to collect JVM metrics (e.g. JVM garbage collection time or heap utilization). The Prometheus telemetry provider collects these metrics out-of-the-box - no query needs to be specified.
You can create the JVM component as follows:
You can now see all the JVM parameters and metrics from the UI:
At this point, your system is composed of the web application and the JVM component you need to perform the optimization study.
You can now create a new workflow that you will use in your optimization study.
A workflow in an optimization study is typically composed of the following tasks:
Apply a new configuration of the selected optimization parameters to the target system: in this example, you will leverage the Akamas FileConfigurator operator - this operator can be used to write parameter values into a generic file, which could represent a shell script, an application configuration file, or any other file used to apply parameters to the target systems
Restart the application (optional): in this example, the Konakart docker container needs to be restarted in order to launch the Konakart JVM for the new configuration to be effectively applied
Launch the performance test: in this example, the JMeter performance tests are launched as described in a previous section (same as the baseline workflow)
The file workflow-optimize.yaml contains the pre-configured workflow; you only need to include the correct references to your environment:
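The three tasks described above could be sketched as follows - operator names (FileConfigurator, Executor) are Akamas operators mentioned in this guide, while all hostnames, usernames, paths, and command lines are illustrative placeholders:

```yaml
# workflow-optimize.yaml - optimization workflow (sketch with placeholder values)
name: workflow-optimize
tasks:
  # 1. write the JVM parameter values into the Docker Compose file
  - name: Configure JVM parameters
    operator: FileConfigurator
    arguments:
      source:
        hostname: target_host
        username: ubuntu                           # example user
        key: /home/jsmith/.ssh/key.pem             # example key path
        path: /home/jsmith/docker-compose.yml.templ # template with Akamas placeholders
      target:
        hostname: target_host
        username: ubuntu
        key: /home/jsmith/.ssh/key.pem
        path: /home/jsmith/docker-compose.yml       # file actually used by Docker

  # 2. restart the container so the new JVM options take effect
  - name: Restart Konakart
    operator: Executor
    arguments:
      command: docker-compose -f /home/jsmith/docker-compose.yml up -d  # example
      host:
        hostname: target_host
        username: ubuntu
        key: /home/jsmith/.ssh/key.pem

  # 3. run the JMeter performance test (same approach as the baseline workflow)
  - name: Run JMeter performance test
    operator: Executor
    arguments:
      command: jmeter -n -t /home/jsmith/test-plan.jmx -JTARGET_HOST=target_host -JRAMP_UP_TIME=300
      host:
        hostname: target_host
        username: ubuntu
        key: /home/jsmith/.ssh/key.pem
```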
Please make sure to modify the workflow-optimize.yaml file so that the following variables are replaced with the correct references to your environment:
hostname should reference your Konakart instance instead of the placeholder target_host
username and key must reflect your Konakart instance user and SSH private key file (also change the path /home/jsmith)
path and commands should have the correct file paths to Docker Compose files
TARGET_HOST in the JMeter command-line variable should reference your Konakart instance instead of the placeholder target_host
RAMP_UP_TIME in the JMeter command-line variable should be set to the desired length of the test: you may set this value to 300 seconds (5 minutes) to make sure everything works correctly, and then change it to 900 seconds (15 minutes), which is more appropriate for optimization purposes
Once you have edited this file, you can then run the following command to create the workflow:
In the workflow, the FileConfigurator operator is used to automatically apply the configuration of the JVM parameters at each experiment. In order for this to work, you need to allow Akamas to set the parameter values being tested in each experiment. This is made possible by the following Akamas templating approach:
locate your application configuration file where the optimization parameters need to be set
find the place where the parameter that needs to be optimized is specified - for example, the heap size of the JVM: tomcat_jvm_heapsize=1024
replace the hardcoded value with the Akamas parameter template string, where you specify both the component name and the name of the Akamas parameter - for example: tomcat_jvm_heapsize=${jvm.maxHeapSize}
at this point, every time the FileConfigurator operator is invoked in your workflow, a new application configuration file will be created where each of the parameter templates is replaced with the parameter values being tested by Akamas in the corresponding experiment (e.g. tomcat_jvm_heapsize=537).
Therefore, you will now prepare the Konakart configuration file (a Docker Compose file).
First of all, you want to inspect the Konakart configuration file by executing the following command:
which should return the following output, where you can see that the JAVA_OPTS variable specifies a maximum heap size of 256 MB:
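For illustration, the relevant section of the original Docker Compose file might resemble the following sketch - the service and image names are hypothetical; only the 256 MB maximum heap setting is taken from this guide:

```yaml
# docker-compose.yml - original configuration (sketch, hypothetical names)
services:
  konakart:
    image: konakart/konakart    # hypothetical image name
    environment:
      JAVA_OPTS: "-Xmx256m"     # hardcoded 256 MB max heap, to be templated
```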
In order to allow Akamas to be able to apply this hardcoded heap size value (and any other required as optimization parameter) at each experiment, you need to prepare a new Konakart Docker Compose file docker-compose.yml.templ where you can put the Akamas parameter template.
First, copy the Docker Compose file and rename it so as to keep the original file:
Now, edit this docker-compose.yml.templ file and replace the hardcoded value for the JAVA_OPTS variable with the Akamas parameter template:
Notice that instead of specifying one single parameter at a time, Akamas also allows you to put wildcards ('*') and have all the JVM parameters replaced in place.
Therefore, the FileConfigurator operator in your workflow will expand all the JVM parameters and replace them with the actual values provided by the Akamas AI-driven optimization engine.
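With the wildcard approach, the templated Compose file could look like this sketch (service name is hypothetical, as above):

```yaml
# docker-compose.yml.templ - templated configuration (sketch)
services:
  konakart:
    environment:
      # the '*' wildcard is expanded by the FileConfigurator into the full set
      # of JVM parameter values chosen by Akamas for the current experiment
      JAVA_OPTS: "${jvm.*}"
```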
At this point, you are ready to create your optimization study!
In this guide, your goal is to optimize Konakart performance such that:
throughput is maximized, so that your e-commerce service can successfully support the high traffic peaks expected in the upcoming highly demanding season
as you need to take into account the customer experience, you also want to make sure that the response time always remains within the required service-level objective (SLO) of 100ms.
This business-level goal translates into the following configuration for your Akamas study:
goal: maximize the transactions_throughput metric
constraint: keep the transactions_response_time metric under 100ms
The study-max-throughput-with-SLO.yaml file provides the pre-configured study:
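The pre-configured study could be sketched along these lines - the goal and constraint mirror the business requirements above, while the parameter names, constraint syntax, and experiment count are illustrative and may differ from the actual file:

```yaml
# study-max-throughput-with-SLO.yaml - optimization study (sketch)
name: study-max-throughput-with-SLO
system: konakart
workflow: workflow-optimize
goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput
  constraints:
    absolute:
      # SLO: response time must stay under 100 ms
      - konakart.transactions_response_time <= 100
parametersSelection:
  # illustrative: pick the JVM parameters to tune from the Java optimization pack
  - name: jvm.maxHeapSize
steps:
  - name: baseline
    type: baseline        # first run the baseline configuration
  - name: optimize
    type: optimize        # then let the Akamas AI explore configurations
    numberOfExperiments: 50   # example experiment budget
```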
Run the following command to create your study:
Let's now take a look at the results and benefits Akamas achieved in this real-life optimization.
Notice: in your environment, you might achieve different results with respect to what is described in this guide. The actual best configuration might depend on your actual setup - operating system, cloud or virtualization platform, and hardware.
By optimally configuring the application configurations (JVM options), Akamas increased the application throughput by 30%:
Properly tuning a modern JVM is a complex challenge, and might require weeks of time and effort even for performance experts.
Akamas was able to find the optimal JVM configuration after a bit more than half a day of automatic tuning:
In the Summary tab you can quickly see the optimal JVM configuration Akamas found:
As you can see, without being told anything about how the application works, Akamas learned the best settings for some interesting JVM parameters:
it almost tripled the JVM max heap size
it changed the garbage collector from G1 (the default) to Parallel, and it adjusted the number of GC threads
it significantly changed the sizing of the Survivor spaces and the new generation
Those are not easy settings to tune manually!
Another very interesting side benefit is that the optimized configuration not only improved application throughput, but also made Konakart respond 23% faster with respect to the baseline (Configuration Analysis tab):
Also notice how the 3rd best configuration actually improved response time even more (26%).
The significant effects the optimal configuration had on application scalability can be also analyzed by looking at the over-time metrics (Metrics tab).
As you can see, the best configuration highly increased the application scalability and the ability to sustain peak traffic volumes with very low response times. Also notice how Akamas automatically detected the peak throughput achieved by the different configurations while keeping the response time under 100 ms, as per the goal constraints.
As a final but important benefit, the best configuration Akamas identified is also more efficient CPU-wise. As you can see by looking at the jvm.jvm_cpu_used metric, at peak load the CPU consumption of the optimized JVM was more than 20% less than the baseline configuration. This can translate to direct cost savings on the cloud, as it allows using a smaller instance size or container.
Congratulations, you have just done your first Akamas optimization of a real-life Java application in a performance testing environment with JMeter and Prometheus!