The following picture represents the high-level architecture of how Akamas operates in this scenario.
Here are the main elements:
The target system to be optimized is Konakart, a real-world e-commerce web application based on Java and running as a Docker container within a dedicated cloud instance - in this guide, the goal is to optimize Konakart throughput and response time
The optimization scope is the Konakart Java Virtual Machine (JVM), with the JVM parameters specified in a Docker configuration file on the Konakart cloud instance
The optimization experiments leverage LoadRunner Enterprise as a load-testing tool
The optimization results leverage Prometheus as a telemetry provider to collect JVM and OS-level metrics.
You need a fully working LoadRunner Enterprise environment to run a load test on your target Konakart system.
Take note of the following IDs and configurations while setting up your LRE artifacts, as you will need to set them in your workflow configuration:
the credentials used to access and run the scripts on your LRE project
your LRE project name and domain
your test ID
the test set your test belongs to
the project your test belongs to
the domain your project belongs to
the tenant ID of your project (for multi-tenant installations only)
Moreover, take note of the address, the schema name, and the credentials of your InfluxDB external analysis server since they will be required while configuring the telemetry instance.
To create the LoadRunner Enterprise test you will need a script to simulate user navigations on the Konakart website. You can find a working script in the repository.
Please note that you need to replace the URL of the requests (http://konakart.dev.akamas.io:8780) with the FQDN and port of the instance where Konakart is deployed.
In this guide, you'll learn how to optimize Konakart, a real-world Java-based e-commerce application, by leveraging the Micro Focus LoadRunner Enterprise performance-testing tool and the Prometheus monitoring tool.
Please refer to this knowledge base article on how to set up a Konakart test environment and to this page on how to integrate LoadRunner Enterprise with Akamas.
How to optimize a real-world Java application with Akamas in a realistic performance environment
How to integrate the Prometheus monitoring tool with Akamas
How to automate the configuration of the parameters in a containerized Java application
How to conduct an optimization with performance constraints
How to analyze and identify performance insights from study results
An Akamas-in-a-box instance installed with a valid license - see the Akamas In a Box guide.
The Konakart performance environment - see the Konakart setup guide.
A working LoadRunner Enterprise installation
Telemetry instances need to be created to allow Akamas to leverage data collected from LoadRunner Enterprise (web application metrics) and Prometheus (JVM and OS metrics).
The Prometheus telemetry instance collects metrics for a variety of technologies, including JVM and Linux OS metrics. Moreover, it can also be easily extended to import additional metrics (via custom PromQL queries). In this example, you are going to use Prometheus to import JVM metrics exposed by the Prometheus JMX Exporter.
First, update the tel_prometheus.yaml
file replacing the target_host placeholder with the address of your Konakart instance:
And then create a telemetry instance associated with the konakart
system:
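As a reference, a minimal tel_prometheus.yaml could look like the following sketch. The exact schema depends on your Akamas version, and the address and port values are placeholders to adapt to your environment:

```yaml
# tel_prometheus.yaml - sketch of a Prometheus telemetry instance
# (address and port are placeholders for your Konakart instance)
provider: Prometheus
config:
  address: target_host   # replace with the address of your Konakart instance
  port: 9090             # default Prometheus port, adjust if needed
```

The telemetry instance can then be created with the Akamas CLI, e.g. `akamas create telemetry-instance tel_prometheus.yaml konakart`, where the second argument is the name of the system the instance is attached to.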
As described in the LRE integration guide you need an instance of InfluxDB running in your environment to act as an external analysis server for your LRE instance. Therefore, the telemetry instance needs to provide all the configurations required to connect to that InfluxDB server.
The file tel_lre.yaml
is an example of a LRE telemetry instance. Make sure to replace the variables with the actual values of your configurations:
and then create the telemetry instance:
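As a reference, tel_lre.yaml could look like the following sketch. All values are placeholders for your InfluxDB external analysis server, and the exact schema depends on your Akamas version:

```yaml
# tel_lre.yaml - sketch of a LoadRunner Enterprise telemetry instance
# (all values are placeholders for your InfluxDB external analysis server)
provider: LoadRunnerEnterprise
config:
  address: influxdb.example.com   # address of the InfluxDB server
  port: 8086                      # default InfluxDB port, adjust if needed
  database: lre_analysis          # the schema name configured on LRE
  username: influx_user
  password: influx_password
```

The instance can then be created with a command such as `akamas create telemetry-instance tel_lre.yaml konakart`.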
Akamas provides an out-of-the-box optimization pack called Web Application that comes in very handy for modeling typical web applications, as it includes metrics such as transactions_throughput
and transaction_response_time
which you will use in this guide to define the optimization goal and to analyze the optimization results. These metrics will be gathered from LRE, thanks to Akamas out-of-the-box LoadRunner Enterprise telemetry provider.
Let's start by creating the system and its components.
The file system.yaml
contains the following description of the system:
Run the command to create it:
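As a reference, system.yaml could be as simple as the following sketch (the description text is indicative):

```yaml
# system.yaml - sketch of the system definition
name: konakart
description: The Konakart e-commerce application
```

The system can then be created with a command such as `akamas create system system.yaml`.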
The Web Application component is used to model the typical performance metrics characterizing the performance of a web application (e.g. the response time or the transactions throughput).
Akamas comes with a Web Application optimization pack out-of-the-box. You can install it from the UI:
You can now create the component modeling the Konakart web application.
The comp_konakart.yaml
file describes the component as follows:
As you can see, this component contains the loadrunnerenterprise
property that instructs Akamas to populate the metrics for this component leveraging the LoadRunner Enterprise integration.
Create the component running:
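As a reference, comp_konakart.yaml could look like the following sketch. The exact keys expected under the loadrunnerenterprise property depend on your LRE integration setup, so treat this as indicative:

```yaml
# comp_konakart.yaml - sketch of the Web Application component
# (the keys under loadrunnerenterprise depend on your LRE integration)
name: konakart
description: The Konakart web application
componentType: Web Application
properties:
  loadrunnerenterprise: {}   # marks this component for LRE metrics mapping
```

The component can then be created with a command such as `akamas create component comp_konakart.yaml konakart`, where the second argument is the system name.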
You can now explore the result of your system modeling in the UI. As you can see, your konakart
component is now populated with all the typical metrics of a web application:
Before starting the optimization, you need to add the JVM component to your system.
First of all, install the Java optimization pack:
The comp_jvm.yaml
file defines the component for the JVM as follows:
Notice how the jvm component has some additional properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels used in Prometheus queries to collect JVM metrics (e.g. JVM garbage collection time or heap utilization). Such metrics are collected out-of-the-box by the Prometheus telemetry provider - no query needs to be specified.
You can create the JVM component as follows:
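As a reference, comp_jvm.yaml could look like the following sketch. The componentType and the prometheus label values are placeholders to match your environment:

```yaml
# comp_jvm.yaml - sketch of the JVM component
# (componentType and the prometheus label values are placeholders)
name: jvm
description: The JVM running Konakart
componentType: java-openjdk-11
properties:
  prometheus:
    instance: konakart_instance   # must match the "instance" label in Prometheus
    job: jmx_exporter             # must match the "job" label in Prometheus
```

The component can then be created with a command such as `akamas create component comp_jvm.yaml konakart`.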
You can now see all the JVM parameters and metrics from the UI:
You have now successfully completed your system modeling.
In this guide, your goal is to optimize Konakart performance such that:
throughput is maximized so that your e-commerce service can successfully support the high traffic peaks expected in the upcoming highly demanding season
as you need to take into account the customer experience, you want also to make sure that the response time always remains within the required service-level objective (SLO) of 100ms.
This business-level goal translates into the following configuration for your Akamas study:
goal: maximize the transactions_throughput metric
constraint: keep the transactions_response_time metric under 100 ms
You can simply take the following description of your study and copy it into a study-max-throughput-with-SLO.yaml
file:
and then run the following command to create your study:
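As a reference, the goal and constraint sections of such a study could look like the following sketch. The field names are indicative and should be checked against the study reference for your Akamas version:

```yaml
# study-max-throughput-with-SLO.yaml - sketch of the study definition
# (field names are indicative; check the study reference for your Akamas version)
name: study-max-throughput-with-SLO
system: konakart
goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput
  constraints:
    absolute:
      - konakart.transactions_response_time <= 100   # SLO threshold in ms
workflow: workflow-optimize   # the workflow created in the next section
```

The study can then be created with a command such as `akamas create study study-max-throughput-with-SLO.yaml`.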
You can now create a new workflow that you will use in your optimization study.
A workflow in an optimization study is typically composed of the following tasks:
Apply a new configuration of the selected optimization parameters to the target system: in this example, you will leverage the Akamas FileConfigurator operator - this operator can be used to write parameter values into a generic file, which could represent a shell script, an application configuration file, or any other file used to apply parameters to the target systems
Restart the application (optional): in this example, the Konakart docker container needs to be restarted in order to launch the Konakart JVM for the new configuration to be effectively applied
Launch the performance test using LRE
To create the optimization workflow, update the workflow-optimize.yaml
file, making sure to replace the placeholders with the correct references to your environment:
hostname should reference your Konakart instance in place of the placeholder target_host
username and key must reflect your Konakart instance user and SSH private key file (also change the path /home/jsmith)
path and commands should have the correct file paths to Docker Compose files
Regarding the LoadRunnerEnterprise operator, update the configuration above with the actual values of:
address: the FQDN of your LRE farm (LRE server)
username and password: the credentials of the LRE user
project: the name of the project created on LRE
domain: the domain of the project you created on LRE
tenantID: the tenant of your project (if multi-tenancy is enabled)
testId: the id of your test on LRE
testSet: the test set name your test belongs to
timeSlot: the time slot reserved by Akamas on LRE to run your tests
verifySSL: whether Akamas should validate the SSL certificate or skip validation (useful for self-signed certificates)
For more information about the configurations available for LoadRunner Enterprise, please refer to LRE dedicated integration guide.
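Putting the three tasks together, workflow-optimize.yaml could look like the following sketch. The operator argument names should be checked against the Akamas operator documentation, and all hostnames, paths, credentials, and LRE values are placeholders:

```yaml
# workflow-optimize.yaml - sketch of the optimization workflow
# (hostnames, paths, credentials and LRE values are placeholders)
name: workflow-optimize
tasks:
  - name: Configure JVM parameters
    operator: FileConfigurator
    arguments:
      source:
        hostname: target_host
        username: jsmith
        key: /home/jsmith/.ssh/id_rsa
        path: /home/jsmith/docker-compose.yml.templ
      target:
        hostname: target_host
        username: jsmith
        key: /home/jsmith/.ssh/id_rsa
        path: /home/jsmith/docker-compose.yml
  - name: Restart Konakart
    operator: Executor
    arguments:
      host:
        hostname: target_host
        username: jsmith
        key: /home/jsmith/.ssh/id_rsa
      command: docker-compose -f /home/jsmith/docker-compose.yml up -d --force-recreate
  - name: Run LRE performance test
    operator: LoadRunnerEnterprise
    arguments:
      address: https://lre.example.com
      username: lre_user
      password: lre_password
      domain: MYDOMAIN
      project: konakart
      tenantID: my_tenant_id     # only for multi-tenant installations
      testId: 123
      testSet: konakart-testset
      timeSlot: 30m
      verifySSL: false
```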
Once you have edited this file, run the following command to create the workflow:
In the workflow, the FileConfigurator operator is used to automatically apply the configuration of the JVM parameters at each experiment. In order for this to work, you need to allow Akamas to set the parameter values being tested in each experiment. This is made possible by the following Akamas templating approach:
locate your application configuration file where the optimization parameters need to be set
find the place where the parameter that needs to be optimized is specified - for example, the heap size of the JVM: tomcat_jvm_heapsize=1024
replace the hardcoded value with the Akamas parameter template string, where you specify both the component name and the name of the Akamas parameter - for example: tomcat_jvm_heapsize=${jvm.maxHeapSize}
at this point, every time the FileConfigurator operator is invoked in your workflow, a new application configuration file will be created where each of the parameter templates is replaced with the parameter values being tested by Akamas in the corresponding experiment (e.g. tomcat_jvm_heapsize=537).
Therefore, you will now prepare the Konakart configuration file (a Docker Compose file).
First of all, you want to inspect the Konakart configuration file by executing the following command:
which should return the following output, where you can see that the JAVA_OPTS variable specifies a maximum heap size of 256 MB:
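Assuming the Compose file lives in the Konakart user's home directory, the inspection could be as simple as running `cat docker-compose.yml`, and the relevant part of the output might look like this hypothetical excerpt (the service name, image name, and port are assumptions):

```yaml
# hypothetical excerpt of the Konakart docker-compose.yml
services:
  konakart:
    image: konakart/konakart
    ports:
      - "8780:8780"
    environment:
      JAVA_OPTS: "-Xmx256m"   # hardcoded maximum heap size of 256 MB
```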
To allow Akamas to apply this hardcoded heap size value (and any other value required as an optimization parameter) at each experiment, you need to prepare a new Konakart Docker Compose file docker-compose.yml.templ
where you can put the Akamas parameter templates.
First, copy the Docker Compose file and rename it so as to keep the original file:
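Assuming both files sit in the same directory on the Konakart instance, the copy could be done as follows:

```shell
# Keep the original Docker Compose file and create the template copy
# that will carry the Akamas parameter placeholders
cp docker-compose.yml docker-compose.yml.templ
```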
Now, edit this docker-compose.yml.templ
file and replace the hardcoded value for the JAVA_OPTS variable with the Akamas parameter template:
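Continuing the hypothetical excerpt above, the templated file could look like this (the wildcard form is shown; a single parameter could instead be templated as ${jvm.maxHeapSize}):

```yaml
# docker-compose.yml.templ - hypothetical excerpt after templating
services:
  konakart:
    environment:
      # wildcard form: Akamas expands all the JVM parameters here
      JAVA_OPTS: "${jvm.*}"
```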
Notice that instead of specifying one single parameter at a time, Akamas also allows you to put wildcards ('*') and have all the JVM parameters replaced in place.
Therefore, the FileConfigurator operator in your workflow will expand all the JVM parameters and replace them with the actual values provided by the Akamas AI-driven optimization engine.
At this point, you are ready to create your optimization study!