The target system for the optimization study described in this guide is a Java-based microservice running on Kubernetes that provides authentication services for multiple digital applications.
The reference architecture is illustrated by the following diagram. Notice the load testing (JMeter) and monitoring (Elastic APM) tools, respectively used to load the target system and to collect the KPIs used in the optimization study.
The goal of this optimization study is to reduce the cost of the Kubernetes infrastructure.
Access the Akamas-in-a-sandbox (AIAS) environment and select "Studies" from the left-hand-side menu.
Select the preloaded "Authentication service - minimize K8s cost" study.
The Summary tab displays high-level study information at a glance, including the best score obtained so far, a summary of the optimized parameters, and their values for the best configuration.
At first glance, the upper part of the Study page shows:
the ID of the Study - this can be used to run commands against this specific study
the System under test, that is "Auth Service"
the workflow used to run experiments against this system, that is "Auth Service"
In the lower part of the Summary tab you can see:
the best score, that is the best result achieved in the optimization with respect to the baseline, according to the specified goal
the optimization goal & constraints (see below)
the selected KPIs (more on these later in the Explore Results section)
the optimization scope, that is the number of parameters that Akamas tuned in this study
Akamas is a goal-driven optimization solution, meaning that you can specify the goal you want to achieve, including your application performance and reliability constraints. If you click on the "Details" button in the Goal card, you can see that:
the goal has been set to minimize the application cost, which is a metric representing the cost of the K8s deployments under optimization, based on CPU and memory requests
three constraints have been set to ensure that cost is reduced without impacting application performance and reliability. In this case, application response time, transaction throughput, and error rate must not differ by more than 10% from the respective values in the baseline configuration.
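For reference, a goal with relative constraints like these is expressed in a study definition roughly as follows. This is an illustrative sketch only: the metric names and the exact constraint syntax are assumptions and may differ from the actual study used in the sandbox.

```yaml
# Illustrative sketch of a cost-minimization goal with relative constraints
# (metric names and constraint syntax are assumptions, not the actual study file)
goal:
  objective: minimize
  function:
    formula: cost                                  # cost of CPU/memory requests of the tuned deployments
  constraints:
    relative:                                      # stay within 10% of the baseline values
      - web_application.transactions_response_time <= 1.1
      - web_application.transactions_throughput >= 0.9
      - web_application.transactions_error_rate <= 1.1
```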
This section provides a set of "quick guides" that can be used during the Akamas Free Trial (see below for the different options available).
While all the guides are meant to be self-contained, it is advised to first read this introduction about Akamas and its architecture. You may also refer to the glossary, when necessary.
There are two options when running an Akamas Free Trial:
Akamas-in-a-sandbox (AIAS)
Akamas-provided sandbox as cloud instance - no setup time
sample applications (e.g. Online Boutique) included in the AIAS environment as optimization targets
predefined optimization study and artifacts available to quickly familiarize with Akamas concepts
access via both UI and CLI (from your own workstation)
Akamas-in-a-box (AIAB)
Akamas server to be installed on your own system (server/laptop)
sample applications (e.g. Online Boutique) included in the AIAB package as optimization targets
ability to connect Akamas to your own environment to be managed.
The AIAS option is the standard one activated by default, while the AIAB option is available upon request.
From the Study page, you can explore the System, which is the application being optimized.
As you can see, the System is represented by several components, whose parameters and metrics you can explore:
the JVM powering the application. In this example, we chose to optimize 5 common JVM parameters, such as the heap size and the garbage collector (click on the card to see the full set of 61 parameters available for the OpenJDK 11 runtime)
the Kubernetes Container, whose CPU and memory limits and requests parameters have all been selected in the optimization scope
the Kubernetes Workload, which tracks deployment-level metrics like the number of pod replicas
the Web Application, whose metrics are used to measure the impact of the configurations tested during the study, including the cost and the application performance metrics
From the Study page, you can reach the following page by clicking the Workflow link.
As you can see, the Workflow includes steps that are used to fully automate the performance optimization process. You can explore it by using the left and right arrows.
Akamas executes the steps defined in the workflow for each tuning experiment.
In this example, the workflow performs the following tasks:
write the new configuration (parameter values recommended by Akamas AI) to the K8s deployment file
calculate the cloud cost associated with this configuration
start the load test (in this study, this was done by leveraging a JMeter integration)
The following picture represents the high-level architecture of how Akamas operates in this scenario.
The optimization goal is to make Renaissance run faster, a common optimization need for Java applications.
Therefore, the optimization study needs to have the following properties:
The goal of the optimization will be to minimize the response time of the Java application (another goal would be reducing the JVM memory usage)
The parameters to be optimized include some common JVM options like the heap size and the garbage collector type
The metrics that you will measure for each experiment will be the application response time, CPU, and memory usage
The study will execute up to 30 experiments, for a duration of about 1 hour
It’s now time to select which parameters Akamas needs to tune in this study.
Select the jvm component and add the following JVM parameters:
jvm_maxHeapSize
jvm_newSize
jvm_maxHeapFreeRatio
jvm_gcType
jvm_survivorRatio
jvm_maxTenuringThreshold
It is also possible to tell Akamas the range of values each parameter can have: this way, AI will suggest configurations that respect desired limits. For example, in the sandbox environment, the max heap size of the JVM has to be limited to 1GB.
Select the following range of values by clicking on the EDIT DOMAIN of the corresponding parameter:
jvm_maxHeapSize: from 32MB to 1024MB
jvm_newSize: from 32MB to 1024MB
jvm_maxHeapFreeRatio: from 41 to 100
Akamas supports constraints among parameters to avoid incorrect combinations of values. In this example, the JVM newSize parameter needs to be lower than or equal to maxHeapSize. You can add a new constraint to tell Akamas to take this relation into account during the optimization:
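For reference, here is a sketch of how such a relation is typically expressed as a parameter constraint in a study definition. The field names are indicative only; in the UI wizard you simply enter the expression directly.

```yaml
# Illustrative sketch: keep the new generation size within the max heap size
parameterConstraints:
  - name: newSize within maxHeapSize
    formula: jvm.jvm_newSize <= jvm.jvm_maxHeapSize
```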
This guide explains how to navigate an already executed optimization study that is available as part of the Akamas-in-a-sandbox (AIAS) environment. This is intended as a first step to then learn how to create an optimization study leveraging the Akamas UI and the CLI.
Notice: This is a sample study designed to provide an overview of the Akamas capabilities. You cannot run this optimization study, as the target Kubernetes application is not available in the Akamas free trial sandbox environment.
See the next guide to create your first real Akamas optimization!
How to explore an already executed optimization study
How to analyze the results of the optimization study
How to compare configurations with respect to selected KPIs
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
The best configuration Akamas found (for the defined optimization goal and constraints) is displayed at the bottom of the Summary tab. In this table, you can see the optimal value Akamas AI found for each parameter defined in the study optimization scope. You can also see the baseline value, which is the original value the parameter had before the optimization.
Moreover, the Insight section highlights other interesting configurations that Akamas found during the optimization process, with respect to the defined KPIs.
These KPIs are automatically selected by Akamas based on the metrics included in the optimization goal and constraints, but can also be customized by clicking on the "KPIs" section of the Summary page.
By selecting the big right arrow from the Insight section, you can visualize all of the configurations of interest for all the selected KPIs.
You can also quickly compare how different configurations score in terms of the KPIs, by clicking the histogram icon on those configurations of interest.
In this case, it is worth noticing that there is a configuration (#12) with a slightly lower cost reduction goal (-48.9% with respect to -49.1% provided by the best configuration) that provides a slight improvement (+1.4%) in terms of transaction throughput with respect to both the baseline and the best configuration.
With Insights, you can discover the configurations that are most interesting for optimizing your application efficiency and performance. This shows how Insights supports better decision-making about which configuration to apply.
Congratulations! You have finished the exploration of your first Akamas optimization study. Now things get interesting: you can continue your journey by creating and running your first study! Your free trial sandbox is equipped with sample apps that you can use to play with Akamas AI-driven optimization. Follow the second guide to optimize the performance of a Java app using the UI, or the third guide to optimize the resource efficiency of K8s deployment requests and limits using the CLI.
To create the study, go to the Study menu in the UI and click on the Create button, then select Study:
The study creation wizard will now walk you through the different steps required to create the study.
First of all, choose a name and description for your study, then select the renaissance system and the renaissance-optimize workflow:
This guide describes how to create an Akamas study to optimize the performance of a Java application. To do that, you will create an Akamas study using the UI wizard.
The target application to optimize is Renaissance, a sample Java benchmark that is available in the sandbox environment.
The optimization goal is to make Renaissance run faster (minimize the response time of the benchmark iterations). You will leverage Akamas to automatically identify the optimal JVM parameters (like the size of the heap and inner pools, type of garbage collector, etc.) that can speed up the application.
How to create your first study using the Akamas UI
How to conduct an optimization with performance constraints
How to analyze and identify performance insights from a study results
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
Welcome to the Akamas Free Trial!
These guides are meant to support your Akamas Free Trial experience. We suggest going through all of them in the presented order.
Guide | System | Optimization scope |
---|---|---|
1. Explore a K8s cost optimization study | B2B Auth Service app | Kubernetes container resources and JVM options |
2. Optimize a Java application (UI) | Renaissance Java benchmark | JVM options |
3. Optimize a K8s application (CLI) | Online Boutique K8s microservice app | Kubernetes container resources |
Now, you can select the metrics of interest among all the metrics provided by your components.
You can do that by expanding the Choose metrics section and selecting the three metrics of the renaissance component:
CPU used, representing the total CPU used by the JVM process running the benchmark
Memory used, representing the peak memory used by the JVM process running the benchmark
Response time, representing the response time of the benchmark
You can also click on the jvm component to deselect its corresponding metrics, as they are not available in this example.
The "Analysis" tab you can monitor the optimization progress. Each dot is an experiment, e.g. a load test with a different configuration applied by Akamas. You can see the score of each experiment and understand how Akamas has explored the configuration space by identifying better configurations over time as a result of Akamas AI learning the application behaviour.
This chart shows how quickly better configurations were identified after a few experiments. The best configuration was discovered in experiment #12.
You can use the table below the chart to explore all experiments by analyzing the corresponding values for the different metrics and parameters.
Moreover, the "Metrics" tab allows you to analyze all the metrics and parameters over time and to compare their behavior. By default, the chart lets you compare the baseline and the best configuration, using a set of default metrics based on the selected KPIs. Feel free to explore the results by selecting any other experiment (trials) and metric you are interested in to evaluate the optimization benefits.
The study will run for 30 experiments or about 1 hour. After running the study, you can explore the results of your AI-driven performance optimization study.
Select your study from the UI Study menu.
The Summary tab displays high-level study information at a glance, including the best score obtained so far, and a summary of the tuned parameters and their values for the optimal configuration.
In this example, Akamas was able to cut the application response time by 40% - a significant result, achieved by optimally configuring the application runtime without any code changes!
What are the best JVM settings Akamas found that made the application run so much faster?
Without being told anything about how the application works, Akamas learned the best settings for some interesting JVM parameters:
the max heap size was slightly changed
the best garbage collector is Parallel
The Progress tab allows you to follow the experiments and the execution of their workflow tasks (including logs for troubleshooting).
The Analysis tab shows the experiments' scores over time, plus a detailed table with key parameters and metrics for each experiment.
Properly tuning a modern JVM is a complex challenge, and might require weeks of time and effort even for performance experts. Akamas AI is designed to converge rapidly toward optimal configurations. In this example, Akamas was able to find the optimal JVM configuration after about 16 automated performance experiments:
The Configuration Analysis tab lets you explore the additional insights and benefits of the configurations Akamas explored with respect to other key metrics besides the goal.
Interestingly, another configuration Akamas found was able to cut CPU utilization by 33%, while still improving response time by 17%. So you improved the performance and reduced costs, at the same time.
The Metrics tab allows you to check the metrics that were collected by the telemetry for each experiment.
Congratulations! You have finished your first study! Continue your journey by following the third guide to learn how to optimize the resource efficiency of K8s deployments requests and limits using the CLI.
To model the Online Boutique inside Akamas, we need to create a corresponding System with its components in Akamas, and also associate a Prometheus telemetry instance to the system to allow Akamas to collect the performance metrics.
First, you need to log in to Akamas with the following command:
Start by installing the necessary optimization packs:
Then, create the Online Boutique system using the artifacts you previously downloaded:
You should see a message like this:
Now, you can create all the components by running:
Lastly, create the telemetry instance:
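Putting the sequence together, the CLI calls look roughly like the following. Command shapes, pack names, and file names are indicative only; the actual artifact names come from the cloned repository, and the Akamas CLI help shows the exact syntax.

```shell
# Log in with the credentials of your sandbox (indicative command shapes)
akamas login

# Install the optimization packs used to model the system (pack names are indicative)
akamas install optimization-pack Kubernetes
akamas install optimization-pack Web-Application

# Create the system, its components, and the Prometheus telemetry instance
# (file names assumed to match the repository layout)
akamas create system system.yaml
akamas create component components/frontend.yaml "Online Boutique"
akamas create telemetry-instance telemetry.yaml "Online Boutique"
```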
At this point, you can access the Akamas UI and verify that the Online Boutique system and its components are created under the Systems menu:
Notice that this System leverages the following Optimization Packs:
Kubernetes: it provides a component type required to model each Kubernetes Pod - one for each Deployment in the Online Boutique.
Web Application: it models the end-to-end metrics of the Online Boutique, such as the application response time and throughput.
This guide describes how to optimize the cost efficiency of a Kubernetes microservice application.
The target application is Online Boutique, a popular app developed by Google for demo purposes. Online Boutique runs on a minimal Kubernetes cluster within the Akamas sandbox environment so you don't need to install anything on your side.
You will use the Akamas CLI to create this study, to familiarize yourself with the Akamas Optimization-as-Code approach.
All the configuration files and scripts referenced in this guide can be downloaded from Akamas' public GitHub repository.
To start, clone the repository.
How to use the Akamas CLI to create an Optimization Study and all its supporting artifacts
How to model the Online Boutique application (which is available in the environment)
How to configure Prometheus to let Akamas collect Kubernetes performance metrics
How to optimize a Kubernetes application using Akamas
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
Install the Akamas CLI and use your free trial instance URL as API address as follows: https://<your-free-trial-sandbox-address>:8443
The next step is to create a workflow describing the steps executed in each experiment of your optimization study.
A workflow in an optimization study for Kubernetes is typically composed of the following tasks:
Update the Kubernetes deployment file of the Kubernetes workloads with the new values of the selected optimization parameters (e.g. CPU and memory requests and limits), using the FileConfigurator operator.
Apply the new deployment files via kubectl. This triggers a rollout of the new deployment in Kubernetes.
Wait until the rollout is complete.
Run the performance test.
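A minimal sketch of such a workflow is shown below. Operator names and argument fields are indicative only; the workflow definition shipped in the repository (created with the command that follows) is the authoritative version.

```yaml
# Illustrative sketch only - refer to the workflow file in the repository
name: boutique
tasks:
  - name: configure deployments          # write the new CPU/memory values into the manifest
    operator: FileConfigurator
    arguments:
      source:
        path: boutique.yaml.templ        # template containing ${component.parameter} placeholders
      target:
        path: boutique.yaml
  - name: apply deployments              # roll out the new configuration
    operator: Executor
    arguments:
      command: kubectl apply -f boutique.yaml
  - name: wait for rollout
    operator: Executor
    arguments:
      command: kubectl rollout status deployment/frontend --timeout=5m   # repeat for the other deployments
  - name: run performance test           # Locust load test against the application
    operator: Executor
    arguments:
      command: ./run-test.sh
```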
To create the workflow, launch the following command:
You can verify that this workflow has been created by accessing the "boutique" workflow in the Workflow menu:
All scripts and templates used in the steps of this workflow are available in the kubernetes-online-boutique/workflow-artifacts folder.
It's now time to define the optimization study. The overall objective is to increase the cost efficiency of the application, without impacting application reliability in terms of response time or error rates.
To achieve that objective, you create an Akamas study with the goal of maximizing the ratio between application throughput and cloud cost, where:
application throughput is the transactions per second as measured by the load-testing tool
cloud cost is the total cost to run the Kubernetes microservices on the cloud, which is a function of the CPU and memory requests assigned to each container. We assume a sample pricing of $29 per CPU core/month and $3.2 per GB of memory/month.
Hence, a good configuration is one that either increases throughput with the same cloud cost, or that keeps throughput constant but with a lower cloud cost.
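As a rough worked example of this cost model: a container requesting 0.5 CPU cores and 0.5 GB of memory would cost about 0.5 × $29 + 0.5 × $3.2 ≈ $16.1 per month, and the score of a configuration is then its measured transactions per second divided by the sum of these per-container costs.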
To avoid impacting application reliability, you can define Akamas metric constraints on transaction response time lower than 500 milliseconds and error rate below 2%.
Here is the relevant section of the study:
As regards the parameters to optimize, in this example Akamas is tuning the CPU and memory limits (requests are set equal to the limits) of each deployment in the Online Boutique application, for a total of 22 parameters. Here is the relevant section of the study:
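The sketch below gives a rough idea of how these two sections can look. Field and metric names are assumptions made for illustration; the complete, authoritative definition is in the repository file referenced right after.

```yaml
# Illustrative sketch only - see akamas/studies/study.yaml for the real definition
goal:
  objective: maximize
  function:
    formula: web_application.transactions_throughput / online_boutique.cost   # throughput per dollar
  constraints:
    absolute:
      - web_application.transactions_response_time <= 500    # milliseconds
      - web_application.transactions_error_rate <= 0.02      # 2%
parametersSelection:
  - name: frontend.cpu_limit        # CPU and memory limits for each deployment
  - name: frontend.memory_limit     # (22 parameters in total across all deployments)
  # ...the remaining Online Boutique deployments follow the same pattern
```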
You can review the complete optimization study by looking at the study.yaml file in the akamas/studies folder.
You can now create the optimization study:
and then start the optimization:
You can now explore this study from the Study menu in the Akamas UI and then move to the Analysis tab.
As the optimization study executes the different experiments, this chart will display more points and their associated score.
The next step is to define your optimization goal.
Leave the MINIMIZE option, select the renaissance component, and then choose the response_time metric for the goal of reducing the benchmark execution time.
This section describes how to build a fresh Ubuntu 20.04 VM on your workstation with a few commands using Multipass.
Notice: another option to install Akamas is using a VM running a freshly installed Ubuntu 20.04 or RHEL / CentOS 8 on your workstation or cloud service provider of choice. If this is the case, you can skip this section.
Notice that, as of the date of this document, Multipass is not yet supported on macOS for Apple M1-based computers.
The following picture represents the high-level architecture of how Akamas operates in this scenario. As this picture illustrates, the Akamas commands in this guide need to be executed from an Akamas shell in the VM while those related to Multipass need to be executed from a shell on your workstation (the host running the VM).
If you already use VMware or VirtualBox, it is better to create a new VM starting from the Ubuntu ISO, since Multipass may conflict with your current virtualization setup. If you are not currently using any other virtual machine, then you should be fine.
Once Multipass is installed, you can create an Ubuntu 20.04 instance with the resources required by Akamas by executing the following command:
Once this step is complete, you can check the instance IP (note this down, it will be required later)
Now you can launch a shell logged into this instance:
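For reference, the three Multipass commands look roughly like this (the instance name and image alias are assumptions, and flag names may vary slightly across Multipass versions):

```shell
# Create an Ubuntu 20.04 VM with the minimum resources required by Akamas
multipass launch 20.04 --name akamas --cpus 2 --mem 8G --disk 30G

# Show the instance details and note down its IPv4 address
multipass info akamas

# Open a shell inside the instance
multipass shell akamas
```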
Finally, you can define the steps the study will go through.
Typically, an Akamas study is composed of:
a baseline step, which is used to assess the system performance in the initial configuration (e.g. the settings currently in place for your system)
an optimize step, which is used to execute the actual AI-driven performance optimization
The Akamas study wizard already creates these two steps for you.
In this study, we simply define the baseline configuration by setting jvm_maxHeapSize to 1024MB:
Then click on the Optimize step in the left panel to specify the optimization configuration.
For now, simply set the number of experiments to execute to 30:
Now click on Create Study.
You have now created your first study and you are ready to start the optimization!
You can review your study in the Definition tab or go and launch the study by pressing the Start button:
At this point, you should have your VM up and running.
To get Akamas installed, you just need to first download the installation script:
and then run it:
Please take into account that this installation procedure may take a while, as it has to first download the entire Akamas software and then boot all Akamas services. Depending on how fast your internet connection is, please plan for 30 to 60 minutes to complete this step.
Please make sure to reply Yes to all requests about installing the AWS CLI and Docker packages, and of course accept the Akamas license agreement as well.
Once the installation process is complete, you can verify that the installation was successful. Please take into account that, right after the installation, Akamas services are automatically started for the first time, and this may require a few more minutes.
You can verify that Akamas services are up and running by executing the following command or by directly accessing the UI (see next section):
Once Akamas services are up and running, you can access the Akamas UI.
You can retrieve the IP address of your Akamas instance from the dashboard of your cloud service provider or, if you are running on Multipass on your workstation, by executing the following command from the host machine:
Now you can open your browser and type this IP address to access the Akamas login page:
Notice: If you get a message "Waiting for Akamas services" or "Loading Akamas (90%)", then this means that some Akamas services are still starting. You can always check the status of Akamas services by running the following command:
```shell
akamas status -d
```
At this point, you are almost done: you just need to install the Akamas license.
To install the Akamas license, you need to upload the license file which contains your license key on your Akamas instance. For example, you can leverage Multipass to transfer the license file by running the following command from the host system:
Alternatively, you can put the license key into a file using your favorite text editor. In this case too, we assume the license file is named license.ak.
Once you have the license file on your VM, you can install the license you have been provided by running the following commands from inside the Akamas shell (the path to the license file may also be specified) and finally log in to Akamas using the credentials you received from Akamas support.
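Assuming the VM is named akamas and the license file is named license.ak, the steps look roughly like this (the exact license subcommand of the Akamas CLI may differ; check its help):

```shell
# From the host: copy the license file into the VM (instance name "akamas" is an assumption)
multipass transfer license.ak akamas:/home/ubuntu/license.ak

# From the Akamas shell inside the VM: install the license, then log in with your credentials
akamas install license /home/ubuntu/license.ak
akamas login
```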
Congratulations, your Akamas instance is now fully installed and ready to be used!
These Quick Guides are meant to support your Akamas Free Trial experience. Please also refer to the introduction and glossary when needed.
First of all, install Multipass by following the instructions on the Multipass website. Multipass is a lightweight VM manager that provides the fastest way to create an Ubuntu VM on Linux, Windows, or macOS.
Let's now take a look at the results and benefits Akamas achieved in this optimization study.
First of all, the best configuration was quickly identified, providing a cost-efficiency increase of 17%, without affecting the response time.
Let's look at the best configuration in the Summary tab. Here you can see the optimal values Akamas AI found for all CPU and memory requests and limits, considering the goal of maximizing cost efficiency while matching the application performance and reliability constraints.
It’s interesting to notice the best configuration Akamas found for every single microservice:
For some microservices (e.g., frontend), both the CPU and memory resources were increased.
For others (e.g., paymentservice), the memory was decreased while the CPU was slightly increased.
For some others (e.g., productcatalogservice), only the memory was decreased.
Let's navigate the Insights section, which provides details of the best experiment for each of the selected KPIs.
The best experiments according to the selected KPIs are automatically tagged and listed in the table. Interestingly, experiment 34 reached the best efficiency, while experiment 53 achieved the best throughput and a significant decrease in the application response time. Also, notice that a couple of identified configurations improved the application response time even more (up to 87%)!
The experiments can be plotted with the histogram icon to better analyze the impact of the selected configurations.
This optimization study shows how it is possible to tune a Kubernetes application made up of several microservices, which represents a complex challenge that typically requires days or weeks of time and effort even for expert performance engineers, developers, or SREs. With Akamas, the optimization study took only about 4 hours to automatically identify the optimal configuration for each Kubernetes microservice.
Congratulations! You have completed your first study to optimize a Kubernetes application! As a next step, take a look at all the other guides available in our free trial sandbox. Have you already completed all of the free trial guides? Get in touch with us and share your feedback!
First of all, let's create a System.
In the provided repository, you can find a file called system.yaml with the following content:
Create the corresponding system as follows:
You can directly see it in the UI under the Systems menu:
You can always list all systems and verify that a system has been correctly created:
At this point, you need to add the two components corresponding to the JVM and the Renaissance application. For both these components you can take advantage of the corresponding pre-defined Optimization Packs. An Optimization Pack provides for each specific technology the recommended Parameters, Metrics, and Telemetry Providers.
Since Renaissance is a Java-based application, we can take advantage of the corresponding Akamas Optimization Pack.
In order to install the Akamas Optimization Pack for Java OpenJDK 11, you can either run the following command
or simply operate from the UI, by installing it from the section listing all the Optimization Packs available in your installation:
which should result in this Optimization Pack being installed (and, if needed, it can later be uninstalled from the same section).
Notice: you only need to install an Optimization Pack once. So if an Optimization Pack has already been installed there is no need to follow these instructions once again. You can verify which Optimization Packs are already installed directly from the UI or by executing the following command:
Now that the Optimization Pack has been installed, you can create the JVM component for this system.
The file comp_jvm.yaml contains the following definition for our system:
Create the corresponding component for the renaissance system:
You can now see the JVM component in the UI:
You can always list all components in a system and verify that they have been correctly created:
At this point, you can install the Optimization Pack for Renaissance. This Optimization Pack is needed to tell Akamas which metrics you are going to optimize. Our goal for this study is to minimize the benchmark response time, and to do so you need to create a component with the response time metric. For each benchmark execution, we will also track the CPU and memory usage of the Java process, so in the Optimization Pack we have included those metrics as well.
Let's now install the Renaissance Optimization Pack. This time, you will use a JSON file already provided in the Akamas-in-a-box installation (or, if you're using a cloud instance from your laptop, you can download it from our GitHub repository):
You can easily create new Optimization Packs with the Akamas CLI to optimize your application-specific parameters! See the akamas build optimization-pack command to create the JSON file, and the product documentation to learn more.
Now you can create the renaissance component.
The file comp_renaissance.yaml defines the component as follows:
Create the component by specifying the YAML file and the name of the system, renaissance:
You can now see the renaissance component and the associated response time, CPU, and memory usage metrics:
Now we need to move to the next step and define where to collect metrics for this system.
In this scenario, the goal is to make Renaissance run faster by optimizing the Java Virtual Machine (JVM) parameters.
The following picture represents the high-level architecture of how Akamas operates in this scenario.
In the following, you will first create a System with two components corresponding to the JVM and the Renaissance application (which, for your first studies only, is represented by a benchmark). Then you will create a Telemetry Instance from the CSV Telemetry Provider so as to collect the metrics generated in CSV format by Renaissance. Finally, you will create a Workflow to automate the whole process of applying a benchmark configuration, launching the benchmark, and collecting the resulting metrics. This will allow you to create your optimization Study to identify the optimal configuration.
Before starting, you need to login to Akamas with the following command:
The Online Boutique application is a cloud-native microservices application implemented by Google as a demo application for Kubernetes. It is a sample, yet fully-fledged, web-based e-commerce application.
To stress the application and validate Akamas AI-based configurations under load, the setup includes the Locust load testing tool which simulates user activity on the application.
Prometheus is used to collect metrics related to the application (throughput and response times) as well as Kubernetes infrastructure (container resource usage).
In this guide, you will install an Akamas-in-a-box, which is a fully-functional Akamas instance running as a Virtual Machine (VM) on your workstation or in the cloud.
How to install an Akamas instance on a VM
How to execute some basic Akamas commands
The minimum requirements to get Akamas running are:
2 CPUs
8 GB of RAM
30 GB of disk space
an Akamas license.
These requirements refer to the VM that hosts Akamas. If you are creating it on your desktop or laptop, the host machine should have at least 4 cores and 12 GB of RAM to effectively run the Akamas VM.
To get started, you only need the following:
A workstation to install Akamas
An Akamas license
Notice: an Akamas license should have been sent to you as part of the free trial welcome kit. If that's not the case, please send a request to info@akamas.io
In this guide, you'll learn how to optimize Konakart, a real-world Java-based e-commerce application, by leveraging the JMeter performance testing tool and the Prometheus monitoring tool.
How to optimize a real-world Java application with Akamas in a realistic performance environment
How to integrate JMeter load testing tool with Akamas
How to integrate the Prometheus monitoring tool with Akamas
How to automate the configuration of the parameters in a containerized Java application
How to conduct an optimization with performance constraints
How to analyze and identify performance insights from study results
An Akamas-in-a-box instance installed with a valid license - see the Akamas In a Box guide.
The Konakart performance environment - see the Konakart setup guide.
Familiarity with Akamas concepts. If you're new to Akamas, please review the Java quickstart guide.
In this guide, you will learn how to optimize the performance of your first Java application with Akamas.
For your very first optimization, the target application will be Renaissance, a sample Java benchmark that comes already shipped with Akamas-in-a-box - no need to install anything! The optimization goal is to make Renaissance run faster (minimize the response time of the benchmark iterations). You will leverage Akamas to automatically identify the optimal JVM parameters (like the size of the heap and inner pools, type of garbage collector, etc.) that can speed up the application.
Enjoy your first optimization exercise on a simple benchmark application. Once you have completed your first optimizations, make sure to check out the other quick guides to feel the power of Akamas applied to real-world optimization scenarios.
Get ready for your first optimization!
How to optimize the performance of a Java application
The main Akamas concepts: system, optimization pack, telemetry instance, workflow and optimization study
How to interact with Akamas through the CLI and web UI
How to create, list, and delete Akamas resources
An Akamas instance installed with a valid license - see the Akamas In a Box guide
Basic understanding of Akamas - you can watch a 2m video in the Welcome to Akamas guide.
Notice: all the configuration files used in this guide can be found in our public GitHub repository.
In this section you will configure how Akamas collects metrics related to the renaissance system. Metrics are required both to define your optimization goal (e.g. minimize the renaissance.response_time metric) and to analyze the optimization results.
A Telemetry Provider specifies how to collect these metrics from a source, such as a monitoring platform (e.g. Prometheus or Dynatrace), a test tool (e.g. NeoLoad or LoadRunner), or a simple CSV file. Akamas ships several out-of-the-box Telemetry Providers.
For each specific source, an instance of the corresponding Telemetry Provider needs to be defined at the system level.
Once the benchmark completes, the Renaissance benchmark suite outputs a CSV report file, which includes the benchmark execution time, CPU, and memory usage. Therefore, you will now create a CSV telemetry instance.
The file tel_csv.yaml provides the following definition:
Create the telemetry instance as follows:
You can verify your new telemetry instance under the corresponding tab within the UI:
You can always list all telemetry instances in a system and verify that they have been correctly created:
So far, you have defined what the application to be optimized looks like, in terms of Akamas system and components, and the telemetry required to gather the relevant metrics. Your next step is to create a workflow, that is, to define how to run optimization experiments.
How does Akamas understand when a given JVM configuration is improving performance or making it worse? This is done by running optimization experiments. Thus, you now need to tell Akamas how to conduct these experiments. You can do this by creating a workflow.
A workflow is a sequence of tasks that Akamas executes for every experiment, such as:
apply the parameters to an application configuration file
restart the application
launch a performance test to assess the effects of the new configuration
In the next guides you will learn how to define a workflow from the UI or how to specify the YAML file to create it from the CLI. But since the purpose of this guide is to focus on getting you started, for now you will simply use the workflow.yaml file provided in the repository, defined as follows:
Create the workflow by running the following command:
You can verify your new workflow has been created by accessing the corresponding Workflow menu in the UI:
You can always list all workflows and verify that they have been correctly created:
At this point, you are ready to create your optimization study.
Akamas provides an out-of-the-box optimization pack called Web Application that comes in very handy for modeling typical web applications, as it includes metrics such as transactions_throughput and transactions_response_time, which you will use in this guide to define the optimization goal and analyze the optimization results. These metrics will be gathered from JMeter, thanks to the Akamas out-of-the-box Prometheus telemetry provider.
Let's create the system and its components.
The file system.yaml contains the following definition for our system:
Run the command to create it:
Now, install the Web Application optimization pack from the UI:
You can now create the component modeling the Konakart web application.
The file comp_konakart.yaml defines the component as follows:
As you can see, this component contains some custom properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels in the Prometheus queries to collect metrics for the correct entities. You will configure the Prometheus integration in the next sections.
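As an illustration, the relevant part of the component definition looks roughly like the excerpt below. The values are placeholders; comp_konakart.yaml in the repository is the authoritative definition.

```yaml
# Illustrative excerpt only - see comp_konakart.yaml for the real definition
name: konakart
description: The Konakart web application
componentType: Web Application
properties:
  prometheus:
    instance: target_host    # must match the "instance" label of the metrics in Prometheus
    job: jmeter              # must match the "job" label configured for the exporter
```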
You can now run the command to create the component:
You can now explore the result of your system modeling in the UI. As you can see, your konakart component is now populated with all the typical metrics of a web application:
Next you will need to create a workflow that specifies how Akamas applies the parameters to be optimized, how to automate the launch of JMeter performance tests, and how to collect metrics from Prometheus telemetry. For now, you will create a simple automation workflow that executes a quick two-minute performance test to make sure everything is working properly.
The file workflow-baseline.yaml contains the definition of the steps to perform during the test:
Please make sure to modify the workflow-baseline.yaml file, replacing the following placeholders with the correct references to your environment:
hostname should reference your Konakart instance in place of the placeholder target_host
username and key must reflect your Konakart instance user and SSH private key file (also check the path /home/jsmith)
TARGET_HOST in the JMeter command line should reference your Konakart instance in place of the placeholder target_host
Then create the workflow:
To execute this workflow, we'll use a simple Akamas study that includes a single step of type baseline. This type of step simply executes one experiment without leveraging Akamas AI - you will add the AI-driven optimization step later.
The study-baseline.yaml file defines the study as follows:
Create the study:
Now, you can run the study by clicking Start from the UI, or by executing the following command:
You should now see the baseline experiment running in the Progress tab of the Akamas UI.
Notice that you can also monitor JMeter performance tests live by accessing Grafana on port 3000 of your Konakart instance, then selecting the JMeter Exporter dashboard:
You can relaunch the baseline study at any time you want by pressing the Start button again. If you want, you can also adjust the JMeter scenario settings in the workflow - see the Konakart setup guide for more details on the JMeter plans and variables you can set.
You will notice that the baseline experiment will fail on the telemetry task - see Progress tab in the UI. This is expected, as you still have not configured the Akamas telemetry, i.e. how Akamas can collect metrics - you will do this in the next section.
You can now look at the results of your first AI-driven performance optimization study.
Notice: in your environment, you might achieve different results with respect to what is described in this guide. The best configuration might depend on your specific setup - operating system, cloud or virtualization platform, and hardware.
Select your study from the UI Study menu.
The Summary tab displays high-level study information at a glance, including the best score obtained so far, a summary of the optimized parameters, and their values for the best configuration.
By optimally configuring the JVM parameters, Akamas was able to cut the application response time by almost 41%:
The Analysis tab shows the experiments' score over time.
Properly tuning a modern JVM is a complex challenge, and might require weeks of time and effort even for performance experts.
Akamas AI is designed to converge rapidly toward optimal configurations. In this case, Akamas was able to find the optimal JVM configuration after only 30 automated performance experiments:
Below the optimization chart, you can also find a table showing aggregated performance metrics and parameters set for each experiment. For each metric, you can find a percentage variation with respect to the baseline experiment, so that you can quickly see the impact the new parameters had on other interesting key metrics (you can sort them too).
The Insights drawer lets you explore the additional benefits of the configurations Akamas explored with respect to other key metrics besides the goal. Choose some KPIs in the study's main page to discover the insights.
In this optimization, the best configurations Akamas found not only made the application run significantly faster, but also made the application run more efficiently on the CPU:
From a CPU efficiency perspective, the best configuration Akamas found was able to cut CPU utilization by 33%, while still improving response time by 17%.
What are the best JVM settings Akamas found that made the application run so much faster?
You can find them in the Best Configuration table in the Summary tab.
Without being told anything about how the application works, Akamas learned the best settings for some interesting JVM parameters:
the max heap size was slightly changed
the best garbage collector is Parallel
Those are not easy insights to discover, without being an expert and doing dozens of manual performance experiments!
The Metrics tab allows you to check the metrics that the telemetry modules collected over time for each experiment. In the chart, Akamas presents you with a comparison of the key metrics related to the baseline and the best experiment (you can add more using the filters).
Despite this first optimization relying on short benchmark execution times, the best configuration is consistently faster than the baseline.
Congratulations! You have just completed your first Akamas optimization of a sample Java application!
You can now create a new workflow that you will use in your optimization study.
A workflow in an optimization study is typically composed of the following tasks:
Apply a new configuration of the selected optimization parameters to the target system: in this example, you will leverage the Akamas FileConfigurator operator - this operator can be used to write parameter values into a generic file, which could represent a shell script, an application configuration file, or any other file used to apply parameters to the target systems
Restart the application (optional): in this example, the Konakart docker container needs to be restarted in order to launch the Konakart JVM for the new configuration to be effectively applied
Launch the performance test: in this example, the JMeter performance tests are launched as described in a previous section (same as the baseline workflow)
The file workflow-optimize.yaml contains the pre-configured workflow; you only need to include the correct references to your environment:
Please make sure to modify the workflow-optimize.yaml file, replacing the following variables with the correct references to your environment:
hostname should reference your Konakart instance instead of the placeholder target_host
username and key must reflect your Konakart instance user and SSH private key file (also change the path /home/jsmith)
path and commands should have the correct file paths to Docker Compose files
TARGET_HOST in the JMeter command-line variable should reference your Konakart instance instead of the placeholder target_host
RAMP_UP_TIME in the JMeter command-line variable should be set to the desired length of the test: you may set this value to 300 seconds (5 minutes) to make sure everything works correctly, and then change it to 900 seconds (15 minutes), which is more appropriate for optimization purposes
Once you have edited this file, you can then run the following command to create the workflow:
In the workflow, the FileConfigurator operator is used to automatically apply the configuration of the JVM parameters at each experiment. In order for this to work, you need to allow Akamas to set the parameter values being tested in each experiment. This is made possible by the following Akamas templating approach:
locate your application configuration file where the optimization parameters need to be set
find the place where the parameter that needs to be optimized is specified - for example, the heap size of the JVM: tomcat_jvm_heapsize=1024
replace the hardcoded value with the Akamas parameter template string, where you specify both the component name and the name of the Akamas parameter - for example: tomcat_jvm_heapsize=${jvm.maxHeapSize}
at this point, every time the FileConfigurator operator is invoked in your workflow, a new application configuration file will be created with each of the parameter templates replaced by the parameter values being tested by Akamas in the corresponding experiment (e.g. tomcat_jvm_heapsize=537).
Therefore, you will now prepare the Konakart configuration file (a Docker Compose file).
First of all, you want to inspect the Konakart configuration file by executing the following command:
which should return the following output, where you can see that the JAVA_OPTS variable specifies a maximum heap size of 256 MB:
To allow Akamas to apply this hardcoded heap size value (and any other value required as an optimization parameter) at each experiment, you need to prepare a new Konakart Docker Compose file, docker-compose.yml.templ, where you can put the Akamas parameter templates.
First, copy the Docker Compose file and rename it so as to keep the original file:
Now, edit the docker-compose.yml.templ file and replace the hardcoded value of the JAVA_OPTS variable with the Akamas parameter template:
Notice that, instead of specifying one single parameter at a time, Akamas also allows you to use wildcards ('*') and have all the JVM parameters replaced in place.
Therefore, the FileConfigurator operator in your workflow will expand all the JVM parameters and replace them with the actual values provided by the Akamas AI-driven optimization engine.
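For example, the JAVA_OPTS line in docker-compose.yml.templ could look like the sketch below. The surrounding Compose structure is simplified, and the exact wildcard syntax shown is an assumption.

```yaml
# Simplified, illustrative excerpt of docker-compose.yml.templ
services:
  konakart:
    environment:
      # ${jvm.*} is expanded by the FileConfigurator into all the JVM parameters
      # selected in the study, e.g. -Xmx537m -XX:+UseParallelGC ...
      JAVA_OPTS: "-server ${jvm.*}"
```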
At this point, you are ready to create your optimization study!
In this guide, your goal is to optimize Konakart performance such that:
throughput is maximized, so that your e-commerce service can successfully support the high traffic peaks expected in the upcoming highly demanding season
as you need to take into account the customer experience, you also want to make sure that the response time always remains within the required service-level objective (SLO) of 100 ms.
This business-level goal translates into the following configuration for your Akamas study:
goal: maximize the transactions_throughput metric
constraint: the transactions_response_time metric must stay under 100 ms
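In the study definition, this translates roughly into the goal section sketched below. The metric names follow the Web Application optimization pack; the authoritative version is in the study file introduced next.

```yaml
# Sketch of the goal section - see study-max-throughput-with-SLO.yaml for the real definition
goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput
  constraints:
    absolute:
      - konakart.transactions_response_time <= 100   # milliseconds (the SLO)
```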
The study-max-throughput-with-SLO.yaml file provides the pre-configured study:
Run the following command to create your study:
It is time now to configure Akamas telemetry to collect the relevant JMeter performance metrics. You will use the out-of-the-box Prometheus provider for that purpose.
The Prometheus telemetry provider collects metrics for a variety of technologies, including JVM and Linux OS metrics. Moreover, you can easily extend it to import additional metrics via custom PromQL queries. In this example, you are collecting JMeter performance test metrics that are exposed by the JMeter Prometheus exporter already configured in the Konakart performance environment.
The file tel_prometheus.yaml defines the telemetry instance as follows - make sure to replace the target_host placeholder with the address of your Konakart instance:
Now create a telemetry instance associated with the konakart system:
Now you can test the Prometheus integration by running again the baseline study you created before (simply press the Start button again on the Study page). At the end of the experiment, you should see JMeter performance metrics such as transactions_throughput and transactions_response_time displayed as time series in the Metrics tab, and as aggregated metrics in the Analysis tab:
At this point, you can launch your JMeter performance tests from Akamas and see the relevant performance metrics imported from Prometheus.
Before starting the optimization, you need to also add the JVM component to your system.
First of all, install the Java optimization pack:
The file comp_jvm.yaml defines the JVM as follows:
Notice that the jvm component has some additional properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels used in Prometheus queries to collect JVM metrics (e.g. JVM garbage collection time or heap utilization). The Prometheus telemetry provider collects these metrics out-of-the-box - no query needs to be specified.
You can create the JVM component as follows:
You can now see all the JVM parameters and metrics from the UI:
At this point, your system is composed of the web application and the JVM component you need to perform the optimization study.
The following picture represents the high-level architecture of how Akamas operates in this scenario.
Here are the main elements:
The target system to be optimized is Konakart, a real-world e-commerce web application based on Java and running as a Docker container within a dedicated cloud instance - in this guide, the goal is to optimize Konakart throughput and response time
The optimization scope is the Konakart Java Virtual Machine (JVM), with the configuration of the JVM parameters specified in a Docker configuration file on the Konakart cloud instance
The optimization experiments leverage JMeter as a load-testing tool
The optimization leverages Prometheus as a telemetry provider to collect load-testing, JVM, and OS-level metrics.
It's now time to set up and run our first optimization! In Akamas, an optimization is called a Study.
In this scenario, you will be creating a Study with the following properties:
The goal of the optimization will be to minimize the duration of our Renaissance benchmark - indeed for data analytics applications it's critical to reduce the time required to analyze the data and provide insights
The parameters to be optimized include just a handful of common JVM options like the heap size and the garbage collector type
The metrics that we will measure for each experiment will be the application response time and the resource consumption (CPU and memory usage)
The duration of the study will be 30 experiments, which typically takes about 1 hour
In the other guides you will learn how to define a study from the UI or how to specify the YAML file to create it from the CLI. Since the purpose of this guide is to focus on getting you started, for now you will simply use the study-max-performance.yaml file provided in the repository, defined as follows:
Create the study:
You can also check from the UI that you have this Study in the corresponding UI section.
Congratulations, your first Akamas study is ready to be started!
You can now start your first study with the command:
or by accessing the Study menu in the UI, clicking on the study, and then pressing the Start button:
Notice that all the steps so far can be done even without a valid license installed. However, in order to start a study, you need to have a valid license - see Install the license.
Congratulations, your first Akamas study is now running!
You can follow the progress of the study from the UI by selecting the Progress tab:
The Progress tab also allows following the workflow tasks execution, including logs for troubleshooting:
You can also execute the following command from time to time:
This optimization study is expected to automatically run all its experiments for about 1 hour. Of course, this may vary depending on your specific system and study configuration.
Notice that you can always stop (and restart) a study before all experiments are completed. This can be done by using the Stop button in the UI or by executing the following command:
Notice: while Akamas is running experiments, it may happen that a bad configuration results in an error such as Java OutOfMemoryError or similar. When that happens, the experiment is marked as failed. But don't worry! Akamas AI learns from those failures and it automatically takes them into account in order to quickly explore the most promising areas of the optimization space.
You can now analyze the partial results of the optimization study as its experiments progress.
Let's now take a look at the results and benefits Akamas achieved in this real-life optimization.
Notice: in your environment, you might achieve different results with respect to what is described in this guide. The best configuration might depend on your specific setup - operating system, cloud or virtualization platform, and hardware.
By optimally configuring the application runtime (JVM options), Akamas increased the application throughput by 30%:
Properly tuning a modern JVM is a complex challenge, and might require weeks of time and effort even for performance experts.
Akamas was able to find the optimal JVM configuration after a bit more than half a day of automatic tuning:
In the Summary tab you can quickly see the optimal JVM configuration Akamas found:
As you can see, without being told anything about how the application works, Akamas learned the best settings for some interesting JVM parameters:
it almost tripled the JVM max heap size
it changed the garbage collector from G1 (the default) to Parallel, and it adjusted the number of GC threads
it significantly changed the sizing of the Survivor spaces and the new generation
Those are not easy settings to tune manually!
Another very interesting side benefit is that the optimized configuration not only improved application throughput, but also made Konakart respond 23% faster than the baseline (Configuration Analysis tab):
Also notice how the 3rd best configuration actually improved response time even more (26%).
The significant effects the optimal configuration had on application scalability can be also analyzed by looking at the over-time metrics (Metrics tab).
As you can see, the best configuration highly increased the application scalability and the ability to sustain peak traffic volumes with very low response times. Also notice how Akamas automatically detected the peak throughput achieved by the different configurations while keeping the response time under 100 ms, as per the goal constraints.
As a final but important benefit, the best configuration Akamas identified is also more efficient CPU-wise. As you can see by looking at the jvm.jvm_cpu_used metric, at peak load the CPU consumption of the optimized JVM was more than 20% less than the baseline configuration. This can translate to direct cost savings on the cloud, as it allows using a smaller instance size or container.
Congratulations, you have just done your first Akamas optimization of a real-life Java application in a performance testing environment with JMeter and Prometheus!
You need a fully working LoadRunner Enterprise environment to run a load test on your target Konakart system.
Take note of the following ids/configurations while setting up all your LRE artifacts:
the credentials used to access and run the scripts on your LRE project
the id of your test
the test set your test belongs to
the project your test belongs to
the domain your project belongs to
the tenant id your project belongs to (for multi-tenant installations only)
all of which you will need to set in your workflow configuration.
Moreover, take note of the address, the schema name, and the credentials of your InfluxDB external analysis server since they will be required while configuring the telemetry instance.
To create the LoadRunner Enterprise test you will need a script to simulate user navigations on the Konakart website. You can find a working script in the repository.
Please note that you need to replace the URL of the requests, http://konakart.dev.akamas.io:8780, with the FQDN and port of the instance where Konakart is deployed.
In this guide, you'll learn how to optimize Konakart, a real-world Java-based e-commerce application, by leveraging the Micro Focus LoadRunner Enterprise performance-testing tool and the Prometheus monitoring tool.
Please refer to this knowledge base article on how to set up a Konakart test environment and to this page on how to integrate LoadRunner Enterprise with Akamas.
How to optimize a real-world Java application with Akamas in a realistic performance environment
How to integrate the Prometheus monitoring tool with Akamas
How to automate the configuration of the parameters in a containerized Java application
How to conduct an optimization with performance constraints
How to analyze and identify performance insights from study results
An Akamas-in-a-box instance installed with a valid license - see the Akamas In a Box guide.
The Konakart performance environment - see the Konakart setup guide.
A working LoadRunner Enterprise installation
Akamas provides an out-of-the-box optimization pack called Web Application that comes in very handy for modeling typical web applications, as it includes metrics such as transactions_throughput and transactions_response_time, which you will use in this guide to define the optimization goal and to analyze the optimization results. These metrics will be gathered from LRE, thanks to the Akamas out-of-the-box LoadRunner Enterprise telemetry provider.
Let's start by creating the system and its components.
The file system.yaml
contains the following description of the system:
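As a reference, a minimal system definition follows this structure (the file in the repository is authoritative):

```yaml
name: konakart
description: The Konakart e-commerce application
```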
Run the command to create it:
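For example, from the folder containing the file:

```
akamas create system system.yaml
```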
The Web Application component is used to model the typical performance metrics characterizing the performance of a web application (e.g. the response time or the transactions throughput).
Akamas comes with a Web Application optimization pack out-of-the-box. You can install it from the UI:
You can now create the component modeling the Konakart web application.
The comp_konakart.yaml
file describes the component as follows:
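As a rough sketch only (the exact fields under the loadrunnerenterprise property depend on your LRE setup; refer to the file in the repository):

```yaml
name: konakart
description: The Konakart web application
componentType: Web Application
properties:
  loadrunnerenterprise: {}   # LRE-specific fields omitted in this sketch
```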
As you can see, this component contains the loadrunnerenterprise
property that instructs Akamas to populate the metrics for this component leveraging the LoadRunner Enterprise integration.
Create the component by running:
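For example, assuming the system is named konakart:

```
akamas create component comp_konakart.yaml konakart
```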
You can now explore the result of your system modeling in the UI. As you can see, your konakart
component is now populated with all the typical metrics of a web application:
Before starting the optimization, you need to add the JVM component to your system.
First of all, install the Java optimization pack:
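A hedged sketch of the CLI command, where the pack name is an assumption (check the packs available in your installation):

```
akamas install optimization-pack Java-OpenJDK
```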
The comp_jvm.yaml
file defines the component for the JVM as follows:
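As an illustrative sketch (the component type and the label values are assumptions and must match your environment; the file in the repository is the reference):

```yaml
name: jvm
description: The JVM running the Konakart application
componentType: java-openjdk-11   # assumed: use the type matching your JVM version
properties:
  prometheus:
    instance: target_host   # must match the "instance" label used in your Prometheus
    job: jmx                # must match the "job" label used in your Prometheus
```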
Notice how the jvm component has some additional properties, instance and job, under the prometheus group. These properties are used by the Prometheus telemetry provider as values for the corresponding instance and job labels used in Prometheus queries to collect JVM metrics (e.g. JVM garbage collection time or heap utilization). Such metrics are collected out-of-the-box by the Prometheus telemetry provider - no query needs to be specified.
You can create the JVM component as follows:
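For example:

```
akamas create component comp_jvm.yaml konakart
```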
You can now see all the JVM parameters and metrics from the UI:
You have now successfully completed your system modeling.
The following picture represents the high-level architecture of how Akamas operates in this scenario.
Here are the main elements:
The target system to be optimized is Konakart, a real-world e-commerce web application based on Java and running as a Docker container within a dedicated cloud instance - in this guide, the goal is to optimize Konakart throughput and response time
The optimization scope is the Konakart Java Virtual Machine (JVM), with the JVM parameters specified in a Docker configuration file on the Konakart cloud instance
The optimization experiments leverage LoadRunner Enterprise as a load-testing tool
The optimization results leverage Prometheus as a telemetry provider to collect JVM and OS-level metrics.
In this guide, your goal is to optimize Konakart performance such that:
throughput is maximized so that your e-commerce service can successfully support the high traffic peaks expected in the upcoming highly demanding season
as you need to take into account the customer experience, you also want to make sure that the response time always remains within the required service-level objective (SLO) of 100 ms.
This business-level goal translates into the following configuration for your Akamas study:
goal: maximize the transactions_throughput metric
constraint: keep the transactions_response_time metric under 100 ms
You can simply take the following description of your study and copy it into a study-max-throughput-with-SLO.yaml file:
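If you don't have the repository at hand, the following sketch illustrates the overall structure of such a study; the parameter list, baseline values, and number of experiments are purely illustrative, and the file in the repository remains the reference:

```yaml
name: konakart max throughput with SLO
system: konakart
workflow: konakart optimize          # the workflow created earlier in this guide

goal:
  objective: maximize
  function:
    formula: konakart.transactions_throughput
  constraints:
    absolute:
      - name: response_time_SLO
        formula: konakart.transactions_response_time <= 100

parametersSelection:                  # illustrative subset of JVM parameters
  - name: jvm.maxHeapSize
  - name: jvm.newSize
  - name: jvm.gcType

steps:
  - name: baseline
    type: baseline
    values:
      jvm.maxHeapSize: 256            # illustrative baseline value
  - name: optimize
    type: optimize
    numberOfExperiments: 30
```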
and then run the following command to create your study:
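For example:

```
akamas create study study-max-throughput-with-SLO.yaml
```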
To get the target application (Online Boutique), the load generator (Locust), and the telemetry provider (Prometheus) installed, you need to use the three Kubernetes manifests available in the kube
folder of your cloned repo. The corresponding kubectl
commands must be issued from any terminal pointing to your cluster.
Notice: if you have installed the minikube cluster with the scripts provided in this guide, you can skip the labeling command below and proceed to the paragraph Install the target application.
Notice that all these three manifests refer to a label akamas/node=akamas
to ensure that the corresponding pods are scheduled on the same node. For the sake of simplicity, run the following command to assign this label to the node you want to use for these pods (this is not needed for the Minikube cluster, which already is correctly configured):
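For example, replacing the placeholder with the name of the chosen node:

```
kubectl label node <your-node-name> akamas/node=akamas
```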
To install the Online Boutique application, you need to apply the boutique.yaml
manifest to your cluster with the following command:
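For instance, assuming you run it from the kube folder of the cloned repo:

```
kubectl apply -f boutique.yaml
```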
This command will create the namespace akamas-demo
and all the Deployments and Services of the Online Boutique inside that namespace. You can verify that all the pods are up and running with the command:
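For example:

```
kubectl get pods -n akamas-demo
```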
You can wait until the output is similar to the following one, then proceed:
Then, to install Locust, you need to apply the loadgenerator.yaml
manifest to your cluster:
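Again from the kube folder:

```
kubectl apply -f loadgenerator.yaml
```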
You can verify that all the pods are up and running with the following command:
The output should be similar to the following one:
Finally, to install Prometheus, you need to apply the prometheus.yaml
manifest:
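From the same folder:

```
kubectl apply -f prometheus.yaml
```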
You can verify that all the pods are up and running with the command:
The output should be similar to the following one:
This guide will walk you through the steps of optimizing an application running on a Kubernetes cluster using Akamas.
You will optimize Online Boutique, a cloud-native microservices application implemented by Google as a demo application for Kubernetes. It is a sample, yet fully-fledged, web-based e-commerce application.
How to integrate your Kubernetes Cluster with Akamas
How to model the Online Boutique application inside Akamas
How to configure Prometheus to let Akamas collect Kubernetes performance metrics
How to optimize a Kubernetes application using Akamas
Basic understanding of the concepts - you may want to watch a quick (2m) video included in this Welcome to Akamas guide and get yourself familiar with Akamas' key concepts by reading the Learn Akamas key concepts guide.
An Akamas installation with a valid license - you may want to read the Akamas in-a-box guide to set up your Akamas instance.
A working Kubernetes cluster - if you do not have such a cluster available, you can easily create a local one following the next sections of this guide.
You can find all the configuration files used in this guide on Akamas' public GitHub repository.
Please clone the repository in the home directory of the Akamas instance by running the command
Telemetry instances need to be created to allow Akamas to leverage data collected from LoadRunner Enterprise (web application metrics) and Prometheus (JVM and OS metrics).
The Prometheus telemetry instance collects metrics for a variety of technologies, including JVM and Linux OS metrics. Moreover, it can also be easily extended to import additional metrics (via custom promQL queries). In this example, you are going to use Prometheus to import JVM metrics exposed by the Prometheus JMX exporter.
First, update the tel_prometheus.yaml
file replacing the target_host placeholder with the address of your Konakart instance:
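For reference, a Prometheus telemetry instance typically has this shape (the port and any additional fields depend on your setup; the file in the repository is authoritative):

```yaml
provider: Prometheus
config:
  address: target_host   # replace with the address of your Konakart instance
  port: 9090             # assumed default Prometheus port
```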
And then create a telemetry instance associated with the konakart system:
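For example:

```
akamas create telemetry-instance tel_prometheus.yaml konakart
```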
As described in the LRE integration guide you need an instance of InfluxDB running in your environment to act as an external analysis server for your LRE instance. Therefore, the telemetry instance needs to provide all the configurations required to connect to that InfluxDB server.
The file tel_lre.yaml
is an example of a LRE telemetry instance. Make sure to replace the variables with the actual values of your configurations:
and then create the telemetry instance:
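For example:

```
akamas create telemetry-instance tel_lre.yaml konakart
```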
You can now create a new workflow that you will use in your optimization study.
A workflow in an optimization study is typically composed of the following tasks:
Apply a new configuration of the selected optimization parameters to the target system: in this example, you will leverage the Akamas FileConfigurator operator - this operator can be used to write parameter values into a generic file, which could represent a shell script, an application configuration file, or any other file used to apply parameters to the target systems
Restart the application (optional): in this example, the Konakart docker container needs to be restarted in order to launch the Konakart JVM for the new configuration to be effectively applied
Launch the performance test using LRE
To create the optimization workflow, update the workflow-optimize.yaml file with the correct references to your environment:
Make sure to replace the placeholders with the correct references to your environment:
hostname should reference your Konakart instance in place of the placeholder target_host
username and key must reflect your Konakart instance user and SSH private key file (also change the path /home/jsmith)
path and commands should have the correct file paths to Docker Compose files
Regarding the LoadRunnerEnterprise operator, update the configuration above with the actual values of:
address: the FQDN of your LRE farm (LRE server)
username and password: the credentials of the LRE user
project: the name of the project created on LRE
domain: the domain of the project you created on LRE
tenantID: the tenant of your project (if multi-tenancy is enabled)
testId: the id of your test on LRE
testSet: the test set name your test belongs to
timeSlot: the time slot reserved by Akamas on LRE to run your tests
verifySSL: whether Akamas should validate the SSL certificate or skip the validation (useful for self-signed certificates)
For more information about the configurations available for LoadRunner Enterprise, please refer to LRE dedicated integration guide.
Once you have edited this file, run the following command to create the workflow:
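For example:

```
akamas create workflow workflow-optimize.yaml
```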
In the workflow, the FileConfigurator operator is used to automatically apply the configuration of the JVM parameters at each experiment. In order for this to work, you need to allow Akamas to set the parameter values being tested in each experiment. This is made possible by the following Akamas templating approach:
locate your application configuration file where the optimization parameters need to be set
find the place where the parameters that need to be optimized are specified - for example, the heap size of the JVM: tomcat_jvm_heapsize=1024
replace the hardcoded value with the Akamas parameter template string, where you specify both the component name and the name of the Akamas parameter - for example: tomcat_jvm_heapsize=${jvm.maxHeapSize}
at this point, every time the FileConfigurator operator is invoked in your workflow, a new application configuration file will be created, with each parameter template replaced by the parameter value being tested by Akamas in the corresponding experiment (e.g. tomcat_jvm_heapsize=537).
Therefore, you will now prepare the Konakart configuration file (a Docker Compose file).
First of all, you want to inspect the Konakart configuration file by executing the following command:
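For instance, assuming the Docker Compose file lives in a konakart folder in your home directory on the Konakart instance (adjust the path to your setup):

```
cat ~/konakart/docker-compose.yml
```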
which should return the following output, where you can see that the JAVA_OPTS variable specifies a maximum heap size of 256 MB:
In order to allow Akamas to apply a new value for this heap size (and for any other optimization parameter) at each experiment, you need to prepare a new Konakart Docker Compose file, docker-compose.yml.templ, where you can put the Akamas parameter templates.
First, copy the Docker Compose file and rename it so as to keep the original file:
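For example, from the folder containing the Docker Compose file:

```
cp docker-compose.yml docker-compose.yml.templ
```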
Now, edit the docker-compose.yml.templ file and replace the hardcoded value of the JAVA_OPTS variable with the Akamas parameter template:
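A hedged sketch of the resulting line in the template file, assuming you use the wildcard syntax described in the note below (any other flags originally present in your JAVA_OPTS may need to be preserved):

```yaml
environment:
  JAVA_OPTS: "${jvm.*}"
```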
Notice that instead of specifying one single parameter at a time, Akamas also allows you to use wildcards ('*') and have all the JVM parameters replaced in place.
Therefore, the FileConfigurator operator in your workflow will expand all the JVM parameters and replace them with the actual values provided by the Akamas AI-driven optimization engine.
At this point, you are ready to create your optimization study!
Duration: 02:00
To model the Online Boutique inside Akamas, we need to create a corresponding System with its components in Akamas, and also associate a Prometheus telemetry instance to the system to allow Akamas to collect the performance metrics.
The entire Akamas system, together with the Prometheus telemetry instance, can be installed with a single command by leveraging the create-system.sh script provided in the scripts folder.
This script requires, as an argument, the public IP address of the cluster (CLUSTER_IP).
Now you need to first login to Akamas with the following command:
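For example:

```
akamas login
```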
and then run the above-mentioned script in a shell where you have the Akamas CLI installed:
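For example, assuming you run it from the root of the cloned repo:

```
./scripts/create-system.sh <CLUSTER_IP>
```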
The script should return a message similar to: System created correctly.
At this point, you can access the Akamas UI and verify that the System and its components are listed in the Systems menu:
Notice that this System leverages the following Optimization Packs:
Kubernetes: it provides a component type required to model each Kubernetes Pod - one for each Deployment in the Online Boutique.
Web Application: it models the end-to-end metrics of the Online Boutique, such as the application response time and throughput.
The next step is to create a workflow describing the steps executed in each experiment of your optimization study.
A workflow in an optimization study for Kubernetes is typically composed of the following tasks:
Write the manifest file corresponding to a new configuration of the selected optimization parameters to the target system using the FileConfigurator operator.
Apply the manifest with the new configurations via kubectl.
Wait until the rollout of all services has completed.
Run the performance test.
You can create the Akamas workflow with a single command by leveraging the script create-workflow.sh
provided in the scripts
folder. Also, this script requires a parameter:
CLUSTER_IP: the public IP address of the cluster
The artifacts in your cloned repo contain the workflow.yaml file used by this script; you can create the workflow by issuing the following command:
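For example, again from the root of the cloned repo:

```
./scripts/create-workflow.sh <CLUSTER_IP>
```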
You can verify that this workflow has been created by accessing the corresponding Workflow menu in the Akamas UI:
All scripts and templates used in the steps of this workflow can be found in the kubernetes-online-boutique/workflow-artifacts folder.
Notice: if you installed the cluster with Minikube by following the steps documented in the section Build Minikube Cluster, then you do not need to perform this Akamas setup. Thus, you can skip to the next section, Verify Akamas, to verify that everything is working fine.
As described in the section Architecture Overview, Akamas needs to communicate with the cluster to apply new configurations of the Online Boutique using the kubectl
tool. Therefore, if you are optimizing your own Kubernetes cluster, you need to make sure that Akamas can interact with it.
First, you need to go to your Akamas-in-a-box machine and copy your kubeconfig file (i.e., ~/.kube/config
) in the /akamas-config
folder. Then, you have to ensure that the container named benchmark has the credentials to access the cluster. The credentials to pass may differ depending on your cluster provider; the examples below cover an EKS cluster as well as other providers.
Please copy & paste the following command and substitute your AWS credentials in place of the placeholders. It will create the file /akamas-config/envs
that contains the variables required to communicate with the cluster:
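A minimal sketch, assuming the standard AWS environment variables are sufficient for your setup (replace the placeholders with your credentials and region; writing to /akamas-config may require sudo):

```
cat <<EOF | sudo tee /akamas-config/envs
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
AWS_DEFAULT_REGION=<your-cluster-region>
EOF
```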
You might write an envs file as above in the /akamas-config directory and put there all the environment variables needed to connect to your cluster, as in the example below.
The setup is now complete. You can now proceed to the next section to verify it.
Duration: 01:00
To make sure that Akamas can interact with the target cluster, check that:
The benchmark container in your Akamas-in-a-box machine can run kubectl
commands against your cluster.
The benchmark container can reach your cluster through HTTP.
To check that the container can connect to your cluster run the following command and verify that you can see your Kubernetes namespaces:
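For example, assuming the container is named benchmark:

```
docker exec benchmark kubectl get namespaces
```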
Next, you need to check that you can reach your Prometheus and the loadgenerator from the benchmark container.
To verify that you can communicate with Prometheus try to run the following command, substituting the CLUSTER_IP
placeholder with your public cluster IP:
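A hedged sketch, where the NodePort (30900) is an assumption and must be replaced with the port actually exposed by your Prometheus service (/-/healthy is Prometheus' standard health-check endpoint):

```
docker exec benchmark curl -s http://<CLUSTER_IP>:30900/-/healthy
```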
You should see the output:
Lastly, to verify that you can connect with the loadgenerator run the following command, substituting the CLUSTER_IP
placeholder with your public cluster IP:
You should see the output:
At this point, Akamas is correctly configured to interact with the target cluster, and you can start modeling and then optimizing your cluster in Akamas.
There are two possible architectural configurations, depending on whether you are using a dedicated cluster or running the application on Minikube inside your Akamas-in-a-box machine:
In this scenario, you are running the application on a dedicated cluster; it will require at least one node with 4 CPUs and 8 GB of RAM. Akamas will run on a dedicated VM.
In this scenario, you are running the application on a Minikube cluster installed on your Akamas-in-a-box machine; it will need at least 8 CPUs and 16 GB of RAM (e.g., c5.2xlarge on AWS EC2). In the following section, Build Minikube Cluster, you will learn how to install it with a single command.
In both scenarios, you will install the following applications in the cluster:
the application to tune: Online Boutique
the telemetry provider: Prometheus, which provides performance metrics of the application under optimization
the load generator: Locust, used to run the load tests
Notice: if you plan to use your own cluster, you can skip this section and move on to the section Setup Online Boutique.
This section describes how to build a local Kubernetes cluster for the following Akamas optimization study. The local cluster will be installed in the Akamas-in-a-box machine, so this machine needs at least 8 CPUs and 16 GB of RAM (e.g., c5.2xlarge on AWS EC2).
Before proceeding with the installation of the Minikube cluster, please ensure that your Akamas-in-a-box host has all the following prerequisites:
Add your user to the docker group, if it has not been added yet, with the command: sudo usermod -aG docker $USER && newgrp docker.
Note down your machine's public IP, as it will be used later in this guide as your CLUSTER_IP. On Linux you can run: dig +short myip.opendns.com @resolver1.opendns.com.
Then make sure Akamas is running. You can verify if Akamas services are up and running by executing the following command:
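One way to do this, assuming the Docker-Compose-based Akamas-in-a-box setup, is to list the running containers and check their status:

```
sudo docker ps --format 'table {{.Names}}\t{{.Status}}'
```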
Notice: if you have another minikube cluster running on the same machine, you need to clean your environment with the command minikube delete before creating the new cluster.
At this point, you can create the Minikube cluster with a single command by leveraging the script create-minikube-cluster.sh in the scripts folder of the cloned repo (as described in section Download artifacts), passing the public IP of your machine as the CLUSTER_IP:
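For example, from the root of the cloned repo:

```
./scripts/create-minikube-cluster.sh <CLUSTER_IP>
```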
The command may take a few minutes, and will output the message "Cluster created" once the installation process completes correctly.
Now the cluster should be up and running. You can verify this by running any kubectl
command, such as:
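For example:

```
kubectl get nodes
```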
Install (if you have installed Akamas-in-a-Box, you should have it already installed).
Install .
Install (you can stop before verifying the installation).
Let's now take a look at the results and benefits Akamas achieved in this optimization study. Mind that you might achieve different results as the actual best configuration may depend on your actual setup (i.e., operating systems, cloud or virtualization platform, and the hardware).
First of all, the best configuration was quickly identified, providing an application efficiency increase of 17%, without affecting the response time.
Let's look at the best configuration from the Summary tab: this configuration specifies the right amount of CPU and memory for each microservice.
It’s interesting to notice that Akamas did adjust the CPU and memory limits of every single microservice:
For some microservices (e.g., frontend), both the CPU and memory resources were increased.
For others (e.g., paymentservice), the memory was decreased while the CPU was slightly increased.
For some others (e.g., productcatalogservice), only the memory was decreased.
Let's navigate the Insights section, which provides details of the best experiment for each of the selected KPIs.
The best experiments according to the selected KPIs are automatically tagged and listed in the table. Experiment 34 reached the best efficiency, while experiment 53 achieved the best throughput and a decrease in the response time. Also, notice that a couple of the identified configurations improved the application response time even more (up to 87%), while not representing the best configuration.
The experiments can also be plotted, with the results shown as in the chart below.
This optimization study shows that it is possible to tune a Kubernetes application made up of several microservices, which represents a complex challenge that typically requires days or weeks of time and effort even for expert performance engineers, developers, or SREs. With Akamas, the optimization study only took about 4 hours to automatically identify the optimal configuration for each Kubernetes microservice.
For this study, we define a goal of increasing the application throughput, while enforcing the constraint of keeping the latency below 500 ms and error rate below 2%.
You can create the optimization study using the study.yaml file in the akamas/studies folder by issuing the Akamas command:
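For example, from the root of the cloned repo:

```
akamas create study akamas/studies/study.yaml
```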
and then run it by issuing the Akamas command:
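For example, replacing the placeholder with the study name defined in study.yaml:

```
akamas start study "<study name>"
```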
You can now explore this study from the Study menu in the Akamas UI and then move to the Analysis tab.
As the optimization study executes the different experiments, this chart will display more points and their associated score.