Access the Akamas-in-a-sandbox (AIAS) environment and select "Studies" from the left-hand-side menu.
Select the preloaded "Authentication service - minimize K8s cost" study.
The Summary tab displays high-level study information at a glance, including the best score obtained so far, a summary of the optimized parameters, and their values for the best configuration.
At a first glance, the upper part of the Study page shows:
the ID of the Study - this can be used to run commands against this specific study
the System under test, that is "Auth Service"
the workflow used to run experiments against this system, that is "Auth Service"
The following two sections, Explore the System and Explore the Workflow, are optional, as this guide focuses on analyzing the output of an already executed study rather than illustrating how to set it up in the first place. Another guide covers how to create an Optimization Study using the UI, and yet another how to create an Optimization Study and its supporting artifacts using the CLI.
The lower part of the Study page shows the Summary:
the best score, that is -49.1% with respect to the baseline
the optimization goal & constraints (see below)
the selected KPIs (see section Explore Results in this guide)
the optimization scope, that is the JVM and Kubernetes parameters which have been tuned
The optimization goal & constraints can be inspected by clicking on "Details" (and expanding the subsections):
the goal has been set to minimize the application cost
three constraints have been set to ensure that, in the best configuration, the application response time, transaction throughput, and error rate do not differ by more than 10% from their respective values in the baseline configuration.
This guide describes how to create an Optimization Study to optimize the performance of a Java application using the UI wizard.
All the artifacts required for this study have already been created and made available in the sandbox environment. Another guide describes how to create an optimization study and all the required artifacts from the CLI.
For simplicity's sake, the target application to optimize is Renaissance, a sample Java benchmark that is available in the sandbox environment. The optimization goal is to make Renaissance run faster (minimize the response time of the benchmark iterations). You will leverage Akamas to automatically identify the optimal JVM parameters (like the size of the heap and inner pools, type of garbage collector, etc.) that can speed up the application.
How to create your first study using the Akamas UI
How to conduct an optimization with performance constraints
How to analyze study results and identify performance insights
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
The following picture represents the high-level architecture of how Akamas operates in this scenario.
The optimization goal is to make Renaissance run faster, a common optimization need for Java applications.
Therefore, the optimization study needs to have the following properties:
The goal of the optimization will be to minimize the response time of the Java application (another goal would be reducing the JVM memory usage)
The parameters to be optimized include some common JVM options like the heap size and the garbage collector type
The metrics that you will measure for each experiment will be the application response time, CPU, and memory usage
The study will execute up to 30 experiments, for a duration of about 1 hour
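For reference, the same goal can also be expressed in the YAML form used by the Akamas CLI elsewhere in these guides. The snippet below is only a sketch: field names follow typical Akamas study definitions and should be checked against the official documentation.

```yaml
# Sketch of the study goal for the Renaissance optimization (illustrative only)
goal:
  objective: minimize
  function:
    formula: renaissance.response_time   # response time of the benchmark iterations
```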
To create the study, go to the Study menu in the UI and click on the Create button, then select Study:
The study creation wizard will now walk you through the different steps required to create the study.
First of all, choose a name and description for your study, then select the renaissance system and the renaissance-optimize workflow:
The next step is to define your optimization goal.
Leave the MINIMIZE option, select the renaissance component, and then choose the response_time metric for the goal of reducing the benchmark execution time.
It’s now time to select which parameters Akamas needs to tune in this study.
Select the jvm component and add the following JVM parameters:
jvm_maxHeapSize
jvm_newSize
jvm_maxHeapFreeRatio
jvm_gcType
jvm_survivorRatio
jvm_maxTenuringThreshold
You can also tell Akamas the range of values each parameter can take: this way, the AI will only suggest configurations that respect the desired limits. For example, in the sandbox environment, the max heap size of the JVM has to be limited to 1GB.
Select the following range of values by clicking on the EDIT DOMAIN of the corresponding parameter:
jvm_maxHeapSize: from 32MB to 1024MB
jvm_newSize: from 32MB to 1024MB
jvm_maxHeapFreeRatio: from 41 to 100
Akamas supports constraints among parameters to avoid incorrect combinations of values. In this example, the JVM newSize parameter needs to be lower than or equal to maxHeapSize. You can add a new constraint to tell Akamas to take this relation into account during the optimization:
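The domains and the constraint above can also be written in the YAML study format used by the Akamas CLI. The following is a sketch only: field and constraint names are assumptions based on typical Akamas study definitions, so verify them against the documentation.

```yaml
# Illustrative sketch of the parameter scope, domains, and constraint
parametersSelection:
  - name: jvm.jvm_maxHeapSize
    domain: [32, 1024]    # MB
  - name: jvm.jvm_newSize
    domain: [32, 1024]    # MB
  - name: jvm.jvm_maxHeapFreeRatio
    domain: [41, 100]
  - name: jvm.jvm_gcType
  - name: jvm.jvm_survivorRatio
  - name: jvm.jvm_maxTenuringThreshold
parameterConstraints:
  - name: new_size_within_heap        # hypothetical constraint name
    formula: jvm.jvm_newSize <= jvm.jvm_maxHeapSize
```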
Finally, you can define the steps the study will go through.
Typically, an Akamas study is composed of:
a baseline step, which is used to assess the system performance in the initial configuration (e.g. the settings currently in place for your system)
an optimize step, which is used to execute the actual AI-driven performance optimization
The Akamas study wizard already creates these two steps for you, so unless you want to add more, you only need to configure them.
In this study, we simply configure the baseline configuration by setting jvm_maxHeapSize to 1024MB:
Then click on the Optimize step in the left panel to specify the optimization configuration.
For now, simply set the number of experiments to execute to 30:
Now click on Create Study.
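For reference, the two wizard steps correspond roughly to the following fragment in the YAML study format used by the Akamas CLI. This is a sketch only: field names are assumptions based on typical Akamas study definitions.

```yaml
# Illustrative sketch of the baseline and optimize steps
steps:
  - name: baseline
    type: baseline
    values:
      jvm.jvm_maxHeapSize: 1024   # MB
  - name: optimize
    type: optimize
    numberOfExperiments: 30
```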
You have now created your first study and you are ready to start the optimization!
You can review your study in the Definition tab or go and launch the study by pressing the Start button:
Now, you can select the metrics of interest among all the metrics provided by your components.
You can do that by expanding the Choose metrics section and selecting the three metrics of the renaissance component:
CPU used, representing the total CPU used by the JVM process running the benchmark
Memory used, representing the peak memory used by the JVM process running the benchmark
Response time, representing the response time of the benchmark
You can also click on the jvm component to deselect its corresponding metrics, as they are not available in this example.
After running the study, you can explore the results of your AI-driven performance optimization study.
Select your study from the UI Study menu.
The Summary tab displays high-level study information at a glance, including the best score obtained so far, and a summary of the tuned parameters and their values for the optimal configuration.
By optimally configuring the JVM parameters, Akamas was able to cut the application response time by 40%:
What are the best JVM settings Akamas found that made the application run so much faster?
Without being told anything about how the application works, Akamas learned the best settings for some interesting JVM parameters:
the max heap size was slightly changed
the best garbage collector is Parallel
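For illustration only, the tuned parameters map to standard HotSpot JVM command-line flags such as the ones below. The values shown are placeholders, not the study's actual output, and the benchmark jar name is hypothetical.

```
java -Xmx900m \
     -XX:NewSize=256m \
     -XX:MaxHeapFreeRatio=70 \
     -XX:SurvivorRatio=8 \
     -XX:MaxTenuringThreshold=10 \
     -XX:+UseParallelGC \
     -jar renaissance.jar
```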
The Progress tab allows you to follow the experiments and the execution of their workflow tasks (including logs for troubleshooting).
The Analysis tab shows the experiments' scores over time, plus a detailed table with key parameters and metrics for each experiment.
Properly tuning a modern JVM is a complex challenge, and might require weeks of time and effort even for performance experts. Akamas AI is designed to converge rapidly toward optimal configurations. In this example, Akamas was able to find the optimal JVM configuration after about 16 automated performance experiments:
The Configuration Analysis tab lets you explore the additional insights and benefits of the configurations Akamas explored with respect to other key metrics besides the goal.
From a CPU efficiency perspective, the best configuration Akamas found was able to cut CPU utilization by 33%, while still improving response time by 17%.
The Metrics tab allows you to check the metrics that the telemetry modules collected over time for each experiment.
Congratulations! You have completed your first Akamas optimization of a Java application, using the Akamas UI optimization wizard!
These Quick Guides are meant to support your Akamas Free Trial experience for the Akamas-in-a-sandbox option. Please refer to the Akamas documentation.
Guide | System | Optimization scope
---|---|---
Explore a Kubernetes optimization study | B2B Auth Service app | Kubernetes container resources and JVM options
Optimize a Java application (UI) | Renaissance Java benchmark | JVM options
Optimize a Kubernetes application (CLI) | Online Boutique K8s microservice app | Kubernetes container resources
This guide explains how to navigate an already executed optimization study that is available as part of the Akamas-in-a-sandbox (AIAS) environment. This is intended as a first step to then learn how to create an optimization study leveraging the Akamas UI and the CLI.
Notice: This is a sample study designed to provide an overview of the Akamas capabilities. You cannot run this optimization study, as the target Kubernetes application is not available in the Akamas free trial sandbox environment.
See the next guide to create your first real Akamas optimization!
How to explore an already executed optimization study
How to analyze the results of the optimization study
How to compare configurations with respect to selected KPIs
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
The target system for the optimization study described in this guide is a Java-based microservice acting as the authentication service for multiple digital applications, all running on Kubernetes.
The reference architecture is illustrated in the following diagram. Notice the load testing (JMeter) and monitoring (Elastic APM) tools, used respectively to load the target system and to collect the KPIs used in the optimization study.
The goal for this optimization study is to reduce the cost of the Kubernetes infrastructure.
From the Study page, you get to the following page by following the Workflow link.
As you can see, the Workflow includes several steps, which you can fully explore using the left and right arrows.
These steps are executed at each experiment to set a specific configuration under test, apply the workload (in this study, by leveraging a JMeter integration, as illustrated in the architecture overview), and finally calculate the cost associated with this configuration.
The tab "Analysis" in the study shows how Akamas has explored the configuration space by identifying better configurations, with each dot corresponding to an experiment.
This chart shows how quickly better configurations were identified after just a few dozen experiments (you may also want to change the toggle to look at absolute timeframes), with the best configuration discovered at experiment #34.
You can use the table below the chart to explore all experiments (and trials) by analyzing the corresponding values for the different metrics and parameters.
Moreover, the tab "Metrics" allows you to analyze all the metrics and parameters over time and to compare their behavior under the baseline and any other configuration explored during the study, including the best configuration.
The best configuration with respect to the optimization goal (and constraints) is displayed at the bottom of the Summary tab.
Moreover, the Insight section highlights other configurations of interest with respect to other KPIs.
These KPIs are automatically selected by Akamas based on the metrics included in the optimization goal and constraints, but can also be chosen by clicking on the "KPIs" section of the Summary page.
By selecting the big right arrow in the Insights section, it is possible to visualize all the configurations of interest for all the selected KPIs, and to see how some of them compare with respect to these KPIs by activating the histogram icon on those configurations.
In this case, it is worth noticing that configuration #12, while sub-optimal with respect to the cost reduction goal (-48.9%, versus the -49.1% provided by the best configuration), delivers a slight improvement (+1.4%) in transaction throughput with respect to both the baseline and the best configuration. This sub-optimal configuration might therefore be worth exploring further, and possibly applied in place of the best configuration.
This shows how Akamas Insights support a better decision-making process on which configuration to apply.
You have just finished exploring an Akamas optimization of a Kubernetes application!
This guide describes how to optimize the cost efficiency of Online Boutique, a Kubernetes microservice application, using the Akamas CLI. All the corresponding configuration files and scripts required are available in the Akamas GitHub repository (see here below).
Notice that the Online Boutique application used as the target system runs on a minimal Kubernetes cluster and is available in the Akamas In-A-Sandbox environment.
All the configuration files and scripts referenced in this guide can be downloaded from Akamas' public GitHub repository.
To start, clone the repository.
How to use the Akamas CLI to create an Optimization Study and all its supporting artifacts
How to model the Online Boutique application (which is available in the environment)
How to configure Prometheus to let Akamas collect Kubernetes performance metrics
How to optimize a Kubernetes application using Akamas
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
Install the Akamas CLI and configure your free trial instance URL as the API address, as follows: https://<your-free-trial-sandbox-address>:8443
The Online Boutique application is a cloud-native microservices application implemented by Google as a demo application for Kubernetes. It is a sample, yet fully-fledged, web-based e-commerce application.
To stress the application and validate Akamas AI-based configurations under load, the setup includes the Locust load testing tool which simulates user activity on the application.
Prometheus is used to collect metrics related to the application (throughput and response times) as well as Kubernetes infrastructure (container resource usage).
The next step is to create a workflow describing the steps executed in each experiment of your optimization study.
A workflow in an optimization study for Kubernetes is typically composed of the following tasks:
Update the Kubernetes deployment file of the Kubernetes workloads with the new values of the selected optimization parameters (e.g. CPU and memory requests and limits), using the FileConfigurator operator.
Apply the new deployment files via kubectl. This triggers a rollout of the new deployment in Kubernetes.
Wait until the rollout of all deployments is complete.
Run the performance test.
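The tasks above can be sketched in an Akamas workflow YAML along the following lines. This is an indicative outline only: operator names such as FileConfigurator and Executor follow common Akamas usage, but the task arguments are omitted here; refer to the actual workflow.yaml in the repository for the real definition.

```yaml
# Illustrative outline of the Kubernetes optimization workflow
name: online-boutique-workflow
tasks:
  - name: configure
    operator: FileConfigurator   # writes the new parameter values into the deployment templates
  - name: apply
    operator: Executor           # e.g. kubectl apply -f <deployment-files>
  - name: wait-rollout
    operator: Executor           # e.g. kubectl rollout status deployment/<name>
  - name: run-test
    operator: Executor           # launches the Locust performance test
```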
To create the workflow, launch the following command:
You can verify that this workflow has been created by accessing the corresponding Workflow menu in the Akamas UI:
All scripts and templates used in the steps of this workflow are available in the kubernetes-online-boutique/workflow-artifacts folder.
To model the Online Boutique inside Akamas, we need to create a corresponding System with its components in Akamas, and also associate a Prometheus telemetry instance to the system to allow Akamas to collect the performance metrics.
First, you need to log in to Akamas with the following command:
Start by installing the necessary optimization packs:
Then, create the Online Boutique system.
You should see a message like this:
Now, you can create all the components by running:
Lastly, configure the telemetry instance:
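Taken together, the sequence of CLI steps above looks roughly like the sketch below. The subcommand shapes follow typical Akamas CLI usage, but the exact pack, file, and system names here are placeholders, not the repository's actual values; check `akamas --help` and the repository's akamas folder for the real ones.

```
akamas login
akamas install optimization-pack <pack-name>          # e.g. the Kubernetes and Web Application packs
akamas create system <system-file>.yaml
akamas create component <component-file>.yaml <system-name>
akamas create telemetry-instance <telemetry-file>.yaml <system-name>
```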
At this point, you can access the Akamas UI and verify that the System and its components are listed in the Systems menu:
Notice that this System leverages the following Optimization Packs:
Kubernetes: it provides a component type required to model each Kubernetes Pod - one for each Deployment in the Online Boutique.
Web Application: it models the end-to-end metrics of the Online Boutique, such as the application response time and throughput.
It's now time to define the optimization study. The overall objective is to increase the cost efficiency of the application, without impacting application reliability in terms of response time or error rates.
To achieve that objective, you create an Akamas study with the goal of maximizing the ratio between application throughput and cloud cost, where:
application throughput is the transactions per second as measured by the load-testing tool
cloud cost is the total cost to run Kubernetes microservices on the cloud, and is a function of the CPU and memory requests assigned to each container. We assume a sample pricing of $29 per CPU core/month and $3.2 per memory GB/month.
Hence, a good configuration is one that either increases throughput with the same cloud cost, or that keeps throughput constant but with a lower cloud cost.
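To make the cost model concrete, here is a minimal sketch of the score arithmetic using the sample pricing from this guide. The helper names are hypothetical, introduced only for illustration.

```python
# Sample pricing from the guide: $29 per CPU core/month, $3.2 per memory GB/month.
CPU_PRICE_MONTH = 29.0
MEM_PRICE_MONTH = 3.2

def monthly_cost(cpu_cores: float, mem_gb: float) -> float:
    """Hypothetical helper: monthly cloud cost of one container's resource requests."""
    return cpu_cores * CPU_PRICE_MONTH + mem_gb * MEM_PRICE_MONTH

def cost_efficiency(throughput_tps: float, total_cost: float) -> float:
    """The study's score: application throughput per dollar of monthly cloud cost."""
    return throughput_tps / total_cost

# A container requesting 0.5 cores and 1 GB costs 0.5*29 + 1*3.2 = $17.70/month
print(round(monthly_cost(0.5, 1.0), 2))
```

Either raising throughput at the same cost or holding throughput steady at a lower cost increases this score, which is exactly the trade-off the study optimizes.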
To avoid impacting application reliability, you can define Akamas metric constraints that keep the transaction response time below 500 milliseconds and the error rate below 2%.
Here is the relevant section of the study:
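Since the original snippet is not reproduced here, the following sketch gives the general shape of such a goal-and-constraints section; metric and field names are assumptions and should be checked against the study.yaml in the repository.

```yaml
# Illustrative sketch of the goal and constraints section
goal:
  objective: maximize
  function:
    formula: transactions_throughput / cost   # throughput per dollar of cloud cost
  constraints:
    absolute:
      - transactions_response_time <= 500     # milliseconds
      - transactions_error_rate <= 2          # percent
```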
As regards the parameters to optimize, in this example Akamas tunes the CPU and memory limits (requests are set equal to the limits) of each deployment in the Online Boutique application, for a total of 22 parameters. Here is the relevant section of the study:
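As the original snippet is not reproduced here, the fragment below only sketches the general shape of the parameter selection; the component and parameter names are illustrative, not the repository's actual values.

```yaml
# Illustrative sketch of the parameter selection for one deployment
parametersSelection:
  - name: frontend.cpu_limit        # hypothetical name; CPU limit in millicores
    domain: [100, 1000]
  - name: frontend.memory_limit     # hypothetical name; memory limit in MB
    domain: [64, 1024]
  # ...one cpu_limit/memory_limit pair per Online Boutique deployment,
  # for a total of 22 parameters (requests are set equal to the limits)
```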
You can review the complete optimization study by looking at the study.yaml file in the akamas/studies folder.
You can now create the optimization study:
and then start the optimization:
You can now explore this study from the Study menu in the Akamas UI and then move to the Analysis tab.
As the optimization study executes the different experiments, this chart will display more points and their associated score.
From the Study page, you get to the following page by following the System link.
As you can see, the System is represented by several components, whose parameters and metrics you can explore:
the JVM runtime with 5 parameters among the 61 available selected in the optimization scope
the Kubernetes Container, whose parameters (CPU and Memory limits and requests) have all been selected in the optimization scope
the Kubernetes Workload, whose only parameter is the number of replicas
the web application, whose metrics are used to measure the impact of the configurations experimented during the study - the most important one being the associated cost, which represents the optimization goal
Let's now take a look at the results and benefits Akamas achieved in this optimization study. Mind that you might achieve different results, as the actual best configuration may depend on your specific setup (e.g., operating system, cloud or virtualization platform, and hardware).
First of all, the best configuration was quickly identified, providing a cost-efficiency increase of 17%, without affecting the response time.
Let's look at the best configuration from the Summary tab: this configuration specifies the right amount of CPU and memory requests and limits for each microservice.
It’s interesting to notice that Akamas did adjust the CPU and memory requests and limits of every single microservice:
For some microservices (e.g., frontend), both the CPU and memory resources were increased.
For others (e.g., paymentservice), the memory was decreased while the CPU was slightly increased.
For some others (e.g., productcatalogservice), only the memory was decreased.
Let's navigate the Insights section, which provides details of the best experiment for each of the selected KPIs.
The best experiments according to the selected KPIs are automatically tagged and listed in the table. Experiment 34 reached the best efficiency, while experiment 53 achieved the best throughput together with a decrease in response time. Also, notice that a couple of the identified configurations improved the application response time even further (by up to 87%) while not representing the best configuration.
The experiments can be plotted, with the results shown as below.
This optimization study shows how it is possible to tune a Kubernetes application made up of several microservices - a complex challenge that typically requires days or weeks of time and effort even for expert performance engineers, developers, or SREs. With Akamas, the optimization study took only about 4 hours to automatically identify the optimal configuration for each Kubernetes microservice.
Great! You have learned how to use the CLI to create an Akamas study and its supporting artifacts to optimize a Kubernetes application.