The target system for the optimization study described in this guide is a Java-based microservice acting as the authentication service for multiple digital applications, all running on Kubernetes.
The reference architecture is illustrated in the following diagram. Notice the load testing (JMeter) and monitoring (Elastic APM) tools, used respectively to put load on the target system and to collect the KPIs used in the optimization study.
The goal of this optimization study is to reduce the cost of the Kubernetes infrastructure.
To create the study, go to the Study menu in the UI and click on the Create button, then select Study:
The study creation wizard will now walk you through the different steps required to create the study.
First of all, choose a name and description for your study, then select the renaissance system and the renaissance-optimize workflow:
Access the Akamas-in-a-sandbox (AIAS) environment and select "Studies" from the left-hand-side menu.
Select the preloaded "Authentication service - minimize K8s cost" study.
The Summary tab displays high-level study information at a glance, including the best score obtained so far, a summary of the optimized parameters, and their values for the best configuration.
At first glance, the upper part of the Study page shows:
the ID of the Study - this can be used to run commands against this specific study
the System under test, that is "Auth Service"
the workflow used to run experiments against this system, that is "Auth Service"
In the lower part of the Summary tab you can see:
the best score, that is the best result achieved in the optimization with respect to the baseline, according to the specified goal
the optimization goal & constraints (see below)
the selected KPIs (more on this later, in the Explore Results section)
the optimization scope, that is the number of parameters that Akamas tuned in this study
Akamas is a goal-driven optimization solution, meaning that you can specify the goal you want to achieve, including your application performance and reliability constraints. If you click on the "Details" button in the Goal card, you can see that:
the goal has been set to minimize the application cost, which is a metric representing the cost of the K8s deployments under optimization, based on CPU and memory requests
three constraints have been set to ensure that cost is reduced without impacting application performance and reliability. In this case, application response time, transaction throughput, and error rate must not differ by more than 10% from their respective values in the baseline configuration.
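Even though this guide only uses the UI, it may help to see how such a goal could be expressed in a study definition. The following is a rough, hypothetical sketch: the field layout, the component and metric names, and the absolute thresholds (derived from assumed baseline values) are illustrative and are not taken from the actual study.

```yaml
# Hypothetical sketch of a cost-minimization goal with reliability constraints.
# Component and metric names are placeholders, and each threshold encodes
# "at most 10% worse than an assumed baseline measurement".
goal:
  objective: minimize
  function:
    formula: web_application.cost            # cost derived from CPU and memory requests
  constraints:
    absolute:
      - name: response_time_degradation
        formula: web_application.response_time <= 220   # assumed baseline of ~200 ms
      - name: throughput_degradation
        formula: web_application.throughput >= 900      # assumed baseline of ~1000 tps
      - name: error_rate_degradation
        formula: web_application.error_rate <= 0.011    # assumed baseline of ~1%
```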
From the Study page, you can explore the System, which is the application being optimized.
As you can see, the System is represented by several components, whose parameters and metrics you can explore:
the JVM powering the application. In this example, 5 common JVM parameters were selected for optimization, such as the heap size and the garbage collector (click on the card to see the full set of 61 parameters available for the OpenJDK 11 runtime)
the Kubernetes Container, whose parameters CPU and Memory limits and requests have all been selected in the optimization scope
the Kubernetes Workload, which tracks deployment-level metrics like the number of pod replicas
the Web Application, whose metrics are used to measure the impact of the configurations tested during the study, including the cost and the application performance metrics
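To make these parameters more concrete, here is a minimal, illustrative Kubernetes manifest showing where the container resources and the JVM options tuned by Akamas actually live. All names, images, and values below are placeholders, not the real Auth Service deployment.

```yaml
# Illustrative only: a placeholder Deployment for a Java microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth-service
          image: example.registry/auth-service:latest   # placeholder image
          resources:
            requests:                  # Kubernetes Container parameters tuned by Akamas
              cpu: "500m"
              memory: "768Mi"
            limits:
              cpu: "1000m"
              memory: "1024Mi"
          env:
            - name: JAVA_TOOL_OPTIONS  # standard hook to pass options to the JVM
              value: "-Xmx768m -XX:+UseParallelGC"   # the kind of JVM parameters Akamas tunes
```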
This guide explains how to navigate an already executed optimization study that is available as part of the Akamas-in-a-sandbox (AIAS) environment. This is intended as a first step to then learn how to create an optimization study leveraging the Akamas UI and the CLI.
Notice: This is a sample study designed to provide an overview of the Akamas capabilities. You cannot run this optimization study, as the target Kubernetes application is not available in the Akamas free trial sandbox environment.
See the next guide to create your first real Akamas optimization!
How to explore an already executed optimization study
How to analyze the results of the optimization study
How to compare configurations with respect to selected KPIs
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
The following picture represents the high-level architecture of how Akamas operates in this scenario.
The optimization goal is to make Renaissance run faster, a common optimization need for Java applications.
Therefore, the optimization study needs to have the following properties:
The goal of the optimization will be to minimize the response time of the Java application (another goal would be reducing the JVM memory usage)
The parameters to be optimized include some common JVM options like the heap size and the garbage collector type
The metrics that you will measure for each experiment will be the application response time, CPU, and memory usage
The study will execute up to 30 experiments, for a duration of about 1 hour
The best configuration Akamas found (for the defined optimization goal and constraints) is displayed at the bottom of the Summary tab. In this table, you can see the optimal value Akamas AI found for each parameter defined in the study optimization scope. You can also see the baseline value, which is the original value the parameter had before the optimization.
Moreover, the Insight section highlights other interesting configurations that Akamas found during the optimization process, with respect to the defined KPIs.
These KPIs are automatically selected by Akamas based on the metrics included in the optimization goal and constraints, but can also be customized by clicking on the "KPIs" section of the Summary page.
By selecting the big right arrow from the Insight section, you can visualize all of the configurations of interest for all the selected KPIs.
You can also quickly compare how different configurations score in terms of the KPIs, by clicking the histogram icon on those configurations of interest.
In this case, it is worth noticing that one configuration (#12) achieves a slightly lower cost reduction (-48.9%, versus -49.1% for the best configuration) but provides a slight improvement (+1.4%) in transaction throughput with respect to both the baseline and the best configuration.
With Insights, you can discover the configurations that are most interesting for optimizing your application's efficiency and performance, which supports a better-informed decision on which configuration to apply.
Congratulations! You have finished the exploration of your first Akamas optimization study. Now things get interesting: you can continue your journey by creating and running your first study! Your free trial sandbox is equipped with sample apps that you can use to play with Akamas AI-driven optimization. Follow the second guide to optimize the performance of a Java app using the UI, or the third guide to optimize the resource efficiency of K8s deployment requests and limits using the CLI.
From the Study page, you can reach the following page by following the Workflow link.
As you can see, the Workflow includes steps that are used to fully automate the performance optimization process. You can explore it by following the left and right arrows.
Akamas executes the steps defined in the workflow for each tuning experiment.
In this example, the workflow performs the following tasks:
writes the new configuration (the parameter values recommended by Akamas AI) to the K8s deployment file
starts the load test (in this study, this was done by leveraging a JMeter integration, as illustrated in the architecture overview)
calculates the cloud cost associated with the configuration.
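As a rough idea of how such a workflow can be structured, here is a hypothetical sketch. It is not the actual workflow of this study: the task names, hosts, file paths, and scripts are assumptions, and only the FileConfigurator operator (mentioned later in these guides) is named as Akamas documents it.

```yaml
# Hypothetical workflow sketch: all task names, paths, and scripts are assumptions.
name: auth-service-optimize
tasks:
  - name: Configure deployment
    operator: FileConfigurator        # writes the parameter values recommended by Akamas AI
    arguments:                        # into the K8s deployment file, starting from a template
      source:
        path: /home/akamas/templates/auth-service.yaml.templ   # assumed template path
      target:
        path: /home/akamas/auth-service.yaml                   # assumed output path
  - name: Apply deployment and run load test
    operator: Executor                # runs a command on a target host
    arguments:
      command: kubectl apply -f /home/akamas/auth-service.yaml && ./run-jmeter-test.sh   # assumed script
  - name: Compute cloud cost
    operator: Executor
    arguments:
      command: ./compute-cost.sh      # assumed script deriving cost from CPU and memory requests
```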
It’s now time to select which parameters Akamas needs to tune in this study.
Select the jvm component and add the following JVM parameters:
jvm_maxHeapSize
jvm_newSize
jvm_maxHeapFreeRatio
jvm_gcType
jvm_survivorRatio
jvm_maxTenuringThreshold
It is also possible to tell Akamas the range of values each parameter can take: this way, the AI will only suggest configurations that respect the desired limits. For example, in the sandbox environment, the max heap size of the JVM has to be limited to 1GB.
Select the following ranges of values by clicking on EDIT DOMAIN for the corresponding parameter:
jvm_maxHeapSize: from 32MB to 1024MB
jvm_newSize: from 32MB to 1024MB
jvm_maxHeapFreeRatio: from 41 to 100
Akamas supports constraints among parameters to avoid incorrect combinations of values. In this example, the JVM newSize parameter needs to be lower than or equal to maxHeapSize. You can add a new constraint to tell Akamas to take this relationship into account during the optimization:
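For reference, the same parameter selection, domains, and constraint could be expressed in a study definition roughly as follows. This is a hedged sketch: the field names are approximate, and the comments only indicate which OpenJDK options the parameters correspond to.

```yaml
# Rough sketch with approximate field names; domains are in MB where applicable.
parametersSelection:
  - name: jvm.jvm_maxHeapSize          # maps to -Xmx
    domain: [32, 1024]
  - name: jvm.jvm_newSize              # maps to -XX:NewSize
    domain: [32, 1024]
  - name: jvm.jvm_maxHeapFreeRatio     # maps to -XX:MaxHeapFreeRatio
    domain: [41, 100]
  - name: jvm.jvm_gcType               # garbage collector, e.g. -XX:+UseParallelGC
  - name: jvm.jvm_survivorRatio        # maps to -XX:SurvivorRatio
  - name: jvm.jvm_maxTenuringThreshold # maps to -XX:MaxTenuringThreshold
parameterConstraints:
  - name: newSize_within_maxHeapSize
    formula: jvm.jvm_newSize <= jvm.jvm_maxHeapSize
```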
The study will run for 30 experiments or about 1 hour. After running the study, you can explore the results of your AI-driven performance optimization study.
Select your study from the UI Study menu.
The Summary tab displays high-level study information at a glance, including the best score obtained so far, and a summary of the tuned parameters and their values for the optimal configuration.
In this example, Akamas was able to cut the application response time by 40%: a significant result, achieved simply by optimally configuring the application runtime, without any code changes!
What are the best JVM settings Akamas found that made the application run so much faster?
Without being told anything about how the application works, Akamas learned the best settings for some interesting JVM parameters:
the max heap size was slightly changed
the best garbage collector is Parallel
The Progress tab lets you follow the experiments and the execution of their workflow tasks (including logs for troubleshooting).
The Analysis tab shows the experiments' scores over time, plus a detailed table with key parameters and metrics for each experiment.
Properly tuning a modern JVM is a complex challenge, and might require weeks of time and effort even for performance experts. Akamas AI is designed to converge rapidly toward optimal configurations. In this example, Akamas was able to find the optimal JVM configuration after about 16 automated performance experiments:
The Configuration Analysis tab lets you explore additional insights and benefits of the configurations Akamas evaluated, with respect to other key metrics beyond the goal.
Interestingly, another configuration Akamas found was able to cut CPU utilization by 33%, while still improving response time by 17%. So you improved the performance and reduced costs, at the same time.
The Metrics tab allows you to check the metrics that were collected by the telemetry for each experiment.
Congratulations! You have finished your first study! Continue your journey by following the third guide to learn how to optimize the resource efficiency of K8s deployments requests and limits using the CLI.
This guide describes how to create an Akamas study to optimize the performance of a Java application. To do that, you will create an Akamas study using the UI wizard.
The target application to optimize is Renaissance, a sample Java benchmark that is available in the sandbox environment.
The optimization goal is to make Renaissance run faster (minimize the response time of the benchmark iterations). You will leverage Akamas to automatically identify the optimal JVM parameters (like the size of the heap and inner pools, type of garbage collector, etc.) that can speed up the application.
How to create your first study using the Akamas UI
How to conduct an optimization with performance constraints
How to analyze and identify performance insights from the results of a study
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
Now, you can select the metrics of interest among all the metrics provided by your components.
You can do that by expanding the Choose metrics section and selecting the three metrics of the renaissance component:
CPU used, representing the total CPU used by the JVM process running the benchmark
Memory used, representing the peak memory used by the JVM process running the benchmark
Response time, representing the response time of the benchmark
You can also click on the jvm component to deselect its corresponding metrics, as they are not available in this example.
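In a study file, an equivalent selection would look roughly like this. The field name and the metric identifiers are assumptions for illustration; in the UI the metrics simply appear with their display names.

```yaml
# Rough sketch with assumed metric identifiers for the renaissance component.
metricsSelection:
  - renaissance.cpu_used         # total CPU used by the JVM process running the benchmark
  - renaissance.mem_used         # peak memory used by the JVM process
  - renaissance.response_time    # response time of the benchmark
```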
The "Analysis" tab you can monitor the optimization progress. Each dot is an experiment, e.g. a load test with a different configuration applied by Akamas. You can see the score of each experiment and understand how Akamas has explored the configuration space by identifying better configurations over time as a result of Akamas AI learning the application behaviour.
This chart shows how quickly better configurations were identified after a few experiments. The best configuration was discovered in experiment #12.
You can use the table below the chart to explore all experiments by analyzing the corresponding values for the different metrics and parameters.
Moreover, the "Metrics" tab allows you to analyze all the metrics and parameters over time and to compare their behavior. By default, the chart lets you compare the baseline and the best configuration, using a set of default metrics based on the selected KPIs. Feel free to explore the results by selecting any other experiment (trials) and metric you are interested in to evaluate the optimization benefits.
The next step is to define your optimization goal.
Leave the MINIMIZE option, select the renaissance component, and then choose the response_time metric for the goal of reducing the benchmark execution time.
Finally, you can define the steps the study will go through.
Typically, an Akamas study is composed of:
a baseline step, which is used to assess the system performance in the initial configuration (e.g. the settings currently in place for your system)
an optimize step, which is used to execute the actual AI-driven performance optimization
The Akamas study wizard already creates these two steps for you.
In this study, we simply define the baseline configuration by setting jvm_maxHeapSize to 1024MB:
Then click on the Optimize step in the left panel to specify the optimization configuration.
For now, simply set the number of experiments to execute to 30:
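Behind the scenes, the wizard produces a steps section roughly like the following. This is a hedged sketch with approximate field names, not the exact output of the wizard.

```yaml
# Hedged sketch of the two steps created by the wizard; field names are approximate.
steps:
  - name: baseline
    type: baseline
    values:
      jvm.jvm_maxHeapSize: 1024    # MB, the initial configuration to assess
  - name: optimize
    type: optimize
    numberOfExperiments: 30        # how many configurations Akamas AI will try
```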
Now click on Create Study.
You have now created your first study and you are ready to start the optimization!
You can review your study in the Definition tab or go and launch the study by pressing the Start button:
This guide describes how to optimize the cost efficiency of a Kubernetes microservice application.
The target application is Online Boutique, a popular app developed by Google for demo purposes. Online Boutique runs on a minimal Kubernetes cluster within the Akamas sandbox environment so you don't need to install anything on your side.
You will use the Akamas CLI to create this study, so as to familiarize yourself with the Akamas Optimization-as-Code approach.
All the configuration files and scripts referenced in this guide can be downloaded from Akamas' public GitHub repository.
To start, clone the repository.
How to use the Akamas CLI to create an Optimization Study and all its supporting artifacts
How to model the Online Boutique application (which is available in the environment)
How to configure Prometheus to let Akamas collect Kubernetes performance metrics
How to optimize a Kubernetes application using Akamas
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
Install the Akamas CLI and set your free trial instance URL as the API address, as follows: https://<your-free-trial-sandbox-address>:8443
Welcome to the Akamas Free Trial!
These guides are meant to support your Akamas Free Trial experience. We suggest going through all of them in the order presented.
B2B Auth Service app: Kubernetes container resources and JVM options
Renaissance Java benchmark: JVM options
Online Boutique K8s microservice app: Kubernetes container resources
It's now time to define the optimization study. The overall objective is to increase the cost efficiency of the application, without impacting application reliability in terms of response time or error rates.
To achieve that objective, you create an Akamas study with the goal of maximizing the ratio between application throughput and cloud cost, where:
application throughput is the transactions per second as measured by the load-testing tool
cloud cost is the total cost to run the Kubernetes microservices on the cloud, and is a function of the CPU and memory requests assigned to each container. We assume a sample pricing of $29 per CPU core per month and $3.2 per GB of memory per month.
Hence, a good configuration is one that either increases throughput with the same cloud cost, or that keeps throughput constant but with a lower cloud cost.
To avoid impacting application reliability, you can define Akamas metric constraints requiring the transaction response time to stay below 500 milliseconds and the error rate to stay below 2%.
Here is the relevant section of the study:
As regards the parameters to optimize, in this example Akamas tunes the CPU and memory limits (requests are set equal to the limits) of each deployment in the Online Boutique application, for a total of 22 parameters. Here is the relevant section of the study:
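As a rough, hedged illustration only (these are not the actual sections of study.yaml: the field names, component names, and formula below are assumptions), the goal and parameter sections could look something like the sketch that follows. With the sample pricing above, a container requesting 0.5 CPU cores and 0.5 GB of memory contributes roughly 0.5 × 29 + 0.5 × 3.2 ≈ $16.1 per month to the cloud cost.

```yaml
# Hedged sketch only: component and metric names below are assumptions.
goal:
  objective: maximize
  function:
    # application throughput divided by the monthly cloud cost, where cost is
    # computed as 29 $/core/month * CPU requests + 3.2 $/GB/month * memory requests
    formula: webapp.throughput / webapp.cost
  constraints:
    absolute:
      - name: response_time
        formula: webapp.response_time <= 500   # milliseconds
      - name: error_rate
        formula: webapp.error_rate <= 0.02     # 2%

parametersSelection:                            # 2 parameters per deployment, 22 in total
  - name: frontend.cpu_limit                    # requests are set equal to the limits
  - name: frontend.memory_limit
  - name: cartservice.cpu_limit
  - name: cartservice.memory_limit
  # ...one cpu_limit and one memory_limit for each remaining deployment
```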
You can review the complete optimization study by looking at the study.yaml file in the akamas/studies folder.
You can now create the optimization study:
and then start the optimization:
You can now explore this study from the Study menu in the Akamas UI and then move to the Analysis tab.
As the optimization study executes the different experiments, this chart will display more points and their associated score.
The next step is to create a workflow describing the steps executed in each experiment of your optimization study.
A workflow in an optimization study for Kubernetes is typically composed of the following tasks:
Update the Kubernetes deployment file of the Kubernetes workloads with the new values of the selected optimization parameters (e.g. CPU and memory requests and limits), using the FileConfigurator operator.
Apply the new deployment files via kubectl. This triggers a rollout of the new deployment in Kubernetes.
Wait until the rollout is complete.
Run the performance test.
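As a hedged sketch of how the apply-and-wait portion of such a workflow can be expressed (the configuration task follows the FileConfigurator pattern shown earlier), consider the example below. The deployment names, paths, and scripts are placeholders; the actual task definitions are in the workflow files referenced below.

```yaml
# Hedged sketch of the rollout-related tasks; all names and paths are placeholders.
name: boutique
tasks:
  - name: Apply new deployment
    operator: Executor
    arguments:
      command: kubectl apply -f /home/akamas/boutique/frontend.yaml      # assumed path
  - name: Wait for rollout to complete
    operator: Executor
    arguments:
      command: kubectl rollout status deployment/frontend --timeout=300s # assumed deployment name
  - name: Run performance test
    operator: Executor
    arguments:
      command: /home/akamas/boutique/run-locust-test.sh                  # assumed Locust script
```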
To create the workflow, launch the following command:
You can verify that this workflow has been created by accessing the "boutique" workflow in the Workflow menu:
All scripts and templates used in the steps of this workflow are available in the kubernetes-online-boutique/workflow-artifacts folder.
To model the Online Boutique inside Akamas, you need to create a corresponding System with its components, and associate a Prometheus telemetry instance with the system so that Akamas can collect the performance metrics.
First, you need to log in to Akamas with the following command:
Start by installing the necessary optimization packs:
Then, create the Online Boutique system using the artifacts you previously downloaded:
You should see a message like this:
Now, you can create all the components by running:
Lastly, create the telemetry instance:
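The definition of the telemetry instance is part of the artifacts you downloaded; as a rough, hedged sketch (the field names and values are assumptions), a Prometheus telemetry instance looks something like this:

```yaml
# Hedged sketch: the provider configuration keys and values are assumptions.
provider: Prometheus
config:
  address: prometheus.example.local   # address of the Prometheus server monitoring the cluster
  port: 9090                          # default Prometheus port
```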
At this point, you can access the Akamas UI and verify that the Online Boutique system and its components are created under the Systems menu:
Notice that this System leverages the following Optimization Packs:
Kubernetes: it provides a component type required to model each Kubernetes Pod - one for each Deployment in the Online Boutique.
Web Application: it models the end-to-end metrics of the Online Boutique, such as the application response time and throughput.
Let's now take a look at the results and benefits Akamas achieved in this optimization study.
First of all, the best configuration was quickly identified, providing a cost-efficiency increase of 17%, without affecting the response time.
Let's look at the best configuration in the Summary tab. Here you can see the values Akamas AI identified for all the CPU and memory requests & limits, given the goal of maximizing cost efficiency while matching the application performance and reliability constraints.
It is interesting to notice the best configuration Akamas found for each microservice:
For some microservices (e.g., frontend), both the CPU and memory resources were increased.
For others (e.g., paymentservice), the memory was decreased while the CPU was slightly increased.
For some others (e.g., productcatalogservice), only the memory was decreased.
Let's navigate the Insights section, which provides details of the best experiment for each of the selected KPIs.
The best experiments according to the selected KPIs are automatically tagged and listed in the table. Interestingly, experiment 34 reached the best efficiency, while experiment 53 achieved the best throughput and a significant decrease in the application response time. Also, notice that a couple of identified configurations improved the application response time even more (up to 87%)!
The experiments can be plotted with the histogram icon to better analyze the impact of the selected configurations.
This optimization study shows that it is possible to tune a Kubernetes application made up of several microservices, a complex challenge that typically requires days or weeks of time and effort even for expert performance engineers, developers, or SREs. With Akamas, the optimization study took only about 4 hours to automatically identify the optimal configuration for each Kubernetes microservice.
Congratulations! You have completed your first study to optimize a Kubernetes application! As a next step, take a look at all the other guides available in our free trial sandbox. Have you already completed all of the free trial guides? Get in touch with us and share your feedback!
The Online Boutique application is a cloud-native microservices application implemented by Google as a demo application for Kubernetes. It is a sample, yet fully-fledged, web-based e-commerce application.
To stress the application and validate Akamas AI-based configurations under load, the setup includes the Locust load testing tool which simulates user activity on the application.
Prometheus is used to collect metrics related to the application (throughput and response times) as well as to the Kubernetes infrastructure (container resource usage).