It's now time to define the optimization study. The overall objective is to increase the cost efficiency of the application, without impacting application reliability in terms of response time or error rates.
To achieve that objective, you create an Akamas study with the goal of maximizing the ratio between application throughput and cloud cost, where:
application throughput is the transactions per second as measured by the load-testing tool
cloud cost is the total cost of running the Kubernetes microservices on the cloud, and is a function of the CPU and memory requests assigned to each container. We assume sample pricing of $29 per CPU core per month and $3.20 per GB of memory per month.
Hence, a good configuration is one that either increases throughput at the same cloud cost, or keeps throughput constant at a lower cloud cost.
To avoid impacting application reliability, you define Akamas metric constraints requiring the transaction response time to stay below 500 milliseconds and the error rate to stay below 2%.
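As a back-of-the-envelope illustration of this goal (a minimal sketch, not Akamas code; the pod names and resource values below are made up), the score and constraints can be expressed as:

```python
# Sketch of the study goal: maximize throughput per dollar of cloud cost.
# Uses the sample pricing from this guide; pod names and values are illustrative.

CPU_COST_PER_CORE = 29.0   # $ per CPU core per month (sample pricing)
MEM_COST_PER_GB = 3.2      # $ per GB of memory per month (sample pricing)

def cloud_cost(requests):
    """Monthly cloud cost as a function of per-container CPU/memory requests."""
    return sum(cpu * CPU_COST_PER_CORE + mem_gb * MEM_COST_PER_GB
               for cpu, mem_gb in requests.values())

def score(throughput_tps, requests):
    """The study goal: application throughput divided by cloud cost."""
    return throughput_tps / cloud_cost(requests)

def within_constraints(response_time_ms, error_rate):
    """Metric constraints: response time < 500 ms and error rate < 2%."""
    return response_time_ms < 500 and error_rate < 0.02

# Two illustrative containers, each as (CPU cores, memory GB) requests.
requests = {"frontend": (0.5, 0.5), "cartservice": (0.3, 0.25)}
print(f"cost: ${cloud_cost(requests):.2f}/month, score: {score(100, requests):.2f}")
```

A configuration that lowers the requests while keeping throughput and the constraint metrics unchanged yields a higher score, which is exactly what the study rewards.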
Here is the relevant section of the study:
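The goal section might look along these lines (a sketch only: the exact Akamas study schema, metric names, and formula syntax are in the study.yaml file in the repository):

```yaml
goal:
  objective: maximize
  function:
    formula: web_application.transactions_throughput / cloud_cost
  constraints:
    absolute:
      - web_application.transactions_response_time <= 500   # milliseconds
      - web_application.transactions_error_rate <= 0.02     # 2%
```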
As regards the parameters to optimize, in this example Akamas tunes the CPU and memory limits (requests are set equal to the limits) of each deployment in the Online Boutique application, for a total of 22 parameters. Here is the relevant section of the study:
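As a sketch (component names and domain values here are hypothetical; the real definitions are in the repository's study.yaml), the parameter selection could look like:

```yaml
parametersSelection:
  # one cpu_limit and one memory_limit parameter per deployment
  - name: frontend.cpu_limit
    domain: [100, 1000]     # millicores
  - name: frontend.memory_limit
    domain: [64, 512]       # MiB
  - name: cartservice.cpu_limit
    domain: [100, 1000]
  # ... and so on for the remaining deployments, 22 parameters in total
```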
You can review the complete optimization study by looking at the study.yaml file in the akamas/studies folder.
You can now create the optimization study:
and then start the optimization:
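For reference, the two commands look along these lines (an illustrative sketch: the actual file path and study name come from the repository artifacts you cloned earlier):

```shell
akamas create study akamas/studies/study.yaml
akamas start study 'Online Boutique'
```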
You can now explore this study from the Study menu in the Akamas UI and then move to the Analysis tab.
As the optimization study executes the different experiments, this chart will display more points and their associated score.
To model the Online Boutique inside Akamas, you need to create a corresponding System with its components, and associate a Prometheus telemetry instance with the system so that Akamas can collect the performance metrics.
First, log in to Akamas with the following command:
Start by installing the necessary optimization packs:
Then, create the Online Boutique system using the artifacts you previously downloaded:
You should see a message like this:
Now, you can create all the components by running:
Lastly, create the telemetry instance:
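Putting the steps above together, the command sequence looks roughly as follows (a sketch: subcommand spellings, file names, and the system name may differ slightly in your CLI version and repository checkout):

```shell
akamas login                                   # enter your sandbox credentials
akamas install optimization-pack Kubernetes
akamas install optimization-pack Web-Application
akamas create system system.yaml
akamas create component components/frontend.yaml 'Online Boutique'
akamas create telemetry-instance telemetry.yaml 'Online Boutique'
```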
At this point, you can access the Akamas UI and verify that the Online Boutique system and its components are created under the Systems menu:
Notice that this System leverages the following Optimization Packs:
Kubernetes: it provides a component type required to model each Kubernetes Pod - one for each Deployment in the Online Boutique.
Web Application: it models the end-to-end metrics of the Online Boutique, such as the application response time and throughput.
Let's now take a look at the results and benefits Akamas achieved in this optimization study.
First of all, the best configuration was quickly identified, providing a cost-efficiency increase of 17%, without affecting the response time.
Let's look at the best configuration in the Summary tab. Here you can see the values Akamas AI identified for all the CPU and memory requests and limits, given the goal of maximizing cost efficiency while meeting the application performance and reliability constraints.
It's interesting to notice the best configuration Akamas found for each individual microservice:
For some microservices (e.g., frontend), both the CPU and memory resources were increased.
For others (e.g., paymentservice), the memory was decreased while the CPU was slightly increased.
For some others (e.g., productcatalogservice), only the memory was decreased.
Let's navigate the Insights section, which provides details of the best experiment for each of the selected KPIs.
The best experiments according to the selected KPIs are automatically tagged and listed in the table. Interestingly, experiment 34 reached the best efficiency, while experiment 53 achieved the best throughput and a significant decrease in the application response time. Also, notice that a couple of identified configurations improved the application response time even more (up to 87%)!
The experiments can be plotted with the histogram icon to better analyze the impact of the selected configurations.
This optimization study shows how it is possible to tune a Kubernetes application made up of several microservices. This is a complex challenge that typically requires days or weeks of effort even for expert performance engineers, developers, or SREs. With Akamas, the optimization study took only about 4 hours to automatically identify the optimal configuration for each Kubernetes microservice.
Online Boutique is a cloud-native microservices application implemented by Google as a demo application for Kubernetes. It is a sample web-based, yet fully-fledged, e-commerce application.
To stress the application and validate Akamas AI-based configurations under load, the setup includes the Locust load testing tool which simulates user activity on the application.
Prometheus is used to collect metrics related to the application (throughput and response times) as well as Kubernetes infrastructure (container resource usage).
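For instance, container resource usage typically comes from cAdvisor metrics exposed to Prometheus, with queries along these lines (the namespace label is an assumption about this setup):

```
# Per-pod CPU usage (cores):
sum(rate(container_cpu_usage_seconds_total{namespace="boutique"}[5m])) by (pod)

# Per-pod memory working set (bytes):
sum(container_memory_working_set_bytes{namespace="boutique"}) by (pod)
```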
Congratulations! You have completed your first study to optimize a Kubernetes application! As a next step, take a look at all the other guides available in our free trial sandbox. Have you already completed all of the free trial guides? Get in touch with us and share your feedback!
This guide describes how to optimize the cost efficiency of a Kubernetes microservice application.
The target application is Online Boutique, a popular app developed by Google for demo purposes. Online Boutique runs on a minimal Kubernetes cluster within the Akamas sandbox environment so you don't need to install anything on your side.
You will use the Akamas CLI to create this study and familiarize yourself with the Akamas Optimization-as-Code approach.
All the configuration files and scripts referenced in this guide can be downloaded from Akamas' public GitHub repository.
To start, clone the repository.
How to use the Akamas CLI to create an Optimization Study and all its supporting artifacts
How to model the Online Boutique application (which is available in the environment)
How to configure Prometheus to let Akamas collect Kubernetes performance metrics
How to optimize a Kubernetes application using Akamas
Access to the Akamas-In-A-Sandbox (AIAS) environment with valid credentials.
Install the Akamas CLI and configure your free trial instance URL as the API address, as follows: https://<your-free-trial-sandbox-address>:8443
The next step is to create a workflow describing the steps executed in each experiment of your optimization study.
A workflow in an optimization study for Kubernetes is typically composed of the following tasks:
Update the Kubernetes deployment file of the Kubernetes workloads with the new values of the selected optimization parameters (e.g. CPU and memory requests and limits), using the FileConfigurator operator.
Apply the new deployment files via kubectl. This triggers a rollout of the new deployment in Kubernetes.
Wait until the rollout is complete.
Run the performance test.
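Put together, the workflow could be sketched as follows (operator names follow Akamas conventions; file paths, the deployment name, and the load-test invocation are hypothetical, and the real definition is in the repository's workflow artifacts):

```yaml
name: boutique
tasks:
  - name: configure-deployment
    operator: FileConfigurator
    arguments:
      source:
        path: frontend.yaml.templ    # template with Akamas parameter placeholders
      target:
        path: frontend.yaml
  - name: apply-deployment
    operator: Executor
    arguments:
      command: kubectl apply -f frontend.yaml
  - name: wait-for-rollout
    operator: Executor
    arguments:
      command: kubectl rollout status deployment/frontend --timeout=5m
  - name: run-load-test
    operator: Executor
    arguments:
      command: locust --headless -f loadtest.py --host http://frontend
```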
To create the workflow, launch the following command:
You can verify that this workflow has been created by accessing the "boutique" workflow in the Workflow menu:
All scripts and templates used in the steps of this workflow are available in the kubernetes-online-boutique/workflow-artifacts folder.