There are two possible architectural configurations, depending on whether you are using a dedicated cluster or running the application on Minikube inside your Akamas-in-a-box machine:
In this scenario, you are running the application on a dedicated cluster; it will require at least one node with 4 CPUs and 8 GB of RAM. Akamas will run on a dedicated VM.
In this scenario, you are running the application on a Minikube cluster installed on your Akamas-in-a-box machine, which will need at least 8 CPUs and 16 GB of RAM (e.g., c5.2xlarge on AWS EC2). In the following section, Build Minikube Cluster, you will learn how to install it with a single command.
In both scenarios, you will install the following applications in the cluster:
the application to tune: Online Boutique
the telemetry provider: Prometheus, which provides performance metrics of the application under optimization
the load generator: Locust, used to run the load tests
Notice: if you plan to use your own cluster, you can skip this section and move on to the section Setup Online Boutique.
This section describes how to build a local Kubernetes cluster for the following Akamas optimization study. The local cluster will be installed in the Akamas-in-a-box machine, so this machine needs at least 8 CPUs and 16 GB of RAM (e.g., c5.2xlarge on AWS EC2).
Before proceeding with the installation of the Minikube cluster, please ensure that your Akamas-in-a-box host has all the following prerequisites:
Install Docker (if you have installed Akamas-in-a-box, it should already be present).
Install kubectl.
Install minikube (you can stop before verifying the installation).
Add your user to the docker group, if it has not been added yet, with the command: sudo usermod -aG docker $USER && newgrp docker
Note down your machine's public IP, as it will be used later in this guide as your CLUSTER_IP. On Linux you can run: dig +short myip.opendns.com @resolver1.opendns.com
Then make sure Akamas is running. You can verify if Akamas services are up and running by executing the following command:
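One way to check (assuming Akamas-in-a-box runs its services as Docker containers) is to list the running containers and confirm that the Akamas services report an Up status:

```shell
# List running containers with their names and status;
# all Akamas services should be reported as "Up"
docker ps --format 'table {{.Names}}\t{{.Status}}'
```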
Note: if you have another minikube cluster running on the same machine, you need to clean your environment with the command minikube delete before creating the new cluster.
At this point, you can create the Minikube cluster with a single command by leveraging the script create-minikube-cluster.sh in the scripts folder of the cloned repo (as described in section Download artifacts), passing the public IP of your machine as the CLUSTER_IP:
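A sketch of the invocation, assuming you are in the root of the cloned repo and have stored your public IP in a CLUSTER_IP variable (the exact argument syntax may differ; check the script's usage notes):

```shell
# CLUSTER_IP must hold your machine's public IP (see the prerequisites above)
./scripts/create-minikube-cluster.sh $CLUSTER_IP
```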
The command may take a few minutes, and will output the message "Cluster created" once the installation process completes correctly.
Now the cluster should be up and running. You can verify this by running any kubectl command, such as:
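For instance, listing the cluster nodes should return the single Minikube node in Ready state:

```shell
# The Minikube node should appear with STATUS "Ready"
kubectl get nodes
```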
To get the target application (Online Boutique), the load generator (Locust), and the telemetry provider (Prometheus) installed, you need to use the three Kubernetes manifests available in the kube folder of your cloned repo. The corresponding kubectl commands must be issued from any terminal pointing to your cluster.
Notice: if you have installed the minikube cluster with the scripts provided in this guide, you can skip this command and proceed to the paragraph Install the target application.
Notice that all three of these manifests refer to a label akamas/node=akamas to ensure that the corresponding pods are scheduled on the same node. For the sake of simplicity, run the following command to assign this label to the node you want to use for these pods (this is not needed for the Minikube cluster, which is already correctly configured):
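A minimal sketch of the labeling command, where <your-node-name> is a placeholder for the node you selected (you can find it by running kubectl get nodes):

```shell
# Assign the label the manifests use for scheduling to your chosen node
kubectl label node <your-node-name> akamas/node=akamas
```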
To install the Online Boutique application, you need to apply the boutique.yaml manifest to your cluster with the following command:
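Assuming the manifest sits in the kube folder of the cloned repo, the command would be:

```shell
kubectl apply -f kube/boutique.yaml
```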
This command will create the namespace akamas-demo and all the Deployments and Services of the Online Boutique inside that namespace. You can verify that all the pods are up and running with the command:
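For example, listing the pods in the akamas-demo namespace:

```shell
# All pods should eventually report STATUS "Running" and READY 1/1
kubectl get pods -n akamas-demo
```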
You can wait until the output is similar to the following one, then proceed:
Then, to install Locust, you need to apply the loadgenerator.yaml manifest to your cluster:
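As with the application manifest, assuming it resides in the kube folder:

```shell
kubectl apply -f kube/loadgenerator.yaml
```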
You can verify that all the pods are up and running with the following command:
The output should be similar to the following one:
Finally, to install Prometheus, you need to apply the prometheus.yaml manifest:
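Again, assuming the manifest is in the kube folder of the cloned repo:

```shell
kubectl apply -f kube/prometheus.yaml
```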
You can verify that all the pods are up and running with the command:
The output should be similar to the following one:
This guide will walk you through the steps of optimizing an application running on a Kubernetes cluster using Akamas.
How to integrate your Kubernetes Cluster with Akamas
How to model the Online Boutique application inside Akamas
How to configure Prometheus to let Akamas collect Kubernetes performance metrics
How to optimize a Kubernetes application using Akamas
A basic understanding of Akamas concepts - you may want to watch a quick (2m) video included in the Welcome to Akamas guide and familiarize yourself with Akamas' key concepts by reading the Learn Akamas key concepts guide.
An Akamas installation with a valid license - you may want to read the Akamas in-a-box guide to set up your Akamas instance.
A working Kubernetes cluster - if you do not have such a cluster available, you can easily create a local one following the next sections of this guide.
Please clone the repository in the home directory of the Akamas instance by running the following command:
You will optimize Online Boutique, a cloud-native microservices application implemented by Google as a demo application for Kubernetes. It is a web-based sample, yet fully-fledged, e-commerce application.
You can find all the configuration files used in this guide on Akamas' .
Notice: if you installed the cluster with Minikube by following the steps documented in the section Build Minikube Cluster, then you do not need to set up Akamas. Thus, you can skip to the next section, Verify Akamas, to verify that everything is working fine.
As described in the section Architecture Overview, Akamas needs to communicate with the cluster to apply new configurations of the Online Boutique using the kubectl tool. Therefore, if you are optimizing your own Kubernetes cluster, you need to make sure that Akamas can interact with it.
First, go to your Akamas-in-a-box machine and copy your kubeconfig file (i.e., ~/.kube/config) into the /akamas-config folder. Then, you have to ensure that the container named benchmark has the credentials to access the cluster. The credentials to pass may differ depending on your cluster provider; the examples below cover an EKS cluster as well as other providers.
Please copy and paste the following command, substituting your AWS credentials for the placeholders. It will create the file /akamas-config/envs that contains the variables required to communicate with the cluster:
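A sketch of what that command could look like, with placeholder credentials (the variable names assume the standard AWS CLI environment variables; your cluster may require different ones):

```shell
# Write the environment file consumed by the benchmark container.
# Replace the placeholder values with your actual AWS credentials.
cat <<'EOF' > /akamas-config/envs
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
AWS_DEFAULT_REGION=<your-cluster-region>
EOF
```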
For other providers, you can write a file named envs in the /akamas-config directory in the same way, putting there all the environment variables needed to connect to your cluster, as in the example below.
The setup is now complete. You can now proceed to the next section to verify it.
Duration: 01:00
To make sure that Akamas can interact with the target cluster, check that:
The benchmark container in your Akamas-in-a-box machine can run kubectl commands against your cluster.
The benchmark container can reach your cluster over HTTP.
To check that the container can connect to your cluster, run the following command and verify that you can see your Kubernetes namespaces:
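A minimal sketch, assuming the container is named benchmark and the kubeconfig has been mounted as described above:

```shell
# Run kubectl inside the benchmark container;
# the output should list your cluster's namespaces
docker exec benchmark kubectl get namespaces
```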
Next, you need to check that you can reach your Prometheus and the loadgenerator from the benchmark container.
To verify that you can communicate with Prometheus, try running the following command, substituting the CLUSTER_IP placeholder with your public cluster IP:
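For example, assuming Prometheus is exposed via a NodePort (the port shown here, 30900, is illustrative and may differ in your setup), you could query its built-in health endpoint:

```shell
# Replace <CLUSTER_IP> with your public cluster IP; adjust the port if needed
curl http://<CLUSTER_IP>:30900/-/healthy
```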
You should see the output:
Lastly, to verify that you can connect to the loadgenerator, run the following command, substituting the CLUSTER_IP placeholder with your public cluster IP:
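A sketch of the check, assuming the Locust web UI is exposed via a NodePort (the port shown here is purely illustrative; use the one configured in your loadgenerator manifest):

```shell
# Replace <CLUSTER_IP> with your public cluster IP and the port with
# the NodePort your loadgenerator Service actually exposes
curl http://<CLUSTER_IP>:<LOCUST_PORT>/
```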
You should see the output:
At this point, Akamas is correctly configured to interact with the target cluster, and you can start modeling and then optimizing your cluster in Akamas.
The next step is to create a workflow describing the steps executed in each experiment of your optimization study.
A workflow in an optimization study for Kubernetes is typically composed of the following tasks:
Write the manifest file corresponding to a new configuration of the selected optimization parameters to the target system using the FileConfigurator operator.
Apply the manifest with the new configurations via kubectl.
Wait until the rollout of all services has completed.
Run the performance test.
You can create the Akamas workflow with a single command by leveraging the script create-workflow.sh provided in the scripts folder. This script requires one parameter:
CLUSTER_IP: the public IP address of the cluster
The artifacts in your cloned repo contain the workflow.yaml file; the script creates the workflow from it when you issue the following command:
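A sketch of the invocation, assuming you are in the root of the cloned repo and CLUSTER_IP holds your public cluster IP:

```shell
./scripts/create-workflow.sh $CLUSTER_IP
```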
You can verify that this workflow has been created by accessing the corresponding Workflow menu in the Akamas UI:
All scripts and templates used in the steps of this workflow can be found in the kubernetes-online-boutique/workflow-artifacts folder.
Duration: 02:00
To model the Online Boutique inside Akamas, we need to create a corresponding System with its components in Akamas, and also associate a Prometheus telemetry instance to the system to allow Akamas to collect the performance metrics.
The entire Akamas system, together with the Prometheus telemetry instance, can be installed with a single command by leveraging the create-system.sh script provided in the scripts folder. This script requires the public IP address of the cluster (CLUSTER_IP) as an argument.
Now you first need to log in to Akamas with the following command:
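With the Akamas CLI installed, the login typically looks like the following (the exact invocation may vary with your CLI version; you will be prompted for your credentials):

```shell
akamas login
```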
and then run the above-mentioned script in a shell where you have the Akamas CLI installed:
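A sketch of the invocation, assuming you are in the root of the cloned repo and CLUSTER_IP holds your public cluster IP:

```shell
./scripts/create-system.sh $CLUSTER_IP
```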
The script should return a message similar to: System created correctly.
At this point, you can access the Akamas UI and verify that the System and its components are listed in the Systems menu:
Notice that this System leverages the following Optimization Packs:
Kubernetes: it provides the component type required to model each Kubernetes Pod - one for each Deployment in the Online Boutique.
Web Application: it models the end-to-end metrics of the Online Boutique, such as the application response time and throughput.
For this study, we define a goal of increasing the application throughput, while enforcing the constraint of keeping the latency below 500 ms and error rate below 2%.
You can create the optimization study using the study.yaml file in the akamas/studies folder by issuing the Akamas command:
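A sketch of the creation command, assuming the Akamas CLI accepts the resource file path (adjust the path to where your study.yaml actually lives):

```shell
akamas create study akamas/studies/study.yaml
```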
and then run it by issuing the Akamas command:
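A sketch of the start command; the study name below is a placeholder and must match the name defined in your study.yaml:

```shell
akamas start study 'Online Boutique'
```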
You can now explore this study from the Study menu in the Akamas UI and then move to the Analysis tab.
As the optimization study executes the different experiments, this chart will display more points and their associated score.
Let's now take a look at the results and benefits Akamas achieved in this optimization study. Note that you might achieve different results, as the actual best configuration may depend on your specific setup (i.e., operating system, cloud or virtualization platform, and hardware).
First of all, the best configuration was quickly identified, providing an application efficiency increase of 17%, without affecting the response time.
Let's look at the best configuration from the Summary tab: this configuration specifies the right amount of CPU and memory for each microservice.
It's interesting to note that Akamas adjusted the CPU and memory limits of every single microservice:
For some microservices (e.g., frontend), both the CPU and memory resources were increased.
For others (e.g., paymentservice), the memory was decreased while the CPU was slightly increased.
For some others (e.g., productcatalogservice), only the memory was decreased.
Let's navigate the Insights section, which provides details of the best experiment for each of the selected KPIs.
The best experiments according to the selected KPIs are automatically tagged and listed in the table. Experiment 34 reached the best efficiency, while experiment 53 achieved the best throughput and a decrease in the response time. Also, notice that a couple of the identified configurations improved the application response time even more (up to 87%), while not representing the best overall configuration.
The experiments can be plotted, with results displayed as shown below.
This optimization study shows how it is possible to tune a Kubernetes application made up of several microservices - a complex challenge that typically requires days or weeks of effort even for expert performance engineers, developers, or SREs. With Akamas, the optimization study took only about 4 hours to automatically identify the optimal configuration for each Kubernetes microservice.