Optimize cost of a Kubernetes deployment subject to Horizontal Pod Autoscaler
In this guide, you optimize the cost (or resource footprint) of a Kubernetes deployment where the number of replicas is controlled by the Horizontal Pod Autoscaler (HPA). The study tunes pod resource settings (CPU and memory requests and limits) and HPA options (target CPU utilization) simultaneously, while taking into account your application performance and reliability requirements (SLOs). This optimization happens in production, leveraging Akamas live optimization capabilities.
Prerequisites
an Akamas instance
a Kubernetes cluster, with a deployment to be optimized
a Horizontal Pod Autoscaler working on the desired deployment
a supported telemetry data source configured to collect metrics from the target Kubernetes cluster (see here for the full list)
a way to apply configuration changes recommended by Akamas to the target deployment and HPA. In this guide, Akamas interacts directly with the Kubernetes APIs via kubectl. You need a service account with permissions to update your deployment (see below for other integration options).
Optimization setup
In this guide, we assume the following setup:
the Kubernetes deployment to be optimized is called frontend (in the hipster-shop namespace)
in the deployment, there is a container named server, where the app runs
the HPA is called frontend-hpa
both Dynatrace and Prometheus are used as observability tools
Let's set up the Akamas optimization for this use case.
System
For this optimization, you need the following components to model the frontend tech stack:
The Kubernetes Workload, Container, and Pod components, which contain metrics (like the CPU used by each object) and tunable parameters (like CPU limits at the container level), from the Kubernetes optimization pack
An HPA component, which contains HPA parameters like the target CPU utilization
A Web Application component, which contains service-level metrics like the throughput and response time of the microservice (from the Web Application optimization pack)
Let's start by creating the system, which represents the Kubernetes deployment to be optimized. To create it, write a system.yaml manifest like this:
name: frontend
description: The frontend Kubernetes deployment
Then run:
akamas create system system.yaml
Now create the three Kubernetes components. Create a workload.yaml manifest like the following:
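(A sketch: the component name and Prometheus lookup properties below are assumptions mirroring the other component manifests in this guide; adjust them to your telemetry provider.)
name: workload_frontend
description: The frontend Kubernetes workload
componentType: Kubernetes Workload
properties:
  prometheus:
    namespace: hipster-shop
    pod: frontend.*
Then run:
akamas create component workload.yaml frontend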
Then create a container.yaml manifest like the following:
name: server
description: The server Kubernetes container
componentType: Kubernetes Container
properties:
  prometheus:
    namespace: hipster-shop
    pod: frontend.*
    container: server
And a pod.yaml manifest like the following:
name: pod_frontend
description: The frontend Kubernetes pod
componentType: Kubernetes Pod
properties:
  prometheus:
    namespace: hipster-shop
    pod: frontend.*
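Then create both components:
akamas create component container.yaml frontend
akamas create component pod.yaml frontend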
Now create an application.yaml manifest like the following:
name: webapp
description: The web application of frontend deployment
componentType: Web Application
properties:
  dynatrace:
    id: SERVICE-80258F7AA97F2E4D
  prometheus:
    namespace: hipster-shop
    pod: frontend.*
    container: server
Notice that the component includes properties that specify how the Dynatrace and Prometheus telemetry providers look up this application in the Kubernetes cluster.
These properties depend on the telemetry provider you are using. See the reference for the full list of supported providers and their respective configurations.
Then run:
akamas create component application.yaml frontend
Finally, create an hpa.yaml manifest like the following:
name: frontend_hpa
description: The HPA for the frontend
componentType: HPA
The HPA component does not expose any metrics, so no telemetry properties need to be specified.
Then run:
akamas create component hpa.yaml frontend
Workflow
To optimize a Kubernetes microservice in production, you need to create a workflow that defines how the new configuration recommended by Akamas will be deployed in production.
Let's explore the high-level tasks required in this scenario and the options you have to adapt it to your environment:
1) Update the Kubernetes deployment and HPA configurations
The first step is to update the Kubernetes deployment and HPA with the new configuration. This can be done in several ways depending on your environment and processes:
A simple option is to let Akamas directly update the Kubernetes entities leveraging the Kubernetes APIs via kubectl commands.
Another option is to follow an Infrastructure-as-code approach, where the configuration change is managed via pull requests to a Git repository, leveraging your pipelines to deploy the change in production.
In this guide, we take the first option and use the kubectl patch and kubectl apply commands to configure the new deployment and the HPA.
These commands are executed from the toolbox, an Akamas utility that can be enabled in an Akamas installation on Kubernetes. Make sure that kubectl is configured correctly to connect to your Kubernetes cluster and can update your target deployment. See here for more details.
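As a sketch, the configuration step could run kubectl commands like the following from the toolbox (the resource and target values are placeholders; Akamas substitutes the recommended settings at each experiment):
# Update the container resources on the deployment (strategic merge patch)
kubectl -n hipster-shop patch deployment frontend --patch '
spec:
  template:
    spec:
      containers:
        - name: server
          resources:
            requests: {cpu: 500m, memory: 512Mi}
            limits: {cpu: "1", memory: 512Mi}
'
# Update the HPA target CPU utilization (autoscaling/v2 schema)
kubectl -n hipster-shop patch hpa frontend-hpa --patch '
spec:
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
'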
2) Wait for the new deployment to be rolled out in production
In a live optimization, Akamas needs to understand when the new deployment rollout is complete and whether it was completed successfully or not. This is key information for Akamas AI to observe and optimize your applications safely.
This task can be done in several ways depending on how you manage changes, as discussed in the previous task:
A simple option is to use the kubectl rollout command to wait for the deployment rollout to complete, as shown below. This is the approach used in this guide.
Another option is to follow an Infrastructure-as-code approach, where a change is managed via pull requests to a Git repository, leveraging your pipelines to deploy in production. In this situation, the deployment process is executed externally and is not controlled by Akamas. Hence, the workflow task will periodically poll the Kubernetes deployment to recognize when the new deployment has landed in production.
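For reference, the rollout wait used in this guide boils down to a single command, which exits with an error if the rollout does not complete within the timeout (causing the workflow task to fail):
kubectl -n hipster-shop rollout status deployment/frontend --timeout=5m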
3) Wait for the appropriate time to start the experiment
When dealing with the HPA, it is important that each Akamas experiment observes a comparable timeframe.
If the configuration change takes too long to apply (e.g., because it requires a manual step), the Akamas experiments will observe different workload patterns (e.g., nighttime traffic instead of daytime traffic). This would make the analysis quite complex, especially for humans.
Although Akamas handles different workload patterns, it is always better to run each experiment on the same time slot, so that each configuration is evaluated against a similar workload pattern.
In this example, we assume that we want to evaluate a new configuration every hour; hence, we insert a workflow step that waits for the end of the current hour. The exact duration typically depends on the configuration process of your application.
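As a sketch, this wait can be implemented with a shell command that sleeps until the top of the next hour (the 10# prefix avoids octal parsing of zero-padded minutes and seconds):
sleep $(( 3600 - 10#$(date +%M) * 60 - 10#$(date +%S) ))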
4) Observe how the application behaves with the new configuration
In a live optimization, Akamas simply needs to wait for a given observation interval, while the application works in production with the new configuration. Telemetry metrics will be collected during this observation period and will be analyzed by Akamas AI to recommend the next configuration.
Since we decided to evaluate a configuration every hour, we use a 55-minute observation interval, leaving 5 minutes for the configuration process.
Let's now create a workflow.yaml manifest like the following:
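(A sketch: the operator names and arguments below are assumptions based on the Akamas workflow reference, and the kubectl patch bodies are the ones shown earlier; check the exact syntax for your installation.)
name: frontend-workflow
tasks:
  # 1) Update the deployment and the HPA with the configuration recommended by Akamas
  - name: configure deployment and hpa
    operator: Executor
    arguments:
      command: kubectl -n hipster-shop patch deployment frontend --patch "..." && kubectl -n hipster-shop patch hpa frontend-hpa --patch "..."
  # 2) Wait for the new deployment to be rolled out
  - name: wait rollout
    operator: Executor
    arguments:
      command: kubectl -n hipster-shop rollout status deployment/frontend --timeout=5m
  # 3) Wait for the end of the current hour
  - name: wait full hour
    operator: Executor
    arguments:
      command: sleep $(( 3600 - 10#$(date +%M) * 60 - 10#$(date +%S) ))
  # 4) Observe the application running with the new configuration for 55 minutes
  - name: observe
    operator: Sleep
    arguments:
      seconds: 3300
Then run:
akamas create workflow workflow.yaml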
Study
It's now time to create the Akamas study to achieve your optimization objectives.
Let's explore how the study is designed by going through the main concepts. The complete study manifest is available at the bottom.
Goal
Your overall objective is to reduce the cost (or resource footprint) of a Kubernetes deployment. To do that, you need to define the goal, which is a metric (or combination of metrics) representing the deployment cost to be minimized.
There are different approaches to measuring the cost of Kubernetes deployments:
A simple approach is to consider that Kubernetes allocates infrastructure resources based on pod resource requests (CPU and memory). Hence, the cost of a deployment can be derived from the deployment aggregate CPU and memory requests. In this guide, we use this approach and define the study goal as the sum of CPU and memory requests of the container to be optimized.
Alternatively, the cost of a Kubernetes deployment can also be collected from external data sources that provide actual cost metrics like OpenCost. In this case, the study goal can be defined by leveraging the cost metric. See here for more information on how to integrate cost metrics.
Notice that weighting factors can be used in the goal formula to specify the importance of CPU vs memory resources. For example, the cloud price of 1 CPU is about 9 times that of 1 GB of RAM. You can customize those weights based on your requirements so that Akamas knows how to truly reach the most cost-efficient configuration in your specific context.
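As a sketch, assuming the metric names exposed by the Kubernetes optimization pack, the goal could be defined as follows:
goal:
  objective: minimize
  function:
    # The 9:1 weight reflects the ~9x cloud price of 1 CPU vs 1 GB of RAM; adjust it to your pricing.
    # This assumes CPU expressed in cores and memory in GB: convert units if your pack reports millicores or bytes.
    formula: server.container_cpu_request * 9 + server.container_memory_request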
Constraints
When optimizing for cost reduction (or resource footprint), it's key not to impact application response time or introduce risks of availability and reliability issues. To ensure this, you can define your performance and reliability requirements (SLOs) as metric constraints.
In this study:
to ensure application performance, constraints are specified on application response times and error rate
to ensure application reliability, constraints are specified on container peak CPU and memory utilization, and container out-of-memory kills
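As a sketch, these constraints could be expressed in the study manifest as follows (metric names assume the Web Application and Kubernetes optimization packs, and the thresholds are examples to adapt to your SLOs):
metricConstraints:
  absolute:
    - name: response_time
      formula: webapp.requests_response_time <= 500   # milliseconds
    - name: error_rate
      formula: webapp.requests_error_rate <= 0.02     # at most 2% errors
    - name: cpu_utilization
      formula: server.container_cpu_util <= 0.8       # peak CPU below 80%
    - name: memory_utilization
      formula: server.container_memory_util <= 0.8    # peak memory below 80%
    - name: oom_kills
      formula: server.container_oom_kills == 0        # no out-of-memory kills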
Parameters
To achieve cost-efficient and reliable microservices, Kubernetes container resources and HPA scaling options must be configured optimally and tuned jointly, as they are heavily interconnected.
To do that, the study includes the following parameters:
Kubernetes container: CPU and memory requests and limits
HPA target CPU utilization
The study also includes parameter constraints to ensure that recommended configurations are safe and comply with best practices. In particular:
CPU limits must be at most 2x CPU requests, to avoid excessive over-commitment of CPU limits in the cluster.
Notice that the parameters and constraints can change depending on your policies. For example, it is a best practice to set memory requests equal to limits to avoid pod eviction; hence, in this study we only tune the memory limit and set the request to the same value in the deployment file.
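As a sketch, assuming the parameter names from the Kubernetes and HPA optimization packs, the tuned parameters and the constraint could be declared as follows (the memory request is intentionally not tuned, per the best practice above):
parametersSelection:
  - name: server.cpu_request
  - name: server.cpu_limit
  - name: server.memory_limit
  - name: frontend_hpa.target_cpu_utilization
parameterConstraints:
  - name: cpu_limit_at_most_2x_request
    formula: server.cpu_limit <= 2 * server.cpu_request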
Workload
Akamas live optimization considers the application's workload to recommend new configurations that are optimal for the goal (e.g. reduce cost) while meeting all metric constraints (e.g., latency and error rates).
For Kubernetes microservices, the workload is typically the throughput (requests/sec) of the microservice API endpoints. This is the approach used in this guide.
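In the study manifest, this translates to a workload selection like the following (assuming the throughput metric name from the Web Application optimization pack):
workloadsSelection:
  - name: webapp.requests_throughput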
Approval mode
In this live optimization, manual approval is set to false, meaning that as soon as a new configuration is generated, the workflow is executed without any human involvement.
You can set it to true so that Akamas will ask for user approval when a new configuration gets generated. Once you approve it, the workflow will be executed, and the new configuration will be deployed to production according to the integration strategy you have defined above.
You can now create a study.yaml manifest like the following:
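(A consolidated sketch: field, metric, and parameter names are assumptions based on the Akamas study reference and the optimization packs used in this guide; validate them against your installation.)
name: frontend-cost-optimization
system: frontend
workflow: frontend-workflow
goal:
  objective: minimize
  function:
    formula: server.container_cpu_request * 9 + server.container_memory_request
workloadsSelection:
  - name: webapp.requests_throughput
metricConstraints:
  absolute:
    - name: response_time
      formula: webapp.requests_response_time <= 500
    - name: error_rate
      formula: webapp.requests_error_rate <= 0.02
    - name: cpu_utilization
      formula: server.container_cpu_util <= 0.8
    - name: memory_utilization
      formula: server.container_memory_util <= 0.8
    - name: oom_kills
      formula: server.container_oom_kills == 0
parametersSelection:
  - name: server.cpu_request
  - name: server.cpu_limit
  - name: server.memory_limit
  - name: frontend_hpa.target_cpu_utilization
parameterConstraints:
  - name: cpu_limit_at_most_2x_request
    formula: server.cpu_limit <= 2 * server.cpu_request
steps:
  - name: baseline
    type: baseline
  - name: optimize
    type: optimize
Then run:
akamas create study study.yaml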