Optimizing Kubernetes
When optimizing Kubernetes applications, the goal is typically to find the configuration that assigns resources to containerized applications so as to minimize waste while ensuring the desired quality of service.
Please refer to the Kubernetes optimization pack reference for the list of component types, parameters, metrics, and constraints.
Akamas offers different operators to configure Kubernetes entities. In particular, you can use the FileConfigurator operator to update the definition file of a resource and then apply it to the cluster with an executor operator.
The following example is the definition of a deployment, where the replicas and resources are templatized in order to work with the FileConfigurator:
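A minimal sketch of such a templatized deployment is shown below; the `${...}` placeholder syntax, the application name, and the image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical application name
spec:
  replicas: ${replicas}             # templatized: rendered by the FileConfigurator
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest      # hypothetical image
          resources:
            requests:
              cpu: ${cpu_request}       # templatized CPU request
              memory: ${memory_request} # templatized memory request
            limits:
              cpu: ${cpu_limit}         # templatized CPU limit
              memory: ${memory_limit}   # templatized memory limit
```

At each experiment, the placeholders are replaced with the concrete values chosen for that configuration before the manifest is applied.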
A typical workflow to optimize a Kubernetes application is structured as follows:

Configure the Kubernetes artifacts: use the FileConfigurator operator to create the definition files starting from a template.

Apply the new parameters: apply the updated definitions to the cluster.

Wait for the application to be ready: run a custom script to wait until the rollout is complete.

Run the test: execute the benchmark.
Here’s an example of a typical workflow for such a system:
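The following is a hedged sketch of what such a workflow definition could look like; the task names, paths, commands, and the exact operator argument layout are assumptions, not the authoritative Akamas schema:

```yaml
name: optimize-k8s-app              # hypothetical workflow name
tasks:
  - name: Configure kubernetes manifests
    operator: FileConfigurator      # renders the templatized definition files
    arguments:
      source:
        path: templates/deployment.yaml.templ   # hypothetical template path
      target:
        path: manifests/deployment.yaml         # hypothetical rendered path
  - name: Apply new parameters
    operator: Executor              # assumed executor-style operator
    arguments:
      command: kubectl apply -f manifests/deployment.yaml
  - name: Wait for rollout
    operator: Executor
    arguments:
      command: kubectl rollout status deployment/my-app --timeout=300s
  - name: Run benchmark
    operator: Executor
    arguments:
      command: ./run-benchmark.sh   # hypothetical benchmark script
```

The `kubectl rollout status` step makes the workflow block until the deployment's new pods are ready, so the benchmark only runs against the updated configuration.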
Here’s a configuration example for a telemetry provider instance that uses Prometheus to extract all the Kubernetes metrics defined in this optimization pack:
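A hedged sketch of such a telemetry instance follows; the address and the exact configuration keys are assumptions:

```yaml
provider: Prometheus                          # telemetry provider type
config:
  address: prometheus.mycluster.example.com   # hypothetical Prometheus host
  port: 9090                                  # default Prometheus port
```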
where the configuration of the monitored component provides the additional filters as in the following snippet:
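A hedged sketch of the monitored component's configuration is shown below; the component name and the `prometheus` property layout are assumptions. Note the wildcard used to match the pod names auto-generated by the deployment:

```yaml
name: my-app                       # hypothetical component name
properties:
  prometheus:
    instance: my-cluster           # hypothetical instance label filter
    pod: my-app-.*                 # wildcard: matches auto-generated pod names
```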
Please keep in mind that some resources, such as pods belonging to deployments, require wildcards in order to match the auto-generated names.
In summary, at each iteration the FileConfigurator operator creates the definition files from their templates, and the updated definitions are then applied to the cluster to roll out the new parameters.
Akamas can access Kubernetes metrics using the Prometheus provider. This provider comes out of the box with a set of default queries to interrogate a Prometheus instance configured to fetch data from the cluster's metrics sources, such as kube-state-metrics and cAdvisor.
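For illustration, one of these default queries could resemble the following PromQL expression, which aggregates the per-container CPU usage exposed by cAdvisor; the label filter is an assumption:

```
sum(rate(container_cpu_usage_seconds_total{pod=~"my-app-.*"}[5m]))
```

The `pod=~` regex matcher plays the same role as the wildcard in the component configuration, matching the auto-generated pod names of the deployment.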
See this example of a study leveraging the Kubernetes pack.