Optimizing Kubernetes

When optimizing Kubernetes applications, the goal is typically to find the configuration of resources assigned to containerized applications that minimizes waste while ensuring the desired quality of service.

Please refer to the Kubernetes optimization pack for the list of component types, parameters, metrics, and constraints.

Workflows

Applying parameters

Akamas offers different operators to configure Kubernetes entities. In particular, you can use the FileConfigurator operator to update the definition file of a resource and the Executor operator to apply it.

The following example is the definition of a deployment, where the replicas and resources are templatized in order to work with the FileConfigurator:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: ${deployment.k8s_workload_replicas}
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: ${container.cpu_request}
              memory: ${container.memory_request}
            limits:
              cpu: ${container.cpu_limit}
              memory: ${container.memory_limit}
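For reference, once the FileConfigurator resolves these tokens with the values of the current experiment, the applied fragment could look like the following (the values shown here are hypothetical):

```yaml
# Hypothetical values substituted by the FileConfigurator
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: nginx
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
```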

A typical workflow

A typical workflow to optimize a Kubernetes application is structured as follows:

  1. Configure the Kubernetes artifacts: use the FileConfigurator operator to create the definition files starting from a template.

  2. Apply the new parameters: apply the updated definitions using the Executor operator.

  3. Wait for the application to be ready: run a custom script to wait until the rollout is complete.

  4. Run the test: execute the benchmark.

Here’s an example of such a workflow:

name: Kubernetes workflow
tasks:
  - name: Configure deployment parameters
    operator: FileConfigurator
    arguments:
      source:
        path: nginx-deployment.yaml.templ
        hostname: app.akamas.io
        username: akamas
        key: rsa-key
      target:
        path: nginx-deployment.yaml
        hostname: app.akamas.io
        username: akamas
        password: akamas

  - name: Apply parameters
    operator: Executor
    arguments:
      command: kubectl apply -f nginx-deployment.yaml
      host:
        hostname: app.akamas.io
        username: akamas
        password: akamas

  - name: Wait application ready
    operator: Executor
    arguments:
      command: bash /home/akamas/app/check-status.sh
      host:
        hostname: app.akamas.io
        username: akamas
        password: akamas

  - name: Run test
    operator: Executor
    arguments:
      command: bash /home/akamas/app/run-test.sh
      host:
        hostname: app.akamas.io
        username: akamas
        password: akamas
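The check-status.sh script referenced in the "Wait application ready" task is not shown in this guide; a minimal sketch, assuming the nginx-deployment from the previous example, could rely on kubectl rollout status:

```shell
#!/bin/sh
# Minimal sketch of check-status.sh (hypothetical): block until the
# rollout of nginx-deployment completes, or exit non-zero after the
# timeout so the workflow step fails instead of hanging indefinitely.
kubectl rollout status deployment/nginx-deployment --timeout=300s
```

A non-zero exit code from this step causes the Executor task, and thus the experiment, to fail, which is usually the desired behavior when the new configuration cannot be rolled out.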

Telemetry Providers

Akamas can access Kubernetes metrics using the Prometheus provider. This provider comes out of the box with a set of default queries for a Prometheus instance configured to fetch data from cAdvisor and kube-state-metrics.

Here’s a configuration example for a telemetry provider instance that uses Prometheus to extract all the Kubernetes metrics defined in this optimization pack:

provider: Prometheus
config:
  address: monitoring.akamas.io
  port: 9090
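For illustration, the provider's default queries are built on cAdvisor and kube-state-metrics time series; a query for container CPU usage, for example, is along these lines (an illustrative sketch, not the exact query shipped with the provider):

```promql
sum(rate(container_cpu_usage_seconds_total{job="kubernetes-cadvisor", namespace="akamas", pod=~"nginx-.*"}[5m]))
```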

where the configuration of the monitored components provides the additional filters, as in the following snippets:

name: nginx_pod
description: Pod running Nginx
componentType: Kubernetes Pod

properties:
  prometheus:
    job: 'kubernetes-cadvisor|kube-state-metrics'
    namespace: akamas
    pod: nginx-*

---
name: cluster
description: Cluster
componentType: Kubernetes Cluster

properties:
  prometheus:
    job: 'kubernetes-cadvisor|kube-state-metrics'

Please keep in mind that some resources, such as pods belonging to deployments, require wildcards to match their auto-generated names.
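For instance, a pod created by a deployment gets a name like <deployment>-<replicaset-hash>-<pod-suffix>, which is why the component above uses the nginx-* pattern. A quick shell illustration of this kind of matching (the pod name below is hypothetical):

```shell
# Hypothetical auto-generated pod name: <deployment>-<replicaset-hash>-<pod-suffix>
pod="nginx-7fb96c846b-xk2mp"

# The same glob-style wildcard used in the component definition above
case "$pod" in
  nginx-*) echo "matched" ;;
  *)       echo "not matched" ;;
esac
```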

Examples

See this page for an example of a study leveraging the Kubernetes pack.
