# Optimizing Kubernetes

When optimizing Kubernetes applications, the goal is typically to find the configuration that assigns resources to containerized applications so as to minimize waste while ensuring quality of service.

Please refer to the [Kubernetes optimization pack](https://docs.akamas.io/akamas-docs/3.1.2/akamas-reference/optimization-packs/kubernetes-pack) for the list of component types, parameters, metrics, and constraints.

### Workflows <a href="#workflow-design" id="workflow-design"></a>

#### Applying parameters <a href="#applying-parameters" id="applying-parameters"></a>

Akamas offers different operators to configure Kubernetes entities. In particular, you can use the [FileConfigurator operator](https://docs.akamas.io/akamas-docs/3.1.2/akamas-reference/workflow-operators/fileconfigurator-operator) to update the definition file of a resource and apply it with the [Executor operator](https://docs.akamas.io/akamas-docs/3.1.2/akamas-reference/workflow-operators/executor-operator).

The following example shows the definition of a Deployment in which the replicas and resources are templatized so that the FileConfigurator can substitute the tuned values:

{% code lineNumbers="true" %}

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: ${deployment.k8s_workload_replicas}
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: ${container.cpu_request}
              memory: ${container.memory_request}
            limits:
              cpu: ${container.cpu_limit}
              memory: ${container.memory_limit}
```

{% endcode %}
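Before each experiment, the FileConfigurator replaces every `${component.parameter}` placeholder with the value selected for that experiment. For instance, the templatized `resources` section above might be rendered as follows (the concrete values are purely illustrative):

{% code lineNumbers="true" %}

```yaml
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```

{% endcode %}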

### A typical workflow <a href="#a-typical-workflow" id="a-typical-workflow"></a>

A typical workflow to optimize a Kubernetes application is structured as follows:

1. **Configure the Kubernetes artifacts:** use the [File Configurator operator](https://docs.akamas.io/akamas-docs/3.1.2/akamas-reference/workflow-operators/fileconfigurator-operator) to create the definition files starting from a template.
2. **Apply the new parameters:** apply the updated definitions using the [Executor operator](https://docs.akamas.io/akamas-docs/3.1.2/akamas-reference/workflow-operators/executor-operator).
3. **Wait for the application to be ready:** run a custom script to wait until the rollout is complete.
4. **Run the test:** execute the benchmark.
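The readiness check in step 3 is typically a small wrapper around `kubectl rollout status`, which blocks until the Deployment has rolled out or a timeout expires. Here is a minimal sketch of such a `check-status.sh`; the function name and default values are illustrative, not part of Akamas:

```shell
#!/bin/bash
# Sketch of a readiness check: wait until a Deployment's rollout completes.
# The function name and defaults are illustrative.
wait_for_rollout() {
  local deployment="$1"
  local namespace="${2:-default}"
  local timeout="${3:-300s}"
  # 'kubectl rollout status' blocks until the rollout succeeds or the timeout
  # expires, and exits non-zero on failure so the workflow step fails fast.
  kubectl -n "$namespace" rollout status "deployment/$deployment" --timeout="$timeout"
}
```

Calling `wait_for_rollout nginx-deployment` from the script makes the workflow step succeed only once the new pods are ready.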

Here’s an example of such a workflow:

{% code lineNumbers="true" %}

```yaml
name: Kubernetes workflow
tasks:
  - name: Configure deployment parameters
    operator: FileConfigurator
    arguments:
      source:
        path: nginx-deployment.yaml.templ
        hostname: app.akamas.io
        username: akamas
        key: rsa-key
      target:
        path: nginx-deployment.yaml
        hostname: app.akamas.io
        username: akamas
        password: akamas

  - name: Apply parameters
    operator: Executor
    arguments:
      command: kubectl apply -f nginx-deployment.yaml
      host:
        hostname: app.akamas.io
        username: akamas
        password: akamas

  - name: Wait application ready
    operator: Executor
    arguments:
      command: bash /home/akamas/app/check-status.sh
      host:
        hostname: app.akamas.io
        username: akamas
        password: akamas

  - name: Run test
    operator: Executor
    arguments:
      command: bash /home/akamas/app/run-test.sh
      host:
        hostname: app.akamas.io
        username: akamas
        password: akamas
```

{% endcode %}

### Telemetry Providers <a href="#telemetry-providers" id="telemetry-providers"></a>

Akamas can access Kubernetes metrics using the [Prometheus provider.](https://docs.akamas.io/akamas-docs/3.1.2/integrating-akamas/integrating-telemetry-providers/prometheus-provider) This provider comes out of the box with a set of default queries for a Prometheus instance configured to fetch data from [cAdvisor](https://github.com/google/cadvisor) and [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics).

Here’s a configuration example for a telemetry provider instance that uses Prometheus to extract all the Kubernetes metrics defined in this optimization pack:

{% code lineNumbers="true" %}

```yaml
provider: Prometheus
config:
  address: monitoring.akamas.io
  port: 9090
```

{% endcode %}

where the configuration of the monitored components provides the additional filters, as in the following snippets:

{% code lineNumbers="true" %}

```yaml
name: nginx_pod
description: Pod running Nginx
componentType: Kubernetes Pod

properties:
  prometheus:
    job: 'kubernetes-cadvisor|kube-state-metrics'
    namespace: akamas
    pod: nginx-*
```

{% endcode %}

{% code lineNumbers="true" %}

```yaml
name: cluster
description: Cluster
componentType: Kubernetes Cluster

properties:
  prometheus:
    job: 'kubernetes-cadvisor|kube-state-metrics'
```

{% endcode %}

Please keep in mind that some resources, such as pods belonging to deployments, require wildcards in order to match the auto-generated names.
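For example, pods created by a Deployment carry auto-generated suffixes (a ReplicaSet hash plus a random pod identifier, e.g. `nginx-deployment-5c689d88bb-x7x9k` for a hypothetical pod), so the `pod` filter must use a wildcard prefix to match every replica:

{% code lineNumbers="true" %}

```yaml
properties:
  prometheus:
    pod: nginx-deployment-*
```

{% endcode %}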

### Examples <a href="#examples-of-studies" id="examples-of-studies"></a>

See this [page](https://docs.akamas.io/akamas-docs/3.1.2/knowledge-base/optimizing-a-kubernetes-application) for an example of a study leveraging the Kubernetes pack.
