Management container/pod

Akamas provides a Management Container (also referred to as Management Pod when deployed on Kubernetes) that contains the Akamas CLI executable, along with other useful command-line tools such as kubectl, Helm, vim, the Docker CLI, jq, yq, git, gzip, zip, OpenSSH, ping, cURL, wget, psql, and k9s. It runs on the same network as the Akamas services for Docker Compose installations, or in the akamas namespace for Kubernetes installations.

This management container aims to:

  • allow users to interact with Akamas without having to install the Akamas CLI on their systems

  • provide Akamas workflows with an environment where they can run scripts and persist artifacts when no other option (e.g. a dedicated host) is available

Docker Compose installation

Add the following block inside the services: section of your Akamas docker-compose.yml, after editing these environment variables:

  • ALLOW_PASSWORD: when set to true, enables SSH password authentication

  • CUSTOM_PASSWORD: specifies the login password. If not provided, a random one is generated

  management-container:
    image: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/management-container:1.2.2
    container_name: management-container
    environment:
      - BASH_ENV=/home/akamas/.bashrc
      - ALLOW_PASSWORD=true
      - CUSTOM_PASSWORD=your_password
    expose:
      - 22
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - akamas
    restart: unless-stopped
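
Once the stack is up, you can open a shell in the management container directly with the Docker CLI, for example:

# Open an interactive shell in the management container
docker exec -it management-container bash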

Kubernetes installation

Follow the usual guide for installing Akamas on Kubernetes, adding the following values to your akamas.yaml file:

managementPod:
  enabled: true
  sshPassword:
    # enable SSH password authentication. If 'false', only key-based access
    # will be allowed
    enabled: false
    # configure the password for the management-pod user. If not provided, an
    # autogenerated password will be used
    override:
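
Then re-apply the installation as usual; for example, assuming a Helm-based setup in the akamas namespace (the release name and chart reference below are placeholders for your actual values):

# Re-apply the chart with the updated values file
helm upgrade <release-name> <akamas-chart> -f akamas.yaml -n akamas

# Verify that the management pod deployment is running
kubectl -n akamas get deploy management-pod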

Accessing Management Pod on Kubernetes

When deployed on Kubernetes, you can access the management pod in two ways:

  • via kubectl

  • via SSH

NOTE: both methods require kubectl to be installed and configured for the cluster.

Kubectl access

Accessing the pod is as simple as:

kubectl exec -it deploy/management-pod -- bash
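
You can also copy files to and from the pod with kubectl cp; a minimal sketch, assuming a local resource file named study.yaml and the persistent /work folder described below:

# Resolve the name of the pod behind the management-pod deployment
POD=$(kubectl get pods -o name | grep management-pod | cut -d/ -f2)
# Copy a local resource file into the persistent /work folder
kubectl cp ./study.yaml "$POD":/work/study.yaml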

SSH access

For this type of access, you need to retrieve the SSH login password (if enabled) or the SSH key. Run the following commands to fetch them:

# Get the password
kubectl exec deploy/management-pod -- cat /home/akamas/password
# Get the key
kubectl exec deploy/management-pod -- cat /home/akamas/.ssh/id_rsa
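
If you plan to use key-based access, save the key to a local file with restrictive permissions, for example:

# Save the private key locally and restrict its permissions
kubectl exec deploy/management-pod -- cat /home/akamas/.ssh/id_rsa > management-pod.key
chmod 600 management-pod.key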

At this point, run the following command to forward the pod's SSH port to your local machine (2222 can be replaced with any unused port on your machine):

kubectl port-forward service/management-pod 2222:22 &

then, from another terminal, run:

ssh akamas@localhost -p 2222

Answer yes to the host-key prompt, then enter the akamas password to log in to the management pod via SSH (see the example below):

$ ssh akamas@localhost -p 2222
The authenticity of host '[localhost]:2222 ([127.0.0.1]:2222)' can't be established.
ED25519 key fingerprint is SHA256:34GXnmRz1YjWr2TTpUpJmRoHYck0NzeAxni2L857Exs.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:2222' (ED25519) to the list of known hosts.
akamas@localhost's password:
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.10.178-162.673.amzn2.x86_64 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

akamas@management-pod-6dd8b7f898-8xwzf:~$
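
If you saved the private key locally as shown above, you can use key-based authentication instead of the password:

# Key-based login through the same port-forward
ssh -i management-pod.key akamas@localhost -p 2222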

Work directory

If you need to store Akamas artifacts, scripts, or any other files that need persistence, use the /work directory, which persists across restarts and is the default folder at login time. It contains the akamas_password file mentioned above, as well as the Kubernetes and SSH configuration files, which are symlinked from your home folder.
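
For example, you can create a dedicated subfolder for your artifacts and find it again after a pod restart (the folder and file names below are just an example):

# Create a persistent folder for your artifacts
mkdir -p /work/my-artifacts
echo "example artifact" > /work/my-artifacts/notes.txt
# The files are still there after the pod is restarted
ls /work/my-artifacts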

Integrating management pod with Akamas resources

When running Akamas on a Kubernetes cluster, the suggested patterns are:

  • use the management pod to store the YAML resource files needed to create systems, components, telemetry instances, workflows, and studies. Save these files in a subfolder of the persisted /work folder (see the sketch after this list).

  • use the management pod as a worker machine whenever you need to connect to an internal machine in the same Kubernetes namespace to perform some task. Note that, to connect to the management pod from other resources such as workflow tasks or telemetry instances, you must use the hostname management-pod, not the pod name (e.g. management-pod-6dd8b7f898-8xwzf). Use hostname: management-pod, or the workflow/telemetry instance will fail.
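
A possible layout for the resource files inside /work, created from the management pod shell (folder names are purely illustrative):

# Example folder structure for Akamas resource files (names are illustrative)
mkdir -p /work/my-system/{components,telemetry,workflows,studies}
# /work/my-system/system.yaml            system definition
# /work/my-system/components/*.yaml      component definitions
# /work/my-system/telemetry/*.yaml       telemetry instance definitions
# /work/my-system/workflows/*.yaml       workflow definitions
# /work/my-system/studies/*.yaml         study definitions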

Integrating with workflow

A typical Kubernetes scenario has Akamas running in a namespace different from the customer application. In this scenario you will probably need to create an Akamas workflow (running from the akamas namespace) that applies a new configuration to the customer application (running in the customer namespace); Akamas then collects metrics for a while and computes a new configuration based on the score of the previous one.

The example workflow below:

  • uses a FileConfigurator to create the file that applies the new configuration computed by Akamas to a single service named adservice. The FileConfigurator generates a new adservice.yaml file from the template adservice.yaml.templ (a sketch of this template follows the workflow). Make sure that adservice.yaml.templ contains namespace: boutique (the customer namespace, in our example)

  • uses an Executor that runs kubectl apply with the newly generated adservice.yaml to apply the new configuration

  • uses another Executor to wait for the new configuration to be rolled out, by running kubectl rollout status

  • waits for half an hour to observe the changes in metrics

name: adservice
tasks:
  - name: configure
    operator: FileConfigurator
    arguments:
      source:
        hostname: management-pod
        username: akamas
        password: <your-management-pod-password>
        # instead of password you can use an SSH key file such as /work/akamas/key.rsa
        # key: <your-key-file>         
        path: adservice.yaml.templ
      target:
        hostname: management-pod
        username: akamas
        password: <your-management-pod-password>
        # instead of password you can use an SSH key file such as /work/akamas/key.rsa
        # key: <your-key-file>         
        path: adservice.yaml

  - name: apply
    operator: Executor
    arguments:
      timeout: 5m
      host:
        hostname: management-pod
        username: akamas
        password: <your-management-pod-password>
        # instead of password you can use an SSH key file such as /work/akamas/key.rsa
        # key: <your-key-file>         
      command: kubectl apply -f adservice.yaml

  - name: verify
    operator: Executor
    arguments:
      timeout: 5m
      host:
        hostname: management-pod
        username: akamas
        password: <your-management-pod-password>
        # instead of password you can use an SSH key file such as /work/akamas/key.rsa
        # key: <your-key-file>         
      command: kubectl rollout status --timeout=5m deployment/adservice -n boutique
      
  - name: observe
    operator: Sleep
    arguments:
      seconds: 1800
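
As referenced in the list above, adservice.yaml.templ is a regular Kubernetes manifest in which the values tuned by Akamas are replaced with template placeholders. The following is only an illustrative sketch, created directly on the management pod: the placeholder syntax, parameter names, and manifest contents are assumptions that depend on your optimization pack and study.

# Create an illustrative adservice.yaml.templ on the management pod
# (placeholders and values below are assumptions, adapt them to your study)
cat > adservice.yaml.templ <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adservice
  namespace: boutique
spec:
  selector:
    matchLabels:
      app: adservice
  template:
    metadata:
      labels:
        app: adservice
    spec:
      containers:
        - name: adservice
          image: <adservice-image>
          resources:
            requests:
              cpu: ${adservice.cpu_request}
              memory: ${adservice.memory_request}
EOF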

Integrating with telemetry instances

Similarly to the workflow example above, if your telemetry instance needs to connect to an internal machine to create or process local files (as may happen with a CSV provider, for example), use the address of the management pod (which is exactly management-pod), set authType to password, and provide the akamas password in the auth field. See the example below:

# CSV Telemetry Provider Instance
provider: CSV File
config:
  address: management-pod
  authType: password
  username: akamas
  auth: <your-management-pod-password>
  remoteFilePattern: /work/monitoring/result-*.csv
  componentColumn: COMPONENT
  timestampColumn: TS
  timestampFormat: YYYY-MM-dd'T'HH:mm:ss
metrics:
  - metric: cpu_util
    datasourceMetric: user%
