Management container/pod

Akamas provides a Management Container (also referred to as Management Pod when deployed on Kubernetes) that contains the Akamas CLI executable, along with other useful command-line tools such as kubectl, Helm, vim, the Docker CLI, jq, yq, git, gzip, zip, OpenSSH, ping, cURL, wget, psql, and k9s. It runs in the same network as the Akamas services for Docker Compose installations, or in the akamas namespace for Kubernetes installations.

This management container aims to:

  • allow users to interact with Akamas without installing the Akamas CLI on their own systems

  • provide Akamas workflows with an environment in which to run scripts and persist artifacts when no other option (e.g. a dedicated host) is available

Docker Compose installation

Just add the following block of code inside the services: block of your Akamas docker-compose.yml, after editing the following environment variables:

  • ALLOW_PASSWORD: when set to true, enables SSH password authentication

  • CUSTOM_PASSWORD: specifies the login password. If not provided, a random one is generated

  management-container:
    image: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/management-container:1.2.2
    container_name: management-container
    environment:
      - BASH_ENV=/home/akamas/.bashrc
      - ALLOW_PASSWORD=true
      - CUSTOM_PASSWORD=your_password
    expose:
      - 22
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - akamas
    restart: unless-stopped

Launch docker compose up -d as described in Start Akamas (online) or Run installation (offline) to run the management container.
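Once the container is running, you can also open a shell inside it directly from the Docker host. A minimal sketch, using the container name defined in the block above:

```shell
# Open an interactive shell in the management container
docker exec -it management-container bash
```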

Kubernetes installation

Follow the usual guide for installing Akamas on Kubernetes, adding the following variables to your akamas.yaml file:

Then, you can launch the usual helm upgrade --install ... command to run the pod, as described in the Start the installation (online) or Start the installation (offline) sections.

Accessing Management Pod on Kubernetes

When deployed on Kubernetes, you can access the management pod in two ways:

  • via kubectl

  • via SSH command

NOTE: both methods require kubectl to be installed and configured for this cluster.

Kubectl access

Accessing is as simple as:
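The exact command depends on your deployment; assuming the pod is managed by a Deployment named management-pod in the akamas namespace (an assumption based on the naming used in this page), a typical invocation is:

```shell
# Open an interactive shell in the management pod
kubectl exec -it deploy/management-pod -n akamas -- bash
```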

SSH access

For this type of access, you need to retrieve the SSH login password (if enabled) or key. To fetch them, run the following commands:
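As a sketch, assuming the password was generated into the /work/akamas_password file described below and the Deployment is named management-pod:

```shell
# Print the generated SSH login password stored in the pod
kubectl exec -n akamas deploy/management-pod -- cat /work/akamas_password
```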

At this point, launch this command to port-forward the pod's SSH port to your local machine (2222 can be any unused port on your machine):
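A sketch of the port-forward, using the same assumed resource name as above:

```shell
# Forward local port 2222 to the pod's SSH port (22)
kubectl port-forward -n akamas deploy/management-pod 2222:22
```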

then, on another terminal, you may launch:

and answer yes to the host-key prompt, then enter the akamas password to access the management pod via SSH:
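Assuming the login user is akamas and local port 2222 was used for the forward, the SSH command looks like:

```shell
# Connect through the forwarded port; accept the host key and
# enter the akamas password when prompted
ssh -p 2222 akamas@localhost
```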

Work directory

If you need to store Akamas artifacts, scripts, or any other files that need persistence, use the /work directory, which persists across restarts and is the default folder at login. It contains the akamas_password file mentioned above, as well as the Kubernetes and SSH configuration files, which are symlinked into your home folder.

Integrating management pod with Akamas resources

When running Akamas in a Kubernetes cluster, the suggested patterns are:

  • use the management pod to store all the YAML resource files needed to create systems, components, telemetry instances, workflows, and studies. Save these files in a subfolder of the persisted /work folder.

  • use the management pod as a worker machine whenever you need to connect to an internal machine in the same Kubernetes namespace to perform some task. Note that, to successfully connect to the management pod from other resources such as workflow tasks or telemetry instances, you must use the hostname management-pod (the stable service name), not the pod name (e.g. management-pod-6dd8b7f898-8xwzf); otherwise the workflow or telemetry instance will fail.
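As a quick connectivity check, a resource running in the same namespace can reach the pod by its stable hostname (assuming the akamas login user):

```shell
# Reach the management pod by its stable service hostname
ssh akamas@management-pod
```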

Integrating with workflow

A typical Kubernetes scenario has Akamas running in a namespace different from the customer application. In this scenario you will likely need an Akamas workflow (running from the akamas namespace) that applies a new configuration to the customer application (running in the customer namespace); Akamas then collects metrics for a while and computes a new configuration based on the score of the previous one.

What follows is a typical workflow example:

  • uses a FileConfigurator to create a new Helm file that applies the configuration computed by Akamas to a single service named adservice. The FileConfigurator recreates the adservice.yaml file from the template adservice.yaml.templ; just make sure that adservice.yaml.templ contains namespace: boutique (the customer namespace, in our example)

  • uses an Executor that launches kubectl apply with the new helm file adservice.yaml you just saved to apply the new configuration

  • uses another Executor to wait for the new configuration to be rolled out by launching kubectl rollout status

  • waits for half an hour to observe the changes in metrics
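The apply and rollout steps above can be sketched with standard kubectl commands (the service and namespace names are taken from the example; the deployment name is an assumption):

```shell
# Apply the new configuration generated by the FileConfigurator
kubectl apply -f adservice.yaml -n boutique
# Wait until the new configuration is fully rolled out
kubectl rollout status deployment/adservice -n boutique
```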

Integrating with telemetry instances

Similarly to the workflow example above, if your telemetry instance needs to connect to an internal machine to create or process a local file (this may happen with a CSV provider, for example), you should use the management pod's address (which is exactly management-pod), set authType to password, and provide the akamas password in the auth field:
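A minimal sketch of the relevant fields: only address, authType, and auth come from the description above; the remaining field names are assumptions and may differ in your Akamas version.

```yaml
provider: csv
config:
  address: management-pod   # the stable hostname, not the pod name
  username: akamas          # assumed login user
  authType: password
  auth: your_password       # the akamas password retrieved earlier
```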
