Prerequisites

Before installing Akamas, please make sure to review the following requirements:

  • Cluster requirements

  • Software requirements

Kubernetes installation

This section describes how to install Akamas on a Kubernetes cluster.

Preliminary steps

Before installing Akamas, please follow these steps:

  1. Review the cluster requirements

  2. Install the software requirements

Software Requirements

This page describes the requirements that should be fulfilled by the user when installing or managing an Akamas installation on Kubernetes. The software below is usually installed on the user's workstation or laptop.

Kubectl

Kubectl must be installed and configured to interact with the desired cluster. Refer to the official kubectl documentation to set up the client.

To interact with the Kubernetes APIs, you will need kubectl, preferably with a version matching the cluster. To check both the client and cluster versions, run the following:

kubectl version --short


Installation steps

Please follow these steps to install the Akamas application:

  1. Install the application

  2. Install the CLI

  3. Verify the installation

  4. Install the license

Please also read the section on how to manage Akamas. Finally, read the relevant sections of Integrating Akamas to integrate Akamas into your specific ecosystem.


Install Akamas

Akamas is deployed on your Kubernetes cluster through a Helm chart, and all the required images can be downloaded from the AWS ECR repository.

Two installation modes are available:

  • online installation, in case the Kubernetes cluster can access the Internet.

  • offline installation, in case the Kubernetes cluster does not have access to the Internet or you need to use a private image registry.

Helm

Installing Akamas requires Helm 3.0 or higher. To check the version, run the following:

helm version --short

Privileged access

Akamas uses Elasticsearch to store logs and time series. When running Akamas on Kubernetes, Elasticsearch is installed automatically using the official Elasticsearch Helm chart. This chart requires running an init container with privileged access to set up a configuration on the Elasticsearch pods' host. If running such a container is not permitted in your environment, you can add the following snippet to the akamas.yaml file when installing Akamas to disable this feature.

# Disable ES privileged initialization container. 
elasticsearch:
  sysctlInitContainer:
    enabled: false
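Note that this init container normally raises the vm.max_map_count kernel setting required by Elasticsearch. If you disable it as shown above, the setting may need to be applied on the worker nodes by your cluster administrator; an illustrative command, to be run on each node hosting Elasticsearch, is:

# illustrative: run as root on each worker node that will host Elasticsearch
sysctl -w vm.max_map_count=262144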

Cluster Requirements

Kubernetes version

Running Akamas requires a cluster running Kubernetes version 1.24 or higher.

Resources requirements

Akamas can be deployed in three different sizes depending on the number of concurrent optimization studies that will be executed. If you are unsure about which size is appropriate for your environment, we suggest starting with the small one and upgrading to a bigger one as you expand the optimization activity to more applications.

The tables below report the resources, both requests and limits, that should be available in the cluster to run Akamas.

The resources specified on this page have been defined considering a dedicated namespace that runs only Akamas components. If your cluster has additional tools (e.g., a service mesh or a monitoring agent) that inject containers into the Akamas pods, we suggest either disabling them or increasing the sizing to account for their overhead. Also, if you plan to deploy other software inside the Akamas namespace and resource quotas are enabled, you should increase the size to account for the resources required by that software.
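As a quick check before choosing a size, you can review the allocatable capacity of your worker nodes, for example:

# show the allocatable CPU and memory reported by each node
kubectl describe nodes | grep -A 6 "Allocatable:"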

Small

The small tier is suited for environments that need to support up to 3 concurrent optimization studies.

Resource        Requests    Limits
CPU             4 Cores     15 Cores
Memory          28 GB       28 GB
Disk Space      70 GB       70 GB

Medium

The medium tier is suited for environments that need to support up to 50 concurrent optimization studies.

Resource        Requests    Limits
CPU             8 Cores     20 Cores
Memory          50 GB       50 GB
Disk Space      100 GB      100 GB

Large

The large tier is suited for environments that need to support up to 100 concurrent optimization studies. If you plan to run more concurrent studies, please contact Akamas support to plan your installation.

Resource        Requests    Limits
CPU             10 Cores    25 Cores
Memory          60 GB       60 GB
Disk Space      150 GB      150 GB

Storage requirements

The cluster must define a Storage Class so that the application installation can leverage Persistent Volume Claims to dynamically provision the volumes required to persist data.

For more information on this topic refer to Kubernetes' official documentation.
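You can verify that a Storage Class (ideally marked as default) is available in your cluster with the following command:

# list the Storage Classes defined in the cluster
kubectl get storageclass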

Permissions

To install and run Akamas, cluster-level permissions are not required. The minimal set of namespaced rules is reported below.

Networking

Networking requirements depend on how users interact with Akamas. Services can be exposed via Ingress or using kubectl as a proxy. Refer to Accessing Akamas for a more detailed description of the available options.

- apiGroups: ["", "apps", "policy", "batch", "networking.k8s.io", "events.k8s.io", "rbac.authorization.k8s.io"]
  resources:
    - configmaps
    - cronjobs
    - deployments
    - events
    - ingresses
    - jobs
    - persistentvolumeclaims
    - poddisruptionbudgets
    - pods
    - pods/log
    - rolebindings
    - roles
    - secrets
    - serviceaccounts
    - services
    - statefulsets
  verbs: ["get", "list", "create", "delete", "patch", "update", "watch"]

Useful commands

You may find some of the commands listed in the sections below helpful.

Read database passwords

By default, access to each service database is assigned to a user with a randomly generated password. For example, to read the campaign service database password, execute the following command:

kubectl get secret database-user-credentials -o go-template='{{ .data.campaign | base64decode }}'

The username for each service can be found in the configuration file under the corresponding service section. To read the username for the campaign service set during the installation, launch the following command:

helm get values akamas --all --output json | jq '.campaign.database.user'

You can connect to the campaign_service database with the user and password above (see the example at the end of this section).

If you want to show all the passwords, execute this command:

kubectl get secret database-user-credentials -o go-template='{{range $k,$v := .data}} {{printf "%s: %s\n" $k ( $v |base64decode ) }}{{end}}'
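As an example, assuming the bundled PostgreSQL database runs in the database-0 pod (as in the pod listing shown later in this guide), you could open a SQL session to the campaign service database as follows; the user placeholder is the username read above, and pod and database names may differ in your deployment:

# open a psql session inside the database pod; enter the password read above when prompted
kubectl exec -it database-0 -- psql -U <CAMPAIGN_DB_USER> -d campaign_service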

Accessing Akamas

To interact with your Akamas instance, you need the UI and API Gateway to be accessible from outside the cluster.

Kubernetes offers different options to expose a service outside of the cluster. The following is a list of the supported ones, with examples of how to configure them to work in your chart release:

  • Port Forwarding

  • Ingress

When changing the access mode of your Akamas installation, you must also update the akamasBaseUrl option in the Helm Values file to match the new endpoint.

Port Forwarding

By default, Akamas uses Cluster IPs for its services, allowing communication only inside the cluster. Still, you can leverage kubectl's port-forward to create a private connection and expose any internal service on your local machine.

This solution is suggested for quick tests that do not expose the application, or in scenarios where public access to the cluster is not allowed.

Set akamasBaseUrl to http://localhost:9000 in your Helm Values file, and install or update your Akamas deployment using the Helm command. Once the rollout is complete, open a tunnel to the UI with the following command:

kubectl port-forward service/ui 9000:http

As long as the port-forwarding is running, you will be able to interact with the UI through the tunnel; you can also interact through the Akamas CLI by configuring the URL http://localhost:9000/akapi.

Refer to the official Kubernetes documentation for more details about port-forwarding.

Ingress

An Ingress is a Kubernetes object that provides service access, load balancing, and SSL termination to Kubernetes services.

To expose the Akamas UI through an Ingress, update the Helm Values file by setting akamasBaseUrl to the host of the Ingress (e.g. https://akamas.kube.example.com) and by adding the snippet below:

ingress:
  enabled: true
  tls:
    - secretName: "<SECRET_NAME>"  # secret containing the certificate and key data
  annotations: {}  # optional

Here is a description of the fields:

  • enabled: set to true to enable the Ingress

  • tls: configure secretName with the name of the Secret containing the TLS certificate for the hostname configured in akamasBaseUrl. This Secret must be created manually before applying the configuration (see TLS Secrets in the Kubernetes documentation, and the example right after this list) or managed by a certificate issuer configured in the namespace.

  • annotations: optional, provide any additional annotations required in your deployment. If your cluster leverages a certificate issuer (such as cert-manager), you can add here the annotations required to interact with the issuer.
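For reference, the TLS Secret can be created manually with a command along these lines (the secret name and certificate paths are placeholders):

# create the TLS secret referenced by the Ingress in the akamas namespace
kubectl create secret tls <SECRET_NAME> \
  --namespace akamas \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key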

Re-run the install command to update the configuration. Once the rollout is complete, you will be able to access the UI using the URL specified in akamasBaseUrl and interact with the CLI using ${akamasBaseUrl}/api.

Refer to the official Kubernetes documentation for more details on Ingresses.


Installing on OpenShift

Running Akamas on OpenShift requires some Helm configurations to be applied.

The installation is provided as a set of templates packaged in a chart archive managed by Helm. Custom values are applied to ensure Akamas complies with the default restricted-v2 security context constraints.

OpenShift requirements

OpenShift version 4.x.

Before proceeding with the installation, make sure you also meet the Kubernetes requirements.

Installation

The installation can be done either online or offline, as described in the Install Akamas section. Choose the mode that best suits your cluster access policies.

To install Akamas on OpenShift, the following snippet must be added to the akamas.yaml file:

airflow:
  uid: null
  gid: null

postgresql:
  primary:
    containerSecurityContext:
      enabled: false

    podSecurityContext:
      enabled: false

  shmVolume:
    enabled: false

kibana:
  podSecurityContext:
    fsGroup: null

  securityContext:
    runAsUser: null

elasticsearch:
  sysctlInitContainer:
    enabled: false

  securityContext:
    runAsUser: null

  podSecurityContext:
    fsGroup: null
    runAsUser: null

Access Akamas - Ingress to route

Besides the methods described in Accessing Akamas, you can use the OpenShift default ingress controller to create the required routes by adding the following snippet to the akamas.yaml file:

ingress:
  enabled: true

  annotations:
    route.openshift.io/termination: edge
    haproxy.router.openshift.io/timeout: 1200s

  className: ""

  tls:
    - {}

Once the Helm command is invoked, ensure the routes have been created by running:

oc get routes

The output must list the Akamas routes with different paths.

Toolbox

The optional toolbox component requires privileged access to run on OpenShift; the toolbox uses a dedicated service account, named toolbox by default. You can grant privileged access by issuing the following command:

# This command assumes the Akamas namespace is named "akamas"
# and the service account default name "toolbox" is used
oc adm policy add-scc-to-user privileged system:serviceaccount:akamas:toolbox

Online Installation

Before starting the installation, make sure the requirements are met.

Create the configuration file

Akamas on Kubernetes is provided as a set of templates packaged in a chart archive managed by Helm.

To proceed with the installation, you need to create a Helm Values file, called akamas.yaml in this guide, containing the mandatory configuration values required to customize your application. The following template contains the minimal set required to install Akamas:

# AWS credentials to fetch ECR images (required)
awsAccessKeyId: <AWS_ACCESS_KEY_ID>
awsSecretAccessKey: <AWS_SECRET_ACCESS_KEY>

# Akamas customer name. Must match the value in the license (required)
akamasCustomer: <CUSTOMER_NAME>

# Akamas administrator password. If not set a random password will be generated
akamasAdminPassword: <ADMIN_PASSWORD>

# The URL that will be used to access Akamas, for example 'http://akamas.kube.example.com' (required)
akamasBaseUrl: <INSTANCE_HOSTNAME>

You can also download the template file by running the following snippet:

curl -so akamas.yaml  http://helm.akamas.io/templates/1.5.4/akamas.yaml.template

Replace the following placeholders in the file:

  • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY: the AWS credentials for pulling the Akamas images

  • CUSTOMER_NAME: customer name provided with the Akamas license

  • ADMIN_PASSWORD: initial administrator password

  • INSTANCE_HOSTNAME: the URL that will be used to expose the Akamas installation, for example https://akamas.k8s.example.com when using an Ingress, or http://localhost:9000 when using port-forwarding. Refer to Accessing Akamas for the list of the supported access methods and a reference for any additional configuration required.
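For illustration only, a filled-in akamas.yaml could look like the following (all values are made up):

# example values only; replace with the credentials and names provided for your installation
awsAccessKeyId: AKIAEXAMPLEKEY
awsSecretAccessKey: exampleSecretAccessKey
akamasCustomer: acme
akamasAdminPassword: ChangeMe123!
akamasBaseUrl: https://akamas.k8s.example.com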

Define Size

Akamas can be installed in three sizes (Small, Medium, and Large), as explained in the cluster prerequisite section. By default, the chart installs the Small size. If you want to install a specific size, add the corresponding snippet below to your Helm Values file.

Medium

#Medium
airflow:
  config:
    core:
      parallelism: 102
  scheduler:
    resources:
      limits:
        cpu: 2500m
        memory: 21000Mi
      requests:
        cpu: 1000m
        memory: 21000Mi

Large

#Large
airflow:
  config:
    core:
      parallelism: 202
  scheduler:
    resources:
      limits:
        cpu: 2500m
        memory: 28000Mi
      requests:
        cpu: 1000m
        memory: 28000Mi
telemetry:
  parallelism: 50

Start the installation

With the configuration file you just created (and any variables you added to override the defaults), you can start the installation with the following command:

helm upgrade --install \
  --create-namespace --namespace akamas \
  --repo http://helm.akamas.io/charts \
  --version '1.5.4' \
  -f akamas.yaml \
  akamas akamas

This command will create the Akamas resources within the specified namespace. You can define a different namespace by changing the --namespace <your-namespace> argument.

An example output of a successful installation is the following:

Release "akamas" does not exist. Installing it now.
NAME: akamas
LAST DEPLOYED: Thu Sep 21 10:39:01 2023
NAMESPACE: akamas
STATUS: deployed
REVISION: 1
NOTES:
Akamas has been installed

To get the initial password use the following command:

kubectl get secret akamas-admin-credentials -o go-template='{{ .data.password | base64decode }}'
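You can also check the state of the release at any time with standard Helm commands, for example:

# show the status of the akamas release in the akamas namespace
helm status akamas --namespace akamas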

Check the installation

To monitor the application startup, run the command kubectl get pods. After a few minutes, the expected output should be similar to the following:

NAME                           READY   STATUS    RESTARTS   AGE
airflow-6ffbbf46d8-dqf8m       3/3     Running   0          5m
analyzer-67cf968b48-jhxvd      1/1     Running   0          5m
campaign-666c5db96-xvl2z       1/1     Running   0          5m
database-0                     1/1     Running   0          5m
elasticsearch-master-0         1/1     Running   0          5m
keycloak-66f748d54-7l6wb       1/1     Running   0          5m
kibana-6d86b8cbf5-6nz9v        1/1     Running   0          5m
kong-7d6fdd97cf-c2xc9          1/1     Running   0          5m
license-54ff5cc5d8-tr64l       1/1     Running   0          5m
log-5974b5c86b-4q7lj           1/1     Running   0          5m
logstash-8697dd69f8-9bkts      1/1     Running   0          5m
metrics-577fb6bf8d-j7cl2       1/1     Running   0          5m
optimizer-5b7576c6bb-96w8n     1/1     Running   0          5m
orchestrator-95c57fd45-lh4m6   1/1     Running   0          5m
store-5489dd65f4-lsk62         1/1     Running   0          5m
system-5877d4c89b-h8s6v        1/1     Running   0          5m
telemetry-8cf448bf4-x68tr      1/1     Running   0          5m
ui-7f7f4c4f44-55lv5            1/1     Running   0          5m
users-966f8f78-wv4zj           1/1     Running   0          5m
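If you prefer to wait for the startup to complete from a script, a command like the following can be used (adjust the namespace and timeout as needed):

# wait until all Akamas pods in the namespace report the Ready condition
kubectl wait --namespace akamas --for=condition=Ready pods --all --timeout=15m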

At this point, you should be able to access the Akamas UI using the endpoint specified in the akamasBaseUrl, and interact through the Akamas CLI with the path /api.

If you haven't already, you can update your configuration file to use a different type of service to expose Akamas' endpoints. To do so, pick the configuration snippet for the service type of your choice from the Accessing Akamas section, add it to the akamas.yaml file, update the akamasBaseUrl value, and re-run the installation command to update your Helm release.


Offline Installation - Private registry

Before starting the installation, make sure the requirements are met.

Configure the registry

If your cluster is in an air-gapped network or is unable to reach the Akamas image repository, you need to copy the required images to your private registry.

The procedure described here leverages your local environment to upload the images; it therefore requires Docker to be installed and configured to interact with both the Akamas registry and your private registry.

Transfer the Docker images

The offline installation requires you to pull the images and migrate them to your private registry. In the following command replace the chart version to download the related list of images:

Once the import is complete, you must re-tag and upload the images. Run the following snippet, replacing <REGISTRY_URL> with the actual URL of the private registry:

This process could last several minutes. Once the upload is complete, you can proceed with the next steps.
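Note that pushing the images assumes your Docker client is already authenticated against the private registry, for example (the registry URL is a placeholder):

# log in to the private registry before running the migration snippet
docker login <REGISTRY_URL>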

Create the configuration file

Akamas on Kubernetes is provided as a set of templates packaged in a chart archive managed by Helm.

To proceed with the installation, you must create a Helm Values file, called akamas.yaml in this guide, containing the mandatory configuration values required to customize your application. The following template contains the minimal set required to install Akamas:

Replace the following placeholders in the file:

  • CUSTOMER_NAME: customer name provided with the Akamas license

  • ADMIN_PASSWORD: initial administrator password

  • INSTANCE_HOSTNAME: the URL that will be used to expose the Akamas installation, for example https://akamas.k8s.example.com when using an Ingress, or http://localhost:9000 when using port-forwarding. Refer to Accessing Akamas for the list of the supported access methods and a reference for any additional configuration required.

  • REGISTRY_URL: the URL for the private registry used in the transfer process above

Configure the authentication

This section describes how to configure the authentication to your private registry. If your registry does not require any authentication, skip directly to the installation section.

To authenticate to your private registry, you must manually create the Secret required to pull the images. If the registry uses basic authentication, you can create the credentials in the namespace by running the following command:

Otherwise, you can leverage any credential already configured on your machine by running the following command:

Define Size

Akamas can be installed in three sizes (Small, Medium, and Large), as explained in the cluster prerequisite section. By default, the chart installs the Small size. If you want to install a specific size, add the corresponding snippet to your Helm Values file.

Medium

Large

Start the installation

If the host you are using to install Akamas can reach helm.akamas.io, you can follow the instructions in the online installation guide. Otherwise, follow the instructions below to download the chart content locally.

From a machine that can reach the endpoint, run the following command to download the chart:

The command downloads the latest chart version as an archive named akamas-<version>.tgz. The file can be transferred to the machine where the installation will be run. Replace akamas/akamas with the downloaded package in the following commands.

If you wish to see and override the values that Helm will use to install Akamas, you may execute the following command:

Now, with the configuration file you just created (and the new variables you added to override the defaults), you can start the installation with the following command:

This command will create the Akamas resources within the specified namespace. You can define a different namespace by changing the --namespace <your-namespace> argument.

An example output of a successful installation is the following:

Check the installation

To monitor the application startup, run the command kubectl get pods. After a few minutes, the expected output should be similar to the following:

At this point, you should be able to access the Akamas UI using the endpoint specified in the akamasBaseUrl, and interact through the Akamas CLI with the path /api.

If you haven't already, you can update your configuration file to use a different type of service to expose Akamas' endpoints. To do so, pick the configuration snippet for the service type of your choice from the Accessing Akamas section, add it to the akamas.yaml file, update the akamasBaseUrl value, and re-run the installation command to update your Helm release.

Installing telemetry providers

During the online installation, a set of out-of-the-box telemetry providers is automatically installed. For an offline installation, this step has to be executed manually. To install the telemetry providers required for your environment, proceed to the Integrating Telemetry Providers section.

    curl -sO  http://helm.akamas.io/images/1.5.4/image-list
    NEW_REGISTRY="<REGISTRY_URL>"
    
    while read IMAGE; do
        REGISTRY=$(echo "$IMAGE" | cut -d '/' -f 1)
        REPOSITORY=$(echo "$IMAGE" | cut -d ':' -f 1 | cut -d "/" -f2-)
        TAG=$(echo "$IMAGE" | cut -d ':' -f 2)
    
        NEW_IMAGE="$NEW_REGISTRY/$REPOSITORY:$TAG"
        echo "Migrating $IMAGE to $NEW_IMAGE"
    
        docker pull "$IMAGE"
        docker tag "$IMAGE" "$NEW_IMAGE"
        docker push "$NEW_IMAGE"
    done <image-list
    akamas.yaml
    # Akamas customer name. Must match the value in the license (required)
    akamasCustomer: <CUSTOMER_NAME>
    
    # Akamas administrator password. If not set a random password will be generated
    akamasAdminPassword: <ADMIN_PASSWORD>
    
    # The URL that will be used to access Akamas, for example 'http://akamas.kube.example.com' (required)
    akamasBaseUrl: <INSTANCE_HOSTNAME>
    
    
    global:
      imageRegistry: gitlab-runner-new.dev.akamas.io
    
    elasticsearch:
      image: gitlab-runner-new.dev.akamas.io/akamas/elastic/elasticsearch
      
    kibana:
      image: gitlab-runner-new.dev.akamas.io/akamas/elastic/kibana
      
    airflow:
      images:
        airflow:
          repository: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
          tag: 2.8.0
        pgbouncer:
          repository: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
          tag: ~
        pgbouncerExporter:
          repository: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
          tag: ~
      webserver:   
        extraInitContainers:
          - name: wait-logstash
            image: gitlab-runner-new.dev.akamas.io/akamas/utils:0.1.5
            command:
              - "sh"
              - "-c"
              - "until ./wait-for-it.sh -h logstash -p 9600 -t 120 -e _node/pipelines -j '.pipelines|length' -r 10 ; do echo Waiting for Logstash; sleep 10; done; echo Connected"
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 10m
                memory: 50Mi
      scheduler:
        podAnnotations:
          k8s.akamas.com/imageName: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
        env:
          - name: CONTAINER_NAME
            value: airflow
          - name: SERVICE
            value: airflow
          - name: LOGTYPE
            value: airflow
          - name: IMAGE_NAME
            value: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
          - name: AIRFLOW_CONN_HTTP_SYSTEM
            value: "http://:@system:8080"
          - name: AIRFLOW_CONN_HTTP_CAMPAIGN
            value: "http://:@campaign:8080"
          - name: AIRFLOW_CONN_HTTP_ORCHESTRATOR
            value: "http://:@orchestrator:8080"
          - name: KEYCLOAK_ENDPOINT
            value: "http://keycloak:8080"
    
        extraInitContainers:
          - name: wait-logstash
            image: gitlab-runner-new.dev.akamas.io/akamas/utils:0.1.5
            command:
              - "sh"
              - "-c"
              - "until ./wait-for-it.sh -h logstash -p 9600 -t 120 -e _node/pipelines -j '.pipelines|length' -r 10 ; do echo Waiting for Logstash; sleep 10; done; echo Connected"
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 10m
                memory: 50Mi
    kubectl create secret docker-registry registry-token \
      --namespace akamas \
      --docker-server=<REGISTRY_URL> \
      --docker-username=<USER> \
      --docker-password=<PASSWORD>
    kubectl create secret docker-registry registry-token \
      --namespace akamas \
      --from-file=.dockerconfigjson=<PATH/TO/.docker/config.json>
    #Medium
    airflow:
      config:
        core:
          parallelism: 102
      scheduler:
        resources:
          limits:
            cpu: 2500m         
            memory: 21000Mi    
          requests:
            cpu: 1000m         
            memory: 21000Mi   
    
    #Large
    airflow:
      config:
        core:
          parallelism: 202
      scheduler:
        resources:
          limits:
            cpu: 2500m         
            memory: 28000Mi    
          requests:
            cpu: 1000m         
            memory: 28000Mi    
    telemetry:
      parallelism: 50
    helm pull --repo http://helm.akamas.io/charts --version '1.5.4' akamas
    helm show values akamas-<version>.tgz
    helm upgrade --install \
      --create-namespace --namespace akamas \
      -f akamas.yaml \
      akamas akamas-<version>.tgz
    Release "akamas" does not exist. Installing it now.
    NAME: akamas
    LAST DEPLOYED: Thu Sep 21 10:39:01 2023
    NAMESPACE: akamas
    STATUS: deployed
    REVISION: 1
    NOTES:
    Akamas has been installed
    
    
    To get the initial password use the following command:
    
    kubectl get secret akamas-admin-credentials -o go-template='{{ .data.password | base64decode }}'
    NAME                           READY   STATUS    RESTARTS   AGE
    airflow-6ffbbf46d8-dqf8m       3/3     Running   0          5m
    analyzer-67cf968b48-jhxvd      1/1     Running   0          5m
    campaign-666c5db96-xvl2z       1/1     Running   0          5m
    database-0                     1/1     Running   0          5m
    elasticsearch-master-0         1/1     Running   0          5m
    keycloak-66f748d54-7l6wb       1/1     Running   0          5m
    kibana-6d86b8cbf5-6nz9v        1/1     Running   0          5m
    kong-7d6fdd97cf-c2xc9          1/1     Running   0          5m
    license-54ff5cc5d8-tr64l       1/1     Running   0          5m
    log-5974b5c86b-4q7lj           1/1     Running   0          5m
    logstash-8697dd69f8-9bkts      1/1     Running   0          5m
    metrics-577fb6bf8d-j7cl2       1/1     Running   0          5m
    optimizer-5b7576c6bb-96w8n     1/1     Running   0          5m
    orchestrator-95c57fd45-lh4m6   1/1     Running   0          5m
    store-5489dd65f4-lsk62         1/1     Running   0          5m
    system-5877d4c89b-h8s6v        1/1     Running   0          5m
    telemetry-8cf448bf4-x68tr      1/1     Running   0          5m
    ui-7f7f4c4f44-55lv5            1/1     Running   0          5m
    users-966f8f78-wv4zj           1/1     Running   0          5m