Before installing Akamas, please make sure to review all the following requirements:
This page describes the requirements that should be fulfilled by the user when installing or managing an Akamas installation on Kubernetes. The software below is usually installed on the user's workstation or laptop.
kubectl must be installed and configured to interact with the desired cluster. Refer to the official Kubernetes documentation to set up the client.

To interact with the Kubernetes APIs, you will need kubectl, preferably with a version matching the cluster. To check both the client and cluster versions, run the following:

kubectl version --short
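You can also verify which cluster the client is currently pointing to, for example with:

kubectl config current-context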
Please follow these steps to install the Akamas application:
Please also read the section on how to manage Akamas. Finally, read the relevant sections of Integrating Akamas to integrate Akamas into your specific ecosystem.
Akamas is deployed on your Kubernetes cluster through a Helm chart, and all the required images can be downloaded from the AWS ECR repository.
Two installation modes are available:
online installation, in case the Kubernetes cluster can access the Internet.
offline installation, in case the Kubernetes cluster does not have access to the Internet or you need to use a private image registry.
Installing Akamas requires Helm 3.0 or higher. To check the version, run the following:

helm version --short
Akamas uses Elasticsearch to store logs and time series. When running Akamas on Kubernetes, Elasticsearch is installed automatically using the official Elasticsearch Helm chart. This chart requires running an init container with privileged access to apply a configuration on the Elasticsearch pod's host. If running such a container is not permitted in your environment, you can add the following snippet to the akamas.yaml file when installing Akamas to disable this feature:
# Disable ES privileged initialization container
elasticsearch:
  sysctlInitContainer:
    enabled: false
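If you disable it, note that this init container normally raises the vm.max_map_count kernel setting, which Elasticsearch requires to be at least 262144. In that case, make sure the nodes hosting Elasticsearch already satisfy this requirement, for example (run on the node itself; the value follows the Elasticsearch documentation):

# Check and, if needed, raise the mmap count limit on the node
sysctl vm.max_map_count
sudo sysctl -w vm.max_map_count=262144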
The tables below report the resources, both requests and limits, that should be available in the cluster to run Akamas.

The resources specified on this page have been defined assuming a dedicated namespace that runs only Akamas components. If your cluster has additional tools (e.g. a service mesh or a monitoring agent) that inject containers into the Akamas pods, we suggest either disabling them or increasing the sizing to account for their overhead. Likewise, if you plan to deploy other software inside the Akamas namespace and resource quotas are enabled, you should increase the sizing to account for the resources required by that software.
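For example, a dedicated namespace sized for the small tier below could be prepared as follows (the namespace and quota names are illustrative, not mandated by the product):

# Create a dedicated namespace for Akamas
kubectl create namespace akamas
# Optionally enforce a quota matching the small-tier requests and limits
kubectl create quota akamas-small --namespace akamas \
  --hard=requests.cpu=4,requests.memory=28Gi,limits.cpu=15,limits.memory=28Gi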
The small tier is suited for environments that need to support up to 3 concurrent optimization studies.

Resource     Requests   Limits
CPU          4 Cores    15 Cores
Memory       28 GB      28 GB
Disk Space   70 GB      70 GB
The medium tier is suited for environments that need to support up to 50 concurrent optimization studies.

Resource     Requests   Limits
CPU          8 Cores    20 Cores
Memory       50 GB      50 GB
Disk Space   100 GB     100 GB
The large tier is suited for environments that need to support up to 100 concurrent optimization studies. If you plan to run more concurrent studies, please contact Akamas support to plan your installation.

Resource     Requests   Limits
CPU          10 Cores   25 Cores
Memory       60 GB      60 GB
Disk Space   150 GB     150 GB
The cluster must define a Storage Class so that the application installation can leverage Persistent Volume Claims to dynamically provision the volumes required to persist data.
For more information on this topic refer to Kubernetes' official documentation.
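To quickly verify that a Storage Class is configured in your cluster (and which one is the default), you can run:

kubectl get storageclass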
Networking requirements depend on how users interact with Akamas: services can be exposed via an Ingress or accessed by using kubectl as a proxy. Refer to Accessing Akamas for a more detailed description of the available options.

Installing and running Akamas does not require cluster-level permissions. The following is the minimal set of namespaced rules:
- apiGroups: ["", "apps", "policy", "batch", "networking.k8s.io", "events.k8s.io/v1", "rbac.authorization.k8s.io"]
resources:
- configmaps
- cronjobs
- deployments
- events
- ingresses
- jobs
- persistentvolumeclaims
- poddisruptionbudgets
- pods
- pods/log
- rolebindings
- roles
- secrets
- serviceaccounts
- services
- statefulsets
verbs: ["get", "list", "create", "delete", "patch", "update", "watch"]You may find helpful some of the commands listed in the sections below.
You may find some of the commands listed in the sections below helpful.

By default, access to each service database is assigned to a user with a randomly generated password. For example, to read the campaign service database password, execute the following command:

kubectl get secret database-user-credentials -o go-template='{{ .data.campaign | base64decode }}'

The username for each service can be found in the configuration file under the corresponding service section. To read the username for the campaign service set during the installation, launch the following command:

helm get values akamas --all --output json | jq '.campaign.database.user'

With these credentials you can connect to the campaign_service database.

If you want to show all the passwords, execute this command:

kubectl get secret database-user-credentials -o go-template='{{range $k,$v := .data}} {{printf "%s: %s\n" $k ( $v |base64decode ) }}{{end}}'

To interact with your Akamas instance, the UI and API Gateway must be accessible from outside the cluster.
Kubernetes offers different options to expose a service outside of the cluster. The following is a list of the supported ones, with examples of how to configure them to work in your chart release:
When changing the access mode of your Akamas installation, you must also update the akamasBaseUrl option in the Helm Values file to match the new endpoint.

By default, Akamas uses Cluster IPs for its services, allowing communication only inside the cluster. Still, you can leverage kubectl's port-forward to create a private connection and expose any internal service on your local machine.
This solution is suggested for quick tests without exposing the application, or in scenarios where public access to the cluster is not allowed.
Set akamasBaseUrl to http://localhost:9000 in your Helm Values file, and install or update your Akamas deployment using the Helm command. Once the rollout is complete, open a tunnel to the UI with the following command:

kubectl port-forward service/ui 9000:http
As long as the port-forwarding is running, you will be able to interact with the UI through the tunnel; you can also interact through the Akamas CLI by configuring the URL http://localhost:9000/akapi.
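While the tunnel is open, a quick way to confirm it responds (a plain HTTP connectivity check, nothing Akamas-specific) is:

# Print the HTTP status code returned through the tunnel
curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:9000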
Refer to the official Kubernetes documentation for more details about port-forwarding.
An Ingress is a Kubernetes object that provides service access, load balancing, and SSL termination to Kubernetes services.
To expose the Akamas UI through an Ingress, configure the Helm Values file by setting akamasBaseUrl to the host of the Ingress (e.g. https://akamas.kube.example.com) and adding the snippet below:

ingress:
  enabled: true
  tls:
    - secretName: "<SECRET_NAME>" # secret containing the certificate and key data
  annotations: {} # optional
Here is a description of the fields:
enabled: set to true to enable the Ingress
tls: configure secretName with the name of the Secret containing the TLS certificate for the hostname configured in akamasBaseUrl. This secret must be created manually before applying the configuration (see the example after this list, or the Kubernetes documentation) or managed by a certificate issuer configured in the namespace.
annotations: optional; provide any additional annotations required in your deployment. If your cluster leverages a certificate issuer (such as cert-manager), you can add here the annotations required to interact with the issuer.

Re-run the installation command to update the configuration. Once the rollout is complete, you will be able to access the UI using the URL specified in akamasBaseUrl and interact with the CLI using ${akamasBaseUrl}/api.

Refer to the official Kubernetes documentation for more details on Ingresses.
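For reference, the TLS Secret mentioned above can be created manually from an existing certificate and key pair (the file paths are placeholders):

kubectl create secret tls <SECRET_NAME> \
  --namespace akamas \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key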
Running Akamas on OpenShift requires some Helm configurations to be applied.
The installation is provided as a set of templates packaged in a chart archive managed by Helm. Custom values are applied to ensure Akamas complies with the default restricted-v2 security context constraints.
OpenShift version 4.x is required.

The installation can be performed online or offline, as described in the Install Akamas section. Choose the mode that better suits your cluster access policies.
The following snippet must be added to the akamas.yaml file to install Akamas on OpenShift:

airflow:
  uid: null
  gid: null
postgresql:
  primary:
    containerSecurityContext:
      enabled: false
    podSecurityContext:
      enabled: false
    shmVolume:
      enabled: false
kibana:
  podSecurityContext:
    fsGroup: null
  securityContext:
    runAsUser: null
elasticsearch:
  sysctlInitContainer:
    enabled: false
  securityContext:
    runAsUser: null
  podSecurityContext:
    fsGroup: null
    runAsUser: null
Besides the methods described in Accessing Akamas, you can use the OpenShift default ingress controller to create the required routes. Add the following snippet to the akamas.yaml file:

ingress:
  enabled: true
  annotations:
    route.openshift.io/termination: edge
    haproxy.router.openshift.io/timeout: 1200s
  className: ""
  tls:
    - {}
Once the Helm command is invoked, ensure the routes have been created by running:

oc get routes
The output must list the Akamas routes with different paths.
The optional toolbox component requires privileged access to run on OpenShift; the toolbox uses a dedicated service account, named toolbox by default. You can grant privileged access by issuing the following command:

# This command assumes the Akamas namespace is named "akamas"
# and the default service account name "toolbox" is used
oc adm policy add-scc-to-user privileged system:serviceaccount:akamas:toolbox
Before starting the installation, make sure the requirements are met.

Akamas on Kubernetes is provided as a set of templates packaged in a chart archive managed by Helm.
To proceed with the installation, you need to create a Helm Values file, called akamas.yaml in this guide, containing the mandatory configuration values required to customize your application. You can download a template with the following command:

curl -so akamas.yaml http://helm.akamas.io/templates/1.5.4/akamas.yaml.template

The template contains the minimal set of values required to install Akamas:

# AWS credentials to fetch ECR images (required)
awsAccessKeyId: <AWS_ACCESS_KEY_ID>
awsSecretAccessKey: <AWS_SECRET_ACCESS_KEY>
# Akamas customer name. Must match the value in the license (required)
akamasCustomer: <CUSTOMER_NAME>
# Akamas administrator password. If not set a random password will be generated
akamasAdminPassword: <ADMIN_PASSWORD>
# The URL that will be used to access Akamas, for example 'http://akamas.kube.example.com' (required)
akamasBaseUrl: <INSTANCE_HOSTNAME>

Replace the following placeholders in the file:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY: the AWS credentials for pulling the Akamas images
CUSTOMER_NAME: customer name provided with the Akamas license
ADMIN_PASSWORD: initial administrator password
INSTANCE_HOSTNAME: the URL that will be used to expose the Akamas installation, for example https://akamas.k8s.example.com when using an Ingress, or http://localhost:9000 when using port-forwarding. Refer to Accessing Akamas for the list of the supported access methods and any additional configuration required.
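For illustration only, a filled-in akamas.yaml could start like this; every value below is a placeholder, not a real credential:

# Illustrative values - replace with your own
awsAccessKeyId: AKIAXXXXXXXXXXXXXXXX
awsSecretAccessKey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
akamasCustomer: acme
akamasAdminPassword: 'choose-a-strong-password'
akamasBaseUrl: https://akamas.k8s.example.com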
Akamas can be installed in three sizes, Small, Medium, and Large, as explained in the cluster requirements section. By default, the chart installs the Small size. If you want to install a specific size, add the corresponding snippet to your akamas.yaml file:

#Medium
airflow:
  config:
    core:
      parallelism: 102
  scheduler:
    resources:
      limits:
        cpu: 2500m
        memory: 21000Mi
      requests:
        cpu: 1000m
        memory: 21000Mi

#Large
airflow:
  config:
    core:
      parallelism: 202
  scheduler:
    resources:
      limits:
        cpu: 2500m
        memory: 28000Mi
      requests:
        cpu: 1000m
        memory: 28000Mi
telemetry:
  parallelism: 50
With the configuration file you just created (and any variables you added to override the defaults), you can start the installation with the following command:

helm upgrade --install \
  --create-namespace --namespace akamas \
  --repo http://helm.akamas.io/charts \
  --version '1.5.4' \
  -f akamas.yaml \
  akamas akamas

This command creates the Akamas resources within the specified namespace. You can define a different namespace by changing the argument --namespace <your-namespace>.
An example output of a successful installation is the following:

Release "akamas" does not exist. Installing it now.
NAME: akamas
LAST DEPLOYED: Thu Sep 21 10:39:01 2023
NAMESPACE: akamas
STATUS: deployed
REVISION: 1
NOTES:
Akamas has been installed

To get the initial password use the following command:

kubectl get secret akamas-admin-credentials -o go-template='{{ .data.password | base64decode }}'
To monitor the application startup, run the command kubectl get pods. After a few minutes, the expected output should be similar to the following:

NAME                           READY   STATUS    RESTARTS   AGE
airflow-6ffbbf46d8-dqf8m       3/3     Running   0          5m
analyzer-67cf968b48-jhxvd      1/1     Running   0          5m
campaign-666c5db96-xvl2z       1/1     Running   0          5m
database-0                     1/1     Running   0          5m
elasticsearch-master-0         1/1     Running   0          5m
keycloak-66f748d54-7l6wb       1/1     Running   0          5m
kibana-6d86b8cbf5-6nz9v        1/1     Running   0          5m
kong-7d6fdd97cf-c2xc9          1/1     Running   0          5m
license-54ff5cc5d8-tr64l       1/1     Running   0          5m
log-5974b5c86b-4q7lj           1/1     Running   0          5m
logstash-8697dd69f8-9bkts      1/1     Running   0          5m
metrics-577fb6bf8d-j7cl2       1/1     Running   0          5m
optimizer-5b7576c6bb-96w8n     1/1     Running   0          5m
orchestrator-95c57fd45-lh4m6   1/1     Running   0          5m
store-5489dd65f4-lsk62         1/1     Running   0          5m
system-5877d4c89b-h8s6v        1/1     Running   0          5m
telemetry-8cf448bf4-x68tr      1/1     Running   0          5m
ui-7f7f4c4f44-55lv5            1/1     Running   0          5m
users-966f8f78-wv4zj           1/1     Running   0          5m
At this point, you should be able to access the Akamas UI using the endpoint specified in the akamasBaseUrl, and interact through the Akamas CLI with the path /api.
If you haven't already, you can update your configuration file to use a different type of service to expose Akamas' endpoints. To do so, pick the configuration snippet for the service type of your choice from the Accessing Akamas section, add it to the akamas.yaml file, update the akamasBaseUrl value, and re-run the installation command to update your Helm release.
Before starting the installation, make sure the requirements are met.
If your cluster is in an air-gapped network or is unable to reach the Akamas image repository, you need to copy the required images to your private registry.
The procedure described here leverages your local environment to upload the images; it therefore requires Docker to be installed and configured to interact with both the Akamas registry and your private registry.
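Depending on how the registries are secured, this may include logging Docker in to both of them. As a sketch (the registry URLs and region are placeholders), a private registry typically accepts basic authentication, while the Akamas ECR repository can be authenticated through the AWS CLI:

# Private registry with basic authentication
docker login <REGISTRY_URL>
# AWS ECR repository hosting the Akamas images
aws ecr get-login-password --region <AWS_REGION> | \
  docker login --username AWS --password-stdin <AKAMAS_ECR_URL>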
The offline installation requires you to pull the images and migrate them to your private registry. In the following command, replace the chart version to download the related list of images:

curl -sO http://helm.akamas.io/images/1.5.4/image-list
Once the import is complete, you must re-tag and upload the images. Run the following snippet, replacing <REGISTRY_URL> with the actual URL of the private registry:

NEW_REGISTRY="<REGISTRY_URL>"
while read IMAGE; do
  REGISTRY=$(echo "$IMAGE" | cut -d '/' -f 1)
  REPOSITORY=$(echo "$IMAGE" | cut -d ':' -f 1 | cut -d '/' -f 2-)
  TAG=$(echo "$IMAGE" | cut -d ':' -f 2)
  NEW_IMAGE="$NEW_REGISTRY/$REPOSITORY:$TAG"
  echo "Migrating $IMAGE to $NEW_IMAGE"
  docker pull "$IMAGE"
  docker tag "$IMAGE" "$NEW_IMAGE"
  docker push "$NEW_IMAGE"
done <image-list
This process could last several minutes. Once the upload is complete, you can proceed with the next steps.
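As an optional sanity check (a sketch, assuming the image-list file and the NEW_REGISTRY variable from the previous snippet are still available in the current shell), you can verify that every migrated image is resolvable from the private registry without pulling it:

# Spot-check the migrated images in the private registry
while read IMAGE; do
  docker manifest inspect "$NEW_REGISTRY/$(echo "$IMAGE" | cut -d '/' -f 2-)" > /dev/null \
    && echo "OK: $IMAGE" || echo "MISSING: $IMAGE"
done <image-list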
Akamas on Kubernetes is provided as a set of templates packaged in a chart archive managed by Helm.

To proceed with the installation, you must create a Helm Values file, called akamas.yaml in this guide, containing the mandatory configuration values required to customize your application. The following template contains the minimal set required to install Akamas, including the image overrides pointing to a private registry (shown here with an example registry URL):

# Akamas customer name. Must match the value in the license (required)
akamasCustomer: <CUSTOMER_NAME>
# Akamas administrator password. If not set a random password will be generated
akamasAdminPassword: <ADMIN_PASSWORD>
# The URL that will be used to access Akamas, for example 'http://akamas.kube.example.com' (required)
akamasBaseUrl: <INSTANCE_HOSTNAME>
global:
  imageRegistry: gitlab-runner-new.dev.akamas.io
elasticsearch:
  image: gitlab-runner-new.dev.akamas.io/akamas/elastic/elasticsearch
kibana:
  image: gitlab-runner-new.dev.akamas.io/akamas/elastic/kibana
airflow:
  images:
    airflow:
      repository: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
      tag: 2.8.0
    pgbouncer:
      repository: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
      tag: ~
    pgbouncerExporter:
      repository: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
      tag: ~
  webserver:
    extraInitContainers:
      - name: wait-logstash
        image: gitlab-runner-new.dev.akamas.io/akamas/utils:0.1.5
        command:
          - "sh"
          - "-c"
          - "until ./wait-for-it.sh -h logstash -p 9600 -t 120 -e _node/pipelines -j '.pipelines|length' -r 10 ; do echo Waiting for Logstash; sleep 10; done; echo Connected"
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 50Mi
  scheduler:
    podAnnotations:
      k8s.akamas.com/imageName: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
    env:
      - name: CONTAINER_NAME
        value: airflow
      - name: SERVICE
        value: airflow
      - name: LOGTYPE
        value: airflow
      - name: IMAGE_NAME
        value: gitlab-runner-new.dev.akamas.io/akamas/airflow_service
      - name: AIRFLOW_CONN_HTTP_SYSTEM
        value: "http://:@system:8080"
      - name: AIRFLOW_CONN_HTTP_CAMPAIGN
        value: "http://:@campaign:8080"
      - name: AIRFLOW_CONN_HTTP_ORCHESTRATOR
        value: "http://:@orchestrator:8080"
      - name: KEYCLOAK_ENDPOINT
        value: "http://keycloak:8080"
    extraInitContainers:
      - name: wait-logstash
        image: gitlab-runner-new.dev.akamas.io/akamas/utils:0.1.5
        command:
          - "sh"
          - "-c"
          - "until ./wait-for-it.sh -h logstash -p 9600 -t 120 -e _node/pipelines -j '.pipelines|length' -r 10 ; do echo Waiting for Logstash; sleep 10; done; echo Connected"
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 50Mi
Replace the following placeholders in the file:
CUSTOMER_NAME: customer name provided with the Akamas license
ADMIN_PASSWORD: initial administrator password
INSTANCE_HOSTNAME: the URL that will be used to expose the Akamas installation, for example https://akamas.k8s.example.com when using an Ingress, or http://localhost:9000 when using port-forwarding

REGISTRY_URL: the URL of the private registry used in the transfer process above
This section describes how to configure the authentication to your private registry. If your registry does not require any authentication, skip directly to the next step.
To authenticate to your private registry, you must manually create the Secret required to pull the images. If the registry uses basic authentication, you can create the credentials in the namespace by running the following command:

kubectl create secret docker-registry registry-token \
  --namespace akamas \
  --docker-server=<REGISTRY_URL> \
  --docker-username=<USER> \
  --docker-password=<PASSWORD>
Otherwise, you can leverage any credential already configured on your machine by running the following command:

kubectl create secret docker-registry registry-token \
  --namespace akamas \
  --from-file=.dockerconfigjson=<PATH/TO/.docker/config.json>
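Either way, you can verify that the Secret was created in the target namespace with:

kubectl get secret registry-token --namespace akamas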
Akamas can be installed in three sizes, Small, Medium, and Large, as explained in the cluster requirements section. By default, the chart installs the Small size. If you want to install a specific size, add the corresponding snippet to your akamas.yaml file:

#Medium
airflow:
  config:
    core:
      parallelism: 102
  scheduler:
    resources:
      limits:
        cpu: 2500m
        memory: 21000Mi
      requests:
        cpu: 1000m
        memory: 21000Mi

#Large
airflow:
  config:
    core:
      parallelism: 202
  scheduler:
    resources:
      limits:
        cpu: 2500m
        memory: 28000Mi
      requests:
        cpu: 1000m
        memory: 28000Mi
telemetry:
  parallelism: 50
If the host you are using to install Akamas can reach helm.akamas.io, you can follow the instructions in the online installation section. Otherwise, follow the instructions below to download the chart content locally.

From a machine that can reach the endpoint, run the following command to download the chart:

helm pull --repo http://helm.akamas.io/charts --version '1.5.4' akamas
The command downloads the latest chart version as an archive named akamas-<version>.tgz. The file can be transferred to the machine where the installation will be run. Replace akamas/akamas with the downloaded package in the following commands.
If you wish to see and override the values that Helm will use to install Akamas, you may execute the following command:

helm show values akamas-<version>.tgz
Now, with the configuration file you just created (and any variables you added to override the defaults), you can start the installation with the following command:

helm upgrade --install \
  --create-namespace --namespace akamas \
  -f akamas.yaml \
  akamas akamas-<version>.tgz

This command creates the Akamas resources within the specified namespace. You can define a different namespace by changing the argument --namespace <your-namespace>.
An example output of a successful installation is the following:

Release "akamas" does not exist. Installing it now.
NAME: akamas
LAST DEPLOYED: Thu Sep 21 10:39:01 2023
NAMESPACE: akamas
STATUS: deployed
REVISION: 1
NOTES:
Akamas has been installed

To get the initial password use the following command:

kubectl get secret akamas-admin-credentials -o go-template='{{ .data.password | base64decode }}'
To monitor the application startup, run the command kubectl get pods. After a few minutes, the expected output should be similar to the following:

NAME                           READY   STATUS    RESTARTS   AGE
airflow-6ffbbf46d8-dqf8m       3/3     Running   0          5m
analyzer-67cf968b48-jhxvd      1/1     Running   0          5m
campaign-666c5db96-xvl2z       1/1     Running   0          5m
database-0                     1/1     Running   0          5m
elasticsearch-master-0         1/1     Running   0          5m
keycloak-66f748d54-7l6wb       1/1     Running   0          5m
kibana-6d86b8cbf5-6nz9v        1/1     Running   0          5m
kong-7d6fdd97cf-c2xc9          1/1     Running   0          5m
license-54ff5cc5d8-tr64l       1/1     Running   0          5m
log-5974b5c86b-4q7lj           1/1     Running   0          5m
logstash-8697dd69f8-9bkts      1/1     Running   0          5m
metrics-577fb6bf8d-j7cl2       1/1     Running   0          5m
optimizer-5b7576c6bb-96w8n     1/1     Running   0          5m
orchestrator-95c57fd45-lh4m6   1/1     Running   0          5m
store-5489dd65f4-lsk62         1/1     Running   0          5m
system-5877d4c89b-h8s6v        1/1     Running   0          5m
telemetry-8cf448bf4-x68tr      1/1     Running   0          5m
ui-7f7f4c4f44-55lv5            1/1     Running   0          5m
users-966f8f78-wv4zj           1/1     Running   0          5m
At this point, you should be able to access the Akamas UI using the endpoint specified in the akamasBaseUrl, and interact through the Akamas CLI with the path /api.
If you haven't already, you can update your configuration file to use a different type of service to expose Akamas' endpoints. To do so, pick the configuration snippet for the service type of your choice from the Accessing Akamas section, add it to the akamas.yaml file, update the akamasBaseUrl value, and re-run the installation command to update your Helm release.
During online installation, a set of out-of-the-box telemetry providers is automatically installed. For offline installations, this step has to be executed manually: to install the telemetry providers required for your environment, proceed to the dedicated section.