To interact with your Akamas instance you need the UI and the API Gateway to be accessible from outside the cluster. This means you need to expose the `ui` and `kong` services respectively (although a minimal configuration only requires exposing the `ui` service, since it can forward requests to the API Gateway through the path `/akapi`).
Kubernetes offers different options to expose a service outside of the cluster. The following is a list of the supported ones, with examples of how to configure them to work in your chart release:
By default, Akamas uses Cluster IPs for its services, which only allow communication inside the cluster. Still, you can leverage Kubectl's port-forward to create a private connection and expose any internal service on your local machine.
This solution is suggested for performing quick tests without the need to expose the application, or in scenarios where public access to the cluster is not allowed.
To make the Akamas UI accessible on http://localhost:8000, run the following command:
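A minimal sketch of the command, assuming the `ui` service listens on port 80 and Akamas runs in a dedicated namespace (both assumptions to adapt to your setup):

```bash
# Forward local port 8000 to the ui service inside the cluster
# (service port 80 and namespace name are assumptions)
kubectl port-forward service/ui 8000:80 --namespace <your-namespace>
```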
To interact with the Akamas CLI you can use the URL http://localhost:8000/akapi, or expose the `kong` service in the same way.
Refer to the official Kubernetes documentation for more details about port-forwarding.
Load Balancers expose services outside the cluster. This solution is often used with clusters managed by cloud providers such as Amazon EKS or Google Kubernetes Engine (GKE).
You can expose the Akamas UI through a Load Balancer by adding the snippet below to the `akamas.yaml` file from the previous section, and re-running the install command to update the configuration.
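An illustrative sketch of the snippet, assuming the chart exposes the service type under a `ui.service.type` value (verify the exact key layout against your chart version):

```yaml
# akamas.yaml - expose the UI through a cloud Load Balancer
# (the ui.service.type key layout is an assumption about the chart structure)
ui:
  service:
    type: LoadBalancer
```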
To get the address of the load balancer run the command:
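For example, assuming the service is named `ui` and lives in your Akamas namespace:

```bash
# The EXTERNAL-IP column reports the address assigned by the cloud provider
kubectl get service ui --namespace <your-namespace>
```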
For more details on Load Balancers refer to the official Kubernetes documentation.
An Ingress is a Kubernetes object that provides external access, load balancing, and SSL termination for Kubernetes services.
You can expose the Akamas UI through an Ingress by adding the snippet below to the `akamas.yaml` file from the previous section. After setting `className` to one of the ingress controllers available on the cluster, re-run the install command to update the configuration.
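An illustrative sketch, assuming the chart accepts a `ui.ingress` block and that an NGINX ingress controller is available on the cluster; the key layout and hostname are placeholders to adapt:

```yaml
# akamas.yaml - expose the UI through an Ingress
# (the ui.ingress key layout, className and hostname are assumptions)
ui:
  ingress:
    enabled: true
    className: nginx
    hosts:
      - akamas.example.com
```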
You can also configure a certificate on the Ingress: refer to the HTTPS configuration section for instructions.
Refer to the official Kubernetes documentation for more details on Ingresses.
Node Ports make services accessible on specific ports of any node of the cluster.
You can expose the Akamas UI through a NodePort by adding the snippet below to the `akamas.yaml` file from the previous section, and re-running the install command to update the configuration.
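A sketch of the snippet, assuming the service type and the `http.nodePort` field mentioned below live under the `ui.service` key:

```yaml
# akamas.yaml - expose the UI on a fixed node port
# (the nesting under ui.service is an assumption; port 30010 comes from this guide)
ui:
  service:
    type: NodePort
    http:
      nodePort: 30010
```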
The Akamas UI will be accessible on any cluster node at http://<cluster-node>:30010. You can also omit the `http.nodePort` field and let Kubernetes automatically select a random port.
Refer to the official Kubernetes documentation for more information on Node Ports.
Before installing Akamas, please make sure to review all the following requirements:
This section describes how to install Akamas on a Kubernetes cluster.
Before installing Akamas, please follow these steps:
Please follow these steps to install the Akamas application:
Please also read the section on how to manage Akamas. Finally, read the relevant sections of Integrating Akamas to integrate Akamas into your specific ecosystem.
This page describes the requirements that should be fulfilled by the user when installing or managing an Akamas installation on Kubernetes. The software listed below is usually installed on the user's workstation or laptop.
Kubectl must be installed and configured to interact with the desired cluster. Refer to the official kubectl documentation to set up the client.
To interact with the Kubernetes API server you will need kubectl, preferably with a version matching the cluster. To check both the client and cluster versions, run:
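For example:

```bash
# Prints the kubectl client version and the cluster's API server version
kubectl version
```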
Installing Akamas requires Helm 3 or higher. To check the version, run:
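For example:

```bash
# Prints the version of the Helm client
helm version
```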
Akamas uses Elasticsearch to store logs and time series. When running Akamas on Kubernetes, Elasticsearch is installed automatically using the official Elasticsearch Helm chart. This chart requires running an init container with privileged access to set up a configuration on the host running the Elasticsearch pod. If running such a container is not permitted in your environment, you can add the following snippet to the `akamas.yaml` file when installing Akamas to disable this feature.
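A sketch of the snippet, assuming the Elasticsearch subchart is configured under the `elasticsearch` key and honors the standard `sysctlInitContainer.enabled` value of the official chart:

```yaml
# akamas.yaml - disable the privileged init container of the Elasticsearch subchart
# (the top-level key name is an assumption about the chart layout)
elasticsearch:
  sysctlInitContainer:
    enabled: false
```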
Running Akamas requires a cluster running Kubernetes version 1.23 or higher.
Akamas can be deployed in three different sizes depending on the number of concurrent optimization studies that will be executed. If you are unsure about which size is appropriate for your environment, we suggest you start with the small one and upgrade to bigger ones as you expand the optimization activity to more applications.
The tables below report the required resources, both requests and limits, that should be available in the cluster to use Akamas.
Small
The small tier is suited for environments that need to support up to 10 concurrent optimization studies.

Resource | Requests | Limits |
---|---|---|
CPU | 3 Cores | 6 Cores |
Memory | 16 GB | 18 GB |
Disk Space | 70 GB | 70 GB |
The cluster must provide the definition of a Storage Class so that the application installation can leverage Persistent Volume Claims to dynamically provision the volumes required to persist data.
For more information on this topic refer to Kubernetes' official documentation.
To work properly, Akamas needs to manage some resources inside the Namespace. For this reason, it is recommended to run Akamas in a dedicated Namespace.
To manage resources, Akamas uses a ServiceAccount bound to the application's pods, which must be created either manually by the cluster administrator or automatically by the provided Helm chart.
The following snippet describes the namespaced permissions required for the service account:
Networking requirements depend on how users interact with Akamas. Services can be exposed via Ingress, LoadBalancers, NodePorts, or using kubectl as a proxy. Refer to Accessing Akamas for a more detailed description of the available options.
Before starting the installation, make sure the requirements are met.
Akamas on Kubernetes is provided as a set of templates packaged in a chart archive managed by Helm.
To proceed with the installation, you need to create a file, called `akamas.yaml` in this guide, containing the mandatory configuration values required to customize your application. The following template contains the minimal set of values required to install Akamas:
You can also download the template file by running the following snippet:
This minimal configuration is enough to have Akamas up and running on your cluster, even though the endpoint will only be accessible through Kubectl's port forwarding.
The Accessing Akamas page provides some configuration examples using different types of services: edit the `akamas.yaml` file using the strategy that best suits your needs, or continue directly with the next sections and configure the endpoints at a later time.
Add the Akamas repository to the Helm client with the following command:
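A sketch of the command, assuming the repository is served at helm.akamas.io (mentioned later in this guide) and is registered locally under the name `akamas`:

```bash
# Register the Akamas chart repository and refresh the local index
helm repo add akamas https://helm.akamas.io
helm repo update
```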
If you wish to see the values that Helm will use to install Akamas and override some of them, you may execute the following command:
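For example, using the `akamas/akamas` chart reference used throughout this guide:

```bash
# Print the chart's default values; redirect the output to a file to edit them
helm show values akamas/akamas
```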
Now, with the configuration file you just created (and the new variables you added to override the defaults), you can start the installation with the following command:
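A sketch of the install command, assuming a Helm release named `akamas`; adjust the namespace to your environment:

```bash
# Install (or upgrade) the release using the configuration file created above
helm upgrade --install akamas akamas/akamas \
  --namespace <your-namespace> --create-namespace \
  -f akamas.yaml
```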
This command will create the Akamas resources within the specified namespace. You can define a different namespace by changing the argument `--namespace <your-namespace>`.
An example output of a successful installation is the following:
To monitor the application startup, run the command `kubectl get pods`. After a few minutes, the expected output should be similar to the following:
At this point, you should be able to access the Akamas UI on http://localhost:8000 and the Akamas CLI on http://localhost:8000/akapi by running Kubectl's port forwarding command:
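As in the Accessing Akamas section, a sketch assuming the `ui` service listens on port 80:

```bash
# Forward local port 8000 to the ui service of your Akamas namespace
kubectl port-forward service/ui 8000:80 --namespace <your-namespace>
```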
Mind that, before logging in, you need to configure the Akamas CLI and install a valid license.
If you haven't already, you can update your configuration file to use a different type of service to expose Akamas' endpoints. To do so, pick the configuration snippet for the service type of your choice from the Accessing Akamas page, add it to the `akamas.yaml` file, and re-run the installation command to update your Helm release.
Before starting the installation, make sure the requirements are met.
If your cluster is in an air-gapped network or is unable to reach the default repository, you need to mirror the required images on a private repository.
The procedure described here leverages your local environment to upload the images, so it requires that Docker is installed and configured to interact with the private registry.
Get in contact with Akamas Customer Services to get the latest versions of the Akamas artifacts. This will include:
- `images.tar.gz`: a tarball containing the Akamas images.
- `akamas`: the binary file of the Akamas CLI that will be used to verify the installation.
The offline installation mode requires importing the shipped Docker images into your local environment. Run the following command in the same directory where the tar file is stored:
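For example:

```bash
# Import the shipped Akamas images into the local Docker daemon
docker load -i images.tar.gz
```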
Once the import is complete, you need to re-tag and upload the images. Run the following snippet, replacing `<REGISTRY_URL>` with the actual URL of the private registry:
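A sketch of the snippet; the filter on the image names and the way the target name is derived are assumptions to adapt to the list of images actually imported:

```bash
REGISTRY_URL=<REGISTRY_URL>

# Re-tag every imported Akamas image and push it to the private registry
# (the 'akamas' filter is an assumption about the image naming)
for image in $(docker image ls --format '{{.Repository}}:{{.Tag}}' | grep akamas); do
  target="${REGISTRY_URL}/${image#*/}"
  docker tag "$image" "$target"
  docker push "$target"
done
```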
Once the upload is complete, you can proceed with the next steps.
Akamas on Kubernetes is provided as a set of templates packaged in a chart archive managed by Helm.
To proceed with the installation, you need to create a file, called `akamas.yaml` in this guide, containing the mandatory configuration values required to customize your application. The following template contains the minimal set of values required to install Akamas:
This minimal configuration is enough to have Akamas up and running on your cluster, even though the endpoint will only be accessible through Kubectl's port forwarding.
The page Accessing Akamas provides some configuration examples using different types of services: edit the `akamas.yaml` file using the strategy that best suits your needs, or continue directly with the next sections and configure the endpoints at a later time.
This section describes how to configure the authentication to your private registry. If your registry does not require any authentication, skip directly to the installation section.
To authenticate to your private registry, you must manually create the Secret required to pull the images. If the registry uses basic authentication, you can create the credentials in the namespace by running the following command:
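A sketch of the command; the secret name `registry-credentials` is a placeholder:

```bash
# Create a pull secret for a registry protected by basic authentication
kubectl create secret docker-registry registry-credentials \
  --docker-server=<REGISTRY_URL> \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace <your-namespace>
```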
Otherwise, you can leverage any credential already configured on your machine by running the following command:
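A sketch reusing the credentials already stored in your local Docker configuration; the secret name is again a placeholder:

```bash
# Create the pull secret from the Docker config file of the current user
kubectl create secret generic registry-credentials \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  --namespace <your-namespace>
```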
From a machine that can reach the endpoint, add the Akamas repository to the Helm client with the following command:
If you cannot reach helm.akamas.io from the machine where the installation will be run, pull the chart by running:
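For example:

```bash
# Download the chart archive into the current directory
helm pull akamas/akamas
```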
The command downloads the latest chart version as an archive named `akamas-<version>.tgz`. The file can be transferred to the machine where the installation will be run. Replace `akamas/akamas` with the downloaded package in the following commands.
If you wish to see and override the values that Helm will use to install Akamas, you may execute the following command:
Now, with the configuration file you just created (and the new variables you added to override the defaults), you can start the installation with the following command:
This command will create the Akamas resources within the specified namespace. You can define a different namespace by changing the argument `--namespace <your-namespace>`.
An example output of a successful installation is the following:
To monitor the application startup, run the command `kubectl get pods`. After a few minutes, the expected output should be similar to the following:
At this point, you should be able to access the Akamas UI on http://localhost:8000 and the Akamas CLI on http://localhost:8000/akapi by running Kubectl's port forwarding command:
Mind that, before logging in, you need to configure the Akamas CLI and install a valid license.
If you haven't already, you can update your configuration file to use a different type of service to expose Akamas' endpoints. To do so, pick the configuration snippet for the service type of your choice from the Accessing Akamas page, add it to the `akamas.yaml` file, and re-run the installation command to update your Helm release.
HTTPS configuration can be set in the Akamas services (UI and Kong) or in the Ingress definition.
Declare the certificate secret by adding a `tls` section to the Ingress definition:
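An illustrative sketch, assuming the Ingress is configured under `ui.ingress` as in the Accessing Akamas examples; the hostname and the secret name are placeholders:

```yaml
# akamas.yaml - reference the certificate secret from the Ingress
# (key layout, hostname and secret name are assumptions)
ui:
  ingress:
    tls:
      - hosts:
          - akamas.example.com
        secretName: akamas-tls
```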
You can apply the same configuration to the `kong` service to add a certificate to the API Gateway.
For more information regarding the TLS definition refer to the official documentation.
To add a certificate to both the UI and API Gateway you need to generate the `akamas.key` and `akamas.pem` files, and create a secret in Akamas' namespace with the following command:
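A sketch of the command; the secret name `akamas-certs` is a placeholder, use the name expected by your configuration:

```bash
# Create a TLS secret from the generated key and certificate
kubectl create secret tls akamas-certs \
  --key akamas.key --cert akamas.pem \
  --namespace <your-namespace>
```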
To complete the update, restart the deployments:
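For example, assuming the UI and API Gateway deployments are named after their services:

```bash
# Restart the deployments so they pick up the new certificate
kubectl rollout restart deployment ui kong --namespace <your-namespace>
```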
Akamas is deployed on your Kubernetes cluster through a Helm chart, and all the required images can be downloaded from the AWS ECR repository.
Two installation modes are available:
- online installation, in case the Kubernetes cluster can access the Internet.
- offline installation, in case the Kubernetes cluster does not have access to the Internet or you need to use a private image registry.