This section describes how to get Akamas installed.
Please make sure to read the Getting Started section before installing Akamas.
Before installing Akamas, please follow these steps:
Please follow these steps to install the Akamas Server:
Please also read the section on how to troubleshoot the installation and how to manage the Akamas Server. Finally, read the relevant sections of Integrating Akamas to integrate Akamas into your specific ecosystem.
This section lists all the connectivity settings required to operate and manage Akamas.
Internet access is required for the Akamas online installation and update procedures, and allows retrieving the most up-to-date Akamas container images from the Akamas private Amazon Elastic Container Registry (ECR).
If internet access is not available for policies or security reasons, Akamas installation and updates can be executed offline.
Internet access from the Akamas server is not mandatory but it’s strongly recommended.
The following table provides a list of the ports on the Akamas server that have to be reachable by Akamas administrators and users to properly operate the system.
In the specific case of AWS instance and customer instances sharing the same VPC/Subnet inside AWS, you should:
open all of the ports listed in the table below for all source IPs (0.0.0.0/0) in your AWS security group
open outbound rules to all traffic and then attach this AWS security group (which must reside inside a private subnet) to the Akamas machine and all customer application AWS machines
| Source | Destination | Port | Reason |
| --- | --- | --- | --- |
| Akamas admin | Akamas server | 22 | SSH |
| Akamas admin/user | Akamas server | 80, 443 | Akamas web UI access |
| Akamas admin/user | Akamas server | 8000, 8443 | Akamas API access |
Akamas is deployed as a set of containerized services running on Docker and managed via Docker Compose. The latest version of the Akamas Docker Compose file and all the images required by Docker can be downloaded from the AWS ECR repository.
Two installation modes are available:
online installation mode, in case the Akamas Server has access to the Internet - installation behind a proxy server is also supported.
offline installation mode, in case the Akamas Server does not have access to the Internet.
Akamas is deployed as a set of containerized services running on Docker and managed via Docker Compose. In online installation mode, the latest version of the Akamas Docker Compose file and all the images required by Docker can be downloaded from the AWS ECR repository.
In case the Akamas Server is behind a proxy server please also read how to setup Akamas behind a Proxy.
It is suggested to first create a directory akamas in the home directory of your user, and then run the following command to get the latest compose file:
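A minimal sketch of this step; the download URL is a placeholder, as the actual location of the compose file is provided by Akamas Customer Support:

```bash
mkdir -p ~/akamas && cd ~/akamas
# <COMPOSE_FILE_URL> is a placeholder: use the URL provided by Akamas Customer Support
curl -o docker-compose.yml "<COMPOSE_FILE_URL>"
```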
To configure Akamas, you should set the following environment variables:
AKAMAS_CUSTOMER: the customer name matching the one referenced in the Akamas license.
AKAMAS_BASE_URL: the endpoint of the Akamas APIs that will be used to interact with the CLI, typically http://<akamas server dns address>:8000
You can export the variables using the following snippet:
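A minimal sketch; the values shown are placeholders to be replaced with the customer name from your license and your actual Akamas server address:

```bash
export AKAMAS_CUSTOMER=<your-customer-name>
export AKAMAS_BASE_URL=http://<akamas server dns address>:8000
```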
It is recommended to save these exported variables in your ~/.bashrc file for convenience.
In order to login into AWS ECR and pull the most recent Akamas containers images you also need to set the AWS authentication variables to the appropriate values provided by Akamas Customer Support Services by running the following command:
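A sketch of this step, using the standard AWS CLI authentication variables (the actual values are provided by Akamas Customer Support Services):

```bash
export AWS_ACCESS_KEY_ID=<access key provided by Akamas>
export AWS_SECRET_ACCESS_KEY=<secret key provided by Akamas>
export AWS_DEFAULT_REGION=<region provided by Akamas>
```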
At this point, you can start installing Akamas server by running the following AWS CLI commands:
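A sketch of this step; <AKAMAS_ECR_REGISTRY> is a placeholder for the registry address provided with your license, while the aws ecr get-login-password / docker login flow is the standard AWS CLI v2 way to authenticate Docker against ECR:

```bash
# Authenticate Docker against the Akamas private ECR registry
aws ecr get-login-password --region $AWS_DEFAULT_REGION \
  | docker login --username AWS --password-stdin <AKAMAS_ECR_REGISTRY>

# Pull the Akamas images referenced in the compose file and start the services
docker compose pull
docker compose up -d
```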
Before installing the Akamas Server please make sure to review all the following requirements:
This page will guide you through the installation of software components that are required to get the Akamas Server installed on a machine. Please read the Akamas dependencies for a detailed list of these software components for each specific OS.
While some links to official documentation and installation resources are provided here, please make sure to refer to your internal system engineering department to ensure that your company deployment processes and best practices are correctly matched.
As a preliminary step before installing any dependency, it is strongly suggested to create a user named akamas on your machine hosting Akamas Server.
Follow the reference documentation to install docker on your system.
Docker installation guide: https://docs.docker.com/engine/install
Docker Compose is already included starting from Docker 23. To install it on previous versions of Docker, follow this installation guide: https://docs.docker.com/compose/install/
AWS CLI v2: https://docs.aws.amazon.com/cli/latest/userguide
To run docker with a non-root user, such as the akamas user, you should add it to the docker group. You can follow the guide at: https://docs.docker.com/engine/install/linux-postinstall/
As a quick check to verify that all dependencies have been correctly installed, you can run the commands sketched after this list:
Docker: for offline installations, you can check Docker with the docker ps command.
Docker Compose: Docker versions older than 23 must use the docker-compose command instead of docker compose.
AWS CLI.
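A minimal sketch of these checks, using the standard version subcommand of each tool:

```bash
# Docker
docker --version
# or, for offline installations where the registry is not reachable:
docker ps

# Docker Compose (use "docker-compose version" on Docker versions older than 23)
docker compose version

# AWS CLI
aws --version
```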
This section describes how to setup an Akamas Server behind a proxy server and to allow Docker to connect to the Akamas repository on AWS ECR.
First, create the /etc/systemd/system/docker.service.d directory if it does not already exist. Then create or update the /etc/systemd/system/docker.service.d/http-proxy.conf file with the variables listed below, taking care of replacing <PROXY> with the address and port (and credentials if needed) of your target proxy server:
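A sketch of the http-proxy.conf drop-in, following the standard systemd configuration documented by Docker (replace <PROXY> with your proxy address and port):

```ini
[Service]
Environment="HTTP_PROXY=http://<PROXY>"
Environment="HTTPS_PROXY=http://<PROXY>"
Environment="NO_PROXY=localhost,127.0.0.1"
```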
Once configured, flush the changes and restart Docker with the following commands:
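These are the standard systemd commands for this step:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```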
For more details, refer to the official documentation page: Control Docker with systemd.
To allow the Akamas services to connect to addresses outside your intranet, the Docker instance needs to be configured to forward the proxy configuration to the Akamas containers.
Update the ~/.docker/config.json file adding the following field to the JSON, taking care to replace <PROXY> with the address (and credentials if needed) of your target proxy server:
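A sketch of the proxies field, following the format documented by Docker for per-container proxy configuration:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://<PROXY>",
      "httpsProxy": "http://<PROXY>",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```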
For more details, refer to the official documentation page: Configure Docker to use a proxy server.
Set the following variables to configure your working environment, taking care to replace <PROXY> with the address (and credentials if needed) of your target proxy server:
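A minimal sketch, assuming the conventional lowercase proxy variables honored by most command-line tools:

```bash
export http_proxy=http://<PROXY>
export https_proxy=http://<PROXY>
export no_proxy=localhost,127.0.0.1
```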
Once configured, you can log into the ECR repository through the AWS CLI and start the Akamas services manually.
The following table provides the minimal hardware requirements for the virtual or physical machine used to install the Akamas server in your data center.
To run Akamas on an AWS Instance you need to create a new virtual machine based on one of the supported operating systems. You can refer to AWS documentation for step-by-step instructions on creating the instance.
As shown in the following diagram, you can create the Akamas instance in the same AWS region, Virtual Private Cloud (VPC), and private subnet as your own already existing EC2 machines and by creating/configuring a new security group that allows communication between your application instances and Akamas instance. The inbound/outbound rules of this security group must be configured as explained in the Networking Requirements section of this page.
It is recommended to use an m6a.xlarge instance with at least 70 GB of disk of type GP2 or GP3, and to select the latest LTS version of Ubuntu.
Akamas can be run in any EC2 region.
You can find the latest version supported for your preferred region here.
Before installing Akamas on an AWS Instance please make sure to meet your AWS service limits (please refer to the official AWS documentation here).
This special case is also referred to as "Akamas-in-a-box" and is covered by the akamas-in-a-box installation guide.
The following table provides a list of the supported operating systems and their versions.
On RHEL systems Akamas containers might need to be run in privileged mode depending on how Docker was installed on the system.
The following table provides a list of the required Software Packages (also referred to as Akamas dependencies) together with their versions.
The exact version of these prerequisites is listed in the following table:
Read more about how to set up Akamas dependencies.
To install and run Akamas it is recommended to create a dedicated user (usually "akamas"). The Akamas user is not required to be in the sudoers list, but should be added to the docker (or dockerroot) group so it can run docker and docker-compose commands.
Make sure that the Akamas user has read, write, and execute permissions on /tmp. If your environment does not allow writing to the whole /tmp folder, please create a folder /tmp/build and assign read and write permissions to the Akamas user on that folder.
Akamas is based on a microservices architecture where each service is deployed as a Docker container and communicates with other services via REST APIs on a dedicated machine (Akamas Server).
The following figure represents the high-level Akamas architecture.
Users can interact with Akamas via the Graphical User Interface (GUI), the Command-Line Interface (CLI), or the Application Programming Interface (API).
Both the GUI and CLI leverage HTTP/S APIs which pass through an API gateway (based on Kong), which also takes care of authenticating users by interacting with Akamas access management and routing requests to the different services.
The Akamas CLI can be invoked on either the Akamas Server itself or on a different machine (e.g. a laptop or another server) where the Akamas CLI has been installed.
Akamas data is securely stored in different databases:
time series data gathered from telemetry providers are stored in Elasticsearch;
application logs are also stored in Elasticsearch;
data related to systems, studies, workflows, and other user-provided data are stored in a Postgres database.
Notice: Postgres, Elasticsearch, and any other service included within Akamas are provided as Docker container images as part of the Akamas installation package.
The following Spring-based microservices represent Akamas core services:
System Service: holds information about metrics, parameters, and systems that are being optimized
Campaign Service: holds information about optimization studies, including configurations and experiments
Metrics Service: stores raw performance metrics (in Elasticsearch)
Analyzer Service: automates the analysis of load tests and provides related functionalities such as smart windowing
Telemetry Service: takes care of integrating different data sources by supporting multiple Telemetry Providers
Optimizer Service: combines different optimization engines to generate optimized configurations using ML techniques
Orchestrator Service: manages the execution of user-defined workflows to drive load tests
User Service: takes care of user management activities such as user creation or password changes
License Service: takes care of license management activities, optimization pack management, and study export.
Akamas also provides advanced management features like logging, self-monitoring, licensing, user management, and more.
The Akamas CLI can be accessed by simply running the akamas command.
You can verify that the CLI has been installed by running the version command, which should show an output similar to the one in the sketch below.
At any time, you can use the following command to see available commands and options.
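A minimal sketch of both checks; the exact flags (--version and --help) are assumptions based on common CLI conventions, and the version string shown is only a placeholder:

```bash
# Check that the CLI is installed and print its version
akamas --version
# Example output (placeholder): akamas, version <x.y.z>

# List available commands and options
akamas --help
```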
For the full list of Akamas commands please refer to the relevant section of the Akamas Reference guide.
Akamas APIs and UI use plain HTTP when they are first installed. To enable the use of HTTPS you will need to:
Ask your security team to provide you with a valid certificate for your server. The certificate usually consists of two files with ".key" and ".pem" extension. You will need to provide the Akamas server DNS name.
Create a folder named "certs" in the same directory as the Akamas docker-compose file;
Copy the ".key" and ".pem" files in the created "certs" folder and rename them to "akamas.key" and "akamas.pem" respectively. Make sure that the files belong to the same user and group you use to run Akamas.
Restart two Akamas services by running the following commands:
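A sketch of this step; the service names used below (the UI and the Kong-based API gateway) are assumptions, check the service names in your docker-compose.yml for the actual ones:

```bash
# Recreate the services that terminate HTTP/HTTPS traffic
# (service names are assumptions: check docker-compose.yml)
docker compose up -d --force-recreate akamas-ui kong
```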
After the containers reboot is complete you will be able to access the UI over https from your browser:
Now that your Akamas server is configured to use HTTPS you can update the Akamas CLI configuration in order to use the secure protocol.
If you have not yet installed the Akamas CLI, follow the CLI installation instructions in order to install it. If you already have the CLI available, you can run the following command:
You will be prompted to enter some input; please provide the values as follows:
You can test the connection by running:
It should return ‘OK’, meaning that Akamas has been properly configured to work over HTTPS.
By default, Akamas uses the following ports for its UI:
80 (HTTP)
443 (HTTPS)
Depending on the configuration of your environment, you may want to change the default settings: in order to do so, you’ll have to update the Akamas docker-compose file.
Inside the docker-compose.yml file, scroll down until you come across the akamas-ui service.
There you will find a specification as follows:
Update the yaml by remapping the UI ports to the desired ports of the host.
In case you were running Akamas with host networking, you are allowed to bind different ports in the container itself. In order to do so you can expand the docker-compose service by adding a couple of environment variables like this:
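A sketch of the two variants; the host-side ports, the container ports (80 and 443 are the defaults mentioned above), and the environment variable names are assumptions to illustrate the change, check your docker-compose.yml for the actual values:

```yaml
services:
  akamas-ui:
    # Standard bridge networking: remap host ports (left) to the UI container ports (right).
    # The host-side values 8080 and 9443 are only examples.
    ports:
      - "8080:80"
      - "9443:443"

    # Host networking variant: the variable names below are hypothetical,
    # they only illustrate how environment variables could remap the ports
    # bound inside the container itself.
    # environment:
    #   UI_HTTP_PORT: 8080
    #   UI_HTTPS_PORT: 9443
```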
| Operating System | Version |
| --- | --- |
| Ubuntu Linux | 18.04+ |
| CentOS | 7.6+ |
| RedHat Enterprise Linux | 7.6+ |
| Software Package | Notes |
| --- | --- |
| Docker | Akamas is deployed as a set of containerized services running on Docker. During its operation, Akamas launches different containers, so access to the Docker socket with enough permissions to run containers is required. |
| Docker Compose | Akamas containerized services are managed via Docker Compose. Docker Compose is usually already shipped with Docker starting from version 23. |
| AWS CLI | Akamas container images are published in a private Amazon Elastic Container Registry (ECR) and are automatically downloaded during the online installation procedure. AWS CLI is required only during the installation phase if the server has internet access, and can be skipped during an offline installation. |
| Software Package | Ubuntu | CentOS | RHEL |
| --- | --- | --- | --- |
| Docker | 19.03+ | 1.13+ | 1.13+ |
| Docker-compose | 2.0+ | 2.0+ | 2.0+ |
| AWS CLI | 2.0.0+ | 2.0.0+ | 2.0.0+ |
This section describes how to install an Akamas workstation
The Akamas CLI allows users to invoke commands against the Akamas dedicated machine (Akamas Server). The Akamas CLI can also be installed on a different system than the Akamas Server.
Linux and Windows operating systems are supported for installing Akamas CLI.
The Akamas CLI can be installed and configured in three simple steps:
You can also read the section Change CLI config to modify the CLI ports the Akamas Server is listening to.
The CLI is used to interact with the Akamas server. To initialize the configuration of the Akamas CLI you can run the command:
and follow the wizard to provide the required information such as the server IP.
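A sketch of this step; the exact subcommand (init config) is an assumption, check akamas --help for the actual one:

```bash
akamas init config
```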
Here is a summary of the configuration wizard options.
This configuration can be changed at any time (see how to change the CLI config).
After this step, the Akamas CLI can be used to login to the Akamas server, by issuing the following command:
and providing the credentials as requested.
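A sketch of the login step (the login subcommand is referenced elsewhere in this guide):

```bash
akamas login
```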
Logging into Akamas requires a valid license. If you have not installed your license yet refer to the page Install the Akamas license.
Logging into Akamas requires a valid Akamas license.
To install a license get in touch with Akamas Customer Service to receive:
the Akamas license file
your assigned values for the AKAMAS_CUSTOMER and AKAMAS_BASE_URL variables referenced in the license file
login credentials
Once you have this information, you can issue the following commands:
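A sketch of this step, under the assumption that the license is installed via a dedicated CLI subcommand; the subcommand name and the file name are assumptions, and the exported values are placeholders to be replaced with those provided by Akamas Customer Services:

```bash
export AKAMAS_CUSTOMER=<your-customer-name>
export AKAMAS_BASE_URL=http://<akamas server dns address>:8000

akamas login
# Hypothetical license installation subcommand: check akamas --help for the actual one
akamas install license <license-file>
```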
To get Akamas CLI installed on Linux, run the following commands:
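A minimal sketch, assuming the CLI is shipped as a single binary and using a placeholder download URL (the actual URL is provided by Akamas):

```bash
# <AKAMAS_CLI_URL> is a placeholder: use the download URL provided by Akamas
curl -o akamas "<AKAMAS_CLI_URL>"
chmod +x akamas
sudo mv akamas /usr/local/bin/
```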
You can now run the Akamas CLI by running the akamas command.
In some installations, the /usr/local/bin folder is not present in the PATH environment variable. This prevents you from using akamas without specifying the complete file location. To fix this issue you can add an entry to the PATH system environment variable or move the executable to another folder in your PATH.
To enable auto-completion on Linux systems with a bash shell (requires bash 4.4+), run the following commands:
To install the Akamas CLI on Windows run the following command from the Powershell:
You can now run the Akamas CLI by running .\akamas in the same folder.
To invoke the akamas CLI from any folder, create an akamas folder (such as C:\Program Files\akamas) and move the akamas.exe file there. Then, add an entry to the PATH system environment variable with the value C:\Program Files\akamas. Now, you can invoke the CLI from any folder by simply running the akamas command.
Akamas is deployed as a set of containerized services running on Docker and managed via Docker Compose. In offline installation mode, the latest version of the Akamas Docker Compose file and all the images required by Docker cannot be downloaded from the AWS ECR repository.
Get in contact with Akamas Customer Services to get the latest versions of the Akamas artifacts to be uploaded to a location of your choice on the dedicated Akamas Server.
Akamas installation artifacts will include:
images.tar.gz: Akamas main images
docker-compose.yml: docker-compose file for Akamas
a binary file named akamas: this is the binary file of the Akamas CLI that will be used to verify the installation.
A preliminary step in offline installation mode is to import the shipped Docker images by running the following commands in the same directory where the tar files have been stored:
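A sketch of this step using the standard docker load command; the archive name matches the artifact listed above:

```bash
docker load -i images.tar.gz
```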
Notice that this import procedure could take quite some time!
To configure Akamas, the following environment variables are required to be set:
AKAMAS_CUSTOMER: the customer name matching the one referenced in the Akamas license.
AKAMAS_BASE_URL: the endpoint of the Akamas APIs that will be used to interact with the CLI, typically http://<akamas server dns address>:8000
Environment variables creation is performed by the snippet below:
It is recommended to save these exported variables in your ~/.bashrc file for convenience.
To start Akamas you can now simply navigate into the akamas folder and run a docker-compose command as follows:
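A minimal sketch of this step (use docker-compose up -d on Docker versions older than 23):

```bash
cd ~/akamas
docker compose up -d
```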
Notice that you may get the following error:
This is a documented docker bug (see this link) that can be solved by installing the "pass" package on Ubuntu or RHEL:
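A sketch of the package installation for both distributions, using their standard package managers:

```bash
# Ubuntu
sudo apt-get install -y pass

# RHEL / CentOS (the package may require the EPEL repository)
sudo yum install -y pass
```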
This section is a collection of different topics related to how to manage the Akamas Server.
This section covers some topics on how to manage the Akamas Server:
| Resource | Requirement |
| --- | --- |
| CPU | 4 cores @ 2 GHz |
| Memory | 16 GB |
| Disk Space | 70 GB |
Run the following command to verify the correct startup and initialization of Akamas:
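A sketch of this check; the status subcommand is an assumption based on how the verification is described in this guide:

```bash
akamas status
```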
When all services have been started this command will return an "OK" message. Please notice that it might take a few minutes for Akamas to start all services.
To check that the UI is also working properly, please access the following URL:
You will see the Akamas login form:
Please notice that it is not possible to log into Akamas until a license has been installed. Read here how to Install an Akamas license.
The CLI configuration contains the information required to communicate with the Akamas server. It can be easily created and updated with a configuration wizard. This page describes the main options of the Akamas CLI and how to modify them.
The CLI, as well as the UI, interacts with the Akamas server via APIs. The apiAddress configuration contains the information required in order to communicate with the server.
The Akamas Server provides two different listeners to interact with APIs:
an HTTP listener on port 8000
an HTTPS listener on port 8443
For improved security, it is recommended to configure CLI communications with the Akamas Server over HTTPS. Notice that you need to have a valid certificate installed on your Akamas server (at least a self-signed one) in order to enable HTTPS communication between CLI and the Akamas Server.
The CLI can be configured either directly via the CLI itself or via the YAML configuration file akamasconf.
Issue the following command to change the configuration of the Akamas CLI:
and then follow the wizard to provide the required CLI configuration:
enable HTTPS communications:
enable HTTP communications:
Please notice that Verify SSL must be set to True only if you are using a valid certificate. If you are using a self-signed one, please set it to False. This mimics the behavior of accepting an invalid HTTPS certificate in your browser.
The akamasconf file
Create a file named akamasconf, to be located in the following location:
Linux: ~/.akamas/akamasconf
Windows: C:\Users\<username>\.akamas (where C: is the drive where the OS is installed)
The file location can be customized by setting the $AKAMASCONF environment variable.
Here is an example akamasconf file provided as a sample:
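A sketch of such a file; only the apiAddress and verifySsl options mentioned in this page are shown, and the exact file structure may differ:

```yaml
# ~/.akamas/akamasconf (sample limited to the options described in this page)
apiAddress: https://<akamas server dns address>:8443
verifySsl: true
```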
The SSL certificate is only required if verifySsl is set to true. In this case, the certificate must be signed by an external CA in order to be validated.
Akamas allows dumping log entries from a specific service, workspace, workflow, study, trial, and experiment, for a specific timeframe and at different log levels.
Akamas logs can be dumped via the following CLI command:
This command provides many filters which can be retrieved with the following command:
which should return
For example, to get the list of the most recent Akamas errors:
which should return something similar to:
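A sketch of these commands; the log subcommand name and its filter flags are assumptions based on the description above, check akamas --help for the actual syntax:

```bash
# Dump recent log entries (subcommand name is an assumption)
akamas log

# List the available filters
akamas log --help

# Example: most recent errors (flag name is an assumption)
akamas log --level ERROR
```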
Akamas stores all its logs into an internal Elasticsearch instance: some of these logs are reported to the user in the GUI in order to ease the monitoring of workflow executions, while other logs are only accessible via CLI and are mostly used to provide more context and information to support requests.
Access auditing can be performed by using the CLI to extract logs related to UI or API access. For instance, to extract audit logs from the last hour use the following commands:
UI Logs
API Logs
Notice: to visualize the system logs unrelated to the execution of workflows bound to workspaces, you need an account with administrative privileges.
To ease the integration with external logging systems, Akamas can be configured to store access logs into files. To enable this feature you should:
Create a logs folder next to the Akamas docker-compose.yml file
Edit the docker-compose.yml file by modifying the line FILE_LOG: "false" to FILE_LOG: "true"
If Akamas is already running, issue the following command to apply the change; otherwise, start Akamas first.
When the user interacts with the UI or the API, Akamas will report detailed access logs both in the internal database and in a file in the logs folder. To ease log rolling and management, Akamas will create a new file every day, named according to the pattern access-%{+YYYY-MM-dd}.log.
Akamas might collect anonymized usage information on running optimizations. Collection and tracking are disabled by default and can be manually enabled.
External tracking is managed through the following environment variables:
AKAMAS_TRACKER_URL: the target URL for all tracking info.
AKAMAS_TRACKING_OPT_OUT: when set to 1, disables anonymous data collection.
Tracking for a running instance can be disabled by executing this simple command in the folder where the Akamas compose file is located:
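A sketch of this step, grounded on the AKAMAS_TRACKING_OPT_OUT variable described above; recreating the services via Docker Compose makes them pick up the new value:

```bash
export AKAMAS_TRACKING_OPT_OUT=1
docker compose up -d
```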
As usual with environment variables, it is strongly suggested to add the export of the desired value to your ~/.bashrc file to ensure persistence.
This section describes some of the most common issues found during the Akamas installation.
Notice: this distro features a known issue since the Docker default execution group is named dockerroot instead of docker. To make Docker work, edit (or create) /etc/docker/daemon.json to include the following fragment:
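A sketch of the daemon.json fragment, following the standard Docker daemon option for changing the socket group:

```json
{
  "group": "dockerroot"
}
```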
After editing or creating the file, please restart Docker and then check the group permission of the Docker socket (/var/run/docker.sock), which should show dockerroot as a group:
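A sketch of this check using standard commands:

```bash
sudo systemctl restart docker
ls -l /var/run/docker.sock
# srw-rw---- 1 root dockerroot 0 ... /var/run/docker.sock
```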
Then, add the newly created akamas user to the dockerroot group so that it can run docker containers:
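Using the standard usermod command:

```bash
sudo usermod -aG dockerroot akamas
```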
and check that the akamas user has been correctly added to the dockerroot group by running:
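For example, with the standard id command:

```bash
id akamas
# uid=...(akamas) gid=...(akamas) groups=...(akamas),...(dockerroot)
```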
In case of issues in logging in through AWS CLI, when executing the following command:
Please check that:
Environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION are correctly set
AWS CLI version is 2.0+
Please notice that the very first time Akamas is started, up to 30 minutes might be required to initialize the environment.
In case the issue persists you can run the following command to identify which service is not able to start up correctly
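A sketch of this check using standard Docker Compose commands, run from the folder containing the Akamas compose file:

```bash
# List services and their state
docker compose ps

# Inspect the logs of a suspect service
docker compose logs --tail 100 <service-name>
```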
In some systems, the Docker socket, usually located at /var/run/docker.sock, cannot be accessed from within a container. This causes Akamas to signal this behavior by reporting an Access Denied error in the license service logs.
To overcome this limitation, edit the docker-compose.yaml file adding the line privileged: true to the following services:
License
Optimizer
Telemetry
Airflow
The following is a sample configuration where this change is applied to the license service:
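A sketch of the change; the service name and image reference are placeholders, and the only point of the example is the added privileged line:

```yaml
services:
  license:                      # service name as found in your docker-compose.yaml
    image: <license service image>
    privileged: true            # line added to work around the Docker socket restriction
```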
Finally, you can issue the following command to apply these changes
You can easily inspect which value of the AKAMAS_CUSTOMER variable has been used when starting Akamas by running the following command on the Akamas server:
If you find out that the value is not the one you expect you can change it by running the following command on the Akamas server:
Once Akamas is up and running you can re-install your license.
Akamas patches and upgrades need to be installed by following the specific instructions specified in the package provided. In case of new releases, it is recommended to read the related Release Notes. Under normal circumstances, this usually requires the user to update the docker-compose configuration, as described in the next section.
When using docker compose to install Akamas, there's a folder usually named akamas in the user home folder that contains a docker-compose.yml file. This is a YAML text file that contains a list of docker services with the URLs/versions pointing to the ECR repo hosting all docker images needed to launch Akamas.
Here’s an excerpt of such a docker-compose.yml file (this example contains 3 services only):
The relevant lines that usually have to be patched during an upgrade are the lines with key "image" like:
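A sketch of such an excerpt with three services; the registry address, service names, and image names are placeholders, while the version tags (1.7.0 and 2.3.0) match those referenced below:

```yaml
services:
  service-a:                                     # placeholder service name
    image: <ECR registry>/<service-a image>:1.7.0
  service-b:
    image: <ECR registry>/<service-b image>:1.7.0
  service-c:
    image: <ECR registry>/<service-c image>:2.3.0
```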
In order to update to a new version, you should replace the versions (1.7.0 or 2.3.0 in this example) after the colon with the new versions (ask your Akamas support for the correct service versions for a specific Akamas release), and then restart Akamas with the following console commands. First, log in to the Akamas CLI with:
and type username and password as in the example below
Now make sure you have the following AWS variables with the proper value in your Linux user environment:
Then log in to AWS with the following command:
Then pull all the new ECR images for the service versions you just changed (this should be done from within the same folder where the docker-compose.yml file resides, usually $HOME/akamas/) with the following command:
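Using the standard Docker Compose command:

```bash
docker compose pull
```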
It should return an output like the following:
Finally, relaunch all services with:
(usage example below)
Wait for a few minutes and check the Akamas services are back up by running the command:
The expected output should be like the following (repeat the command after a minute or two if the last line is not "OK" as expected):
We recommend using the for a smoother experience.
When installing Akamas it's mandatory to export the AKAMAS_CUSTOMER variable as illustrated in the installation instructions. This variable must match the one provided by Akamas representatives when issuing a license. If the variable is not properly exported, license installation will fail with an error message indicating that the name of the customer installation does not match the one provided in the license.
For any other issues please contact Akamas Customer Services.
The process of backing up an Akamas server can be divided into two parts: system backup and studies backup. Backups can be performed in any way you see fit: they're just regular files, so you can use any backup tool.
System services are hosted on the AWS ECR repo, so the only thing that fully defines a working Akamas application is the docker-compose.yml file. Performing a backup of the Akamas application is as simple as copying this single file to your backup location. You may schedule any script that performs this weekly or at any frequency you see fit.
You may list all existing Akamas studies via the Akamas CLI command:
Then you can export all existing studies one by one via the CLI command
where UUID is the UUID of a single study. This command exports the study into a single archive file (tar.gz). These archive files can be backed up to your favorite backup folder.
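A sketch of both commands; the subcommand names are assumptions based on the description above, check akamas --help for the actual syntax:

```bash
# List all existing studies and their UUIDs (subcommand name is an assumption)
akamas list studies

# Export a single study into a tar.gz archive (subcommand name is an assumption)
akamas export study <UUID>
```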
Akamas server recovery involves restoring the system backup, restarting the Akamas services, and then re-importing the studies.
To restore the system you must recover the original docker-compose.yml and then launch the command
from the folder where you placed this YAML file and then wait for the system to come up, by checking it with the command
All studies can be re-imported singularly with the CLI command (referring to the correct pathname of the archive):