Akamas requires a cluster running Kubernetes version 1.24 or higher.
Akamas can be deployed in three different sizes, depending on the number of concurrent optimization studies to be executed. If you are unsure about which size is appropriate for your environment, we suggest starting with the small one and upgrading to bigger tiers as you expand the optimization activity to more applications.
The tables below report the resources, both requests and limits, that must be available in the cluster to run Akamas.
The resources specified on this page assume a dedicated namespace that runs only Akamas components. If your cluster has additional tools (e.g., a service mesh or a monitoring agent) that inject containers into the Akamas pods, we suggest either disabling them or increasing the sizing to account for their overhead. Likewise, if you plan to deploy other software inside the Akamas namespace and resource quotas are enabled, you should increase the sizing to account for the resources required by that software.
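As an illustration of the resource-quota caveat above, a quota sized for the small tier might look like the following sketch. The object name and the `akamas` namespace are assumptions for illustration, not Akamas defaults; the figures match the small-tier table and would need to grow to cover any additional software deployed in the same namespace.

```yaml
# Hypothetical ResourceQuota sized for the small tier.
# Name and namespace are illustrative, not Akamas defaults.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: akamas-quota   # assumed name
  namespace: akamas    # assumes a dedicated Akamas namespace
spec:
  hard:
    requests.cpu: "4"      # small-tier CPU requests
    requests.memory: 28Gi  # small-tier memory requests
    limits.cpu: "15"       # small-tier CPU limits
    limits.memory: 28Gi    # small-tier memory limits
```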
The small tier is suited for environments that need to support up to 3 concurrent optimization studies.

Resource | Requests | Limits |
---|---|---|
CPU | 4 Cores | 15 Cores |
Memory | 28 GB | 28 GB |
Disk Space | 70 GB | 70 GB |

The medium tier is suited for environments that need to support up to 50 concurrent optimization studies.

Resource | Requests | Limits |
---|---|---|
CPU | 8 Cores | 20 Cores |
Memory | 50 GB | 50 GB |
Disk Space | 100 GB | 100 GB |

The large tier is suited for environments that need to support up to 100 concurrent optimization studies. If you plan to run more concurrent studies, please contact Akamas support to plan your installation.

Resource | Requests | Limits |
---|---|---|
CPU | 10 Cores | 25 Cores |
Memory | 60 GB | 60 GB |
Disk Space | 150 GB | 150 GB |

The cluster must define a Storage Class so that the installation can leverage Persistent Volume Claims to dynamically provision the volumes required to persist data. For more information on this topic, refer to the official Kubernetes documentation.

Installing and running Akamas does not require cluster-level permissions. This is the minimal set of namespaced rules.

Networking requirements depend on how users interact with Akamas. Services can be exposed via an Ingress or accessed by using kubectl as a proxy. Refer to Accessing Akamas for a more detailed description of the available options.
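As an illustration of the dynamic provisioning described in the storage requirement above, a Persistent Volume Claim referencing a Storage Class might look like the following sketch. The claim name, namespace, Storage Class name, and size are all assumptions for illustration, not actual Akamas values.

```yaml
# Hypothetical PVC: all names and sizes are illustrative, not Akamas defaults.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data       # assumed name, for illustration only
  namespace: akamas        # assumes a dedicated Akamas namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2    # assumed Storage Class; use one defined in your cluster
  resources:
    requests:
      storage: 10Gi        # illustrative size
```

You can list the Storage Classes available in your cluster with `kubectl get storageclass`.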
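As a sketch of the Ingress option for exposing services mentioned above, a manifest might look like the following. The host name, service name, and port are assumptions for illustration, not actual Akamas values.

```yaml
# Hypothetical Ingress: host, service name, and port are illustrative only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: akamas-ingress          # assumed name
  namespace: akamas             # assumes a dedicated Akamas namespace
spec:
  rules:
    - host: akamas.example.com  # assumed host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: akamas-ui # assumed service name
                port:
                  number: 80    # assumed port
```

Alternatively, for quick access without an Ingress, `kubectl port-forward` can proxy a service to your local machine.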