Resource management commands


This page describes all the commands that allow Akamas resources to be managed, together with their options (see also the common options available for all commands).

| Command | Description |
| --- | --- |
| build | Build a resource from a file or directory |
| create | Create a resource from a file |
| delete | Delete a resource |
| list | List a set of resources |
| describe | Describe a resource |
| update | Update a resource |
| install | Install a resource from a file |
| uninstall | Uninstall a resource |
| start | Start a study |
| stop | Stop a study |
| export | Export a study |
| import | Import a study |

General information

Common options

The following table describes the common options available for all commands:

| Option | Short option | Type | Description |
| --- | --- | --- | --- |
| --debug | -d | Flag | Print detailed information in case of errors |
| --workspace | -w | String | Overrides the workspace defined in the configuration file when interacting with resources such as systems, workflows, and studies |
| --help | | Flag | Print the command-line help |
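For example, to run a command against a workspace other than the one configured as default (the workspace name here is hypothetical):

akamas list studies --workspace my-workspace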

Akamas aliases

The Akamas CLI allows using a set of aliases or shortcuts for many resources.

Any resource can be specified using either the singular or plural form. Furthermore, the shortcuts listed below are available:

   component            [comp, co, components, cmp]
   system               [systems, sys, sy]
   component-type       [ctype, comp-type, ct, component-types]
   optimization-pack    [optimization-packs, op, opt-pack, opack]
   metric               [metr, metrics, me]
   parameter            [parameters, pa, param, par]
   study                [st, studies, sty]
   workflow             [wf, wkfl, workflows, wo]
   telemetry-instance   [ti, tel-instance, telemetry-instances, tel-inst]
   telemetry-provider   [telemetry-providers, tel-prov, tp, tel-provider]
   kpi                  [kpis]
   workspace            [workspaces, ws]
   trial                [tr, trials]
   user                 [us, users]
   license              [licenses, lic]
   experiment           [experiments, exp]
   log                  [logs]
   step                 [steps]

You can print the list of available aliases with the following command:

akamas list alias
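Thanks to these aliases, the following commands are, for instance, equivalent:

akamas list studies
akamas list st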

Build command

This command builds either a new optimization pack or a new scaffolding hierarchy.

# build optimization pack
akamas build optimization-pack <folder-with-resource-files>

# build scaffolding
akamas build scaffold <folder-with-resource-files>

Build optimization pack

In this case, you supply a folder with a specific hierarchy: the required sub-folders are metrics, component-types, and parameters, each containing a set of YAML resource files describing the supported resources. The command akamas build optimization-pack FOLDER_NAME then creates a single JSON file with the whole optimization pack content inside it.
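As a sketch, the input folder could be laid out as follows (folder and file names are hypothetical):

my-optimization-pack/
├── component-types/
│   └── my-component-type.yaml
├── metrics/
│   └── metrics.yaml
└── parameters/
    └── parameters.yaml

# build the pack into a single JSON file
akamas build optimization-pack my-optimization-pack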

Build scaffolding

In this case, you supply two folders: one for variables, named variables, and one for templates, named templates. The variables folder must contain one YAML file for each desired output set, while the templates folder should hold all the generic templates, which can reference the variables defined in the variables files.

This command is available since version 2.8.0 of the CLI.

For example, the variables folder could contain two files named test.yaml and prod.yaml with the following contents:

test.yaml
study:
  name: my study

k8s:
  namespace: online-boutique
  deployment: adservice
  container: server

k8s_deployment: test

prod.yaml
study:
  name: Production study

k8s:
  namespace: online-boutique-prod
  deployment: adservice-prod
  container: server-prod

k8s_deployment: prod

Then suppose that the templates folder contains some YAML files, one of which is the following file named template-study.yaml:

template-study.yaml
kind: study

name: {{ study.name }}
system: {{ k8s.deployment }}
workflow: {{ k8s.deployment }}

goal:
  name: Cost
  objective: minimize
  function:
    formula: (({{ k8s.container }}.container_cpu_limit)/1000)*29 + (((({{ k8s.container }}.container_memory_limit)/1024)/1024)/1024)*3
When launching the command akamas build scaffold SCAFFOLDING_DIR_NAME/, a new folder named output is created inside SCAFFOLDING_DIR_NAME, containing two sub-folders test and prod (one per variables file). Each sub-folder holds the templates rendered with the values set in the corresponding variables file, which makes this command an easy way to create entities in bulk.
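Assuming the files above, the resulting hierarchy would look roughly like this sketch (rendered file contents omitted):

SCAFFOLDING_DIR_NAME/
├── variables/
│   ├── test.yaml
│   └── prod.yaml
├── templates/
│   └── template-study.yaml
└── output/
    ├── test/
    │   └── template-study.yaml   # rendered with the values from test.yaml
    └── prod/
        └── template-study.yaml   # rendered with the values from prod.yaml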

Create command

Create the Akamas resource described in the provided YAML file.

akamas create <resource-type|-f> <resource-file> [<parent-resource-id|parent-resource-name>]

Create by file/folder

You can also omit the resource type from the create command and use the -f flag instead to create most resources from a YAML file in a single command. The supported resources are component, system, optimization-pack, study, workflow, telemetry-instance, and telemetry-provider. To use this feature, add the kind key inside each YAML file; the system key must also be added when the resource needs to be attached to a system (this applies to telemetry instances and system components).

For example, to create a new telemetry instance, you should add the following to your YAML file:

kind: telemetry-instance
system: SYSTEM_NAME

Then you can use the akamas create -f <filename.yaml> command instead of akamas create telemetry-instance <filename.yaml> SYSTEM_NAME.

Similarly, to create a new telemetry-provider (which does not need the system attribute), you just need to specify the kind in your YAML file:

kind: telemetry-provider

This also works for optimization packs. For the standard optimization packs provided by Akamas, you need to write a YAML file such as:

kind: optimization-pack
name: OPTIMIZATION_PACK_NAME

If you want to install a custom optimization pack, you can also supply a JSON file. In this case, there is no need to specify the kind attribute.
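For example, a custom pack previously packaged with akamas build optimization-pack could be created as follows (the file name is hypothetical):

akamas create -f my-custom-pack.json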

Finally, if you supply a folder to the command akamas create -f, it will process all the files inside the folder and create all the requested resources. You can, for example, use one of the output folders created by the command akamas build scaffold. Let's assume we have a folder named scaffold that contains the following files:

component.yaml
opt_pack.yaml
provider.yaml
study.yaml
system.yaml
telemetry_instance.yaml
workflow.yaml

The following command will process all the files above and, if they are correct, create all the resources they describe:

akamas create -f scaffold/

Delete command

Delete an Akamas resource, identified by UUID or name.

akamas delete [options] <resource-type|-f> <resource-id|resource-name> [<parent-resource-id|parent-resource-name>]

with the following options:

| Option | Short option | Type | Description |
| --- | --- | --- | --- |
| --force | -f | Flag | Force the deletion of the resource(s) |
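For example, to force the deletion of a workflow by name (the name is hypothetical):

akamas delete --force workflow my-workflow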

Delete entity command

All resources created with the command akamas create -f <folder> can also be deleted with the opposite command akamas delete -f <folder> (see the section Create by file/folder for the supported resources and the additional required fields). The only difference is that akamas delete -f accepts an additional --complete flag: when supplied, all supported objects are deleted, including optimization packs and telemetry providers; when it is missing, optimization packs and telemetry providers are not deleted.
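For instance, to delete every resource previously created from the scaffold folder of the example above, including optimization packs and telemetry providers:

akamas delete -f scaffold/ --complete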

List command

List the resources of the selected type with their id, name, and description; additional resource-specific fields may also be shown.

akamas list [flags] <resource-type> <resource-id|resource-name> [<parent-resource-id|parent-resource-name>]

with the following options:

| Option | Short option | Type | Values | Default | Description |
| --- | --- | --- | --- | --- | --- |
| --no-pagination | -no-pag | Flag | | | Show all resources without pagination |
| --use-seconds | -u-s | Flag | | | Output durations in seconds |
| --sort-asc, --sort-desc | -s-asc, -s-desc | Flag | | | Sort items by creation time (ascending or descending) |
| --output | -o | Choice | table, json, yaml | table | Switch the output to table (default), json, or yaml |
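For example, to list all studies as JSON without pagination:

akamas list studies -o json --no-pagination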

Describe command

Describe an Akamas resource with all its fields.

akamas describe [flags] <resource-type> <resource-id|resource-name> [<parent-resource-id|parent-resource-name>]

with the following options:

| Option | Short option | Type | Values | Default | Description |
| --- | --- | --- | --- | --- | --- |
| --output | -o | Choice | table, json, yaml | table | Switch the output to table (default), json, or yaml |

Notice that this command does not support the resource type System.
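For example, to describe a study by name in YAML format (the name is hypothetical):

akamas describe study my-study -o yaml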

Update command

Update an Akamas resource, identified by UUID or name.

akamas update <resource-type> <resource-id|resource-name>

with the following options:

| Option | Short option | Type | Values | Default | Description |
| --- | --- | --- | --- | --- | --- |
| --output | -o | Choice | table, json, yaml | table | Switch the output to table (default), json, or yaml |

Update experiment command

Update an experiment, identified by its study (UUID or name) and by the experiment ID.

akamas update experiment <study-id|study-name> <experiment-id>

with the following options:

| Option | Type | Values | Description |
| --- | --- | --- | --- |
| --approve-configuration | Flag | | Approve a waiting experiment |
| --parameter | String | List of key-value pairs | Update the experiment's configuration with the values provided in the key-value pairs |
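For example, to approve the waiting experiment number 3 of a study (the study name and experiment ID are hypothetical):

akamas update experiment my-study 3 --approve-configuration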

Install command

Install a License or an Optimization Pack.

akamas install <resource-type> <file>

with the following options:

| Option | Short option | Type | Description |
| --- | --- | --- | --- |
| --force | -f | Flag | Force the installation of the resource |
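For example, to install a custom optimization pack from a JSON file (the file name is hypothetical):

akamas install optimization-pack my-custom-pack.json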

Uninstall command

Uninstall a License or an Optimization Pack.

akamas uninstall <resource-type> <id>

with the following options:

| Option | Short option | Type | Description |
| --- | --- | --- | --- |
| --force | -f | Flag | Force the uninstall of the resource |
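For example, to remove a previously installed pack (the name is hypothetical):

akamas uninstall optimization-pack my-custom-pack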

Start command

Start the execution of a Study.

akamas start study <id|name>

Stop command

Stop the execution of a Study. Once stopped, the execution cannot be resumed.

akamas stop study <id|name>

Export command

To export a study, you can reference it by name or by UUID on the command line.

An optional filename can be specified, with a relative or absolute path:

akamas export study <UUID>|"<NAME>" [FILENAME]

The exported information will be saved in tar.gz format.
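For instance, to export a study by name into a specific archive (the names are hypothetical):

akamas export study "My study" my-study.tar.gz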

The following entities are exported:

  • The Study

  • The Steps of the Study

  • The Experiments of the Study

  • The Trials of the Study

  • The Workflow to which the Study refers

  • The Timeseries collected during the study run

  • The System to which the Study refers

  • The Component related to the Study's System

  • The ComponentType of each Component

  • The Metrics definitions of each ComponentType

  • The Parameters definitions of each ComponentType

Notice: this operation can require a long time, depending on the quantity of data to be collected. During this time, the CLI will wait for Akamas to send the exported package. Do not interrupt the CLI during this phase, otherwise the process will need to restart from the beginning.

Import command

Notice: please make sure that you have installed the latest versions of the optimization packs before starting the import: this way, the import procedure will bind the studies to the latest optimization pack versions (i.e., the installed ones) instead of importing the (possibly) older ones from the source system.

Use the following command to import a study into an existing Akamas instance:

akamas import study FILENAME

Where FILENAME refers to the file of a previously exported study.
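For instance, reusing the archive produced in the export example above (the file name is hypothetical):

akamas import study my-study.tar.gz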

When imported, the following entities will have a new UUID:

  • Study

  • Workflow

  • System

  • Component

  • ComponentType

  • Metrics

  • Parameters

If a resource being imported has the same name as an existing one, the existing entity will not be deleted: the existing entity (with its UUID) will be used instead of the imported one.

All steps, experiments, and trials will maintain the same id and, therefore, the same execution order as the original exported study.

Notice: this operation can require a long time. If the CLI shows a timeout error or if the operation is interrupted, the import will continue on the Akamas server.
