The third step in optimizing a new application is to create a workflow to instruct Akamas on the actions required to apply a configuration to the target application.
A workflow defines the actions that must be executed to evaluate the performance of a given configuration. These actions usually depend on the application architecture, technology stack, and deployment practices, which may vary across environments and organizations (e.g. deploying a microservice application in a staging environment on Kubernetes and running a load test is very different from applying an update to a monolith running in production).
Akamas provides several general-purpose and specialized workflow operators that allow users to perform common actions, such as running a command on a Linux instance via SSH, as well as to integrate enterprise tools such as LoadRunner to run performance tests or Spark to launch Big Data analyses. More information and usage examples are available on the Workflow Operators reference page.
If you are using GitOps practices and deployment pipelines, you are probably already familiar with most of the elements used in Akamas workflows. Workflows can also trigger existing pipelines and re-use all the automation already in place.
Workflows are not tightly coupled to a study and can be re-used across studies and systems, so you can change the optimization scope and target without re-creating a specific workflow.
The structure of the workflow heavily depends on deployment practices and the kind of optimization. In our example, we are dealing with a microservice application deployed in a test environment, which is tested by injecting load with Locust, a popular open-source performance testing tool.
The workflow that we will create to allow Akamas to evaluate the configurations comprises the following actions:
Create a deployment file from a template
Apply the file via the kubectl command
Wait for the deployment to be ready
Start the load test via the Locust APIs
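At the command level, the last three actions roughly correspond to the sketch below; the deployment name (my-app), the Locust endpoint (locust:8089), and the load values are hypothetical and must be adapted to your environment:

```bash
# apply the deployment file generated from the template
kubectl apply -f deployment.yaml
# wait until the rollout of the new configuration has completed
kubectl rollout status deployment/my-app --timeout=300s
# start the load test through the Locust web API
# (form field names may differ depending on the Locust version)
curl -X POST http://locust:8089/swarm -d "user_count=50" -d "spawn_rate=10"
```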
Even though the integrations in this workflow are specific to the technology used by our test application (e.g. using the kubectl CLI to deploy the application), its general structure fits most applications subject to offline optimization in a test environment.
You can find more workflow examples for different use cases in the Optimization Guides section, and references to technology-specific operators (e.g. LoadRunner, Spark) on the Workflow Operators reference page.
Here is the YAML definition of the workflow described above.
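The exact definition depends on your environment; the sketch below is purely illustrative, where the workflow name, usernames, key and file paths, the deployment name, and the Locust endpoint are placeholders, and the operator arguments should be checked against the Workflow Operators reference page:

```yaml
name: apply-and-test                      # placeholder workflow name
tasks:
  # 1. create the deployment file from a template, substituting the
  #    configuration values chosen by Akamas for this experiment
  - name: Create deployment file
    operator: FileConfigurator
    arguments:
      source:
        hostname: mgmserver
        username: akamas                  # placeholder credentials
        key: /home/akamas/.ssh/id_rsa
        path: /home/akamas/deployment.yaml.templ
      target:
        hostname: mgmserver
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /home/akamas/deployment.yaml

  # 2. apply the generated file via kubectl
  - name: Apply deployment
    operator: Executor
    arguments:
      command: kubectl apply -f /home/akamas/deployment.yaml
      host:
        hostname: mgmserver
        username: akamas
        key: /home/akamas/.ssh/id_rsa

  # 3. wait for the deployment to be ready
  - name: Wait for deployment
    operator: Executor
    arguments:
      command: kubectl rollout status deployment/my-app --timeout=300s
      host:
        hostname: mgmserver
        username: akamas
        key: /home/akamas/.ssh/id_rsa

  # 4. start the load test through the Locust web API
  - name: Start load test
    operator: Executor
    arguments:
      command: curl -s -X POST http://locust:8089/swarm -d "user_count=50" -d "spawn_rate=10"
      host:
        hostname: mgmserver
        username: akamas
        key: /home/akamas/.ssh/id_rsa
```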
In this workflow, we used two operators: the FileConfigurator operator, which creates a configuration file from a template by inserting the configuration values decided by Akamas, and the Executor operator, which runs a command on a remote instance (named mgmserver in this case) via SSH.
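For reference, the template used by the FileConfigurator is typically a copy of the deployment manifest in which the values to be tuned are replaced by parameter placeholders. The fragment below is a hypothetical illustration (the component and parameter names are made up, and the exact token syntax should be checked in the FileConfigurator operator documentation):

```yaml
# deployment.yaml.templ (fragment) - Akamas replaces the ${component.parameter}
# tokens with the values of the configuration under evaluation
resources:
  requests:
    cpu: ${myservice.cpu_request}
    memory: ${myservice.memory_request}
  limits:
    cpu: ${myservice.cpu_limit}
    memory: ${myservice.memory_limit}
```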
Save the workflow definition to a file named, as an example, workflow.yaml and then issue the creation command:
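Assuming the standard Akamas CLI syntax for creating resources from a YAML definition, the command should look like the following:

```bash
# create the workflow resource from its YAML definition
akamas create workflow workflow.yaml
```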
Here is what the workflow looks like in the UI: