Workflow

The third step in optimizing a new application is to create a workflow to instruct Akamas on the actions required to apply a configuration to the target application.

A workflow defines the actions that must be executed to evaluate the performance of a given configuration. These actions usually depend on the application architecture, technology stack, and deployment practices which might vary between environments and organizations (e.g. Deploying a microservice application in a staging environment on Kubernetes and performing a load test might be very different than applying an update to a monolith running in production).

Akamas provides several general-purpose and specialized workflow operators that allow users to perform common actions, such as running a command on a Linux instance via SSH, as well as integrating enterprise tools such as LoadRunner to run performance tests or Spark to launch Big Data analyses. More information and usage examples are available on the Workflow Operators reference page.

If you are using GitOps practices and deployment pipelines, you are probably already familiar with most of the elements used in Akamas workflows. Workflows can also trigger existing pipelines and re-use all the automation already in place.

Workflows are not tightly coupled to a study and can be re-used across studies and systems, so you can change the optimization scope and target without re-creating a specific workflow.

Creating the workflow for Online Boutique

The structure of the workflow heavily depends on deployment practices and the kind of optimization. In our example, we are dealing with a microservice application deployed in a test environment and tested by injecting load with Locust, a popular open-source performance testing tool.

The workflow that we will create to allow Akamas to evaluate the configurations comprises the following actions:

  1. Create a deployment file from a template

  2. Apply the file via kubectl command

  3. Wait for the deployment to be ready

  4. Start the load test via locust APIs

Even though the integrations in this workflow are specific to the technology used by our test application (e.g., using the kubectl CLI to deploy it), the general structure of the workflow fits most applications subject to offline optimization in a test environment.

You can find more workflow examples for different use cases in the Optimization Guides section, and references to technology-specific operators (e.g. LoadRunner, Spark) on the Workflow Operators reference page.

Here is the YAML definition of the workflow described above.

name: Configure and Test Online Boutique
tasks:
  # 1 - Create a deployment file from a template
  - name: Configure Online Boutique
    operator: FileConfigurator
    arguments:
      source:
        hostname: mgmserver
        username: akamas
        password: *******
        path: /work/boutique/boutique.yaml.templ
      target:
        hostname: mgmserver
        username: akamas
        password: *******
        path: /work/boutique/boutique.yaml
 
  # 2 - Apply the file via the kubectl command
  - name: Apply new configuration to the Online Boutique
    operator: Executor
    arguments:
      host:
        hostname: mgmserver
        username: akamas
        password: *******
      command: kubectl apply -f /work/boutique/boutique.yaml
  
  # 3 - Wait for the deployment to be ready
  - name: Check Online Boutique is up
    operator: Executor
    arguments:
      retries: 0
      host:
        hostname: mgmserver
        username: akamas
        password: *******
      command: kubectl rollout status --timeout=3m deployment ak-adservice 
  
  # 4 - Start the load test via locust APIs
  - name: Start Locust Test
    operator: Executor
    arguments:
      host:
        hostname: mgmserver
        username: akamas
        password: *******
      command: bash /work/boutique/run-test.sh

In this workflow, we used two operators: the FileConfigurator operator, which creates a configuration file from a template by substituting in the configuration values decided by Akamas, and the Executor operator, which runs a command on a remote instance (named mgmserver in this case) via SSH.
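As an illustration of how the FileConfigurator template works, a fragment of the boutique.yaml.templ file referenced above might look like the following. The ${...} tokens are placeholders that FileConfigurator replaces with the parameter values chosen by Akamas; the component and parameter names shown here (e.g. adservice.cpu_limit) are hypothetical and depend on how you modeled your system.

```yaml
# Hypothetical fragment of /work/boutique/boutique.yaml.templ:
# FileConfigurator replaces each ${component.parameter} token with the
# value chosen by Akamas for the configuration under test.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ak-adservice
spec:
  template:
    spec:
      containers:
        - name: server
          resources:
            requests:
              cpu: ${adservice.cpu_request}      # e.g. resolved to "250m"
              memory: ${adservice.memory_request}
            limits:
              cpu: ${adservice.cpu_limit}
              memory: ${adservice.memory_limit}
```

The rendered file (boutique.yaml) is then a plain Kubernetes manifest, ready to be applied in the next step of the workflow.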

Save it to a file, for example workflow.yaml, and then issue the creation command:

akamas create workflow workflow.yaml
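The run-test.sh script invoked in step 4 of the workflow is not shown in this guide. As a sketch, such a script might trigger a headless load test through Locust's web API; the endpoint URL and the /swarm form fields (user_count, spawn_rate) are assumptions and vary with the Locust version in use. The sketch is written to a temporary path and syntax-checked so that it is self-contained.

```shell
# Hypothetical sketch of /work/boutique/run-test.sh: start a Locust run
# through its web API, let it run, then stop it. LOCUST_URL and the
# /swarm form fields are assumptions depending on your Locust setup.
cat > /tmp/run-test.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

LOCUST_URL="${LOCUST_URL:-http://localhost:8089}"  # assumed Locust web UI address

# Start the test: 50 concurrent users, spawning 5 users per second.
curl -fsS -X POST "$LOCUST_URL/swarm" \
  -d "user_count=50" \
  -d "spawn_rate=5"

# Let the test run for its planned duration, then stop it.
sleep 300
curl -fsS "$LOCUST_URL/stop"
EOF
bash -n /tmp/run-test.sh && echo "syntax OK"
```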

Here is what the workflow looks like in the UI:
