Akamas Docs
3.6
Setup Locust telemetry via CSV

Locust (https://locust.io/) is a popular Python-based load-testing tool. If you use Locust to run your load tests, you can follow these guidelines to import its metrics (throughput, errors, and response time) using the CSV file telemetry provider.

Export test results to CSV

Locust can export the results of a test in a variety of formats, including CSV files.

To generate CSV files from Locust, add the --csv results/results argument to the locust command line used to invoke the test, as in this example:

locust --headless --users 2 -t 3m --spawn-rate 1 -H http://my-site.io --csv results/results -f test.py

This makes Locust generate a set of CSV files in the results folder; we are interested in the file named results_stats_history.csv, which contains time series of the core performance metrics.
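For reference, a simplified excerpt of results_stats_history.csv might look like the following (only a few columns are shown; the real file contains additional percentile and aggregate columns, and the Timestamp column holds Unix epoch seconds):

```
Timestamp,User Count,Requests/s,Failures/s,50%,90%
1700000000,2,12.5,0.0,120,210
1700000030,2,13.1,0.0,118,205
```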

We also suggest adding the following lines at the beginning of your locustfile to increase the sampling interval reported in the CSV from the default of 1 second to 30 seconds, as described in the Locust documentation.

import locust.stats
locust.stats.CSV_STATS_INTERVAL_SEC = 30

Preprocess the CSV file

To import the CSV into Akamas we still need to do a bit of pre-processing to:

  • Convert the timestamp to a more friendly format

  • Add a column with the name of the Akamas component

This can be done by running the following script. Make sure to replace application on line 10 with the name of your Web Application component.

#!/bin/bash
cd "$(dirname "$0")"
test_csv=results/results_stats_history.csv

echo 'Formatting locust test'

tr -d '\r' < $test_csv > temp && mv temp $test_csv
sed -i '/,N\/A/d' $test_csv                     # remove lines without metrics
sed -i '1s/$/,COMPONENT/' $test_csv             # add component header
sed -i '2,$s/$/,application/' $test_csv         # add component value
awk -F, 'NR>1 { $1=strftime("%Y-%m-%d %H:%M:%S", $1); print } NR==1 { print }' OFS=, $test_csv > temp && mv temp $test_csv # format timestamp

echo 'Locust test formatted'
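To see what the script does, you can apply the same sed transformations to a tiny two-line sample (the column layout here is a simplified assumption, not the full Locust header) and convert one epoch value with GNU date, which is what the awk line does for every data row. Note that the awk conversion relies on strftime, which requires gawk.

```shell
#!/bin/bash
# Tiny sample with an epoch timestamp (simplified columns, illustrative only)
printf 'Timestamp,Requests/s\n1700000000,12.5\n' > sample.csv

# Add the component header and value, as in the script above
sed -i '1s/$/,COMPONENT/' sample.csv
sed -i '2,$s/$/,application/' sample.csv

# The awk line converts the epoch in column 1 to a readable date; the same
# conversion for a single value, using GNU date:
date -u -d @1700000000 '+%Y-%m-%d %H:%M:%S'    # → 2023-11-14 22:13:20
```

After these steps the sample header ends with ,COMPONENT and each data row ends with ,application, matching what the telemetry instance below expects.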

You can add this script as an operator in the Akamas workflow, so that it gets executed at the end of every test run, or integrate it into the script that launches your Locust test.

Setup the Telemetry instance

Now you can create a telemetry instance such as the following one to import the metrics.

Save this snippet in a YAML file, editing the following fields:

  • Host, username, and authentication used to connect to the instance hosting the CSV file (lines 7-11)

  • remoteFilePattern, with the path of the CSV file to load on that instance

kind: telemetry-instance
system: system
name: csv
provider: CSV File                   # this is an instance of the CSV provider
config:
  logLevel: DETAILED                 # the level of logging
  address: toolbox                   # the address of the host with the CSV files
  port: 22                           # the port used to connect
  authType: key                      # the authentication method
  username: akamas                   # the username used to connect
  auth: ./toolbox.key                # the authentication credential
  protocol: scp                      # the protocol used to retrieve the file
  fieldSeparator: ","                # the character used as field separator in the CSV files
  remoteFilePattern: /work/results/results_stats_history.csv    # the path of the CSV files to import
  componentColumn: COMPONENT         # the header of the column with component names
  timestampColumn: Timestamp         # the header of the column with the timestamp
  timestampFormat: yyyy-MM-dd HH:mm:ss    # the format of the timestamp
metrics:
  - metric: transactions_throughput
    datasourceMetric: Requests/s
  - metric: transactions_response_time_p90
    datasourceMetric: 90%
  - metric: transactions_response_time
    datasourceMetric: 50%
  - metric: transactions_error_throughput
    datasourceMetric: Failures/s

Explore the results

Now you can use the imported metrics in your study goal and constraints and explore them from the UI.

Appendix

Here you can find a collection of sample artifacts that can be used to set up a workflow that runs the test and prepares the CSV file, using the toolbox as the target host.

locust.zip (2 KB archive)