Akamas Docs
3.6
Integrating LoadRunner Enterprise

Last updated 1 month ago

The integration relies on InfluxDB acting as an external analysis server for LoadRunner Enterprise (LRE).

Integration Architecture

The following schema illustrates the components and network connections you need to configure to set up the environment:

  • a connection between the Akamas server and the LRE server (the one exposing the LRE APIs) on port 443 - this connection is used by Akamas to invoke the LRE APIs over HTTPS;

  • a bi-directional connection between the LRE server (the one exposing the LRE APIs) and InfluxDB on port 8086 - this connection is used by LRE to store analysis data into InfluxDB;

  • a connection between the Akamas server and InfluxDB on port 8086 - this connection is used by Akamas to collect the LRE analysis data from InfluxDB.
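As a quick pre-flight check, you can verify these connections from the Akamas server before proceeding. This is a minimal sketch: the hostnames are placeholders for your own endpoints, and any HTTP status code in the output proves basic reachability.

```shell
# Placeholder endpoints -- replace with your own hostnames
INFLUX_HOST="${INFLUX_HOST:-influx.example.com}"
LRE_HOST="${LRE_HOST:-lre.example.com}"

# Akamas -> InfluxDB on port 8086: the /ping endpoint returns HTTP 204 when InfluxDB is healthy
influx_status=$(curl -s -m 5 -o /dev/null -w "%{http_code}" "http://${INFLUX_HOST}:8086/ping" 2>/dev/null)
influx_status=${influx_status:-000}
echo "InfluxDB ping: HTTP ${influx_status}"

# Akamas -> LRE server on port 443 over HTTPS (-k tolerates self-signed certificates)
lre_status=$(curl -sk -m 5 -o /dev/null -w "%{http_code}" "https://${LRE_HOST}/" 2>/dev/null)
lre_status=${lre_status:-000}
echo "LRE server: HTTP ${lre_status}"
```

A status of 000 means the host could not be reached at all; check DNS and firewall rules in that case.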

Configure InfluxDB

Once you have an InfluxDB deployment up and running (either native or containerized), you can configure it by running the following commands (refer to the section matching your InfluxDB version, 1.x or 2.x).

InfluxDB version 1.x commands

# create the admin user of your InfluxDB instance (only needed on a brand-new installation)
curl -X POST -G http://localhost:8086/query --data-urlencode "q=CREATE USER admin WITH PASSWORD 'admin' WITH ALL PRIVILEGES"

# create a dedicated user for the analysis database; it will be used by Akamas to retrieve the data
curl -X POST -G http://localhost:8086/query -u admin:admin --data-urlencode "q=CREATE USER akamasinfluxadmin WITH PASSWORD 'password'"

# create the database hosting the analysis data (requires admin privileges)
curl -X POST -G http://localhost:8086/query -u admin:admin --data-urlencode "q=CREATE DATABASE LR2020SP3"

# grant the dedicated user full access to the created database (database names are case-sensitive)
curl -X POST -G http://localhost:8086/query -u admin:admin --data-urlencode "q=GRANT ALL ON LR2020SP3 TO akamasinfluxadmin"

# retention policy settings
curl -X POST -G http://localhost:8086/query -u admin:admin --data-urlencode "q=CREATE RETENTION POLICY lr_default_rp ON LR2020SP3 DURATION 1d REPLICATION 1"
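Once done, you can verify the setup with InfluxDB's SHOW meta-queries. This is a sketch assuming the same localhost instance and admin credentials used above.

```shell
# meta-queries used for verification
Q_USERS="SHOW USERS"                        # akamasinfluxadmin should appear in the result
Q_DBS="SHOW DATABASES"                      # LR2020SP3 should appear in the result
Q_RP="SHOW RETENTION POLICIES ON LR2020SP3" # lr_default_rp should appear with a 24h duration

# run each meta-query against the local instance (errors are ignored if InfluxDB is not reachable)
for q in "$Q_USERS" "$Q_DBS" "$Q_RP"; do
  curl -s -m 5 -G http://localhost:8086/query -u admin:admin --data-urlencode "q=$q" || true
done
```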

InfluxDB version 2.x commands

Since LoadRunner Enterprise (at least up to the 2024 version) expects an InfluxDB 1.x instance, you have to enable the v1 compatibility mode on your InfluxDB 2.x so that it accepts username/password and database access, instead of the token and bucket access that is the native InfluxDB 2.x approach.

The following configuration commands must be issued from the InfluxDB CLI; install it as explained at https://docs.influxdata.com/influxdb/v2/tools/influx-cli/, then run the following commands:

# create an admin user for the project and a standard bucket (database) for the project with a retention policy
influx setup -n admin -p password -o akamas -b PC_AKAMAS_PROJECT -r 180d -f

# create an all-access token for the current admin user
influx auth create --all-access

# create a dedicated user for the analysis database; it will be used by Akamas to retrieve the data
influx user create -n akamasinfluxadmin -p password -o akamas

# create the bucket (database) hosting the analysis data, with a retention policy
influx bucket create -n LRE2024 -r 1d

# retrieve the BUCKET_ID for the bucket just created
export BUCKET_ID=$(influx bucket list -n LRE2024 --hide-headers | awk '{print $1}')

# create a token that allows this specific user to write to this bucket
influx auth create -u akamasinfluxadmin --read-bucket $BUCKET_ID --write-bucket $BUCKET_ID

# now we start with the v1 compatibility mode configuration
# create a v1 username/password access to the bucket
influx v1 auth create --username akamasinfluxadmin --password password --read-bucket $BUCKET_ID --write-bucket $BUCKET_ID

# simulate a v1 database instead of a bucket (we use the name LR2020SP3 as in the 1.x case)
influx v1 dbrp create --bucket-id $BUCKET_ID --rp PC_DEFAULT_RP --db LR2020SP3
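You can then confirm that the v1 compatibility mappings are in place. The sketch below assumes the influx CLI configured as above, and degrades to a notice when the CLI is not installed or not configured.

```shell
# run an influx CLI command if available, otherwise print a notice
check() { command -v influx >/dev/null 2>&1 && "$@" || echo "influx CLI not available or not configured"; }

# list the v1 database/retention-policy mappings: LR2020SP3 mapped to bucket LRE2024 should appear
out_dbrp=$(check influx v1 dbrp list)
echo "$out_dbrp"

# list the v1 authorizations: the akamasinfluxadmin username should appear
out_auth=$(check influx v1 auth list)
echo "$out_auth"
```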

Post configuration remarks

Since Akamas starts importing the LRE analysis data as soon as the test execution ends, there is no need to retain the data for longer than one day; this is the duration set by the last command in the 1.x case and at bucket creation in the 2.x case.

Please take note of the user credentials (akamasinfluxadmin / password in the example above), as you will need them later to configure the external analysis server on LRE.

Create LRE project and domain

It is recommended to create a dedicated LRE project to store the scripts and tests that you want to run using Akamas. It is also good practice to create a dedicated domain.

This can be done by accessing the administration panel on your LRE installation, whose URL should either look like the following:

http://your.lreserver.endpoint/admin/

or, in a multitenancy-enabled environment, like:

http://your.lreserver.endpoint/admin/?tenant=your-tenant-id-here

First, navigate to the Projects menu:

then click on the Manage domains button, add a domain, and fill in the required information:

Second, click on the Add project button to add a project:

and fill in the required information, making sure to select the correct domain:

Configure LRE external analysis server

Access the administration panel of your LRE installation and navigate to the Analysis Servers menu:

and then click on the plus button to create a new Analysis Server by filling in the required information:

Make sure that the linked projects section lists the dedicated project you have created in the previous step. This step is required to let LoadRunner publish the performance test metrics to InfluxDB for the selected projects.

Notice: you can verify that the connection to InfluxDB is working correctly by clicking on the Test connection button.

Define LRE users & roles

It is recommended to reserve a dedicated user for executing performance tests that you want to run using Akamas.

To create this user please access the administration panel of your LRE installation, then navigate to the Users menu:

then click on the plus button to add a user by filling in the required information:

Please notice that:

  • this user must be associated with the project created before, and it must have the Performance tester role

  • this user does not need any special admin privileges

Identify your test ID and Set

As a final step on the LRE environment, you need to retrieve the Test Identifier (ID) and the Test Set associated with the performance tests that will be executed by Akamas.

Note: in the following it is assumed that you already have a test scenario defined in your LRE environment that Akamas will execute as part of an optimization study.

The test ID and test set can be retrieved from the LRE Loadtest panel, which you can access through a link similar to:

http://your.lreserver.endpoint/Loadtest/pcx/login

or, for a multitenancy-enabled environment:

http://your.lreserver.endpoint/Loadtest/pcx/login?tenant=your-tenant-id-here

You can retrieve the ID by selecting the Test Management menu:

and then by clicking on the test that you want to execute: the ID is displayed next to the test name:

You can also retrieve the test set from the test details page: the test set is displayed in the upper right corner of the screen:

At this point, your LoadRunner Enterprise environment is ready to be integrated with Akamas.

Creating a Telemetry Instance

First of all, check whether the telemetry provider for LoadRunnerEnterprise is installed:

Then, create a telemetry instance as follows:

provider: LoadRunnerEnterprise

config:
  address: http://influx.dev.akamas.io
  port: 8086
  username: akamasinfluxadmin
  password: password
  database: LR2020SP3

where:

  • address: the FQDN of the server hosting your InfluxDB instance

  • port: the port where InfluxDB is listening

  • username and password: the credentials of the InfluxDB user created in the previous steps

  • database: the name of the InfluxDB database created in the previous steps.
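Before creating the telemetry instance, you may want to verify that these credentials can actually query the database. The sketch below uses the endpoint from the example above and the user created in the Configure InfluxDB section; adjust the URL to your environment.

```shell
# endpoint from the example above -- replace with your own InfluxDB URL
INFLUX_URL="${INFLUX_URL:-http://influx.dev.akamas.io:8086}"

# SHOW MEASUREMENTS succeeds only if the user can read the database; the result
# stays empty until LRE publishes its first analysis data
resp=$(curl -s -m 5 -G "${INFLUX_URL}/query" -u akamasinfluxadmin:password \
  --data-urlencode "db=LR2020SP3" \
  --data-urlencode "q=SHOW MEASUREMENTS" 2>/dev/null)
msg="${resp:-no response from InfluxDB}"
echo "$msg"
```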

Leveraging the integration

The following is just an example of a simple workflow that you can use to test your LRE integration. It contains a single task, which triggers the execution of the specified performance test on LoadRunner Enterprise:

name: LRE simplest workflow
tasks:

- name: "LRE test"
  operator: "LoadRunnerEnterprise"
  arguments:
    retries: 0
    address: http://lre2020.akamas.io/
    username: johndoe
    password: password
    domain: akamasdomain
    project: akamas
    tenantID: "cf59c1a8-ad3d-4c9a-9222-edadaae7b8b9"
    testId: 183
    testSet: demotestset
    timeSlot: "30m"
    verifySSL: false

where:

  • address: the base address of your LRE installation, with the tenant and any other URL or path parameters removed

  • username and password: the credentials that you previously created in the LRE admin panel

  • domain and project: the domain and the project you previously created in the LRE admin panel

  • tenantID: the ID of the tenant your project and user belong to; if multitenancy is not enabled on your LRE environment, you can skip this parameter or set it to the default value fa128c06-5436-413d-9cfa-9f04bb738df3

  • testId: the ID of the test that will be executed by Akamas (identified in the previous steps)

  • testSet: the test set related to the test specified by testId (identified in the previous steps)

  • timeSlot: the amount of time that LRE will reserve for running your test; it must be greater than or equal to the test duration

  • verifySSL: a flag to enable or skip SSL validation when connecting to the LRE APIs; this flag is especially useful if your LRE environment exposes APIs over HTTPS with a self-signed certificate.
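Once saved to a file, the workflow can be registered with the Akamas CLI. The command below is a sketch: the file name is hypothetical, and the exact syntax is documented in the Resource management commands reference.

```shell
# save the YAML above as lre-workflow.yaml, then create the workflow;
# degrades to a notice when the akamas CLI is not installed
out=$(command -v akamas >/dev/null 2>&1 \
  && akamas create workflow lre-workflow.yaml \
  || echo "akamas CLI not available")
echo "$out"
```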

The steps above assume that you have already deployed your InfluxDB instance, either natively or in a container; refer to the InfluxDB documentation for deployment instructions.

To leverage the integration with LoadRunner Enterprise via InfluxDB, a telemetry instance needs to be created on the Akamas side, as shown in the previous sections.

Finally, a workflow needs to be created for your specific offline optimization study, leveraging the LoadRunnerEnterprise operator to trigger the execution of a performance test.