
CSV provider

The CSV provider collects metrics from CSV files and makes them available to Akamas. It offers a very versatile way to integrate custom data sources.

Prerequisites

This section provides the minimum requirements that you should meet before using the CSV File telemetry provider.

Network requirements

The following requirements should be met to enable the provider to gather CSV files from remote hosts:

  • Port 22 (or a custom one) should be open from Akamas installation to the host where the files reside.

  • The host where the files reside should support SCP or SFTP protocols.

Permissions

  • Read access to the CSV files target of the integration

Akamas supported version

  • Versions < 2.0.0 are compatible with Akamas until version 1.8.0

  • Versions >= 2.0.0 are compatible with Akamas from version 1.9.0

Supported component types

The CSV File provider is generic and allows integration with any data source, therefore it does not come with support for a specific component type.

Setup the data source

To operate properly, the CSV file provider expects the presence of four fields in each processed CSV file:

  • A timestamp field used to identify the point in time a certain sample refers to.

  • A component field used to identify the Akamas entity.

  • A metric field used to identify the name of the metric.

  • A value field used to store the actual value of the metric.

These fields can have custom names in the CSV file; you can specify them in the provider configuration.

This page describes how to get this telemetry provider installed. Once installed, this provider is shared with all users of your Akamas installation and can be used to monitor many different systems, by configuring appropriate telemetry provider instances.

Integrating Telemetry Providers

Akamas supports the integration with virtually any telemetry and observability tool.

Supported Telemetry Providers

The following table describes the supported Telemetry Providers, which are created automatically at installation time.

Install CSV provider

To install the CSV File provider, create a YAML file (called provider.yml in this example) with the specification of the provider:

# CSV File Telemetry Provider
name: CSV File
description: Telemetry Provider that enables the import of metrics from a remote CSV file
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/csv-file-provider:3.1.4

Then you can install the provider with the Akamas CLI:

akamas install telemetry-provider provider.yml

| Telemetry Provider | Description |
| --- | --- |
| CSV provider | collects metrics from CSV files |
| Dynatrace | collects metrics from Dynatrace |
| Prometheus | collects metrics from Prometheus |
| Spark History Server | collects metrics from Spark History Server |
| NeoloadWeb | collects metrics from Tricentis NeoLoad Web |
| Load Runner Professional | collects metrics from Micro Focus LoadRunner Professional |
| Load Runner Enterprise | collects metrics from Micro Focus LoadRunner Enterprise |
| AWS | collects price metrics for Amazon Elastic Compute Cloud (EC2) from Amazon's own APIs |

Notice that Telemetry Providers are shared across all the workspaces within the same Akamas installation, and only users with administrative privileges can manage them.

Install NeoLoadWeb telemetry provider

To install the NeoLoad Web provider, create a YAML file (called provider.yml in this example) with the definition of the provider:

# NeoLoad Web Telemetry Provider
name: NeoLoadWeb
description: Telemetry Provider that enables the import of metrics from NeoLoad Web instances
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/neoload-web-provider:3.1.4

Then you can install the provider using the Akamas CLI:

akamas install telemetry-provider provider.yml

The installed provider is shared with all users of your Akamas installation and can monitor many different systems, by configuring appropriate telemetry provider instances.

Install AWS provider

To install the AWS provider, create a YAML file (called provider.yml in this example) with the specification of the provider:

# AWS Telemetry Provider
name: AWSProvider
description: Imports price metrics from AWS.
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/aws-provider:3.1.4

Then you can install the provider with the Akamas CLI:

akamas install telemetry-provider provider.yml

The installed provider is shared with all users of your Akamas installation and can monitor many different systems, by configuring appropriate telemetry provider instances.

Install Prometheus provider

To install the Prometheus provider, create a YAML file (provider.yml in this example) with the definition of the provider:

name: Prometheus
description: Telemetry Provider that enables the import of metrics from Prometheus
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/prometheus-provider:3.4.4

Then you can install the provider using the Akamas CLI:

akamas install telemetry-provider provider.yml

The installed provider is shared with all users of your Akamas installation and can monitor many different systems, by configuring appropriate telemetry provider instances.

Install LoadRunner Professional provider

To install the LoadRunner provider, create a YAML file (called provider.yml in this example) with the definition of the provider:

# LoadRunner Telemetry Provider
name: LoadRunner
description: Telemetry Provider that enables the import of metrics from LoadRunner installations
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/loadrunner-provider:3.1.4

Then you can install the provider using the Akamas CLI:

akamas install telemetry-provider provider.yml

The installed provider is shared with all users of your Akamas installation and can monitor many different systems, by configuring appropriate telemetry provider instances.

Install LoadRunner Enterprise provider

To install the LoadRunnerEnterprise provider, create a YAML file (called provider.yml in this example) with the definition of the provider:

# LoadRunnerEnterprise Telemetry Provider
name: LoadRunnerEnterprise
description: Telemetry Provider that enables the import of metrics from LoadRunner Enterprise through InfluxDB
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/lre-provider:3.1.4

Then you can install the provider using the Akamas CLI:

akamas install telemetry-provider provider.yml

The installed provider is shared with all users of your Akamas installation and can monitor many different systems, by configuring appropriate telemetry provider instances.

Install Dynatrace provider

Install the Telemetry Provider

Skip this part if the Telemetry Provider is already installed.

To install the Dynatrace provider, create a YAML file (called provider.yml in this example) with the definition of the provider:

# Dynatrace Telemetry Provider
name: Dynatrace
description: Telemetry Provider that enables the import of metrics from Dynatrace installations
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/dynatrace-provider:3.3.4

Then you can install the provider using the Akamas CLI:

akamas install telemetry-provider provider.yml

Import Key Requests

By default, only requests at the service level are imported by the telemetry provider.

To import specific key requests you can follow these steps.

Currently only average response time, throughput, and error rate metrics are available for key requests.

Component Creation

Create a new component of type Web Application for each key request you want to import. This allows tracking response time, throughput, and error rates separately.

You can use the following yaml file as an example and customize it to suit your needs.

In order to instruct Akamas to import a specific key request, you just need to change the id field of the YAML above to the one that matches your key request on Dynatrace.

To obtain that ID, open the analysis page for the request as in the example below, take note of the URL of the page, and look for the SERVICE_METHOD keyword. The id is the one starting with SERVICE_METHOD and ending before the character %14.

Considering the example below, the id is SERVICE_METHOD-D4BCC949D5DD656A
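The extraction rule described above can be sketched in a few lines of Python. The URL below is a hypothetical example of an analysis-page URL, not a real Dynatrace address:

```python
def extract_key_request_id(url: str) -> str:
    """Return the substring starting at SERVICE_METHOD and ending before '%14'."""
    start = url.index("SERVICE_METHOD")
    end = url.index("%14", start)
    return url[start:end]

# hypothetical analysis-page URL fragment containing the encoded id
url = "https://example.dynatrace.com/#servicemethods;id=SERVICE_METHOD-D4BCC949D5DD656A%14..."
print(extract_key_request_id(url))  # SERVICE_METHOD-D4BCC949D5DD656A
```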

Telemetry instance setup

Create a telemetry instance for your system using the YAML specified below as an example, and modify it to provide your Dynatrace account and credentials. This will instruct Akamas to use key request metrics instead of service metrics.

NeoLoadWeb provider

The NeoLoad Web provider collects metrics from a NeoLoad Web instance and makes them available to Akamas.

Prerequisites

This section provides the minimum requirements that you should meet before using the NeoLoad Web telemetry provider.

Supported versions

  • NeoLoad Web SaaS or managed version 7.1 or later.

Network requirements

  • The NeoLoad Web API must be reachable at a provided address and port (by default https://neoload-api.saas.neotys.com).

Permissions

  • NeoLoad Web API access token.

Akamas supported version

  • Versions < 2.0.0 are compatible with Akamas until version 1.8.0

  • Versions >= 2.0.0 are compatible with Akamas from version 1.9.0

Supported component types

  • Web Application

You can check the NeoLoadWeb provider metrics mapping page to see how component-types metrics are extracted by this provider.

Workflow requirements

This section lists the workflow operators this provider depends on:

  • NeoLoadWeb operator

Components configuration

Akamas reasons in terms of a system to be optimized and in terms of parameters and metrics of components of that system. To understand which metrics collected from NeoloadWeb should refer to which component, the NeoloadWeb provider looks up the property neoloadweb in the components of a system:

Spark History Server provider

The Spark History Server provider collects metrics from a Spark History Server instance and makes them available to Akamas.

Prerequisites

This section provides the minimum requirements that you should meet before using the Spark History Server telemetry provider.

Supported versions

  • Apache Spark 2.3

Network requirements

  • Spark History Server API must be reachable at the provided address and port (the default port is 18080).

Supported component types

  • spark-application

You can check the Spark History Server provider metrics mapping page to see how component-types metrics are extracted by this provider.

Akamas supported version

  • Versions < 2.0.0 are compatible with Akamas until version 1.8.0

  • Versions >= 2.0.0 are compatible with Akamas from version 1.9.0

Workflow requirements

This section lists the workflow operators this provider depends on:

  • SparkSubmit operator

  • SparkSSHSubmit operator

  • SparkLivy operator

Components configuration

Akamas uses components to identify specific elements of the system to be monitored and optimized. Your system might contain multiple components to model, for example, a Spark application and each host of the cluster. To point Akamas to the right component when extracting metrics you need to add a property called sparkApplication to your Spark Application component. The provider will only extract metrics for components for which this property has been specified.

Create CSV telemetry instances

To create an instance of the CSV provider, build a YAML file (instance.yml in this example) with the definition of the instance:

Then you can create the instance for the system using the Akamas CLI:

timestampFormat format

Notice that the week-year format YYYY is compliant with the ISO-8601 specification, but you should replace it with the year-of-era format yyyy if you are specifying a timestampFormat different from the ISO one.

LoadRunner Enterprise provider

The LoadRunner Enterprise provider collects metrics from a LoadRunner Enterprise instance and makes them available to Akamas.

Prerequisites

This section provides the minimum requirements that you should meet before using the LoadRunnerEnterprise telemetry provider.

Install Spark History Server provider

To install the Spark History Server provider, create a YAML file (called provider.yml in this example) with the definition of the provider:

# Spark History Server Telemetry Provider
name: SparkHistoryServer
description: Telemetry Provider that enables the import of metrics from Spark History Server instances
dockerImage: 485790562880.dkr.ecr.us-east-2.amazonaws.com/akamas/telemetry-providers/spark-history-server-provider:3.1.4

Then you can install the provider using the Akamas CLI:

akamas install telemetry-provider provider.yml

The installed provider is shared with all users of your Akamas installation and can monitor many different systems, by configuring appropriate telemetry provider instances.
Supported versions
  • LoadRunner Enterprise 12.60, 12.63 and 2020 SP3, 2022 and 2022 R1

  • InfluxDB 1.7 and 1.8

Network requirements

  • Port 8086 between the Akamas VM and the InfluxDB host, opened in both directions. This port is used to gather metrics.

Permissions

  • The provider requires a user that can access InfluxDB.

  • The user must have read permission on the database containing the LoadRunner metrics.

Supported component types

  • Web Application

You can check LoadRunner provider metrics mapping to see how component-types metrics are extracted by this provider.

Workflow requirements

This section lists the workflow operators this provider depends on.

  • LoadRunnerEnterprise operator

Setup the data source

To set up the integration between LoadRunner Enterprise and InfluxDB please follow the official Micro Focus documentation. Akamas does not require any additional setup on the data source.

Components configuration

Akamas reasons in terms of a system to be optimized and in terms of parameters and metrics of components of that system. To understand the link between metrics collected from LoadRunnerEnterprise through InfluxDB and a specific component, the LoadRunnerEnterprise provider looks up some properties in the components of a system:

  • loadrunnerenterprise

You can use this example to start building your component specification:

# Specification for a Web Application component whose metrics are collected by the LoadRunnerEnterprise provider
name: "WebApp1" # name of the component
description: "WebApp for payment services" # description of the component
properties:
  loadrunnerenterprise: ""

# Example component for a Dynatrace key request
name: KeyRequestA
description: The key request A for my application
componentType: Web Application
properties:
 dynatrace:
  type: SERVICE_METHOD
  id: SERVICE_METHOD-D4BCC949D5DD656A

# Example Dynatrace telemetry instance importing key request metrics
provider: Dynatrace
config:
 url: https://<my-account>.dynatrace.com/
 token: <my-token>
metrics:
  - metric: requests_response_time
    datasourceMetric: builtin:service.keyRequest.response.time
    scale: 0.001
    defaultValue: 0.0
    staticLabels:
      provider: dynatrace
  - metric: requests_throughput
    datasourceMetric: builtin:service.keyRequest.errors.server.successCount
    scale: 0.0166666666666666666666666666666
    defaultValue: 0.0
    staticLabels:
      provider: dynatrace
  - metric: requests_error_rate
    datasourceMetric: builtin:service.keyRequest.errors.server.rate
    scale: 0.01
    defaultValue: 0.0
    staticLabels:
      provider: dynatrace

# Example component with the neoloadweb property
name: MyComponent
properties:
 neoloadweb: "true" # The presence of this property helps Akamas discriminate metrics imported using neoloadweb from the ones imported by other providers for the same component

# Example component with the sparkApplication property
name: My Application
properties:
 sparkApplication: "true"
The week-year format YYYY is compliant with the ISO-8601 specification, but you should replace it with the year-of-era format yyyy if you are specifying a timestampFormat different from the ISO one. For example:

  • Correct: yyyy-MM-dd HH:mm:ss

  • Wrong: YYYY-MM-dd HH:mm:ss

You can find detailed information on timestamp patterns in the Patterns for Formatting and Parsing section on the DateTimeFormatter (Java Platform SE 8) page.
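The week-year pitfall can be reproduced in Python, whose isocalendar() exposes the same ISO week-based year that Java's YYYY pattern prints. The date below is chosen because it falls in ISO week 1 of the following year:

```python
from datetime import date

# 2019-12-30 is a Monday that belongs to ISO week 1 of 2020
d = date(2019, 12, 30)

calendar_year = d.year          # year-of-era, what Java's yyyy prints
week_year = d.isocalendar()[0]  # ISO week-based year, what Java's YYYY prints

print(calendar_year, week_year)  # 2019 2020
```

A timestamp formatted with the week-year pattern would therefore carry the wrong year for dates near the year boundary.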

Configuration options

When you create an instance of the CSV provider, you should specify some configuration information to allow the provider to correctly extract and process metrics from your CSV files.

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

  • address - a URL or IP identifying the address of the host where CSV files reside

  • username - the username used when connecting to the host

  • authType - the type of authentication to use when connecting to the file host; either password or key

  • auth - the authentication credential; either a password or a key according to authType. When using keys, the value can either be the value of the key or the path of the file to import from

  • remoteFilePattern - a list of remote files to be imported

Optional properties

  • protocol - the protocol to use to retrieve files; either scp or sftp. Default is scp

  • fieldSeparator - the character used as a field separator in the CSV files. Default is ,

  • componentColumn - the header of the column containing the name of the component. Default is COMPONENT

  • timestampColumn - the header of the column containing the timestamp. Default is TS

  • timestampFormat - the format of the timestamp (e.g. yyyy-MM-dd HH:mm:ss zzz). Default is YYYY-MM-ddTHH:mm:ss

You should also specify the mapping between the metrics available in your CSV files and those provided by Akamas. This can be done in the metrics section of the telemetry instance configuration. To map a custom metric you should specify at least the following properties:

  • metric - the name of a metric in Akamas

  • datasourceMetric - the header of a column that contains the metric in the CSV file

The provider ignores any column not present as datasourceMetric in this section.
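The mapping performed by the provider can be illustrated with a minimal Python sketch. This is not the provider's actual implementation; the column names and the cpu_util mapping are taken from the examples on this page:

```python
import csv
import io

# Hypothetical instance settings mirroring the config and metrics sections above
COMPONENT_COLUMN = "COMPONENT"
TIMESTAMP_COLUMN = "TS"
METRICS = [
    {"metric": "cpu_util", "datasourceMetric": "user%", "scale": 1.0,
     "staticLabels": {"mode": "user"}},
]

def rows_to_samples(csv_text):
    """Turn CSV rows into (component, timestamp, metric, value, labels) samples.
    Columns not listed as a datasourceMetric are ignored."""
    samples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        for m in METRICS:
            if m["datasourceMetric"] in row:
                samples.append((
                    row[COMPONENT_COLUMN],
                    row[TIMESTAMP_COLUMN],
                    m["metric"],
                    float(row[m["datasourceMetric"]]) * m["scale"],
                    m["staticLabels"],
                ))
    return samples

data = "TS,COMPONENT,user%,ignored\n2020-04-17T09:46:30,host,20,99\n"
print(rows_to_samples(data))
# [('host', '2020-04-17T09:46:30', 'cpu_util', 20.0, {'mode': 'user'})]
```

Note how the column named ignored produces no sample, since it does not appear as a datasourceMetric.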

The sample configuration reported in this section would import the metric cpu_util from CSV files formatted as in the example below:

Telemetry instance reference

The following represents the complete configuration reference for the telemetry provider instance.

The following table reports the configuration reference for the config section:

| Field | Type | Description | Default Value | Restrictions | Required |
| --- | --- | --- | --- | --- | --- |
| address | String | The address of the machine where the CSV file resides | | A valid URL or IP | Yes |
| port | Number (integer) | The port to connect to, in order to retrieve the file | 22 | 1 ≤ port ≤ 65536 | No |

The following table reports the configuration reference for the metrics section:

| Field | Type | Description | Restrictions | Required |
| --- | --- | --- | --- | --- |
| metric | String | The name of the metric in Akamas | An existing Akamas metric | Yes |
| datasourceMetric | String | The name (header) of the column that contains the specific metric | An existing column in the CSV file | Yes |

Use cases

Here you can find common use cases addressed by this provider.

Linux SAR

In this use case, you are going to import some metrics coming from SAR, a popular UNIX tool to monitor system resources. SAR can export CSV files in the following format.

Note that the metrics are percentages (between 1 and 100), while Akamas accepts percentages as values between 0 and 1, therefore each metric in this configuration has a scale factor of 0.001.

You can import the two CPU metrics and the memory metric from a SAR log using the following telemetry instance configuration.

Using the configured instance, the CSV File provider will perform the following operations to import the metrics:

  1. Retrieve the file "/csv/sar.csv" from the server "127.0.0.1" using the SCP protocol, authenticating with the provided password.

  2. Use the column hostname to look up components by name.

  3. Use the column timestamp to find the timestamps of the samples (expected to be in the format specified by timestampFormat).

  4. Collect the metrics (two with the same name but different labels, and one with a different name):

    • cpu_util: read from the column %user, attaching to its samples the label "mode" with value "user".

    • cpu_util: read from the column %system, attaching to its samples the label "mode" with value "system".

Create NeoLoadWeb telemetry instances

When you create an instance of the NeoLoad Web provider, you should specify some configuration information to allow the provider to correctly extract and process metrics from NeoLoad Web.

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

OracleDB Exporter

This page describes how to set up an OracleDB exporter in order to gather metrics regarding an Oracle Database instance through the Prometheus provider.

Installation

The OracleDB exporter repository is available on the official project page. The suggested deploy mode is through a Docker image, since the Prometheus instance can easily access the running container through the Akamas network.

Use the following command line to run the container, where cust-metrics.toml is your configuration file defining the queries for additional custom metrics (see the paragraph below) and DATA_SOURCE_NAME is an environment variable containing the Oracle EasyConnect string:

LoadRunner Professional provider

The LoadRunner provider collects metrics generated by a LoadRunner instance (converted to JSON format and placed in a CIFS network share) and makes them available to Akamas.

Prerequisites

This section provides the minimum requirements that you should meet before using the LoadRunner telemetry provider.

Supported versions

Create LoadRunner Professional telemetry instances

To create an instance of the LoadRunner provider, build a YAML file (instance.yml in this example) with the definition of the instance:

Then you can create the instance for the system using the Akamas CLI:

Configuration options

When you create an instance of the LoadRunner provider, you should specify some configuration information to allow the provider to correctly extract and process metrics from LoadRunner.

Prometheus provider

The Prometheus provider collects metrics from a Prometheus instance and makes them available to Akamas.

This provider includes support for several technologies (Prometheus exporters). In any case, custom queries can be defined to gather the desired metrics.

Prerequisites

This section provides the minimum requirements that you should meet before using the Prometheus provider.

Create LoadRunner Enterprise telemetry instances

Create a telemetry instance

To create an instance of the LoadRunnerEnterprise provider, build a YAML file (instance.yml in this example) with the definition of the instance:

Then you can create the instance using the Akamas CLI:

# CSV Telemetry Provider Instance
provider: CSV File
config:
  address: host1.example.com
  authType: password
  username: akamas
  auth: akamas
  remoteFilePattern: /monitoring/result-*.csv
  componentColumn: COMPONENT
  timestampColumn: TS
  timestampFormat: YYYY-MM-dd'T'HH:mm:ss
metrics:
  - metric: cpu_util
    datasourceMetric: user%
akamas create telemetry-instance instance.yml system
TS,                   COMPONENT,  user%
2020-04-17T09:46:30,  host,       20
2020-04-17T09:46:35,  host,       23
2020-04-17T09:46:40,  host,       32
2020-04-17T09:46:45,  host,       21
provider: CSV File             # this is an instance of the CSV provider
config:
  address: host1.example.com   # the address of the host with the CSV files
  port: 22                     # the port used to connect
  authType: password           # the authentication method
  username: akamas             # the username used to connect
  auth: akamas                 # the authentication credential
  protocol: scp                # the protocol used to retrieve the file
  fieldSeparator: ","          # the character used as field separator in the CSV files
  remoteFilePattern: /monitoring/result-*.csv    # the path of the CSV files to import
  componentColumn: COMPONENT                     # the header of the column with component names
  timestampColumn: TS                            # the header of the column with the time stamp
  timestampFormat: YYYY-mm-ddTHH:MM:ss           # the format of the timestamp
metrics:
  - metric: cpu_util                             # the name of the Akamas metric
    datasourceMetric: user%                      # the header of the column with the original metric
    staticLabels:
      mode: user                                 # (optional) additional labels to add to the metric
hostname, interval, timestamp,               %user, %system, %memory
machine1, 600,      2018-08-07 06:45:01 UTC, 30.01, 20.77,   96.21
machine1, 600,      2018-08-07 06:55:01 UTC, 40.07, 13.00,   84.55
machine1, 600,      2018-08-07 07:05:01 UTC, 5.00,  90.55,   89.23
provider: CSV File
config:
  remoteFilePattern: /csv/sar.csv
  address: 127.0.0.1
  port: 22
  username: user123
  auth: password123
  authType: password
  protocol: scp
  componentColumn: hostname
  timestampColumn: timestamp
  timestampFormat: yyyy-MM-dd HH:mm:ss zzz
metrics:
  - metric: cpu_util
    datasourceMetric: "%user"    # quoted: a plain YAML scalar cannot start with %
    scale: 0.001
    staticLabels:
      mode: user
  - metric: cpu_util
    datasourceMetric: "%system"
    scale: 0.001
    staticLabels:
      mode: system
  - metric: mem_util
    scale: 0.001
    datasourceMetric: "%memory"
    • mem_util: read from the column %memory.

The remaining fields of the config section reference are the following:

| Field | Type | Description | Default Value | Restrictions | Required |
| --- | --- | --- | --- | --- | --- |
| username | String | The username to use in order to connect to the remote machine | | | Yes |
| protocol | String | The protocol used to connect to the remote machine: SCP or SFTP | scp | scp, sftp | No |
| authType | String | The method used to authenticate against the remote machine: password (use the value of the parameter auth as a password) or key (use the value of the parameter auth as a private key; supported formats are RSA and DSA) | | password, key | Yes |
| auth | String | A password or an RSA/DSA key (as a YAML multi-line string, keeping new lines) | | | Yes |
| remoteFilePattern | String | The path of the remote file(s) to be analyzed; the path can contain GLOB expressions | | A list of valid Linux paths | Yes |
| componentColumn | String | The CSV column containing the name of the component; the column's values must match (case sensitive) the name of a component specified in the System | COMPONENT | The column must exist in the CSV file | Yes |
| timestampColumn | String | The CSV column containing the timestamps of the samples | TS | The column must exist in the CSV file | No |
| timestampFormat | String | The format of the timestamps | YYYY-mm-ddTHH:MM:ss | Must be specified using Java syntax | No |
| fieldSeparator | String | The field separator of the CSV | , | , or ; | No |

The remaining fields of the metrics section reference are the following:

| Field | Type | Description | Required |
| --- | --- | --- | --- |
| scale | Decimal number | The scale factor to apply when importing the metric | |
| staticLabels | List of key-value pairs | A list of key-value pairs that will be attached to the specific metric sample | No |

DATA_SOURCE_NAME is an environment variable containing the Oracle EasyConnect string.

You can refer to the official guide for more details or alternative deployment modes.

Custom queries

It is possible to define additional queries to expose custom metrics using any data in the database instance that is readable by the monitoring user (see the guide for more details about the syntax).

Custom Configuration file

The following is an example of exporting system metrics from the Dynamic Performance (V$) Views used by the Prometheus provider default queries for the Oracle Database optimization pack:

docker run -d --name oracledb_exporter --restart always \
  --network akamas -p 9161:9161 \
  -v ~/oracledb_exporter/cust-metrics.toml:/cust-metrics.toml \
  -e CUSTOM_METRICS=/cust-metrics.toml \
  -e DATA_SOURCE_NAME="username/password@//oracledb.mycompany.com/service" \
  iamseth/oracledb_exporter
[[metric]]
context= "memory"
labels= [ "component" ]
metricsdesc= { size="Component memory extracted from v$memory_dynamic_components in Oracle." }
request = '''
SELECT component, current_size as "size"
FROM V$MEMORY_DYNAMIC_COMPONENTS
UNION
SELECT name, bytes as "size"
FROM V$SGAINFO
WHERE name in ('Free SGA Memory Available', 'Redo Buffers', 'Maximum SGA Size')
'''

[[metric]]
context = "activity"
metricsdesc = { value="Generic counter metric from v$sysstat view in Oracle." }
fieldtoappend = "name"
request = '''
SELECT name, value
FROM V$SYSSTAT WHERE name IN (
  'execute count',
  'user commits', 'user rollbacks',
  'db block gets from cache', 'consistent gets from cache', 'physical reads cache', /* CACHE */
  'redo log space requests'
 )
 '''

[[metric]]
context = "system_event"
labels = [ "event", "wait_class" ]
request = '''
SELECT
  event, wait_class,
  total_waits, time_waited
FROM V$SYSTEM_EVENT
'''
[metric.metricsdesc]
  total_waits= "Total number of waits for the event as per V$SYSTEM_EVENT in Oracle."
  time_waited= "Total time waited for the event (in hundredths of seconds) as per V$SYSTEM_EVENT in Oracle."
  • accountToken - NeoLoad Web API access token.

Telemetry instance reference

The following YAML file describes the definition of a telemetry instance.

The following table provides the reference for the config section within the definition of the NeoLoad Web provider instance:

| Field | Type | Description | Default value | Restrictions | Required |
| --- | --- | --- | --- | --- | --- |
| account | String | The NeoLoad Web API access token | | A valid access token | Yes |
| neoloadApi | URL | Hostname of the NeoLoad Web API | https://neoload-api.saas.neotys.com | | |

Notice: the NeoLoadWeb provider imports data points matching at least one of the configured values for both metrics and actions.

Use cases

This section reports common use cases addressed by this provider.

Collect Web Application metrics

Check the Web Application page for a list of all web application metrics available in Akamas.

This example shows how to configure the NeoLoad Web provider in order to collect performance metrics published on the SaaS web API.

You must create a YAML file with the definition of a telemetry instance (neoload_instance.yml) of the NeoLoad Web provider:

and then create the telemetry instance using the Akamas CLI:

You can then configure the workflow in order to trigger the execution of a NeoLoad test using the NeoLoadWeb provider, as in the following example:

Best practices

This section reports common best practices you can adopt to ease the use of this telemetry provider.

  • filter the imported metrics: import only the required metrics using the metrics and actions filters, in order to avoid throttling on the NeoLoad Web instance.

  • Micro Focus LoadRunner 12.60 or 2020

Network requirements

  • The network share is reachable at port 445/TCP

  • The network share is reachable at port 139/UDP

Permissions (optional)

  • Username, domain (if required), and password of the network share.

  • Read permission on the network share.

Supported component types

  • Web Application

You can check LoadRunner provider metrics mapping to see how component-types metrics are extracted by this provider.

Setup the data source

The provider expects its required data to be on a CIFS share. If you are using the LoadRunner operator, please follow the instructions below on how to export LoadRunner results to a network share by setting up a share on the LoadRunner Controller.

Share a Controller folder

To share a folder on Windows, please follow these steps:

  1. Right-click on the folder, then select Properties

  2. Go to Sharing tab, then select Advanced Sharing

  3. In the opened window, enable Share this folder

  4. In the "Share name" textbox type the name of the share. This is the name of the share over the network.

  5. Then click on Permissions, then Add

  6. In the textbox type the name of the user or the group (with the domain if required) that you want to grant access to the share, then click "OK"\

  7. Select the added user (or group) and grant the required permissions

  8. Click OK, OK, and then Close

Mount network share on the controller

  1. Open "This PC" from the Start menu, then click on "Map network drive"

  2. In the "Map Network Drive" window, select a suitable drive letter and enter the remote folder path of the network share provided by your storage admin (typically in the format \\mycompanyshareserver.mycompany\foldername)

  3. Make sure to check "Reconnect at sign-in" and "Connect using different credentials".

  4. Click Finish

  5. In the "Windows Security" window enter the username (with the domain, if required) and the password for the network share and check "Remember my credentials".

  6. Click "OK"

Components configuration

Akamas reasons in terms of a system to be optimized and in terms of parameters and metrics of components of that system. To understand the link between metrics collected from LoadRunner and a specific component, the LoadRunner provider looks up the property loadrunner in the components of a system:
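For example, the following component definition (names are illustrative) carries the loadrunner property:

```yaml
name: "WebApplication1"              # name of the component
description: "LoadRunner controller"
properties:
  loadrunner: ""   # marks this component's metrics as imported via LoadRunner
```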

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

  • hostname: The hostname or IP of the server that hosts the CIFS share where the LoadRunner results have been exported. See LoadRunner operator.

  • username: The username and the domain required to access the network share. Supported formats are:

    • username

    • domain\username

    • username@domain

  • password: The password required to access the network share

  • shareName: The name of the network share as it is exposed by the server
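Putting these properties together, a minimal instance definition might look like the following (hostname and credentials are placeholders):

```yaml
provider: LoadRunner
config:
  hostname: cifsserver.mycompany.com   # server exposing the CIFS share
  username: user@mycompany             # also accepts domain\username or plain username
  password: mypassword
  shareName: akamas_lr_results
```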

Telemetry instance reference

The following YAML file provides the reference for the definition of a telemetry instance.

The following table describes the reference for the config section within the definition of the LoadRunner provider instance:

| Field | Type | Description | Default Value | Restrictions | Required |
|---|---|---|---|---|---|
| hostname | String | The hostname or IP of the server that hosts the CIFS share where the LoadRunner results have been exported (see the LoadRunner operator) | - | IP address or FQDN | Yes |
| username | String | The username and the domain required to access the share | - | username, domain\username, username@domain | Yes |

Supported Prometheus versions:

Akamas supports Prometheus starting from version 2.26.

Using the prometheus-operator additionally requires prometheus-operator version 0.47 or greater. This version is bundled with the kube-prometheus-stack since version 15.

Connectivity between the Akamas server and the Prometheus server is also required. By default, Prometheus is run on port 9090.

Supported Prometheus exporters

  • Node exporter (Linux system metrics)

  • JMX exporter (Java metrics)

  • cAdvisor (Docker container metrics)

  • CloudWatch exporter (AWS resources metrics)

  • (Web application metrics)

The Prometheus provider includes queries for most of the monitoring use cases these exporters cover. If you need to specify custom queries or make use of exporters not currently supported you can specify them as described in creating Prometheus telemetry instances.

Supported Akamas component types

  • Kubernetes (Pod, Container, Workload, Namespace)

  • Web Application

  • Java (java-ibm-j9vm-6, java-ibm-j9vm-8, java-eclipse-openj9-11, java-openjdk-8, java-openjdk-11, java-openjdk-17)

  • Linux (Ubuntu-16.04, Rhel-7.6)

Refer to Prometheus provider metrics mapping to see how component-type metrics are extracted by this provider.

Component configuration

Akamas reasons in terms of a system to be optimized and in terms of parameters and metrics of components of that system. To understand which metrics collected from Prometheus should be mapped to a component, the Prometheus provider looks up some properties in the components of a system grouped under prometheus property. These properties depend on the exporter and the component type.

Nested under this property you can also include any additional field your use case may require to filter the imported metrics further. These fields will be appended in queries to the list of label matches in the form field_name=~'field_value', and can specify either exact values or patterns.

Notice: you should configure your Prometheus instances so that the Prometheus provider can leverage the instance property of components, as described in the Setup the data source section above.

It is important that you add instance and, optionally, the job properties to the components of a system so that the Prometheus provider can gather metrics from them:
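For example (component name, instance, and job values are illustrative):

```yaml
name: jvm1                               # name of the component
description: jvm1 for payment services
properties:
  prometheus:
    instance: service0001  # must match the instance label in Prometheus
    job: jmx               # the exporter scraping this component
```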

Prometheus configuration

The Prometheus provider does not usually require a specific configuration of the Prometheus instance it uses.

When gathering metrics for hosts it's usually convenient to set the value of the instance label so that it matches the value of the instance property in a component; in this way, the Prometheus provider knows which system component each data point refers to.

Here’s an example configuration for Prometheus that sets the instance label:
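A minimal sketch, assuming a Node exporter target; the replacement value is a placeholder that must match the instance property set on the component:

```yaml
scrape_configs:
- job_name: 'node'
  static_configs:
  - targets: ["localhost:9100"]
  relabel_configs:
  - source_labels: ["__address__"]   # rewrite the scraped address...
    regex: "(.*):.*"
    target_label: instance           # ...into the instance label
    replacement: service0001         # value of the component's instance property
```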

Configuration options

When you create an instance of the LoadRunnerEnterprise provider, you should specify some configuration information to allow the provider to correctly extract and process metrics from LoadRunner Enterprise.

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

  • address: The address of the InfluxDB instance, in the form schema://address (e.g. https://influxdb.mycompany.com)

  • port: The InfluxDB port

  • username: The username required to connect to InfluxDB.

  • password: The password for the username

  • database: The database name where LoadRunner metrics are stored
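A minimal instance definition using these properties might look like this (host, credentials, and database name are placeholders):

```yaml
provider: LoadRunnerEnterprise
config:
  address: https://influxdb.mycompany.com  # InfluxDB endpoint
  port: 8086                               # InfluxDB port
  username: akamas
  password: influxdbpassword
  database: pc_akamas_project              # database holding LoadRunner metrics
```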

Telemetry instance reference

This table reports the reference for the config section within the definition of the LoadRunnerEnterprise provider instance:

| Field | Type | Description | Default value | Restrictions | Required |
|---|---|---|---|---|---|
| address | String | The address of the InfluxDB instance | - | A valid URL | Yes |
| port | Integer | The port of the InfluxDB instance | - | A valid port | Yes |

Exported metrics

Create AWS telemetry instances

To create an instance of the AWS provider, build a YAML file (instance.yml in this example) with the definition of the instance:

Then you can create the instance for the aws-system using the Akamas CLI:

Configuration options

When you create an instance of the AWS provider, you should specify some configuration information to allow the provider to correctly extract and process metrics from AWS.

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

  • accessKeyId - the access key id of your chosen IAM user

  • secretAccessKey - the secret access key of your chosen IAM user
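A minimal instance definition using these properties (the key values are placeholders):

```yaml
provider: AWS
config:
  accessKeyId: access_key_id           # IAM access key id
  secretAccessKey: secret_access_key   # IAM secret access key
```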

Telemetry instance reference

The following represents the complete configuration reference for the telemetry provider instance.


The following YAML file represents a template to define the telemetry provider instance.

The following table describes the configuration reference for the config section

Field
Type
Description
Default Value
Restrictions
Required
provider: NeoLoadWeb  # this is an instance of the NeoLoad Web provider
config:
  neoloadApi: http://neoload-api.mydomain.com     # API host
  accountToken: d2d9d8c6064b35209d7f6912db986a6e  # access token
  actions:    # the list of User Paths to import
    - 01 - Home page
  metrics:    # the list of metrics to import
    - transactions_response_time
    - transactions_throughput
    - pages_response_time
    - users
provider: NeoLoadWeb
config:
  accountToken: d2d9d8c6064b35209d7f6912db986a6e
akamas create telemetry-instance neoload_instance.yml
name: neoloadweb_wf
tasks:
- name: Run NeoLoadWeb
  operator: NeoLoadWeb

  arguments:
    accountToken: d2d9d8c6064b35209d7f6912db986a6e
    controllerZoneId: mlmEt
    lgZones: mlmEt:1

    scenarioName: test
    projectFile:
      http:
        url: https://files.akamas.io/neoload/project00.zip
        verifySSL: false
# Specification for a component, whose metrics should be collected by the Prometheus Provider
name: "WebApplication1" # name of the component
description: "LoadRunner controller" # description of the component
properties:
  loadrunner: ""   # the presence of this property helps akamas discriminate metrics imported using loadrunner from the ones provided by other providers
# LoadRunner Telemetry Provider Instance
provider: LoadRunner
config:
 hostname: cifsserver.mycompany.com
 username: akamas
 password: mypassword
 shareName: akamas_lr_results
akamas create telemetry-instance instance.yml system
provider: LoadRunner # this is an instance of the LoadRunner provider
config:
 hostname: cifsserver.mycompany.com # Share server
 username: user@mycompany
 password: mypassword
 shareName: akamas_lr_results
 pathPrefix: "" # optional prefix for the path where the provider looks for results
# Specification for a component, whose metrics should be collected by the Prometheus Provider
name: jvm1  # name of the component
description: jvm1 for payment services  # description of the component
properties:
  prometheus:
    instance: service0001  # instance of the component: where the component is located relative to Prometheus
    job: jmx               # job of the component: which prom exporter is gathering metrics from the component
# Custom global config
global:
  scrape_interval:     5s   # Set the scrape interval to every 5 seconds. The default is every 1 minute.
  evaluation_interval: 5s   # Evaluate rules every 5 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
# Node Exporter
- job_name: 'node'
  static_configs:
  - targets: ["localhost:9100"]
  relabel_configs:
  - source_labels: ["__address__"]
    regex: "(.*):.*"
    target_label: instance
    replacement: value_of_instance_property_in_the_component_the_data_points_should_refer_to
#LoadRunnerEnterprise Telemetry Provider Instance
provider: LoadRunnerEnterprise
config:
 address: http://influxdb.mycompany.com
 port: 8086
 username: akamas
 password: influxdbpassword
 database: pc_akamas_project
akamas create telemetry-instance instance.yml
provider: LoadRunnerEnterprise # the instance of the LoadRunnerEnterprise provider
config:
 address: https://influxdb.mycompany.com # the address of the InfluxDB instance
 port: 8086 # the port of InfluxDB
 username: akamasforinflux
 password: influxpassword
 database: pc_akamas_project
 verifySSL: false
 transactions:
   - mytransaction1
   - mytransaction2
users
transactions_throughput
transactions_response_time
transactions_response_time_min
transactions_response_time_max
requests_throughput
requests_error_throughput
# AWS Telemetry Instance
provider: AWS
config:
  accessKeyId: access_key_id
  secretAccessKey: secret_access_key
akamas create telemetry-instance instance.yml aws-system

| Field | Type | Description | Default value | Required |
|---|---|---|---|---|
| neoloadApi | String | The address of the NeoLoad Web API | https://neoload-api.saas.neotys.com | No |
| metrics | List of strings | List of component metrics to import | ['<all transactions>'] | No |
| actions | List of strings | List of "User Paths" to import | - | No |

| Field | Type | Description | Default value | Restrictions | Required |
|---|---|---|---|---|---|
| port | Integer | The port of the InfluxDB instance | - | A valid port | Yes |
| username | String | The username to connect to InfluxDB | - | - | Yes |
| password | String | The password of the specified user | - | - | Yes |
| database | String | The database with the metrics | - | - | Yes |
| verifySSL | Boolean | Whether to verify the SSL certificate, if the InfluxDB API is exposed over HTTPS | false | true/false | No |
| transactions | List of strings | Restrict metrics collection to the listed transaction names. If not specified, the provider collects the metrics of all transactions | Empty | - | No |

| Field | Type | Description | Default Value | Restrictions | Required |
|---|---|---|---|---|---|
| password | String | The password required to access the share | - | - | Yes |
| shareName | String | The name of the share as it is exposed by the server | - | - | Yes |
| pathPrefix | String | A prefix for the default path where the provider looks for the data. The default path is {studyName}{experimentId}{trialId} (see the LoadRunner operator) | - | A valid Windows path | No |

| Field | Type | Description | Restrictions | Required |
|---|---|---|---|---|
| accessKeyId | String | The access key id of your chosen IAM user | Valid IAM credentials | Yes |
| secretAccessKey | String | The secret access key of your chosen IAM user | Valid IAM credentials | Yes |

provider: AWS                                # This is an instance of the AWS provider
config:
  accessKeyId: access_key_id                 # The access key id of your IAM user
  secretAccessKey: secret_access_key         # The secret access key of your IAM user
akamas create telemetry-instance instance.yml aws-system
provider: AWS                                # This is an instance of the AWS provider
config:
  accessKeyId: access_key_id                 # The access key id of your IAM user
  secretAccessKey: secret_access_key         # The secret access key of your IAM user

AWS provider

The AWS provider collects price metrics for Amazon Elastic Compute Cloud (EC2) from Amazon’s own APIs.

This provider imports just one metric, aws_ec2_price, which is available in the EC2 component type of the AWS Optimization pack.

Prerequisites

This section provides the minimum requirements that you should match before using the AWS telemetry provider.

This telemetry provider makes use of the parameters aws_ec2_instance_type and aws_ec2_instance_size to identify the price. When using this provider make sure that your system has a component of type EC2 and that those parameters are defined in the baseline and, if optimized, in the parameters selection.

AWS users and policy requirements

  • An IAM user who has been granted the AWSPriceListServiceFullAccess policy, that is, the following permissions:

    • DescribeServices

    • GetAttributeValues

    • GetProducts

You may find more information on AWS cost permissions in the AWS documentation.

Akamas supported version

  • Versions >= 2.0.0 are compatible with Akamas from version 1.9.0

Supported component types

  • EC2

Components configuration

In order to gather price information about a component you’re required to input an extra field in its definition:

  • region, which tells the provider the AWS region of the modeled instance

Please note this field is mandatory and must be specified as follows:
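For example, a component of type EC2 with the mandatory region field (names are illustrative):

```yaml
name: aws_1                        # name of the component
description: EC2 instance to tune
properties:
  ec2:
    region: us-east-1   # AWS region of the modeled instance
```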

Here is a complete list of AWS region names, together with their Akamas-compatible codes:

Region Name
Region

Create Spark History Server telemetry instances

Create a telemetry instance

To create an instance of the Spark History Server provider, build a YAML file (instance.yml in this example) with the definition of the instance:

Then you can create the instance for the system spark-system using the Akamas CLI:


| Region Name | Region |
|---|---|
| Asia Pacific (Hong Kong) | ap-east-1 |
| Asia Pacific (Mumbai) | ap-south-1 |
| Asia Pacific (Osaka-Local) | ap-northeast-3 |
| Asia Pacific (Seoul) | ap-northeast-2 |
| Asia Pacific (Singapore) | ap-southeast-1 |
| Asia Pacific (Sydney) | ap-southeast-2 |
| Asia Pacific (Tokyo) | ap-northeast-1 |
| Canada (Central) | ca-central-1 |
| China (Beijing) | cn-north-1 |
| China (Ningxia) | cn-northwest-1 |
| Europe (Frankfurt) | eu-central-1 |
| Europe (Ireland) | eu-west-1 |
| Europe (London) | eu-west-2 |
| Europe (Milan) | eu-south-1 |
| Europe (Paris) | eu-west-3 |
| Europe (Stockholm) | eu-north-1 |
| Middle East (Bahrain) | me-south-1 |
| South America (São Paulo) | sa-east-1 |
| AWS GovCloud (US-East) | us-gov-east-1 |
| AWS GovCloud (US) | us-gov-west-1 |
| US East (Ohio) | us-east-2 |
| US East (N. Virginia) | us-east-1 |
| US West (N. California) | us-west-1 |
| US West (Oregon) | us-west-2 |
| Africa (Cape Town) | af-south-1 |

# Specification for a component, whose metrics should be collected by the AWS Provider
name: aws_1 # name of the component
description: aws_1 instance to tune # description of the component
properties:
    ec2:
        region: us-east-1 # AWS region of the component
Configuration options

When you create an instance of the Spark History Server provider, you should specify some configuration information to allow the provider to correctly extract and process metrics from the Spark History server.

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

  • address - hostname of the Spark History Server instance
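A minimal instance definition using this property (the host name is a placeholder; port and importLevel are optional, see the reference below):

```yaml
provider: SparkHistoryServer
config:
  address: spark_master_node  # Spark History Server host
  port: 18080                 # default listening port
  importLevel: stage          # granularity of the imported metrics
```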

Telemetry instance reference

The following YAML file describes the definition of a telemetry instance.

The following table reports the reference for the config section within the definition of the Spark History Server provider instance:

| Field | Type | Description | Default value | Restriction | Required |
|---|---|---|---|---|---|
| address | URL | Spark History Server address | - | - | Yes |
| importLevel | String | Granularity of the imported metrics | job | Allowed values: job, stage, task | No |

Use cases

This section reports common use cases addressed by this provider.

Collect stage metrics of a Spark Application

Check the Spark Application page for a list of all the Spark application metrics available in Akamas.

This example shows how to configure a Spark History Server provider in order to collect performance metrics about a Spark application submitted to the cluster using the Spark SSH Submit operator.

As a first step, you need to create a YAML file (spark_instance.yml) containing the configuration the provider needs to connect to the Spark History Server, plus the filter on the desired level of granularity for the imported metrics:

and then create the telemetry instance using the Akamas CLI:

Finally, you will need to define for your study a workflow that includes the submission of the Spark application to the cluster, in this case using the Spark SSH Submit operator:

Best practices

This section reports common best practices you can adopt to ease the use of this telemetry provider.

  • configure metrics granularity: in order to reduce the collection time, configure the importLevel to import metrics with a granularity no finer than the study requires.

  • wait for metrics publication: make sure in the workflow there is a few-minute interval between the end of the Spark application and the execution of the Spark telemetry instance, since the Spark History Server may take some time to complete the publication of the metrics.

provider: SparkHistoryServer
config:
  address: spark_master_node
  port: 18080
  importLevel: stage
akamas create telemetry-instance instance.yml spark-system
provider: SparkHistoryServer  # This is an instance of the Spark History Server provider
config:
  address: spark_master_node # The address of the Spark History Server
  port: 18080   # The port of Spark History Server
  importLevel: job  # The granularity of the imported metrics
provider: SparkHistoryServer
config:
  address: spark_master_node
  port: 18080
  importLevel: stage
akamas create telemetry-instance spark_instance.yml
name: spark_workflow
tasks:
  - name: Run Spark application
    operator: SSHSparkSubmit
    arguments:
      component: spark

| Field | Type | Description | Default value | Restriction | Required |
|---|---|---|---|---|---|
| importLevel | String | Granularity of the imported metrics | job | Allowed values: job, stage, task | No |
| port | Integer | Spark History Server listening port | 18080 | - | No |

Dynatrace provider

The Dynatrace provider collects metrics from Dynatrace and makes them available to Akamas.

This provider includes support for several technologies. In any case, custom queries can be defined to gather the desired metrics.

Supported versions

Dynatrace SaaS/Managed version 1.187 or later

Supported component types:

  • Kubernetes and Docker

  • Web Application

  • Ubuntu-16.04, Rhel-7.6

  • java-openjdk-8, java-openjdk-11, java-openjdk-17, java-ibm-j9vm-6, java-ibm-j9vm-8, java-eclipse-openj9-11

Refer to the Dynatrace provider metrics mapping to see how component-type metrics are extracted by this provider.

Prerequisites

This section provides the minimum requirements that you should match before using the Dynatrace provider.

  • Dynatrace SaaS/Managed version 1.187 or later

  • A valid Dynatrace license

  • Dynatrace OneAgent installed on the servers where the Dynatrace entities to be monitored are running

  • Connectivity between Akamas and the Dynatrace server on port 443

  • A Dynatrace API token with the privileges described in the following section

Dynatrace Token

The Dynatrace provider needs a Dynatrace API token with the following privileges:

  • metrics.read (Read metrics)

  • entities.read (Read entities and tags)

  • DataExport (Access problem and event feed, metrics, and topology)

  • ReadSyntheticData (Read synthetic monitors, locations, and nodes)

  • DataImport (Data ingest, e.g. metrics and events), used to inform Dynatrace about configuration changes

To generate an API token for your Dynatrace installation, follow the steps described in the Dynatrace documentation.

Component configuration

To instruct Akamas on which Dynatrace entities (e.g. workloads, services, process groups) metrics should be collected from, you can set some specific properties on components.

Different strategies can be used to map Dynatrace entities to Akamas components:

  • By id

  • By name

  • By tags

  • By Kubernetes properties

By id

You can map a component to a Dynatrace entity by leveraging the unique id of the entity, which you should put under the id property in the component. This strategy is best used for long-lived instances whose ID does not change during the optimization such as Hosts, Process Groups, or Services.

Here is an example of how to setup host monitoring via id:
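A minimal component definition mapping by id (the entity id shown is a placeholder):

```yaml
name: My Host
properties:
  dynatrace:
    id: HOST-12345YUAB1   # the unique id of the Dynatrace entity
```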

You can find the id of a Dynatrace entity by looking at the URL of a Dynatrace dashboard relative to the entity. Note that the "host" key is valid only for Linux components; for other components (e.g. the JVM) you must drill down into the host entities to get the PROCESS_GROUP_INSTANCE or PROCESS_GROUP id.

By name

You can map a component to a Dynatrace entity by leveraging the entity’s display name. This strategy is similar to the map by id but provides a friendlier way to identify the mapped entity. Beware that if multiple entities in your Dynatrace installation share the same name, they will all be mapped to the same component. The Dynatrace display name should be put under the name property in the component definition:
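For example (the display name is a placeholder):

```yaml
name: MyComponent
properties:
  dynatrace:
    name: host-1   # the Dynatrace display name of the entity
```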

By tags

You can map a component to a Dynatrace entity by leveraging the Dynatrace tags that match the entity, which you should put under the tags property in the component definition.

If multiple tags are specified, instances matching any of the specified tags will be selected.

This sample configuration maps to the component all Dynatrace entities with tag environment: test or [AWS]dynatrace-monitored: true
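```yaml
name: MyComponent
properties:
  dynatrace:
    tags:
      environment: test
      "[AWS]dynatrace-monitored": true   # quoted because the key starts with a bracket
```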

Dynatrace supports both key-value and key-only tags. Key-only tags can be specified as Key-value tags with an empty value as in the following example
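```yaml
name: MyComponent
properties:
  dynatrace:
    tags:
      myKeyOnlyTag: ""   # key-only tag: the value is left empty
```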

By Kubernetes properties

By leveraging dedicated properties, you can map a component to a Dynatrace entity referring to a Kubernetes cluster (e.g., a Pod or a Container).

Container

To properly identify the set of containers to be mapped, you can specify the following properties. Any container matching all the properties will be mapped to the component.

Akamas property
Dynatrace property
Location

You can retrieve all the information to setup the properties on the top of the Dynatrace container dashboard.

The following example shows how to map a component to a container running in Kubernetes:
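The namespace, container, and pod names below are illustrative:

```yaml
properties:
  dynatrace:
    type: CONTAINER_GROUP_INSTANCE
    kubernetes:
      namespace: boutique        # Kubernetes namespace
      containerName: server      # Kubernetes container name
      basePodName: ak-frontend-* # Kubernetes base pod name
```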

Pod

To properly identify the set of pods to be mapped, you can specify the following properties. Any pod matching all the properties will be mapped to the component.

Akamas property
Dynatrace property
Location

If you need to narrow your pod selection further, you can also specify a set of tags as described in the by tags section above. Note that tags for Kubernetes resources are called Labels in the Dynatrace dashboard.

Labels are specified as key-value pairs in the Akamas configuration. In Dynatrace’s dashboard, key and value are separated by a colon (:).

Example

The following example shows how to map a component to a pod running in Kubernetes:
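The prefix and label values below are illustrative:

```yaml
properties:
  dynatrace:
    type: CLOUD_APPLICATION_INSTANCE
    namePrefix: ak-frontend-       # prefix of the pod names to match
    kubernetes:
      labels:
        workload: ak-frontend
        product: hipstershop
```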

Container, Pod, or Workload?

Please note that when you are mapping components to Kubernetes entities, the property type is required to instruct Akamas on which type of entity you want to map. Dynatrace maps Kubernetes entities to the following types:

Kubernetes type
Dynatrace type

Improve component mapping with type

You can improve the matching of components with Dynatrace by adding a type property in the component definition; this property helps the provider match only those Dynatrace entities of the given type.
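For example, restricting a tag-based mapping to service entities only:

```yaml
name: MyComponent
properties:
  dynatrace:
    type: SERVICE   # filter matched entities down to services only
    tags:
      environment: test
```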

The type of an entity can be retrieved from the URL of the entity’s dashboard

Available entity types can be retrieved, from your Dynatrace instance, with the following command:

Mapping multiple entities in one component

In some circumstances, you might want to map multiple Dynatrace entities (e.g. a set of hosts) to the same Akamas component and import aggregated metrics.

This can be easily done by using tags. If Akamas detects that multiple entities have been mapped to the same component it will try to aggregate metrics; some metrics, however, can not be automatically aggregated.

To force aggregation on all available metrics you can add the mergeable: true property to the component under the dynatrace element.
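For example (the tags are illustrative):

```yaml
name: MyComponent
properties:
  dynatrace:
    mergeable: true   # force aggregation of all metrics across matched entities
    tags:
      environment: test
```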

Create Dynatrace telemetry instances

The installed provider is shared with all users of your Akamas installation and can monitor many different systems, by configuring appropriate telemetry provider instances.

To create an instance of the Dynatrace provider, build a YAML file (instance.yml in this example) with the definition of the instance:
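A minimal instance definition (the environment URL and token are placeholders):

```yaml
provider: Dynatrace
config:
  url: https://wuy711522.live.dynatrace.com  # your Dynatrace environment URL
  token: XbERgThisIsAnExampleToken           # API token with the required privileges
```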

Then you can create the instance for the system using the Akamas CLI:

Configuration options

When you create an instance of the Dynatrace provider, you should specify some configuration information to allow the provider to correctly extract and process metrics from Dynatrace.

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

  • url - The URL of the Dynatrace installation API (see the official Dynatrace API reference to retrieve the URL of your installation)

  • token - A Dynatrace API token with the required privileges

Collect additional metrics

You can collect additional metrics with the Dynatrace provider by using the metrics field:
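A sketch of such a definition, with field names taken from the metrics options reference below; the metric names and the Dynatrace query shown are hypothetical and only illustrate the structure:

```yaml
provider: Dynatrace
config:
  url: https://wuy711522.live.dynatrace.com
  token: XbERgThisIsAnExampleToken
metrics:
  - metric: mem_used                        # the Akamas metric to populate (hypothetical)
    datasourceMetric: builtin:host.mem.used # the Dynatrace query to extract it (hypothetical)
    labels:
      - dt.entity.host                      # Dynatrace labels to retain
```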

Configure a proxy for Dynatrace

In the case in which Akamas cannot reach directly your Dynatrace installation, you can configure an HTTP proxy by using the proxy field:
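A sketch assuming an internal proxy reachable at proxy.mycompany.com; the proxy sub-fields follow the proxy options reference below:

```yaml
provider: Dynatrace
config:
  url: https://wuy711522.live.dynatrace.com
  token: XbERgThisIsAnExampleToken
  proxy:
    address: http://proxy.mycompany.com  # URL of the HTTP proxy (assumption)
    port: 8080                           # port the proxy listens on
```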

Telemetry instance reference

This section reports the complete reference for the definition of a telemetry instance.

This table shows the reference for the config section within the definition of the Dynatrace provider instance:

Field
Type
Value restrictions
Required
Default Value
Description

Proxy options reference

This table reports the reference for the config → proxy section within the definition of the Dynatrace provider instance:

Field
Type
Value restrictions
Required
Default value
Description

Metrics options reference

This table reports the reference for the metrics section within the definition of the Dynatrace provider instance. The section contains a collection of objects with the following properties:

Field
Type
Value Restrictions
Required
Default value
Description

Use cases

This section reports common use cases addressed by this provider.

Collect system metrics

Check the Linux optimization pack for a list of all the system metrics available in Akamas.

As a first step to start extracting metrics from Dynatrace, generate your API token and make sure it has the right permissions.

As a second step, choose a strategy to map your Linux component (MyLinuxComponent) to the corresponding Dynatrace entity.

Let’s assume you want to map your Dynatrace entity by id: you can find the id in the URL bar of the Dynatrace dashboard of the entity:

Grab the id and add it to the Linux component definition:

You can leverage the name of the entity as well:

As a third and final step, once the component is all set, you can create an instance of the Dynatrace provider and then build your first studies:

Create Prometheus telemetry instances

To create an instance of the Prometheus provider, edit a YAML file (instance.yml in this example) with the definition of the instance:
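A minimal instance definition using the required properties (the host name is a placeholder; 9090 is the default Prometheus port):

```yaml
provider: Prometheus
config:
  address: prometheus.mycompany.com  # host where Prometheus is installed
  port: 9090
```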

Then you can create the instance for the system using the Akamas CLI:

Configuration options

When you create an instance of the Prometheus provider, you should specify some configuration information to allow the provider to extract and process metrics from Prometheus correctly.

# Dynatrace Telemetry Provider Instance
provider: Dynatrace
config:
  url: https://wuy711522.live.dynatrace.com
  token: XbERgThisIsAnExampleToken
akamas create telemetry-instance instance.yml system


Container mapping properties:

| Akamas property | Dynatrace property | Location |
|---|---|---|
| namespace | Kubernetes namespace | Container dashboard |
| containerName | Kubernetes container name | Container dashboard |
| basePodName | Kubernetes base pod name | Container dashboard |

Pod mapping properties:

| Akamas property | Dynatrace property | Location |
|---|---|---|
| state | State | Pod dashboard |
| namespace | Namespace | Pod dashboard |
| workload | Workload | Pod dashboard |

Kubernetes entity types:

| Kubernetes type | Dynatrace type |
|---|---|
| Docker container | CONTAINER_GROUP_INSTANCE |
| Pod | CLOUD_APPLICATION_INSTANCE |
| Workload | CLOUD_APPLICATION |
| Namespace | CLOUD_APPLICATION_NAMESPACE |
| Cluster | KUBERNETES_CLUSTER |

name: My Host
properties:
 dynatrace:
  id: HOST-12345YUAB1
name: MyComponent
properties:
 dynatrace:
  name: host-1
name: MyComponent
properties:
 dynatrace:
  tags:
     environment: test
     "[AWS]dynatrace-monitored": true
name: MyComponent
properties:
 dynatrace:
  tags:
     myKeyOnlyTag: ""
dynatrace:
  type: CONTAINER_GROUP_INSTANCE
  kubernetes:
    namespace: boutique
    containerName: server
    basePodName: ak-frontend-*
dynatrace:
  type: CLOUD_APPLICATION_INSTANCE
  namePrefix: ak-frontend-
  kubernetes:
    labels:
      workload: ak-frontend
      product: hipstershop
name: MyComponent
properties:
 dynatrace:
  type: SERVICE     # the type helps the mapping by tags by filtering down entities that are only services
  tags:
     environment: test
     "[AWS]dynatrace-monitored": true
curl 'https://<Your Dynatrace host>/api/v2/entityTypes/?pageSize=500' \
  --header 'Authorization: Api-Token <API-TOKEN>'
name: MyComponent
properties:
 dynatrace:
  mergeable: true
  tags:
     environment: test
     "[AWS]dynatrace-monitored": true

Config section reference:

| Field | Type | Value restrictions | Required | Default value | Description |
|---|---|---|---|---|---|
| url | String | It should be a valid URL | Yes | - | The URL of the Dynatrace installation API (see the official reference: https://www.dynatrace.com/support/help/extend-dynatrace/dynatrace-api/) |
| token | String | - | Yes | - | The Dynatrace API token the provider should use to interact with Dynatrace. The token should have the proper permissions. |
| proxy | Object | See Proxy options reference | No | - | The specification of the HTTP proxy to use to communicate with Dynatrace. |
| pushEvents | String | true, false | No | true | If set to true the provider will inform Dynatrace of the configuration change event, which will be visible in the Dynatrace UI. |
| tags | Object | - | No | - | A set of global tags to match Dynatrace entities. The provider uses these tags to apply a default filtering of Dynatrace entities for every component. |

Proxy options reference:

| Field | Type | Value restrictions | Required | Default value | Description |
|---|---|---|---|---|---|
| address | String | It should be a valid URL | Yes | - | The URL of the HTTP proxy to use to communicate with the Dynatrace installation API |
| port | Number (integer) | 1 < port < 65535 | Yes | - | The port at which the HTTP proxy listens for connections |
| username | String | - | No | - | The username to use when authenticating against the HTTP proxy, if necessary |
| password | String | - | No | - | The password to use when authenticating against the HTTP proxy, if necessary |

Metrics options reference:

| Field | Type | Value restrictions | Required | Default value | Description |
|---|---|---|---|---|---|
| metric | String | It must be an Akamas metric | Yes | - | The name of an Akamas metric that should map to the new metric you want to gather |
| datasourceMetric | String | A valid Dynatrace metric | Yes | - | The Dynatrace query to use to extract the metric |
| labels | Array of strings | - | No | - | The list of Dynatrace labels that should be retained when gathering the metric |
| staticLabels | Key-Value | - | No | - | Static labels that will be attached to metric samples |
| aggregation | String | A valid Dynatrace aggregation | No | avg | The aggregation to perform if the mergeEntities property under the extras section is set to true |
| extras | Object | Only the parameter mergeEntities can be defined to either true or false | No | - | Section for additional properties |

You can specify configuration information within the config part of the YAML of the instance definition.

Required properties

  • address, a URL or IP identifying the address of the host where Prometheus is installed

  • port, the port exposed by Prometheus

Optional properties

  • user, the username for the Prometheus service

  • password, the user password for the Prometheus service

  • job, a string to specify the scraping job name. The default is ".*" for all scraping jobs

  • logLevel, set this to "DETAILED" for some extra logs when searching for metrics (default value is "INFO")

  • headers, to specify additional custom headers (e.g.: headers: {key: value})

  • namespace, a string to specify the namespace

  • duration, integer to determine the duration in seconds for data collection (use a number between 1 and 3600)

  • enableHttps, boolean to enable HTTPS in Prometheus (since 3.2.6)

  • ignoreCertificates, boolean to ignore SSL certificates

  • disableConnectionCheck, boolean to disable initial connection check to Prometheus
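Putting the properties above together, a telemetry instance definition for this provider might look like the following sketch (host name, credentials, and job name are illustrative values, not defaults):

```yaml
provider: Prometheus

config:
  address: prometheus_host   # URL or IP of the host where Prometheus is installed
  port: 9090                 # port exposed by Prometheus
  user: akamas               # illustrative credentials, only if the service requires them
  password: secret
  job: node                  # restrict to one scraping job; the default ".*" matches all jobs
  duration: 60               # seconds of data collection, between 1 and 3600
  enableHttps: true          # available since 3.2.6
  ignoreCertificates: true   # skip SSL certificate validation
```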

Custom queries

The Prometheus provider allows defining additional queries to populate custom metrics or redefine the default ones according to your use case. You can configure additional metrics using the metrics field as shown in the configuration below:

In this example, the telemetry instance will populate cust_metric with the results of the query specified in datasource, maintaining the value of the labels listed under labels.

Please refer to Querying basics | Prometheus for a complete reference of PromQL

Akamas placeholders

Akamas pre-processes the queries before running them, replacing special-purpose placeholders with the fields provided in the components. For example, given the following component definition:

the query sum(jvm_memory_used_bytes{instance=~"$INSTANCE$", job=~"$JOB$"}) will be expanded for this component into sum(jvm_memory_used_bytes{instance=~"service01", job=~"jmx"}). This provides greater flexibility through the templatization of the queries, allowing the same query to select the correct data sources for different components.

The following is the list of available placeholders:

$INSTANCE$, $JOB$

  • Usage example: node_load1{instance=~"$INSTANCE$", job=~"$JOB$"}

  • Expanded query: node_load1{instance=~"frontend", job=~"node"} (see the component definition example below)

  • Description: these placeholders are replaced respectively with the instance and job fields configured in the component’s prometheus configuration.

%FILTERS%

  • Usage example: container_memory_usage_bytes{job=~"$JOB$" %FILTERS%}

  • Expanded query: container_memory_usage_bytes{job=~"advisor", name=~"db-.*"}

  • Description: this placeholder is replaced with a list containing any additional filter in the component’s definition (other than instance and job), where each field is expanded as field_name=~"field_value". This is useful to define additional label matchers in the query without the need to hardcode them.

$DURATION$

  • Usage example: rate(http_client_requests_seconds_count[$DURATION$])

  • Expanded query: rate(http_client_requests_seconds_count[30s])

  • Description: if not set in the component properties, this placeholder is replaced with the duration field configured in the telemetry instance. You should use it with range vectors instead of hardcoding a fixed value.

$NAMESPACE$, $POD$, $CONTAINER$

  • Usage example: 1e3 * avg(kube_pod_container_resource_limits{resource="cpu", namespace=~"$NAMESPACE$", pod=~"$POD$", container=~"$CONTAINER$" %FILTERS%})

  • Expanded query: 1e3 * avg(kube_pod_container_resource_limits{resource="cpu", namespace=~"boutique", pod=~"adservice.*", container=~"server"})

  • Description: these placeholders are used within Kubernetes environments (see Collect Kubernetes metrics).
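The substitution behavior described above can be sketched as follows. This is a hypothetical helper for illustration only, not the provider’s actual implementation: it replaces each $NAME$ placeholder with the matching component property and expands %FILTERS% from any extra properties.

```python
# Hypothetical sketch of Akamas query templating; not the real implementation.

def expand_query(query: str, props: dict) -> str:
    """Replace $NAME$ placeholders and %FILTERS% using component properties."""
    known = {"instance", "job", "namespace", "pod", "container", "duration"}
    for key in known:
        # each known property maps to an uppercase $...$ placeholder
        query = query.replace(f"${key.upper()}$", str(props.get(key, "")))
    # %FILTERS% expands every additional property as a label matcher
    extras = {k: v for k, v in props.items() if k not in known}
    filters = "".join(f', {k}=~"{v}"' for k, v in sorted(extras.items()))
    return query.replace("%FILTERS%", filters)

component = {"instance": "service01", "job": "jmx"}
print(expand_query('sum(jvm_memory_used_bytes{instance=~"$INSTANCE$", job=~"$JOB$"})', component))
# sum(jvm_memory_used_bytes{instance=~"service01", job=~"jmx"})
```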

Use cases

This section reports common use cases addressed by this provider.

Collect Kubernetes metrics

To gather Kubernetes metrics, the following exporters are required:

  • kube-state-metrics

  • cadvisor

As an example, you can define a component with type Kubernetes Container in this way:

Collect Java metrics

Check Java OpenJDK page for a list of all the Java metrics available in Akamas

You can leverage the Prometheus provider to collect Java metrics by using the JMX Exporter. The JMX Exporter is a collector of Java metrics for Prometheus that can be run as an agent for any Java application. Once downloaded, you execute it alongside a Java application with this command:

The command will expose the Java metrics of yourJar.jar on localhost on port 9100, where they can be scraped by Prometheus.

config.yaml is the configuration file for this exporter. The following configuration is suggested for an optimal experience with the Prometheus provider:

As a next step, add a new scraping target in the configuration of the Prometheus used by the provider:

You can then create a YAML file with the definition of a telemetry instance (prom_instance.yml) of the Prometheus provider:

And you can create the telemetry instance using the Akamas CLI:

Finally, to bind the extracted metrics to the related component, you should add the following field to the properties of the component’s definition:

Collect system metrics

Check the Linux page for a list of all the system metrics available in Akamas

You can leverage the Prometheus provider to collect system metrics (Linux) by using the Node exporter. The Node exporter is a collector of system metrics for Prometheus that can be run as a standalone executable or a service within a Linux machine to be monitored. Once downloaded, schedule it as a service using, for example, systemd:

Here’s the manifest of the node_exporter service:

The service will expose system metrics on localhost on port 9100, where they can be scraped by Prometheus.

As a final step, add a new scraping target in the configuration of the Prometheus used by the provider:

You can then create a YAML file with the definition of a telemetry instance (prom_instance.yml) of the Prometheus provider:

And you can create the telemetry instance using the Akamas CLI:

Finally, to bind the extracted metrics to the related component, you should add the following field to the properties of the component’s definition:

config:
  url: https://wuy71982.live.dynatrace.com
  token: XbERgkKeLgVfDI2SDwI0h
metrics:
- metric: "akamas_metric"                     # extra akamas metrics to monitor
  datasourceMetric: builtin:host:new_metric   # query to execute to extract the metric
  labels:
  - "method"      # the "method" label will be retained within akamas
config:
  url: https://wuy71982.live.dynatrace.com
  token: XbERgkKeLgVfDI2SDwI0h
  proxy:
    address: https://dynaproxy  # the URL of the HTTP proxy
    port: 9999                  # the port the proxy listens to
provider: Dynatrace  # this is an instance of the Dynatrace provider
config:
  url: https://wuy71982.live.dynatrace.com
  token: XbERgkKeLgVfDI2SDwI0h
  proxy:
    address: https://dynaproxy # the URL of the HTTP proxy
    port: 9999            # the port the proxy listens to
    username: myusername  # http basic auth username if necessary
    password: mypassword  # http basic auth password if necessary
  tags:
    Environment: Test       # dynatrace tags to be matched for every component

metrics:
- metric: "cpu_usage"  # this is the name of the metric within Akamas
  # The Dynatrace metric name
  datasourceMetric: "builtin:host.cpu.usage"
  extras:
    mergeEntities: true  # instruct the telemetry to aggregate the metric over multiple entities
  aggregation: avg  # The aggregation to perform if the mergeEntities property is set to true
name: MyLinuxComponent
description: this is a Linux component
properties:
  dynatrace:
    id: HOST-A987D45512ABCEEE
name: MyLinuxComponent
description: this is a Linux component
properties:
  dynatrace:
    name: Host1
name: Dynatrace
config:
  url: https://my_dyna_installation_url
  token: MY_DYNA_TOKEN
# Prometheus Telemetry Provider Instance
provider: Prometheus

config:
  address: host1  # URL or IP of the Prometheus from which extract metrics
  port: 9090      # Port of the Prometheus from which extract metrics
akamas create telemetry-instance instance.yml system
config:
  address: host1
  port: 9090

metrics:
  - metric: cust_metric   # extra akamas metric to monitor
    datasourceMetric: 'http_requests_total{environment=~"staging|testing|development", method!="GET"}' # query to execute to extract the metric
    labels:
    - method   # The "method" label will be retained within akamas
name: jvm1
description: jvm1 for payment services
properties:
  prometheus:
    instance: service01
    job: jmx
prometheus:
  instance: frontend
  job: node
name: adservice
description: The adservice of the online boutique by Google
componentType: Kubernetes Container
properties:
  prometheus:
    namespace: boutique
    pod: adservice.*
    container: server
java -javaagent:the_downloaded_jmx_exporter_jar.jar=9100:config.yaml -jar yourJar.jar
startDelaySeconds: 0
username:
password:
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
# the property below tells the exporter to export only relevant Java metrics
whitelistObjectNames:
- "java.lang:*"
- "jvm:*"
...
scrape_configs:
# JMX Exporter
- job_name: "jmx"
  static_configs:
  - targets: ["jmx_exporter_host:9100"]
name: Prometheus
config:
  address: prometheus_host
  port: 9090
akamas create telemetry-instance prom_instance.yml
prometheus:
  job: jmx
systemctl start node_exporter
[Unit]
Description=Node Exporter

[Service]
ExecStart=/path/to/node_exporter/executable

[Install]
WantedBy=default.target
scrape_configs:
# Node Exporter
- job_name: "node"
  static_configs:
  - targets: ["node_exporter_host:9100"]
  relabel_configs:
  - source_labels: ["__address__"]
    regex: "(.*):.*"
    # here we put as "instance", the name of the component the metrics refer to
    target_label: "instance"
    replacement: "linux_component_name"
provider: Prometheus
config:
  address: prometheus_host
  port: 9090
akamas create telemetry-instance prom_instance.yml
prometheus:
  instance: linux_component_name
  job: node


CloudWatch Exporter

This page describes how to set up a CloudWatch exporter in order to gather AWS metrics through the Prometheus provider. This is especially useful to monitor system metrics when you don’t have direct SSH access to AWS resources like EC2 Instances or if you want to gather AWS-specific metrics not available in the guest OS.

AWS policies

In order to fetch metrics from CloudWatch, the exporter requires an IAM user or role with the following privileges:

  • cloudwatch:GetMetricData

  • cloudwatch:GetMetricStatistics

  • cloudwatch:ListMetrics

  • tag:GetResources

You can assign AWS-managed policies CloudWatchReadOnlyAccess and ResourceGroupsandTagEditorReadOnlyAccess to the desired user to enable these permissions.
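If you prefer a custom policy over the AWS-managed ones, the four privileges listed above can be granted with a minimal IAM policy document such as the following sketch (scope the Resource element according to your security requirements):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "tag:GetResources"
      ],
      "Resource": "*"
    }
  ]
}
```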

Exporter configuration

The CloudWatch exporter repository is available on the official project page. It requires a minimal configuration to fetch metrics from the desired AWS instances. Below is a short list of the parameters needed for a minimal configuration:

  • region: AWS region of the monitored resource

  • metrics: a list of objects containing filters for the exported metrics

    • aws_namespace: the namespace of the monitored resource

For a complete list of possible values for namespaces, metrics, and dimensions please refer to the official AWS CloudWatch User Guide.

Notice: AWS bills CloudWatch usage in batches of 1 million requests, where every metric counts as a single request. To avoid unnecessary expenses, configure only the metrics you need.

The suggested deployment mode for the exporter is through a Docker image. The following snippet provides a command-line example to run the container (remember to provide your AWS credentials if needed and the path of the configuration file):

You can refer to the official guide for more details or alternative deployment modes.

Prometheus configuration

In order to scrape the newly created exporter, add a new job to the Prometheus configuration file. You will also need to define some relabeling rules in order to add the instance label required by Akamas to properly filter the incoming metrics. In the example below the instance label is copied from the instance’s Name tag:

Notice: AWS bills CloudWatch usage in batches of 1 million requests, where every metric counts as a single request. To avoid unnecessary expenses, configure an appropriate scraping interval.
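To pick an appropriate interval, it helps to estimate how many billed requests a given configuration generates. The back-of-the-envelope sketch below uses the cost model described above (one request per exported metric per scrape); the metric and instance counts are illustrative:

```python
# Rough estimate of CloudWatch API requests generated by the exporter;
# every exported metric counts as one request per scrape.

def monthly_requests(num_metrics: int, num_instances: int, scrape_interval_s: int) -> int:
    """Requests per 30-day month for num_metrics metrics across num_instances instances."""
    scrapes_per_month = 30 * 24 * 3600 // scrape_interval_s
    return num_metrics * num_instances * scrapes_per_month

# e.g. 13 metrics (as in the example configuration), 2 instances, 60s interval
print(monthly_requests(13, 2, 60))  # 1123200 — just above one billed batch of 1M
```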

Additional workflow task

Once you configured the exporter in the Prometheus configuration you can start to fetch metrics using the Prometheus provider. The following sections describe some scripts you can add as tasks in your workflow.

Wait for metrics

It’s worth noting that CloudWatch may require a few minutes to aggregate the stats according to the configured granularity, causing the telemetry provider to fail while trying to fetch data points that are not available yet. To avoid such issues you can add at the end of your workflow a task using an Executor operator to wait for the CloudWatch metrics to be ready. The following script is an example implementation:

Start/stop the exporter as needed

Since Amazon bills your CloudWatch queries, it is wise to run the exporter only when needed. The following script allows you to manage the exporter from the workflow by adding the following tasks:

  • start the container right before the beginning of the load test (command: bash script.sh start)

  • stop the container after the metrics publication, as described in the previous section (command: bash script.sh stop).

Custom Configuration file

The example below is the Akamas-supported configuration, fetching metrics of EC2 instances named server1 and server2.

  • aws_metric_name: the name of the AWS metric to fetch

  • aws_dimensions: the dimensions to expose as labels

  • aws_dimension_select: the dimensions to filter over

  • aws_statistics: the list of metric statistics to expose

  • aws_tag_select: optional tags to filter on

    • tag_selections: map containing the list of values to select for each tag

    • resource_type_selection: resource type to fetch the tags from (see: Resource Types)

    • resource_id_dimension: dimension to use for the resource id (see: Resource Types)

    region: us-east-2
    metrics:
      - aws_namespace: AWS/EC2
        aws_metric_name: CPUUtilization
        aws_statistics: [Average]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: NetworkIn
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: NetworkOut
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: NetworkPacketsIn
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: NetworkPacketsOut
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: CPUCreditUsage
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: CPUCreditBalance
        aws_statistics: [Average]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: EBSReadOps
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: EBSWriteOps
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: EBSReadBytes
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: EBSWriteBytes
        aws_statistics: [Sum]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: EBSIOBalance%
        aws_statistics: [Average]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    
      - aws_namespace: AWS/EC2
        aws_metric_name: EBSByteBalance%
        aws_statistics: [Average]
        aws_dimensions: [InstanceId]
        # aws_dimension_select:
        #   InstanceId: [i-XXXXXXXXXXXXXXXXX]
        aws_tag_select:
          tag_selections:
            Name: [server1, server2]
          resource_type_selection: ec2:instance
          resource_id_dimension: InstanceId
    docker run -d --name cloudwatch_exporter \
      -p 9106:9106 \
      -v $(pwd)/cloudwatch-exporter.yaml:/config/config.yml \
      -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
      prom/cloudwatch-exporter
    scrape_configs:
      - job_name: cloudwatch_exporter
        scrape_interval: 60s
        scrape_timeout: 30s
        static_configs:
          - targets: [cloudwatch_exporter:9106]
        metric_relabel_configs:
          - source_labels: [tag_Name]
            regex: '(.+)'
            target_label: instance
    METRIC=aws_rds_cpuutilization_sum   # metric to check for
    DELAY_SEC=15
    RETRIES=60
    
    NOW=`date +'%FT%T.%3NZ'`
    
    for i in `seq $RETRIES`; do
      sleep $DELAY_SEC
      curl -sS "http://prometheus_host/api/v1/query?query=${METRIC}&time=${NOW}" | jq -ce '.data.result[]' && exit 0
    done
    
    exit 255
    #!/bin/bash
    
    set -e
    
    CMD=$1
    CONT_NAME=cloudwatch_exporter
    
    stop_cont() {
      [ -z `docker ps -aq -f "name=${CONT_NAME}"` ] || (echo Removing ${CONT_NAME} && docker rm -f ${CONT_NAME})
    }
    
    case $CMD in
      stop|remove)
        stop_cont
        ;;
    
      start)
        stop_cont
    
        AWS_ACCESS_KEY_ID=`awk 'BEGIN { FS = "=" } /aws_access_key_id/ {print $2 }' ~/.aws/credentials | tr -d '[:space:]'`
        AWS_SECRET_ACCESS_KEY=`awk 'BEGIN { FS = "=" } /aws_secret_access_key/ {print $2 }' ~/.aws/credentials | tr -d '[:space:]'`
    
        echo Starting container $CONT_NAME
        docker run -d --name $CONT_NAME \
          -p 9106:9106 \
          -v ~/oracle-database/utils/cloudwatch-exporter.yaml:/config/config.yml \
          -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
          prom/cloudwatch-exporter
        ;;
    
        *)
        echo Unrecognized option $CMD
        exit 255
        ;;
    esac