Akamas leverages the CSV telemetry provider to integrate a variety of data sources, including Instana.
All integrations based on this provider consist of two phases:
1. Metric extraction from Instana
2. Metric import via the CSV provider
The first phase is performed by a set of scripts, launched by a workflow task, that interact with the Instana API and save the metrics of interest for the experiment to a properly formatted CSV file.
The second phase is executed by the CSV telemetry provider, which imports the metrics from the CSV file.
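For reference, the generated file is a plain CSV with a timestamp column, a component column, and one column per metric. The column names, timestamp format, and values below are purely illustrative — the actual layout is defined by the extraction scripts and by the telemetry instance configuration:

```
TS,COMPONENT,container_memory_request
2024-01-01 10:00:00,my-container-component,256000000
2024-01-01 10:01:00,my-container-component,256000000
```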
In order to set up the integration you need:
- A host (or a container) that Akamas can access via SSH, used to run the extraction scripts and to host the generated CSV file. The host must be able to connect to the Instana APIs.
- A token to authenticate to your Instana account and extract the metrics.
The scripts required to set up this integration are not currently publicly available. To obtain them, please contact support@akamas.io.
You can deploy the scripts once and re-use them across multiple studies, since all the required configurations are provided as arguments that can be changed directly in the Akamas workflow YAML or from the UI.
To deploy the scripts, extract the archive to a location of your choice on the host. You can verify that the scripts run correctly by invoking the extraction script with these placeholders substituted:
- <my-environment> with your environment id.
- <my-token> with the token you generated from Instana. You can read more about this in the official Instana documentation.
- <my-service-id> with the id of one of your services on Instana.
The script will extract the application metrics and save them to /tmp/instana/metrics.
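As an illustrative sketch, a verification run could look like the command below. The script name, interpreter, and endpoint form are assumptions (the archive you receive from support defines the actual entry point); the flags are the ones documented in the parameters table further down:

```shell
# Hypothetical invocation — replace extract_metrics.py with the actual
# script name shipped in the archive.
python3 extract_metrics.py \
  --endpoint https://<my-environment>.instana.io \
  --token <my-token> \
  --output_directory /tmp/instana/metrics \
  --filename service.csv \
  --component <my-application-component> \
  --type service \
  --id <my-service-id>
```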
These are the main parameters that can be used with the script, along with their description.
endpoint
: URL of the Instana environment (e.g., https://moviri-moviri.instana.io)
token
: environment token
window_size
: size, in milliseconds, of the window for which the metrics are collected. The script collects metrics from now - window_size to now
rollup
: depending on the selected time frame, it is possible to select a rollup. Since the API returns at most 600 data points per call, if you select a window size of 1 hour the most accurate rollup you can query for is 5s. Valid rollups are listed in the table below
granularity
: granularity of the application metrics (services and endpoints)
max_attempts
: maximum number of attempts before considering the API call failed
timeshift
: fixed time to add to metrics' timestamps
timezone
: timezone to use in timestamps
output_dir
: directory in which to save the output files
timestamp_format
: format to which the epoch timestamp is converted in the final CSV file
filename
: output file name
component
: Akamas component name
type
: the available types are infrastructure, service and endpoint
plugin
: plugin type; the available plugins are kubernetesPod, containerd, process and jvmRuntimePlatform
query
: query to select the correct entity; used for infrastructure entities
id
: entity id of the selected service or endpoint
Once the scripts have been deployed, you can use them across multiple studies.
To generate the CSV file with the required metrics, add the following task to the workflow of your study, taking care to substitute the following variables. Please note that all these variables can also be updated via the UI once the workflow has been created.
- <my-host> with the hostname or IP of the instance hosting the scripts.
- <my-user> with the username used to access the instance via SSH.
- <my-key> with an SSH key to access the instance.
- <my-environment> with your environment id.
- <my-token> with the token you generated from Instana. You can read more about this in the official Instana documentation.
- <my-application-component> with the name of the component of type Web Application in your system.
- <my-jvm-component> with the name of the component of type open-jdk in your system.
- <my-container-component> with the name of the component of type container in your system.
- <my-instana-process> with the id of the Instana process you want to extract.
- <my-instana-jvm> with the id of the Instana JVM you want to extract.
- <my-instana-pod-name> with the name of the pod on Instana you want to extract.
- <my-instana-container-name> with the name of the container on Instana you want to extract.
- <my-instana-endpoint-id> with the id of the endpoint on Instana you want to extract.
- <my-instana-service-id> with the id of the service on Instana you want to extract.
Note that if your system does not include all of these components, you can simply omit the corresponding commands, as described in the YAML file.
Please note that the scripts write their results to the /tmp/instana/metrics folder. If you wish to run multiple studies in parallel, you may need to change this folder as well.
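The actual task definition is provided with the scripts archive; the snippet below is only an illustrative sketch, assuming an Executor-style operator that runs the extraction script over SSH — the operator name, script path, and file names are assumptions to be replaced with the ones from your archive:

```yaml
name: extract instana metrics
operator: Executor
arguments:
  host:
    hostname: <my-host>
    username: <my-user>
    key: <my-key>
  # One extraction command per component; omit the ones your system lacks.
  command: >
    python3 /home/<my-user>/instana/extract_metrics.py
    --endpoint https://<my-environment>.instana.io
    --token <my-token>
    --type service
    --id <my-instana-service-id>
    --component <my-application-component>
    --output_directory /tmp/instana/metrics
    --filename service.csv
```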
To set up the CSV telemetry provider, create a new telemetry instance for each of your system components.
Below you can find the configuration for each supported component type.
Take care to substitute the following variables:
- <my-host> with the hostname or IP of the instance hosting the scripts.
- <my-user> with the username used to access the instance via SSH.
- <my-key> with an SSH key to access the instance.
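As an illustrative sketch of such a telemetry instance, assuming the CSV provider's usual SSH-based configuration — verify every field name and the per-metric mapping against the CSV provider reference documentation for your Akamas version:

```yaml
provider: CSV
config:
  address: <my-host>
  username: <my-user>
  authType: key
  auth: <my-key>
  # Location where the extraction scripts write their output.
  remoteFilePattern: /tmp/instana/metrics/*.csv
  # Column names are assumptions; they must match the generated CSV.
  componentColumn: COMPONENT
  timestampColumn: TS
  timestampFormat: yyyy-MM-dd HH:mm:ss
```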
Here you can find the list of supported metrics. Metrics from Instana are mapped to metrics from the Akamas optimization pack. As an example, the memoryRequests metric on the kubernetesPod entity in Instana is mapped to the container_memory_request metric of the Kubernetes Container component type.
Valid rollup values:

| Rollup | Value |
|---|---|
| 1 second | 1 |
| 5 seconds | 5 |
| 1 minute | 60 |
| 5 minutes | 300 |
| 1 hour | 3600 |

Script parameters:

| Name | Argument | Required | Default |
|---|---|---|---|
| Endpoint | -e, --endpoint | True | - |
| Token | -t, --token | True | - |
| Output Directory | -o, --output_directory | True | - |
| Window Size | -w, --window_size | False | 3600000 |
| Rollup | -r, --rollup | False | 60 |
| Granularity | -g, --granularity | False | 60 |
| Filename | -f, --filename | True | - |
| Component | -c, --component | True | - |
| Type | -tp, --type | True | - |
| Plugin | -p, --plugin | False | - |
| Query | -q, --query | False | - |
| Id | -id, --id | False | - |
| Max Attempts | -ma, --max_attempts | False | 5 |
| Timezone | -tz, --timezone | False | UTC |
| Timeshift | -ts, --timeshift | False | 0 |
| Timestamp Format | -tf, --timestamp_format | False | %Y-%m-%d %H:%M:00 |
| Start Timestamp | -st, --start_timestamp | False | - |

kubernetesPod

| Instana Metric | Akamas Component Type | Akamas Metric |
|---|---|---|
| cpuRequests * 1000 | Kubernetes Container | container_cpu_request |
| cpuLimits * 1000 | Kubernetes Container | container_cpu_limit |
| memoryRequests | Kubernetes Container | container_memory_request |
| memoryLimits | Kubernetes Container | container_memory_limit |

containerd

| Instana Metric | Akamas Component Type | Akamas Metric |
|---|---|---|
| cpu.total_usage | Kubernetes Container | container_cpu_util |
| memory.usage | Kubernetes Container | container_memory_util |
| memory.total_rss | Kubernetes Container | container_memory_working_set |
| cpu.throttling_time | Kubernetes Container | container_cpu_throttle_time |

jvmRuntimePlatform

| Instana Metric | Akamas Component Type | Akamas Metric |
|---|---|---|
| threads.blocked | java-openjdk-XX | jvm_threads_deadlocked |
| jvm.heap.maxSize | java-openjdk-XX | jvm_heap_size |
| memory.used | java-openjdk-XX | jvm_memory_used |
| suspension.time | java-openjdk-XX | jvm_gc_duration |

service

| Instana Metric | Akamas Component Type | Akamas Metric |
|---|---|---|
| calls | Web Application | transactions_throughput |
| erroneousCalls | Web Application | transactions_error_throughput |
| latency - MEAN | Web Application | transactions_response_time |
| latency - P90 | Web Application | transactions_response_time_p90 |
| latency - P99 | Web Application | transactions_response_time_p99 |

endpoint

| Instana Metric | Akamas Component Type | Akamas Metric |
|---|---|---|
| calls | Web Application | requests_throughput |
| erroneousCalls | Web Application | requests_error_throughput |
| latency - MEAN | Web Application | requests_response_time |
| latency - P90 | Web Application | requests_response_time_p90 |
| latency - P99 | Web Application | requests_response_time_p99 |