Create Spark History Server telemetry instances
Create a telemetry instance
To create an instance of the Spark History Server provider, build a YAML file (`instance.yml` in this example) with the definition of the instance:
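```yaml
# instance.yml - minimal sketch; the hostname is a placeholder and the
# provider identifier is an assumption to be checked against your Akamas installation
provider: SparkHistoryServer
config:
  address: spark-master.example.com
```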
Then you can create the instance for the system `spark-system` using the Akamas CLI:
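```bash
# assumes the usual "akamas create telemetry-instance <file> <system>" pattern
akamas create telemetry-instance instance.yml spark-system
```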
Configuration options
When you create an instance of the Spark History Server provider, you need to specify some configuration information that allows the provider to correctly extract and process metrics from the Spark History Server.
You can specify this configuration information within the `config` section of the instance definition YAML.
Required properties
- `address` - hostname of the Spark History Server instance
Telemetry instance reference
The following YAML file describes the definition of a telemetry instance.
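The sketch below assembles the options documented in this reference; the provider identifier and the concrete values are placeholders:

```yaml
# Complete instance definition - values are placeholders
provider: SparkHistoryServer            # assumed provider identifier
config:
  address: spark-master.example.com     # required: Spark History Server address
  port: 18080                           # optional: listening port
  importLevel: job                      # optional: granularity of the imported metrics
```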
The following table reports the reference for the `config` section within the definition of the Spark History Server provider instance:

| Field | Type | Description | Default value | Restriction | Required |
|---|---|---|---|---|---|
| `address` | URL | Spark History Server address | | | Yes |
| `importLevel` | String | Granularity of the imported metrics | `job` | Allowed values: `job`, `stage`, `task` | No |
| `port` | Integer | Spark History Server listening port | `18080` | | No |
Use cases
This section reports common use cases addressed by this provider.
Collect stage metrics of a Spark Application
Check the Spark Application page for a list of all the Spark application metrics available in Akamas.
This example shows how to configure a Spark History Server provider in order to collect performance metrics about a Spark application submitted to the cluster using the Spark SSH Submit operator.
As a first step, you need to create a YAML file (`spark_instance.yml`) containing the configuration the provider needs to connect to the Spark History Server, plus the filter on the desired level of granularity for the imported metrics:
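```yaml
# spark_instance.yml - sketch with placeholder values;
# the provider identifier is an assumption
provider: SparkHistoryServer
config:
  address: spark-master.example.com     # placeholder: Spark History Server hostname
  importLevel: stage                    # collect metrics down to stage granularity
```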
and then create the telemetry instance using the Akamas CLI:
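```bash
# same assumed "akamas create telemetry-instance <file> <system>" syntax as above
akamas create telemetry-instance spark_instance.yml spark-system
```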
Finally, you will need to define a workflow for your study that includes the submission of the Spark application to the cluster, in this case using the Spark SSH Submit operator:
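```yaml
# Workflow sketch - the operator identifier and its argument names are
# assumptions to be checked against the Spark SSH Submit operator reference
name: spark-workflow
tasks:
  - name: Run Spark application
    operator: SSHSparkSubmit             # assumed operator identifier
    arguments:
      host: spark-master.example.com     # placeholder: cluster entry point
      username: hadoop                   # placeholder: SSH user
      file: /path/to/application.jar     # placeholder: Spark application artifact
```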
Best practices
This section reports common best practices you can adopt to ease the use of this telemetry provider.
- configure metrics granularity: in order to reduce the collection time, configure the `importLevel` to import metrics with a granularity no finer than the study requires.
- wait for metrics publication: make sure in the workflow there is a few-minute interval between the end of the Spark application and the execution of the Spark telemetry instance, since the Spark History Server may take some time to complete the publication of the metrics (see the sketch after this list).
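For the second practice, one option is a short pause task between the Spark run and the end of the workflow. A sketch, assuming a generic sleep-style operator is available in your Akamas version:

```yaml
# fragment of a workflow's tasks section - the operator name is an assumption
  - name: Wait for metrics publication
    operator: Sleep
    arguments:
      seconds: 180    # leave the History Server a few minutes to publish metrics
```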