# Optimizing Spark

When optimizing applications running on Apache Spark, the goal is to find the configuration that makes the best use of the allocated resources or minimizes the execution time.

Please refer to the [Spark optimization pack](https://docs.akamas.io/akamas-docs/3.2.1-1/akamas-reference/optimization-packs/spark-pack) for the list of component types, parameters, metrics, and constraints.

### Workflows <a href="#workflow-design" id="workflow-design"></a>

#### Applying parameters <a href="#applying-parameters" id="applying-parameters"></a>

Akamas offers several operators that you can use to apply configuration parameters to the Spark application under test. In particular, we suggest using the [Spark SSH Submit operator](https://docs.akamas.io/akamas-docs/3.2.1-1/akamas-reference/workflow-operators/sparksshsubmit-operator), which connects to a target instance to submit the application using the configuration parameters to test.

Other solutions include:

* the [Spark Livy Operator](https://docs.akamas.io/akamas-docs/3.2.1-1/akamas-reference/workflow-operators/sparklivy-operator), which allows submitting the application along with the configuration parameters using the [Livy REST interface](https://livy.incubator.apache.org/docs/latest/rest-api.html)
* the standard [Executor operator](https://docs.akamas.io/akamas-docs/3.2.1-1/akamas-reference/workflow-operators/executor-operator), which allows running a custom command or script once the [FileConfigurator operator](https://docs.akamas.io/akamas-docs/3.2.1-1/akamas-reference/workflow-operators/fileconfigurator-operator) has updated the default Spark configuration file, or a custom one, using a template.
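
In the file-based approach, the FileConfigurator operator substitutes parameter placeholders in a template file before each experiment. A minimal sketch of a `spark-defaults.conf` template, assuming a component named `spark`; the parameter names shown are illustrative and should match the ones defined in the Spark optimization pack:

```
# Template for spark-defaults.conf: each ${...} token is replaced by the
# FileConfigurator with the value chosen for the current experiment.
spark.executor.memory     ${spark.executor_memory}
spark.executor.cores      ${spark.executor_cores}
spark.driver.memory       ${spark.driver_memory}
```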

#### A typical workflow <a href="#a-typical-workflow" id="a-typical-workflow"></a>

You can organize a typical workflow to optimize a Spark application in three parts:

1. Set up the test environment
   1. prepare any required input data
   2. apply the Spark configuration parameters, if you are using a file-based solution
2. Execute the Spark application
3. Perform any cleanup

Here’s an example of a typical workflow where Akamas executes the Spark application using the [Spark SSH Submit operator](https://docs.akamas.io/akamas-docs/3.2.1-1/akamas-reference/workflow-operators/sparksshsubmit-operator):

{% code lineNumbers="true" %}

```yaml
name: Spark workflow
tasks:
   - name: cwspark
     arguments:
        master: yarn
        deployMode: cluster
        file: /home/hadoop/scripts/pi.py
        args: [ 100 ]
```

{% endcode %}
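
Conceptually, the operator submits the application on the target host via `spark-submit`. A hypothetical sketch of how the task arguments above could map to the command line (the actual operator logic is not shown in this document and may differ):

```python
def build_spark_submit(arguments: dict) -> str:
    """Assemble a spark-submit command line from SSH Submit-style arguments.

    Illustrative only: the real operator builds and runs the command remotely.
    """
    parts = ["spark-submit"]
    parts += ["--master", arguments["master"]]
    parts += ["--deploy-mode", arguments["deployMode"]]
    # Configuration parameters under test would be passed as --conf entries,
    # e.g. --conf spark.executor.memory=4g (illustrative).
    for key, value in arguments.get("conf", {}).items():
        parts += ["--conf", f"{key}={value}"]
    parts.append(arguments["file"])          # the application to submit
    parts += [str(a) for a in arguments.get("args", [])]
    return " ".join(parts)

cmd = build_spark_submit({
    "master": "yarn",
    "deployMode": "cluster",
    "file": "/home/hadoop/scripts/pi.py",
    "args": [100],
})
print(cmd)
# spark-submit --master yarn --deploy-mode cluster /home/hadoop/scripts/pi.py 100
```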

### Telemetry Providers <a href="#telemetry-providers" id="telemetry-providers"></a>

Akamas can retrieve Spark application statistics from the Spark History Server using the [Spark History Server provider](https://docs.akamas.io/akamas-docs/3.2.1-1/integrating-akamas/integrating-telemetry-providers/spark-history-server-provider). This provider maps the metrics in this optimization pack to the statistics exposed by the History Server endpoint.

Here’s a configuration example for a telemetry provider instance:

{% code lineNumbers="true" %}

```yaml
provider: SparkHistoryServer
config:
  address: sparkmaster.akamas.io
  port: 18080
```

{% endcode %}
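
The History Server exposes these statistics through its REST API, e.g. `GET /api/v1/applications` on port 18080. A minimal sketch of parsing an application-list response, using an inline sample payload instead of a live server (the values are illustrative):

```python
import json

# Sample payload in the shape returned by the Spark History Server REST API
# (GET http://<history-server>:18080/api/v1/applications); values are made up.
sample = """
[
  {
    "id": "application_1700000000000_0001",
    "name": "PySpark Pi",
    "attempts": [
      { "completed": true, "duration": 83500 }
    ]
  }
]
"""

apps = json.loads(sample)
for app in apps:
    last = app["attempts"][-1]          # most recent attempt
    status = "completed" if last["completed"] else "running"
    # duration is reported in milliseconds
    print(f'{app["id"]}: {app["name"]} ({status}, {last["duration"] / 1000:.1f} s)')
# application_1700000000000_0001: PySpark Pi (completed, 83.5 s)
```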

### Examples

See this [page](https://docs.akamas.io/akamas-docs/3.2.1-1/knowledge-base/optimizing-a-spark-application) for an example of a study leveraging the Spark pack.
