This section provides guidelines on how to define optimization studies, by means of a few examples related to single-technology/layer systems, focusing in particular on how to define workflows and telemetry providers.
More complex real-world examples are provided by the Knowledge Base guide.
This page provides guidance on optimizing web applications. Please refer to the Web Application optimization pack for the list of component types, parameters, metrics, and constraints.
No specialized telemetry solution to gather Web Application metrics is included. However, the following providers can integrate with the metrics defined in this pack:
CSV File Provider: this provider can be configured to ingest data points generated by any monitoring application able to export the data in CSV format.
Load-testing providers: integrations leveraging NeoLoad Web, LoadRunner Professional, or LoadRunner Enterprise as a load generator can use the corresponding out-of-the-box provider, which uses the metrics defined in this optimization pack.
The provided component type does not define any parameter. The workflow will optimize parameters defined in other component types representing the underlying technological stack.
A typical workflow to optimize a web application is structured in three parts:
Configure and restart the application
Use the FileConfigurator operator to interpolate the tuned parameters into the configuration files of the underlying stack.
Restart the application using an Executor operator.
Wait for the application to come up using the Sleep or Executor operator.
Run the test
Use any of the available operators to trigger the execution of the performance test against the application.
Perform the cleanup
Use any of the available operators to restore the application to its original state.
Here's an example workflow to perform a test on a Java web application using NeoLoad as a load generator:
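A minimal sketch of such a workflow, assuming the application and the load generator run on hosts reachable over SSH; hostnames, paths, and the NeoLoad launcher script are illustrative:

```yaml
name: webapp-optimization
tasks:
  - name: Configure application
    operator: FileConfigurator
    arguments:
      source:
        hostname: webapp.example.com        # illustrative host
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/app.conf.templ     # template with ${component.parameter} tokens
      target:
        hostname: webapp.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /opt/app/conf/app.conf
  - name: Restart application
    operator: Executor
    arguments:
      command: sudo systemctl restart webapp && sleep 60   # wait for the app to come up
      host:
        hostname: webapp.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Run performance test
    operator: Executor
    arguments:
      command: /scripts/run_neoload_test.sh   # hypothetical wrapper that launches the NeoLoad scenario
      host:
        hostname: loadgen.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
```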
See this page for an example of a study leveraging the Web Application pack.
When optimizing Kubernetes applications, typically the goal is to find the configuration that assigns resources to containerized applications so as to minimize waste and ensure the quality of service.
Please refer to the Kubernetes optimization pack for the list of component types, parameters, metrics, and constraints.
Akamas offers different operators to configure Kubernetes entities. In particular, you can use the FileConfigurator operator to update the definition file of a resource and apply it with the Executor operator.
The following example is the definition of a deployment, where the replicas and resources are templatized in order to work with the FileConfigurator:
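A minimal sketch of such a templatized definition file; the component name (app) and the parameter names are illustrative and must match those defined in your study:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: ${app.replicas}                 # token interpolated by the FileConfigurator
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest
          resources:
            requests:
              cpu: ${app.cpu_request}
              memory: ${app.memory_request}
            limits:
              cpu: ${app.cpu_limit}
              memory: ${app.memory_limit}
```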
A typical workflow to optimize a Kubernetes application is usually structured as follows:
Configure the Kubernetes artifacts: use the FileConfigurator operator to create the definition files starting from a template.
Apply the new parameters: apply the updated definitions using the Executor operator.
Wait for the application to be ready: run a custom script to wait until the rollout is complete.
Run the test: execute the benchmark.
Here’s an example of a typical workflow for a Kubernetes application:
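A minimal sketch, assuming kubectl is available on the host running the commands; hostnames, paths, and scripts are illustrative:

```yaml
name: k8s-optimization
tasks:
  - name: Configure Kubernetes artifacts
    operator: FileConfigurator
    arguments:
      source:
        hostname: bastion.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/deployment.yaml.templ
      target:
        hostname: bastion.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /manifests/deployment.yaml
  - name: Apply the new parameters
    operator: Executor
    arguments:
      command: kubectl apply -f /manifests/deployment.yaml
      host: &bastion                # YAML anchor to avoid repeating the connection details
        hostname: bastion.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Wait for the application to be ready
    operator: Executor
    arguments:
      command: kubectl rollout status deployment/my-app --timeout=5m
      host: *bastion
  - name: Run the test
    operator: Executor
    arguments:
      command: /scripts/run_benchmark.sh    # hypothetical benchmark launcher
      host: *bastion
```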
Akamas can access Kubernetes metrics using the Prometheus provider. This provider comes out of the box with a set of default queries to interrogate a Prometheus instance configured to fetch data from cAdvisor and kube-state-metrics.
Here’s a configuration example for a telemetry provider instance that uses Prometheus to extract all the Kubernetes metrics defined in this optimization pack:
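A minimal sketch, assuming a Prometheus instance reachable at the given (illustrative) address:

```yaml
provider: Prometheus
config:
  address: prometheus.example.com   # illustrative address
  port: 9090
```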
where the configuration of the monitored component provides the additional filters as in the following snippet:
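For instance (the component type name and the filter values are illustrative):

```yaml
name: app
description: The container running the application
componentType: Kubernetes Container
properties:
  prometheus:
    namespace: my-namespace
    pod: my-app-.*        # wildcard to match the auto-generated pod names
    container: my-app
```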
Please keep in mind that some resources, such as pods belonging to deployments, require wildcards in order to match the auto-generated names.
See this page for an example of a study leveraging the Kubernetes pack.
When optimizing applications running on the Apache Spark framework, the goal is to find the configurations that best optimize the allocated resources or the execution time.
Please refer to the Spark optimization pack for the list of component types, parameters, metrics, and constraints.
Akamas offers several operators that you can use to apply the parameters for the tuned Spark application. In particular, we suggest using the Spark SSH Submit operator, which connects to a target instance to submit the application using the configuration parameters to test.
Other solutions include:
the Spark Livy Operator, which allows submitting the application along with the configuration parameters using the Livy REST interface
the standard Executor operator, which allows running a custom command or script after the FileConfigurator operator has updated the default Spark configuration file, or a custom one, using a template.
You can organize a typical workflow to optimize a Spark application in three parts:
Set up the test environment
Prepare any required input data
Apply the Spark configuration parameters, if you are going for a file-based solution
Execute the Spark application
Perform cleanup
Here’s an example of a typical workflow where Akamas executes the Spark application using the Spark SSH Submit operator:
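A minimal sketch; the operator key, connection details, and application path are illustrative (refer to the Spark SSH Submit operator reference for the exact arguments):

```yaml
name: spark-optimization
tasks:
  - name: Set up the test environment
    operator: Executor
    arguments:
      command: /scripts/prepare_input_data.sh   # hypothetical data-preparation script
      host: &spark_client
        hostname: spark-client.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Submit Spark application
    operator: SSHSparkSubmit      # illustrative key for the Spark SSH Submit operator
    arguments:
      hostname: spark-client.example.com
      username: akamas
      key: /home/akamas/.ssh/id_rsa
      file: /apps/my-spark-app.jar   # the tuned configuration parameters are injected by the operator
  - name: Perform cleanup
    operator: Executor
    arguments:
      command: /scripts/cleanup.sh
      host: *spark_client
```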
Akamas can access Spark History Server statistics using the Spark History Server Provider. This provider maps the metrics in this optimization pack to the statistics provided by the History Server endpoint.
Here’s a configuration example for a telemetry provider instance:
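A minimal sketch; the provider key and the History Server address are illustrative:

```yaml
provider: SparkHistoryServer     # illustrative key for the Spark History Server provider
config:
  address: spark-history.example.com
  port: 18080                    # default History Server port
```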
See this page for an example of a study leveraging the Spark pack.
When optimizing Java applications based on OpenJDK, typically the goal is to tune the JVM from both the point of view of cost savings and quality of service.
Please refer to the Java OpenJDK optimization pack for the list of component types, parameters, metrics, and constraints.
Akamas offers many operators that you can use to apply the parameters for the tuned JVM. In particular, it is suggested to use the FileConfigurator operator to create a configuration file or inject the arguments directly into the command string using a template.
The following is an example of a templatized execution string:
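A sketch: the component name (jvm) and the parameter tokens are illustrative and depend on the parameters selected in your study.

```
java ${jvm.jvm_gcType} ${jvm.jvm_maxHeapSize} ${jvm.jvm_newSize} -jar /opt/app/myapp.jar
```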
A typical workflow to optimize a Java application can be structured in two parts:
Configure the Java arguments
Generate a configuration file or a command string containing the selected JVM parameters using a FileConfigurator operator.
Run the Java application
Use any of the available operators to execute a performance test against the application.
Here’s an example of a typical workflow where Akamas executes the script containing the command string generated by the file configurator:
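A minimal sketch; hostnames and paths are illustrative:

```yaml
name: java-optimization
tasks:
  - name: Configure the Java arguments
    operator: FileConfigurator
    arguments:
      source:
        hostname: app.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/run_app.sh.templ   # template containing the command string above
      target:
        hostname: app.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /opt/app/run_app.sh
  - name: Run the Java application and the performance test
    operator: Executor
    arguments:
      command: bash /opt/app/run_app.sh
      host:
        hostname: app.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
```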
Here’s a configuration example for a telemetry provider instance that uses Prometheus to extract all the JMX metrics defined in this optimization pack:
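A minimal sketch, with an illustrative Prometheus address:

```yaml
provider: Prometheus
config:
  address: prometheus.example.com
  port: 9090
```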
where the configuration of the monitored component provides the additional references as in the following snippet:
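For instance (the component type and the label values used to filter the JMX metrics are illustrative):

```yaml
name: jvm
description: The JVM running the application
componentType: java-openjdk-11
properties:
  prometheus:
    instance: app.example.com:9404   # illustrative JMX exporter endpoint
    job: jmx-exporter
```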
When optimizing Linux systems, typically the goal is to allow cost savings or improve performance and the quality of service, such as sustaining higher levels of traffic or enabling transactions with lower latency.
Please refer to the Linux optimization pack for the list of component types, parameters, metrics, and constraints.
Akamas provides the LinuxConfigurator operator as the preferred way to apply Linux parameters to a system to be optimized. The operator connects via SSH to your Linux components and employs different strategies to apply Linux parameters. Notice that this operator allows you to exclude some block/network devices from being configured.
You can organize a typical workflow to optimize Linux in three parts:
Configure Linux
Use the LinuxConfigurator operator to apply configuration parameters to the operating system; no restart is required
Test the performance of the system
Use any of the available operators to execute a performance test against the system
Perform some cleanup
Use any of the available operators to perform any clean-up needed to guarantee that subsequent executions of the workflow run without problems
Here’s an example of a typical workflow for a Linux system:
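A minimal sketch; the LinuxConfigurator task operates on the Linux component defined in the system, and the hostnames and scripts are illustrative:

```yaml
name: linux-optimization
tasks:
  - name: Configure Linux
    operator: LinuxConfigurator    # applies the tuned OS parameters over SSH; no restart required
  - name: Run performance test
    operator: Executor
    arguments:
      command: /scripts/run_test.sh       # hypothetical test launcher
      host: &linux_host
        hostname: linux-host.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Cleanup
    operator: Executor
    arguments:
      command: /scripts/cleanup.sh
      host: *linux_host
```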
When optimizing a MongoDB instance, typically the goal is one of the following:
Throughput optimization - increasing the capacity of a MongoDB deployment to serve clients
Cost optimization - decreasing the size of a MongoDB deployment while guaranteeing the same service level
To reach such goals, it is recommended to tune the parameters that manage the cache, which is one of the elements that impact performance the most, in particular those parameters that control the lifecycle and the size of MongoDB’s cache.
Even though it is possible to evaluate performance improvements of MongoDB by looking at the business application that uses it as its database, at the end-to-end throughput or response time, or at the results of a performance test tool such as YCSB, the optimization pack provides internal MongoDB metrics that can also shed light on how MongoDB is performing, in particular in terms of throughput, for example:
The number of documents inserted in the database per second
The number of active connections
Please refer to the MongoDB optimization pack for the list of component types, parameters, metrics, and constraints.
Akamas offers many operators that you can use to apply freshly tuned configuration parameters to your MongoDB deployment. In particular, we suggest using the FileConfigurator operator to create a configuration script file and the Executor operator to execute it and thus apply the parameters.
FileConfigurator and Executor operator
You can leverage the FileConfigurator by creating a template file on a remote host that contains some scripts to configure MongoDB with placeholders that will be replaced with the values of parameters tuned by Akamas.
Here’s an example of the aforementioned template file:
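A minimal sketch, assuming the parameters are applied at runtime via the mongo shell; the component name (mongo) and the parameter names are illustrative:

```bash
#!/bin/bash
# Template file: the ${...} tokens are replaced by the FileConfigurator
# with the values tuned by Akamas (parameter names are illustrative).
mongo admin --eval "db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig: 'cache_size=${mongo.cache_size}MB,eviction=(threads_min=${mongo.eviction_threads_min},threads_max=${mongo.eviction_threads_max})'
})"
```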
Once the FileConfigurator has replaced all the tokens, you can use the Executor operator to actually execute the script to configure MongoDB.
A typical workflow to optimize a MongoDB deployment can be structured in three parts:
Configure MongoDB
Test the performance of the application
Prepare test results (optional)
Cleanup
Finally, when running performance experiments on a database, it is common practice to execute some cleanup tasks at the end of the test to restore the database’s initial condition and avoid impacting subsequent tests.
Here’s an example of a typical workflow for a MongoDB deployment, which uses the YCSB benchmark to run performance tests:
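A minimal sketch; hostnames, paths, and the cleanup script are illustrative:

```yaml
name: mongodb-optimization
tasks:
  - name: Configure MongoDB
    operator: FileConfigurator
    arguments:
      source:
        hostname: mongo.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/configure_mongo.sh.templ
      target:
        hostname: mongo.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /scripts/configure_mongo.sh
  - name: Apply configuration
    operator: Executor
    arguments:
      command: bash /scripts/configure_mongo.sh
      host: &mongo_host
        hostname: mongo.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Run YCSB benchmark
    operator: Executor
    arguments:
      command: cd /ycsb && bin/ycsb run mongodb -P workloads/workloada   # illustrative YCSB invocation
      host: *mongo_host
  - name: Cleanup
    operator: Executor
    arguments:
      command: /scripts/restore_initial_data.sh   # hypothetical cleanup script
      host: *mongo_host
```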
Here’s an example of a telemetry provider instance that uses Prometheus to extract all the MongoDB metrics defined in this optimization pack:
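A minimal sketch, with an illustrative Prometheus address:

```yaml
provider: Prometheus
config:
  address: prometheus.example.com
  port: 9090
```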
When optimizing a PostgreSQL instance, typically the goal is one of the following:
Throughput optimization: increasing the number of transactions
Cost optimization: minimizing resource consumption according to a typical workload, thus cutting costs
Please refer to the PostgreSQL optimization pack for the list of component types, parameters, metrics, and constraints.
Akamas offers many operators that you can use to apply the parameters for the tuned PostgreSQL instances. In particular, we suggest using the FileConfigurator operator for parameter templating and configuration, and the Executor operator for restoring DB data and launching scripts.
A typical optimization process involves the following steps, as sketched in the example below:
Configure PostgreSQL parameters
Restore DB data
Restart PostgreSQL and wait for the initialization
Run benchmark
Parse results
Please note that most PostgreSQL parameters do not need an application restart.
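A minimal sketch of such a workflow (hostnames, paths, and scripts are illustrative):

```yaml
name: postgresql-optimization
tasks:
  - name: Configure PostgreSQL parameters
    operator: FileConfigurator
    arguments:
      source:
        hostname: pg.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/postgresql.conf.templ
      target:
        hostname: pg.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /etc/postgresql/postgresql.conf
  - name: Restore DB data
    operator: Executor
    arguments:
      command: /scripts/restore_db.sh       # hypothetical restore script
      host: &pg_host
        hostname: pg.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Restart PostgreSQL and wait for the initialization
    operator: Executor
    arguments:
      command: sudo systemctl restart postgresql && sleep 30
      host: *pg_host
  - name: Run benchmark
    operator: Executor
    arguments:
      command: /scripts/run_benchmark.sh
      host: *pg_host
  - name: Parse results
    operator: Executor
    arguments:
      command: /scripts/parse_results.sh    # e.g. produce a CSV for the CSV File provider
      host: *pg_host
```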
Akamas can access JMX metrics using the Prometheus provider. This provider comes out of the box with a set of default queries to interrogate a Prometheus instance configured to fetch data from a JMX exporter.
See this page for an example of a study leveraging the Java OpenJDK pack.
Akamas does not provide any specialized telemetry solution to gather Linux metrics, as these metrics can be collected in a variety of ways, leveraging a plethora of existing solutions. For example, the Prometheus provider supports Linux system metrics.
Use the FileConfigurator operator to specify an input and an output template file. The input template file is used to specify how to interpolate MongoDB parameters into a script, and the output file contains the actual configuration.
Use the Executor operator to reconfigure MongoDB, exploiting the output file produced in the previous step. You may need to restart MongoDB depending on the configuration parameters you want to optimize.
Either use the Sleep operator or the Executor operator to verify that the application is up and running and has finished any initialization logic (this step may not be necessary)
Use any of the available operators to execute a performance test against the application
If Akamas does not already automatically import performance test metrics, then you can use any of the available operators to extract test results and make them available to Akamas (for example, you can use an Executor operator to launch a script that produces a CSV of the test results that Akamas can consume using the CSV File provider)
Use any of the available operators to bring MongoDB back into a clean state to avoid impacting subsequent tests
Akamas offers many telemetry providers to extract MongoDB metrics; one of them is the Prometheus provider, which we can use to query MongoDB metrics collected by a Prometheus instance via the MongoDB Prometheus exporter.
See this page for an example of a study leveraging the MongoDB pack.
When optimizing an Oracle Database instance, typically the goal is to maximize the throughput of the Oracle-backed application or to minimize its resource consumption, thus reducing costs.
Please refer to the Oracle Database optimization pack for the list of component types, parameters, metrics, and constraints.
One common way to configure Oracle parameters is through the execution of ALTER SYSTEM statements on the database instance: to automate this task, Akamas provides the OracleConfigurator operator. For finer control, Akamas provides the FileConfigurator operator, which allows building custom statements in a script file that can be executed by the Executor operator.
Oracle Configurator
The OracleConfigurator operator allows the workflow to configure an on-premise instance with minimal configuration. The following snippet is an example of a configuration task, where all the connection arguments are already defined in the referenced component:
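A minimal sketch of such a task; the component name and the argument layout are illustrative:

```yaml
name: Update parameters
operator: OracleConfigurator
arguments:
  component: oracledb    # illustrative component holding the connection properties
```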
File Configurator and Executor
Most cloud providers offer web APIs as the only way to configure database services. In this case, the Executor operator can submit an API request through a custom executable using a configuration file generated by a FileConfigurator operator.
The following is an example workflow where a FileConfigurator task generates a configuration file (oraconf), followed by an Executor task that parses and submits the configuration to the API endpoint through a custom script (api_update_db_conf.sh):
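A minimal sketch; hostnames and paths are illustrative:

```yaml
name: oracle-cloud-configuration
tasks:
  - name: Generate configuration file
    operator: FileConfigurator
    arguments:
      source:
        hostname: bastion.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/oraconf.templ
      target:
        hostname: bastion.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /tmp/oraconf
  - name: Update DB configuration
    operator: Executor
    arguments:
      command: /scripts/api_update_db_conf.sh /tmp/oraconf
      host:
        hostname: bastion.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
```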
The optimization of an Oracle database usually includes the following tasks in the workflow, as implemented in the example below:
Apply the Oracle configuration suggested by Akamas and restart the instance if needed (Update parameters task).
Perform any additional warm-up task that may be required to bring the database up to the operating regime (Execute warmup task).
Execute the workload targeting the database or the front-end in front of it (Execute performance test task).
Restore the original state of the database in order to guarantee the consistency of further tests, removing any dirty data added by the workload and possibly flushing the database caches (Cleanup task).
The following is the complete YAML configuration file of the workflow described above:
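A minimal sketch; hostnames and scripts are illustrative, and the OracleConfigurator argument layout is illustrative as well:

```yaml
name: oracle-optimization
tasks:
  - name: Update parameters
    operator: OracleConfigurator
    arguments:
      component: oracledb
  - name: Execute warmup
    operator: Executor
    arguments:
      command: /scripts/warmup.sh          # hypothetical warm-up script
      host: &ora_host
        hostname: oracle.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Execute performance test
    operator: Executor
    arguments:
      command: /scripts/run_test.sh
      host: *ora_host
  - name: Cleanup
    operator: Executor
    arguments:
      command: /scripts/cleanup.sh         # remove dirty data and flush caches
      host: *ora_host
```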
Akamas offers many telemetry providers to extract Oracle Database metrics; one of them is the Prometheus provider, which we can use to query Oracle Database metrics collected by a Prometheus instance via the Prometheus Oracle Exporter.
The snippet below shows a toml configuration example for the Oracle Exporter extracting metrics regarding the Oracle sessions:
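A sketch of such a custom-metrics file, based on the exporter's [[metric]] format; the query and description are illustrative:

```toml
[[metric]]
context = "sessions"
labels = ["status", "type"]
metricsdesc = { value = "Gauge with the count of Oracle sessions by status and type." }
request = "SELECT status, type, COUNT(*) AS value FROM v$session GROUP BY status, type"
```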
The following example shows how to configure a telemetry instance for a Prometheus provider in order to query the data points extracted from the exporter described above:
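A minimal sketch, with an illustrative Prometheus address:

```yaml
provider: Prometheus
config:
  address: prometheus.example.com
  port: 9090
```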
See Optimizing an Oracle Database server instance and Optimizing an Oracle Database for an e-commerce service for examples of studies leveraging the Oracle Database pack.
When optimizing a MySQL instance, typically the goal is one of the following:
Throughput optimization: increasing the capacity of a MySQL deployment to serve clients
Cost optimization: decreasing the size of a MySQL deployment while guaranteeing the same service level
Please refer to the MySQL optimization pack for the list of component types, parameters, metrics, and constraints.
Usually, MySQL parameters are configured by writing them in the MySQL configuration file, typically called my.cnf and located under /etc/mysql/ on most Linux systems.
In order to preserve the original configuration file, it is best practice to use additional configuration files located in /etc/mysql/conf.d to override the default parameters. These files are automatically read by MySQL.
FileConfigurator and Executor operator
You can leverage the FileConfigurator operator by creating a template file on a remote host that contains a script to configure MySQL, with placeholders that will be replaced with the values of the parameters tuned by Akamas. Once the FileConfigurator has replaced all the placeholders, the Executor operator can be used to actually execute the script to configure and restart the database.
A typical workflow to optimize a MySQL deployment can be structured in three parts:
Configure MySQL
Use the FileConfigurator operator to specify an input and an output template file. The input template file is used to specify how to interpolate MySQL parameters into a configuration file, and the output file contains the result of the interpolation.
Restart MySQL
Use the Executor operator to restart MySQL allowing it to load the new configuration file produced in the previous step.
Optionally, use the Executor operator to verify that the application is up and running and has finished any initialization logic.
Test the performance of the application
Use any of the workflow operators to perform a performance test against the application.
Prepare test results
Use any of the workflow operators to organize test results so that they can be imported into Akamas using the supported telemetry providers (see also the section below).
Finally, when running performance experiments on databases, it is common practice to perform some cleanup tasks at the end of the test to restore the database's initial condition and avoid impacting subsequent tests.
Here’s an example of a typical workflow for MySQL, which uses the OLTPBench ResourceStresser benchmark to run performance tests:
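A minimal sketch; hostnames, paths, and the benchmark launcher are illustrative:

```yaml
name: mysql-optimization
tasks:
  - name: Configure MySQL
    operator: FileConfigurator
    arguments:
      source:
        hostname: mysql.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/akamas.cnf.templ    # template with the tuned MySQL parameters
      target:
        hostname: mysql.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /etc/mysql/conf.d/akamas.cnf
  - name: Restart MySQL
    operator: Executor
    arguments:
      command: sudo systemctl restart mysql && sleep 30
      host: &mysql_host
        hostname: mysql.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
  - name: Run benchmark
    operator: Executor
    arguments:
      command: /scripts/run_resourcestresser.sh   # hypothetical OLTPBench launcher
      host: *mysql_host
  - name: Cleanup
    operator: Executor
    arguments:
      command: /scripts/restore_db.sh
      host: *mysql_host
```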
Akamas can access MySQL metrics using the Prometheus provider. This provider can be leveraged to query MySQL metrics collected by a Prometheus instance via the MySQL Prometheus exporter.
Here’s an example of a telemetry provider instance that uses Prometheus to extract all the MySQL metrics defined in this optimization pack:
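A minimal sketch, with an illustrative Prometheus address:

```yaml
provider: Prometheus
config:
  address: prometheus.example.com
  port: 9090
```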
This page and this page describe examples of how to leverage the MySQL optimization pack.
When optimizing Java applications based on OpenJ9, typically the goal is to tune the JVM from both the point of view of cost savings and quality of service.
Please refer to the Eclipse OpenJ9 optimization pack for the list of component types, parameters, metrics, and constraints.
Akamas offers many operators that you can use to apply the parameters for the tuned JVM. In particular, it is suggested to leverage the FileConfigurator operator to create a configuration file or inject the arguments directly into the command string using a template.
The following is an example of a templatized execution string:
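A sketch: the component name (jvm) and the parameter tokens are illustrative and depend on the parameters selected in your study.

```
java ${jvm.jvm_gcPolicy} ${jvm.jvm_maxHeapSize} ${jvm.jvm_compressedRefs} -jar /opt/app/myapp.jar
```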
A typical workflow to optimize a Java application can be structured in two parts:
Configure the Java arguments
Generate a configuration file or a command string containing the selected JVM parameters using a FileConfigurator operator.
Run the Java application
Use any of the available operators to execute a performance test against the application.
Here’s an example of a typical workflow where Akamas executes the script containing the command string generated by the file configurator:
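A minimal sketch, analogous to the OpenJDK case; hostnames and paths are illustrative:

```yaml
name: openj9-optimization
tasks:
  - name: Configure the Java arguments
    operator: FileConfigurator
    arguments:
      source:
        hostname: app.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /templates/run_app.sh.templ   # template containing the command string above
      target:
        hostname: app.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
        path: /opt/app/run_app.sh
  - name: Run the Java application and the performance test
    operator: Executor
    arguments:
      command: bash /opt/app/run_app.sh
      host:
        hostname: app.example.com
        username: akamas
        key: /home/akamas/.ssh/id_rsa
```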
Here’s a configuration example for a telemetry provider instance that uses Prometheus to extract all the JMX metrics defined in this optimization pack:
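As for the OpenJDK case, a minimal sketch with an illustrative Prometheus address:

```yaml
provider: Prometheus
config:
  address: prometheus.example.com
  port: 9090
```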
where the configuration of the monitored component provides the additional references as in the following snippet:
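For instance (the component type and the label values used to filter the JMX metrics are illustrative):

```yaml
name: jvm
description: The OpenJ9 JVM running the application
componentType: eclipse-openj9
properties:
  prometheus:
    instance: app.example.com:9404   # illustrative JMX exporter endpoint
    job: jmx-exporter
```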
Akamas can access JMX metrics using the Prometheus provider. This provider comes out of the box with a set of default queries to interrogate a Prometheus instance configured to fetch data from a JMX exporter.
See this page for an example of a study leveraging the Eclipse OpenJ9 pack.