Optimizing a MySQL server database running OLTPBench

In this example study, we are going to optimize a MySQL instance by setting the performance goal of maximizing the throughput of operations towards the database.

For workload generation, in this example we are going to use OLTPBench, a popular open-source benchmarking suite for databases. OLTPBench supports several benchmarks; here we will use the Synthetic Resource Stresser.

To import the results of the benchmark into Akamas, we are going to use a custom script that converts its output to a CSV file that can be parsed by the CSV provider.

Environment Setup

In order to run the OLTPBench suite against a MySQL installation, you first need to install and configure both pieces of software. In the following, we will assume that MySQL and OLTPBench run on the same machine; to obtain more significant results in terms of performance, you may want to run them on separate hosts.

MySQL Installation

To install MySQL, please follow the official documentation. In the following, we will make a few assumptions about the location of the configuration files, the user running the server, and the location of the data files. These assumptions are based on a default installation of MySQL performed via apt on an Ubuntu instance.

  • Datafile location: /var/lib/mysql

  • Configuration file: /etc/mysql/conf.d/mysql.cnf

  • MySQL user: mysql

  • MySQL root user password: root
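
For reference, a sketch of such a default installation on Ubuntu, including setting the root password assumed in this example (the authentication step is illustrative and may differ depending on your MySQL version):

sudo apt update
sudo apt install -y mysql-server
# Set the root password assumed in this example (illustrative; on recent
# Ubuntu releases root initially authenticates via auth_socket)
sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';"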

This is a template for the configuration file (my.cnf.template):

[mysqld]

innodb_buffer_pool_size     = 134217728
innodb_thread_sleep_delay   = 10000
innodb_flush_method         = fsync
innodb_log_file_size        = 50331648
innodb_thread_concurrency   = 0
innodb_max_dirty_pages_pct  = 10.00000
innodb_read_ahead_threshold = 56

If your MySQL installation has different default values for these parameters, please update the provided scripts accordingly.
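
You can check the values currently in effect on your instance with a query like the following:

mysql -u root -proot -e "SHOW VARIABLES LIKE 'innodb_%';"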

OLTP Installation

To install OLTPBench you can download a pre-built version or build it from the official repository. In the following, we will assume that OLTPBench is installed in the /home/ubuntu/oltp folder.

To verify your installation and initialize the database, download the following set of scripts and place them in the /home/ubuntu/scripts folder, then move into that folder and run the init-db.sh script (shown below).
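
If you build OLTPBench from source, a minimal sketch of the classic Ant-based build looks as follows (assuming the oltpbenchmark/oltpbench repository on GitHub; the build steps may differ for the version you check out):

git clone https://github.com/oltpbenchmark/oltpbench.git /home/ubuntu/oltp
cd /home/ubuntu/oltp
ant bootstrap
ant resolve
ant build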

This is the init-db.sh script:

#!/bin/bash
set -e


cd "$(dirname "$0")"
cd ../oltp
mysql -u root -proot -e "CREATE DATABASE resourcestresser"
./oltpbenchmark --bench resourcestresser --config /home/ubuntu/scripts/resourcestresser.xml --create=true --load=true

sleep 5
sudo systemctl stop mysql

#Create the backup
echo "Backing up the database"
sudo rm -rf /tmp/backup
sudo mkdir /tmp/backup
sudo rsync -r --progress /var/lib/mysql /tmp/backup/
sleep 2

sudo systemctl start mysql
sudo systemctl status mysql

This script will:

  • connect to your MySQL installation

  • create a resourcestresser database for the test

  • run the OLTP data generation phase to populate the database

  • backup the initialized database under /tmp/backup

The resourcestresser.xml file contains the workload definition for the benchmark. The default settings are quite small and intended for testing purposes; you can modify them to suit your benchmarking needs, as in the sketch below.
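
For illustration, a minimal resourcestresser.xml following the standard OLTPBench configuration layout might look as follows; the connection settings match the MySQL setup above, while terminals, time, rate, and weights are illustrative values you should tune for your environment:

<?xml version="1.0"?>
<parameters>
    <!-- Connection details for the local MySQL instance -->
    <dbtype>mysql</dbtype>
    <driver>com.mysql.jdbc.Driver</driver>
    <DBUrl>jdbc:mysql://localhost:3306/resourcestresser</DBUrl>
    <username>root</username>
    <password>root</password>
    <isolation>TRANSACTION_SERIALIZABLE</isolation>
    <scalefactor>1</scalefactor>

    <!-- Workload: number of client terminals and execution phase -->
    <terminals>10</terminals>
    <works>
        <work>
            <time>300</time>
            <rate>unlimited</rate>
            <weights>17,17,17,17,16,16</weights>
        </work>
    </works>

    <!-- ResourceStresser transaction types, in the same order as the weights -->
    <transactiontypes>
        <transactiontype><name>CPU1</name></transactiontype>
        <transactiontype><name>CPU2</name></transactiontype>
        <transactiontype><name>IO1</name></transactiontype>
        <transactiontype><name>IO2</name></transactiontype>
        <transactiontype><name>Contention1</name></transactiontype>
        <transactiontype><name>Contention2</name></transactiontype>
    </transactiontypes>
</parameters>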

Optimization Setup

Here is a step-by-step explanation of all the required configurations for this example.

System

In this example, we are interested in optimizing the MySQL settings and measuring the peak throughput achieved with OLTPBench. Hence, we are going to create two components:

  • A mysql component which represents the MySQL instance, including all the configuration parameters

  • An OLTP component which represents OLTPBench and contains the custom metrics reported by the benchmark

The OLTP component

MySQL is a widespread technology, and Akamas provides a specific Optimization Pack to support its optimization. OLTPBench, on the other hand, is a benchmarking application that is not yet covered by a dedicated optimization pack, so to use it in our study we first need to define its metrics and component type. This operation only needs to be done once: the resulting component type can be reused across many systems.

First, build a metrics.yaml file with the following content:

metrics:
  - name: throughput
    description: The throughput of the database
    unit: tps

  - name: response_time_avg
    description: The average response time of the database
    unit: milliseconds

  - name: response_time_min
    description: The minimum response time of the database
    unit: milliseconds

  - name: response_time_25th
    description: The response time 25th percentile of the database
    unit: milliseconds

  - name: response_time_median
    description: The response time median of the database
    unit: milliseconds

  - name: response_time_75th
    description: The response time 75th percentile of the database
    unit: milliseconds

  - name: response_time_90th
    description: The response time 90th percentile of the database
    unit: milliseconds

  - name: response_time_95th
    description: The response time 95th percentile of the database
    unit: milliseconds

  - name: response_time_99th
    description: The response time 99th percentile of the database
    unit: milliseconds

  - name: response_time_max
    description: The maximum response time of the database
    unit: milliseconds

  - name: duration
    description: The duration of the task (load or benchmark execution)
    unit: seconds

You can now create the metrics by issuing the following command:

akamas create metrics metrics.yaml

Finally, create a file named resourcestresser.yaml with the following definition of the component type:

name: ResourceStresser
description: >
  ResourceStresser benchmark from OLTPBench for database systems. It is a
  purely synthetic benchmark that can create isolated contention on the system
  resources. Each of the benchmark’s transactions imposes some load on three
  specific resources: CPU, disk I/O, and locks.
parameters: []
metrics:
  - name: throughput
  - name: response_time_avg
  - name: response_time_max
  - name: response_time_min
  - name: response_time_25th
  - name: response_time_median
  - name: response_time_75th
  - name: response_time_90th
  - name: response_time_95th
  - name: response_time_99th
  - name: duration

You can now create the component type by issuing the following command:

akamas create component-type resourcestresser.yaml

Model the system

Here’s the definition of our system (system.yaml):

name: MySQL-ResourceStresser
description: A system for evaluating MySQL with OLTP Benchmark

Here’s the definition of our mysql component (mysql.yaml):

name: mysql
description: MySQL
componentType: MySQL 8.0
properties:
    hostname: gibbo.dev.akamas.io
    sshPort: "22"
    username: ubuntu
    sourcePath: /home/ubuntu/scripts/my.cnf.template
    targetPath: /home/ubuntu/scripts/my.cnf
    prometheus:
      instance: gibbo
      job: mysql_exporter
    key: |
            -----BEGIN RSA PRIVATE KEY-----
            ...
            -----END RSA PRIVATE KEY-----

Here’s the definition of our OLTP component (oltp.yaml):

name: OLTP
description: OLTP Benchmark for database systems
componentType: ResourceStresser

We can create the system by running:

akamas create system system.yaml

We can then create the components by running:

akamas create component mysql.yaml MySQL-ResourceStresser
akamas create component oltp.yaml MySQL-ResourceStresser

Workflow

A workflow for optimizing MySQL can be structured into five tasks:

  1. Reset OLTPBench data

  2. Configure MySQL

  3. Restart MySQL

  4. Launch the benchmark

  5. Parse the benchmark results

Below you can find the scripts that implement these tasks.

This is the restart-mysql.sh script:

#!/usr/bin/env bash
set -e

cd "$(dirname "$0")"

#Stop the DB
echo "Stopping MySQL"
sudo systemctl stop mysql &> /dev/null
#sudo systemctl status mysql

#Apply Configuration
echo "Copying the configuration"
sudo cp my.cnf /etc/mysql/conf.d/mysql.cnf

#Drop data
echo "Dropping the data"
sudo rm -rf /var/lib/mysql
#Create the backup
# sudo rsync -r --progress /var/lib/mysql /tmp/backup/

#Restore the backup data
echo "Restoring the DB"
sudo rsync -r --progress /tmp/backup/mysql /var/lib/
sudo chown -R mysql: /var/lib/mysql

sync; sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"; sync

#Restart DB
echo "Restarting the database"
sudo systemctl start mysql &> /dev/null
#sudo systemctl status mysql
sleep 2

This is the clean_bench.sh script:

#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
cd ../oltp

# Remove the results of the previous OLTPBench run, if any
if ! test -d results || [[ -z "$(ls -A results)" ]]; then
    echo "First iteration"
    mkdir -p results
    exit 0
fi

rm -rf results
mkdir -p results

This is the run_test.sh script, which launches the benchmark execution (the workflow below invokes the same oltpbenchmark command):

#!/bin/bash
set -e

cd "$(dirname "$0")"
cd ../oltp

# Run the ResourceStresser workload, sampling metrics every 5 seconds
./oltpbenchmark --bench resourcestresser --config /home/ubuntu/scripts/resourcestresser.xml --execute=true -s 5 --output out

This is the parse_csv.sh script, which converts the sample-based OLTPBench output into the CSV file parsed by the CSV provider (the column layout of the out.res file may vary across OLTPBench versions, so adapt the mapping to your installation):

#!/bin/bash
set -e
cd "$(dirname "$0")"

# Convert the sample-based OLTPBench output into the CSV file expected by
# the Akamas CSV provider. Assumed out.res layout:
#   time(sec), throughput(req/sec), avg_lat(ms), min_lat(ms), 25th_lat(ms),
#   median_lat(ms), 75th_lat(ms), 90th_lat(ms), 95th_lat(ms), 99th_lat(ms), max_lat(ms)
INFILE=/home/ubuntu/oltp/results/out.res
OUTFILE=/home/ubuntu/output.csv
COMPONENT=OLTP

# Reconstruct absolute timestamps: the run has just ended, and out.res
# reports elapsed seconds since the start of the run
epoch_now=$(date +"%s")
duration=$(tail -n 1 ${INFILE} | cut -d',' -f1)
epoch_start=$(($epoch_now - ${duration%.*}))

echo "component,ts,throughput,response_time_avg,response_time_min,response_time_25th,response_time_median,response_time_75th,response_time_90th,response_time_95th,response_time_99th,response_time_max" > ${OUTFILE}

# Skip the out.res header and emit one CSV line per sample
tail -n +2 ${INFILE} | while IFS=', ' read -r ts_rel tps lat_avg lat_min lat_25 lat_med lat_75 lat_90 lat_95 lat_99 lat_max
do
    epoch_current=$(($epoch_start + ${ts_rel%.*}))
    ts=$(date -d @${epoch_current} "+%Y-%m-%d %H:%M:%S")
    echo "${COMPONENT},${ts},${tps},${lat_avg},${lat_min},${lat_25},${lat_med},${lat_75},${lat_90},${lat_95},${lat_99},${lat_max}" >> ${OUTFILE}
done

Here is the complete Akamas workflow for this example (workflow.yaml):

name: MySQL-ResourceStresser
tasks:

  - name: Reset OLTP data
    operator: Executor
    arguments:
      command: "bash /home/ubuntu/scripts/clean_bench.sh"
      component: mysql

  - name: Configure MySQL
    operator: FileConfigurator
    arguments:
      component: mysql

  - name: Restart MySQL
    operator: Executor
    arguments:
      command: "/home/ubuntu/scripts/restart-mysql.sh"
      component: mysql

  - name: test
    operator: Executor
    arguments:
      command: "cd /home/ubuntu/oltp && ./oltpbenchmark --bench resourcestresser --config /home/ubuntu/scripts/resourcestresser.xml --execute=true -s 5 --output out"
      component: mysql

  - name: Parse csv results
    operator: Executor
    arguments:
      command: "bash /home/ubuntu/scripts/parse_csv.sh"
      component: mysql

You can create the workflow by running:

akamas create workflow workflow.yaml

Telemetry

We are going to use the Akamas telemetry capability to import the metrics of the OLTPBench benchmark results, in particular the throughput of operations. To achieve this we can leverage the Akamas CSV provider, which extracts metrics from CSV files; here, the CSV file is the one produced by the last task of the workflow.

The CSV provider can be installed by running:

akamas install telemetry-provider telemetry/providers/csv.yaml

To start using the provider, we need to define a telemetry instance (csv.yaml):

provider: csv
config:
  protocol: scp
  address: gibbo.dev.akamas.io
  username: ubuntu
  authType: key
  auth: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  remoteFilePattern: /home/ubuntu/output.csv
  csvFormat: horizontal
  componentColumn: component
  timestampColumn: ts
  timestampFormat: yyyy-MM-dd HH:mm:ss

metrics:
- metric: throughput
  datasourceMetric: throughput
- metric: response_time_avg
  datasourceMetric: response_time_avg
- metric: response_time_max
  datasourceMetric: response_time_max
- metric: response_time_min
  datasourceMetric: response_time_min
- metric: response_time_25th
  datasourceMetric: response_time_25th
- metric: response_time_median
  datasourceMetric: response_time_median
- metric: response_time_75th
  datasourceMetric: response_time_75th
- metric: response_time_90th
  datasourceMetric: response_time_90th
- metric: response_time_95th
  datasourceMetric: response_time_95th
- metric: response_time_99th
  datasourceMetric: response_time_99th

Please make sure the telemetry configuration is correct for your environment (e.g. hostname, username, key, file paths, etc.).
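
For reference, with the parse_csv.sh script above the first lines of /home/ubuntu/output.csv look like this (values are illustrative):

component,ts,throughput,response_time_avg,response_time_min,response_time_25th,response_time_median,response_time_75th,response_time_90th,response_time_95th,response_time_99th,response_time_max
OLTP,2023-05-01 10:00:00,1250.4,8.2,1.0,3.4,6.8,10.9,15.2,18.7,25.3,41.0
OLTP,2023-05-01 10:00:05,1312.9,7.9,1.0,3.3,6.5,10.4,14.8,18.1,24.6,39.2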

You can create the telemetry instance and attach it to the system by running:

akamas create telemetry-instance csv.yaml MySQL-ResourceStresser

Study

In this example, we are going to leverage Akamas AI-driven optimization capabilities to maximize MySQL database query throughput, as measured by the OLTPBench benchmark.

Here is the Akamas study definition (study.yaml):

name: MySQL Tuning
description: Tuning of MySQL 8 with OLTPBench using the ResourceStresser benchmark
system: MySQL-ResourceStresser
workflow: MySQL-ResourceStresser

goal:
  objective: maximize
  function:
    formula: max(1, OLTP.throughput)
  constraints: []

windowing:
  task: test
  type: trim
  trim: [1m, 1m]


parametersSelection:
  # MySQL InnoDB parameters
  - name: mysql.mysql_innodb_buffer_pool_size
    domain: [5242880, 10485760]
  - name: mysql.mysql_innodb_thread_sleep_delay
  - name: mysql.mysql_innodb_flush_method
  - name: mysql.mysql_innodb_log_file_size
  - name: mysql.mysql_innodb_thread_concurrency
    domain: [0, 4]
  - name: mysql.mysql_innodb_max_dirty_pages_pct
  - name: mysql.mysql_innodb_read_ahead_threshold


trialAggregation: AVG
numberOfTrials: 1

steps:
  - name: baseline
    type: baseline
    renderParameters: ["mysql.*"]

  - name: optimize
    type: optimize
    optimizer: AKAMAS
    numberOfExperiments: 200
    maxFailedExperiments: 200
    renderParameters: ["mysql.*"]

You can create the study by running:

akamas create study study.yaml

You can then start it by running:

akamas start study "MySQL Tuning"

You can now follow the study progress using the UI and explore the results using the Analysis and Metrics tabs.
