Optimizing a MySQL server database running OLTPBench
In this example study, we are going to optimize a MySQL instance with the performance goal of maximizing the throughput of operations toward the database.
For workload generation we are going to use OLTPBench, a popular open-source benchmarking suite for databases. OLTPBench supports several benchmarks; in this example we will be using Synthetic Resource Stresser.
To import the benchmark results into Akamas, we are going to use a custom script that converts the OLTPBench output to a CSV file that can be parsed by the CSV provider.
Environment Setup
In order to run the OLTPBench suite against a MySQL installation, you first need to install and configure both pieces of software. In the following, we will assume that both MySQL and OLTPBench run on the same machine; to obtain more significant performance results you might want to run them on separate hosts.
MySQL Installation
To install MySQL please follow the official documentation. In the following, we will make a few assumptions about the location of the configuration file, the user running the server, and the location of the data files. These assumptions are based on a default installation of MySQL on an Ubuntu instance performed via apt.
Datafile location: /var/lib/mysql
Configuration file: /etc/mysql/conf.d/mysql.cnf
MySQL user: mysql
MySQL root user password: root
This is a template for the configuration file mysql.cnf.template
If your installation of MySQL has different default values for these parameters please update the provided scripts accordingly.
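As a reference, a minimal mysql.cnf.template could look like the following sketch, where the ${...} tokens are placeholders that the Akamas FileConfigurator operator replaces with the parameter values of each experiment. The token syntax and the specific parameters shown here are assumptions; adapt them to the parameters you intend to tune.

```ini
[mysqld]
# Placeholders below are substituted by the FileConfigurator at each experiment
innodb_buffer_pool_size = ${mysql.innodb_buffer_pool_size}
innodb_log_file_size = ${mysql.innodb_log_file_size}
innodb_thread_concurrency = ${mysql.innodb_thread_concurrency}
max_connections = ${mysql.max_connections}
```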
OLTPBench Installation
To install OLTPBench you can download a pre-built version here or build it from the official repository. In the following, we will assume that OLTPBench is installed in the /home/ubuntu/oltp folder.
To verify your installation of OLTPBench and initialize the database, download the following set of scripts and place them in the /home/ubuntu/scripts folder. Move into the folder and run the init-db.sh script.
This is the init-db.sh script:
#!/bin/bash
set -e
cd "$(dirname "$0")"
cd ../oltp
mysql -u root -proot -e "CREATE DATABASE resourcestresser"
./oltpbenchmark --bench resourcestresser --config ../scripts/resourcestresser.xml --create=true --load=true
sleep 5
sudo systemctl stop mysql
#Create the backup
echo "Backing up the database"
sudo rm -rf /tmp/backup
sudo mkdir /tmp/backup
sudo rsync -r --progress /var/lib/mysql /tmp/backup/
sleep 2
sudo systemctl start mysql
sudo systemctl status mysql
This script will:
connect to your MySQL installation
create a resourcestresser database for the test
run the OLTP data generation phase to populate the database
backup the initialized database under /tmp/backup
The resourcestresser.xml file contains the workload for the application. The default configuration is quite small and suitable for testing purposes; you can modify it to suit your benchmarking needs.
Optimization Setup
Here follows a step-by-step explanation of all the configurations required for this example.
System
In this example, we are interested in optimizing MySQL settings and measuring the peak throughput using OLTPBench. Hence, we are going to create two components:
A mysql component which represents the MySQL instance, including all the configuration parameters
An OLTP component which represents the OLTPBench benchmark and contains the custom metrics it reports
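A sketch of how the system and its mysql component could be defined follows. The names, the componentType value, and the exact CLI syntax are assumptions; check the Akamas reference documentation for your version. A system.yaml file:

```yaml
# Hypothetical system definition (name is an example)
name: oltpbench-mysql
description: MySQL instance benchmarked with OLTPBench
```

and a mysql.yaml component file referencing the MySQL optimization pack:

```yaml
# Hypothetical component definition; componentType must match an
# installed optimization pack version
name: mysql
description: The MySQL instance under optimization
componentType: MySQL 8.0
```

These would then be created with commands along the lines of akamas create system system.yaml and akamas create component mysql.yaml oltpbench-mysql.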
The OLTP component
MySQL is a widespread technology and Akamas provides a specific Optimization Pack to support its optimization. OLTP, on the other hand, is a benchmark application and is not yet supported by a specific optimization pack. In order to use it in our study, we will need to define its metrics first. This operation can be done once and the created component type can be used across many systems.
First, build a metrics.yaml file with the following content:
---
metrics:
  - name: throughput
    description: The throughput of the database
    unit: tps
  - name: response_time_avg
    description: The average response time of the database
    unit: milliseconds
  - name: response_time_min
    description: The minimum response time of the database
    unit: milliseconds
  - name: response_time_25th
    description: The response time 25th percentile of the database
    unit: milliseconds
  - name: response_time_median
    description: The response time median of the database
    unit: milliseconds
  - name: response_time_75th
    description: The response time 75th percentile of the database
    unit: milliseconds
  - name: response_time_90th
    description: The response time 90th percentile of the database
    unit: milliseconds
  - name: response_time_95th
    description: The response time 95th percentile of the database
    unit: milliseconds
  - name: response_time_99th
    description: The response time 99th percentile of the database
    unit: milliseconds
  - name: response_time_max
    description: The maximum response time of the database
    unit: milliseconds
  - name: duration
    description: The duration of the task (load or benchmark execution)
    unit: seconds
You can now create the metrics by issuing the following command:
akamas create metrics metrics.yaml
Finally, create a file named resourcestresser.yaml with the following definition of the component:
name: ResourceStresser
description: >
  ResourceStresser benchmark from OLTPBench for database systems. It is a
  purely synthetic benchmark that can create isolated contention on the system
  resources. Each of the benchmark’s transactions imposes some load on three
  specific resources: CPU, disk I/O, and locks.
parameters: []
metrics:
  - name: throughput
  - name: response_time_avg
  - name: response_time_max
  - name: response_time_min
  - name: response_time_25th
  - name: response_time_median
  - name: response_time_75th
  - name: response_time_90th
  - name: response_time_95th
  - name: response_time_99th
  - name: duration
You can now create the component type by issuing the following command:
akamas create component-type resourcestresser.yaml
Workflow
A workflow for optimizing MySQL can be structured into 5 tasks:
Reset OLTPBench data
Configure MySQL
Restart MySQL
Launch the benchmark
Parse the benchmark results
Below you can find the scripts that codify these tasks.
This is the restart-mysql.sh script:
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
#Stop the DB
echo "Stopping MySQL"
sudo systemctl stop mysql &> /dev/null
#sudo systemctl status mysql
#Apply Configuration
echo "Copying the configuration"
sudo cp my.cnf /etc/mysql/conf.d/mysql.cnf
#Drop data
echo "Dropping the data"
sudo rm -rf /var/lib/mysql
#Create the backup
# sudo rsync -r --progress /var/lib/mysql /tmp/backup/
#Restore the backup data
echo "Restoring the DB"
sudo rsync -r --progress /tmp/backup/mysql /var/lib/
sudo chown -R mysql: /var/lib/mysql
sync; sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"; sync
#Restart DB
echo "Restarting the database"
sudo systemctl start mysql &> /dev/null
#sudo systemctl status mysql
sleep 2
This is the clean_bench.sh script:
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
if ! test -d results || [[ -z "$(ls -A results)" ]]; then
  echo "First iteration"
  mkdir -p results
  exit 0
fi
rm -rf results
mkdir -p results
This is the run_test.sh script, which launches the benchmark execution phase against the database initialized by init-db.sh:
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
mkdir -p results
cd ../oltp
# Run the benchmark only (data was already loaded by init-db.sh)
./oltpbenchmark --bench resourcestresser --config ../scripts/resourcestresser.xml --execute=true -s 5 --output out | tee -a ../scripts/results/res.txt
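The parse_csv.sh script invoked by the last workflow task is not shown above. The following is a minimal sketch of what it could do, assuming OLTPBench writes its per-second samples (with the elapsed time in seconds as the first column) to a results file; the function name, file layout, and column names are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of parse_csv.sh: turn the OLTPBench samples file,
# whose first column is the elapsed time in seconds, into a CSV with an
# absolute epoch-seconds timestamp column the Akamas CSV provider can parse.
parse_res() {
  local start="$1"   # benchmark start time (epoch seconds)
  awk -F',' -v start="$start" '
    NR == 1 { print "timestamp," $0; next }   # keep the header row
    { printf "%d,%s\n", start + $1, $0 }      # elapsed secs -> epoch secs
  '
}

# Example: two one-second samples from a run started at epoch 1000
printf 'time,throughput\n0,950.5\n1,1001.2\n' | parse_res 1000
```

In the real workflow, the converted file would be written where the telemetry provider expects to find it.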
Here is the complete Akamas workflow for this example (workflow.yaml):
name: MySQL-ResourceStresser
tasks:
  - name: Reset OLTP data
    operator: Executor
    arguments:
      command: "bash /home/ubuntu/scripts/clean_bench.sh"
      component: mysql
  - name: Configure MySQL
    operator: FileConfigurator
    arguments:
      component: mysql
  - name: Restart MySQL
    operator: Executor
    arguments:
      command: "/home/ubuntu/scripts/restart-mysql.sh"
      component: mysql
  - name: test
    operator: Executor
    arguments:
      command: "cd /home/ubuntu/oltp && ./oltpbenchmark --bench resourcestresser --config /home/ubuntu/scripts/resourcestresser.xml --execute=true -s 5 --output out"
      component: mysql
  - name: Parse csv results
    operator: Executor
    arguments:
      command: "bash /home/ubuntu/scripts/parse_csv.sh"
      component: mysql
You can create the workflow by running:
akamas create workflow workflow.yaml
Telemetry
We are going to use the Akamas telemetry capability to import the metrics related to the OLTPBench benchmark results, in particular the throughput of operations. To achieve this, we can leverage the Akamas CSV provider, which extracts metrics from CSV files. The CSV file is the one produced by the last task of the study's workflow.
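For reference, a telemetry instance for the CSV provider could look like the following sketch. The exact field names depend on the provider version, and the host, credentials, file path, and column names shown here are all assumptions:

```yaml
# Hypothetical CSV provider instance configuration
provider: csv
config:
  address: bench.mycompany.com        # host where the CSV file is produced
  username: ubuntu
  authType: key
  auth: /home/akamas/.ssh/id_rsa
  remoteFilePattern: /home/ubuntu/scripts/results/output.csv
  componentColumn: component          # column holding the component name
  timestampColumn: timestamp          # column holding the sample timestamp
  timestampFormat: yyyy-MM-dd HH:mm:ss
metrics:
  - metric: throughput                # Akamas metric name
    datasourceMetric: throughput      # column name in the CSV file
```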
This telemetry provider can be installed by running:
Study
In this example, we are going to leverage Akamas AI-driven optimization capabilities to maximize MySQL database query throughput, as measured by the OLTPBench benchmark.
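A study pursuing this goal could be sketched as follows. The system and workflow names must match the ones defined earlier, and the selected parameters and number of experiments are only examples to adapt to your needs:

```yaml
# Hypothetical study definition maximizing the benchmark throughput
name: Maximize MySQL throughput
system: oltpbench-mysql              # assumed system name
workflow: MySQL-ResourceStresser
goal:
  objective: maximize
  function:
    formula: ResourceStresser.throughput
parametersSelection:                 # example subset of MySQL parameters
  - name: mysql.innodb_buffer_pool_size
  - name: mysql.innodb_thread_concurrency
  - name: mysql.max_connections
steps:
  - name: baseline
    type: baseline
  - name: optimize
    type: optimize
    numberOfExperiments: 100
```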