Akamas Docs
3.2.0

Optimizing an Oracle Database server instance

In this example, we are going to tune the initialization parameters of an Oracle Database server instance in order to maximize its throughput while it is stressed by a load generator.
For the workload, we'll use OLTPBench's implementation of TPC-C, a popular transaction-processing benchmark; to extract the metrics, we'll leverage the OracleDB Prometheus exporter.

Environment setup

Environment

For the purpose of this experiment, we are going to use two dedicated machines: one hosting the Oracle Database instance (oraxe.mycompany.com) and one hosting the OLTPBench load generator (oltpbench.mycompany.com).
We assume both are Linux hosts.

Prometheus and exporters

Install the OracleDB Prometheus exporter

The OracleDB Prometheus exporter publishes the results of the queries defined in its configuration file as Prometheus metrics. In our case, we'll use it to extract valuable performance metrics from Oracle's Dynamic Performance (V$) views.
We can spin up the exporter with the official Docker image using the following command, where cust-metrics.toml is our custom metrics file:
docker run -d --name orabench_exporter --restart always \
-p 9161:9161 \
-v ~/oracledb_exporter/cust-metrics.toml:/cust-metrics.toml \
-e CUSTOM_METRICS=/cust-metrics.toml \
-e DATA_SOURCE_NAME='system/[email protected]//oraxe.mycompany.com:1521/XE' \
iamseth/oracledb_exporter
The exporter will publish the metrics on port 9161.
Here’s the example metrics file used to run the exporter:
[[metric]]
context = "memory"
labels = [ "component" ]
metricsdesc = { size = "Component memory extracted from v$memory_dynamic_components in Oracle." }
request = '''
SELECT component, current_size as "size"
FROM V$MEMORY_DYNAMIC_COMPONENTS
UNION
SELECT name, bytes as "size"
FROM V$SGAINFO
WHERE name in ('Free SGA Memory Available', 'Redo Buffers', 'Maximum SGA Size')
'''

[[metric]]
context = "activity"
metricsdesc = { value = "Generic counter metric from v$sysstat view in Oracle." }
fieldtoappend = "name"
request = '''
SELECT name, value
FROM V$SYSSTAT WHERE name IN (
  'execute count',
  'user commits', 'user rollbacks',
  'db block gets from cache', 'consistent gets from cache', 'physical reads cache', /* CACHE */
  'redo log space requests'
)
'''

[[metric]]
context = "system_event"
labels = [ "event", "wait_class" ]
request = '''
SELECT
  event, wait_class,
  total_waits, time_waited
FROM V$SYSTEM_EVENT
'''
[metric.metricsdesc]
total_waits = "Total number of waits for the event as per V$SYSTEM_EVENT in Oracle."
time_waited = "Total time waited for the event (in hundredths of seconds) as per V$SYSTEM_EVENT in Oracle."

Install and configure Prometheus

You can check how to configure Prometheus here; by default, it will run on port 9090.
In order to configure the OracleDB exporter you can add the following snippet to the configuration file:
scrape_configs:
  - job_name: oraxe-exporter
    scrape_interval: 15s
    static_configs:
      - targets: [oltpbench.mycompany.com:9161]
    relabel_configs:
      - source_labels: [__address__]
        regex: (.*)
        target_label: instance
        replacement: oraxe
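Once Prometheus is scraping the exporter, you can verify that the data is flowing with a query like the following. The metric names here are an assumption based on the exporter's default <context>_<name> naming convention and the instance relabeling above; adjust them if your exporter version names metrics differently.

```promql
# Committed plus rolled-back transactions per second on the tuned instance
rate(oracledb_activity_user_commits{instance="oraxe"}[5m])
  + rate(oracledb_activity_user_rollbacks{instance="oraxe"}[5m])
```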

Optimization setup

System

In order to model the system composed of the tuned database and the workload generator we need two different components:
  • An oracle component that represents the Oracle Database instance and maps directly to oraxe.mycompany.com.
  • A tpcc component that represents the TPC-C workload from the OLTPBench suite and maps to oltpbench.mycompany.com.
For the tpcc component, we first need to define some custom metrics and a new component-type. The following is the definition of the metrics (tpcc-metrics.yaml):
metrics:
  - name: throughput
    description: throughput
    unit: requests/s

  - name: resp_time
    description: resp_time
    unit: milliseconds

  - name: resp_time_min
    description: resp_time_min
    unit: milliseconds

  - name: resp_time90th
    description: resp_time90th
    unit: milliseconds

  - name: resp_time_max
    description: resp_time_max
    unit: milliseconds
The following is the definition of the new component-type (tpcc-ctype.yaml):
name: TPC-C Benchmark
description: OLTP TPC-C Benchmark

parameters: []

metrics:
  - name: throughput
  - name: resp_time
  - name: resp_time_min
  - name: resp_time90th
  - name: resp_time_max
We can then create the new component-type by running the following commands:
akamas create metrics tpcc-metrics.yaml
akamas create component-type tpcc-ctype.yaml
Next, we can proceed with the definition of our system (system.yaml):
name: oracle system
description: oracle system
Here’s the definition of our oracle component (oracle.yaml):
name: oracle
description: Oracle DB
componentType: Oracle Database 18c
properties:
  instance: oraxe

  connection:
    user: system
    password: passwd
    dsn: oraxe.mycompany.com:1521/XE

  hostname: oraxe.mycompany.com # needed to run docker restart
  username: ubuntu
  sshPort: 22
  key: rsa_key_file
Here’s the definition of the tpcc component (tpcc.yaml):
name: tpcc
description: OLTP TPC-C load benchmark
componentType: TPC-C Benchmark
properties:
  hostname: oltpbench.mycompany.com
  username: ubuntu
  sshPort: 22
  key: rsa_key_file
We can create the system by running:
akamas create system system.yaml
We can then create the components by running:
akamas create component oracle.yaml 'oracle system'
akamas create component tpcc.yaml 'oracle system'

Telemetry

Prometheus

Since we are using Prometheus to extract the database metrics, we can leverage the Prometheus provider, which already includes the queries for the Oracle metrics we are interested in. To use the Prometheus provider we need to define a telemetry instance (prom.yaml):
provider: Prometheus
config:
  address: prometheus
  port: 9090
We can now create the telemetry instance and attach it to our system by running:
akamas create telemetry-instance prom.yaml 'oracle system'

CSV

Besides the telemetry of the Oracle instance, we also need the metrics from the output CSV files produced by the TPC-C workload runs. To ingest these metrics we can leverage the CSV provider, defining the following telemetry instance (csv.yaml):
provider: csv
config:
  address: oltpbench.mycompany.com
  port: 22
  username: ubuntu
  protocol: scp
  authType: key
  auth: rsa_key_file

  remoteFilePattern: /home/ubuntu/oltpbench/results/output.csv
  componentColumn: component
  timestampColumn: ts
  timestampFormat: yyyy-MM-dd HH:mm:ss

metrics:
  - metric: throughput
    datasourceMetric: throughput
    staticLabels: {}

  - metric: resp_time
    datasourceMetric: avg_lat
    staticLabels: {}

  - metric: resp_time_min
    datasourceMetric: min_lat
    staticLabels: {}

  - metric: resp_time90th
    datasourceMetric: 90th_lat
    staticLabels: {}

  - metric: resp_time_max
    datasourceMetric: max_lat
    staticLabels: {}
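For reference, here is what a couple of rows of output.csv could look like with this configuration (the header matches the parsing script shown in the Workflow section; the values are made up for illustration):

```csv
component,ts,throughput,avg_lat,min_lat,90th_lat,max_lat
tpcc,2023-05-01 10:00:00,1250.4,12.3,0.8,25.1,310.0
tpcc,2023-05-01 10:00:05,1248.9,12.5,0.9,25.8,301.2
```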
We can create the telemetry instance and attach it to our system by running:
akamas create telemetry-instance csv.yaml 'oracle system'

Workflow

Remove previous executions' data

Using an Executor operator, we run a command to clean the results folder, which may contain files from previous executions:
name: Clean results
operator: Executor
arguments:
  command: rm -f ~/oltpbench/results/*
  component: tpcc

Configure the Oracle instance

We define a task that uses the OracleConfigurator operator to update the Oracle initialization parameters:
name: Update parameters
operator: OracleConfigurator
arguments:
  component: oracle

Restart the instance

We define a task that uses the Executor operator to restart the Oracle container, so that the parameters that require a restart take effect:
name: Restart Oracle container
operator: Executor
arguments:
  command: docker restart oraxe
  component: oracle

Run the workload

We define a task that uses the Executor operator to launch the TPC-C benchmark against the Oracle instance:
name: Execute load test
operator: Executor
arguments:
  command: cd ~/oltpbench ; ./oltpbenchmark --bench tpcc --config tpcc_conf.xml --execute=true -s 5 --output out
  component: tpcc

Prepare test results

We define a workflow task that runs a script that parses the TPC-C output files and generates a file compatible with the CSV Provider:
name: Parse TPC-C results
operator: Executor
arguments:
  command: cd ~/oltpbench ; ./tpcc_parse_csv.sh
  component: tpcc
Where tpcc_parse_csv.sh is the following script:
#!/bin/bash

OUTFILE=output.csv
COMP_NAME=tpcc

# Epoch timestamp of the first sample (third field of the first data row)
BASETS=$(tail -n+2 results/out.csv | head -n1 | cut -d',' -f3)

# Header expected by the CSV provider telemetry instance
echo 'component,ts,throughput,avg_lat,min_lat,90th_lat,max_lat' > "$OUTFILE"

# Prepend the component name, convert the relative offset in the first field
# to an absolute timestamp, and keep only the columns of interest
awk -F, "BEGIN{OFS=\",\"} NR>1 {\$1=strftime(\"%F %T\", ${BASETS}+\$1); print \"${COMP_NAME}\",\$0}" < results/out.res | cut -d',' -f1-5,9,12 >> "$OUTFILE"
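To see what the awk command does, here is a minimal sketch run on a single hypothetical sample row (the first field is the offset in seconds from the start of the run; the BASETS value and the other fields are made up). It relies on strftime, available in GNU awk and BusyBox awk:

```shell
# Convert the relative offset (field 1) into an absolute timestamp and
# prepend the component name, as tpcc_parse_csv.sh does for every row.
BASETS=1600000000   # hypothetical epoch of the first sample
echo '5,120.5,3.1,0.8,2.9' \
  | TZ=UTC awk -F, -v base="$BASETS" \
      'BEGIN{OFS=","} {$1=strftime("%F %T", base+$1); print "tpcc",$0}'
# → tpcc,2020-09-13 12:26:45,120.5,3.1,0.8,2.9
```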

Complete workflow

By putting together all the tasks defined above we come up with the following workflow definition (workflow.yaml):
name: oracle workflow
tasks:
  - name: Clean results
    operator: Executor
    arguments:
      command: rm -f ~/oltpbench/results/*
      component: tpcc

  - name: Update parameters
    operator: OracleConfigurator
    arguments:
      component: oracle

  - name: Restart Oracle container
    operator: Executor
    arguments:
      command: docker restart oraxe
      component: oracle

  - name: Execute load test
    operator: Executor
    arguments:
      command: cd ~/oltpbench ; ./oltpbenchmark --bench tpcc --config tpcc_conf.xml --execute=true -s 5 --output out
      component: tpcc

  - name: Parse TPC-C results
    operator: Executor
    arguments:
      command: cd ~/oltpbench ; ./tpcc_parse_csv.sh
      component: tpcc
We can create the workflow by running:
akamas create workflow workflow.yaml

Study

The objective of this study is to maximize the transaction throughput of the Oracle instance while it is stressed by the TPC-C load generator; to achieve this goal, the study will tune the size of the most important memory areas of the instance.

Goal

Here’s the definition of the goal for our study, which is to maximize the tpcc.throughput metric:
goal:
  objective: maximize
  function:
    formula: tpcc.throughput

Windowing

We define a window in order to consider only the data points after the ramp-up time of the load test:
windowing:
  type: trim
  trim: [4m, 1m]
  task: Execute load test

Parameters to optimize

For this study, we are trying to achieve our goal by tuning the size of several memory areas of the database instance. In particular, we will tune the overall size of the Program Global Area (PGA, containing the work areas of the active sessions) and the sizes of the components of the System Global Area (SGA).
The domains are configured so that, for each parameter, the study explores values around the default.
parametersSelection:
  - name: oracle.pga_aggregate_target
    domain: [1128, 4512]
  - name: oracle.db_cache_size
    domain: [512, 6144]
  - name: oracle.java_pool_size
    domain: [1, 1024]
  - name: oracle.large_pool_size
    domain: [1, 256]
  - name: oracle.log_buffer
    domain: [2, 256]
  - name: oracle.shared_pool_size
    domain: [128, 1024]
  - name: oracle.streams_pool_size
    domain: [1, 1024]

Constraints

The following constraint allows the study to explore different size configurations without exceeding the maximum overall memory available for the instance:
parameterConstraints:
  - name: Cap total memory to 10G
    formula: oracle.db_cache_size + oracle.java_pool_size + oracle.large_pool_size + oracle.log_buffer + oracle.shared_pool_size + oracle.streams_pool_size + oracle.pga_aggregate_target < 10240
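As a quick sanity check, the baseline values used later in the study stay well within this cap (all sizes are expressed in MB, as in the parameter domains):

```shell
# Sum of the baseline memory parameters (MB) against the 10 GB (10240 MB) cap
total=$((1128 + 2496 + 16 + 16 + 13 + 640 + 0))
echo "total=${total} MB; within cap: $((total < 10240))"
# → total=4309 MB; within cap: 1
```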

Steps

We are going to add to our study two steps:
  • A baseline step, in which we configure the default values for the memory parameters as discovered from previous manual executions.
  • An optimization step, where we perform 200 experiments to search for the configuration of parameters that best satisfies our goal.
The baseline step contains some additional parameters (oracle.memory_target, oracle.sga_target) that are required by Oracle in order to disable the automatic management of the SGA components.
Here’s what these steps look like:
steps:
  - name: baseline
    type: baseline
    values:
      oracle.pga_aggregate_target: 1128
      oracle.db_cache_size: 2496
      oracle.java_pool_size: 16
      oracle.large_pool_size: 16
      oracle.log_buffer: 13
      oracle.shared_pool_size: 640
      oracle.streams_pool_size: 0
      oracle.memory_target: 0
      oracle.sga_target: 0

  - name: optimization
    type: optimize
    numberOfExperiments: 200
    maxFailedExperiments: 200

Complete study

Here’s the study definition (study.yaml) for optimizing the Oracle instance:
name: "Oracle: tune memory"
description: Tune memory to maximize throughput
system: oracle system
workflow: oracle workflow

goal:
  objective: maximize
  function:
    formula: throughput
  variables:
    throughput:
      metric: tpcc.throughput

windowing:
  type: trim
  trim: [4m, 1m]
  task: Execute load test

parametersSelection:
  - name: oracle.pga_aggregate_target
    domain: [1128, 4512]
  - name: oracle.db_cache_size
    domain: [1024, 6144]
  - name: oracle.java_pool_size
    domain: [1, 1024]
  - name: oracle.large_pool_size
    domain: [1, 256]
  - name: oracle.log_buffer
    domain: [2, 256]
  - name: oracle.shared_pool_size
    domain: [128, 1024]
  - name: oracle.streams_pool_size
    domain: [1, 1024]

parameterConstraints:
  - name: Cap total memory to 10G
    formula: oracle.db_cache_size + oracle.java_pool_size + oracle.large_pool_size + oracle.log_buffer + oracle.shared_pool_size + oracle.streams_pool_size + oracle.pga_aggregate_target < 10240

steps:
  - name: baseline
    type: baseline
    values:
      oracle.pga_aggregate_target: 1128
      oracle.db_cache_size: 2496
      oracle.java_pool_size: 16
      oracle.large_pool_size: 16
      oracle.log_buffer: 13
      oracle.shared_pool_size: 640
      oracle.streams_pool_size: 0
      oracle.memory_target: 0
      oracle.sga_target: 0

  - name: optimization
    type: optimize
    numberOfExperiments: 200
    maxFailedExperiments: 200
You can create the study by running:
akamas create study study.yaml
You can then start it by running:
akamas start study 'Oracle: tune memory'