Optimizing an Oracle Database server instance

In this example, we are going to tune the initialization parameters of an Oracle Database server instance in order to maximize its throughput while it is stressed by a load generator.

For the workload, we'll use OLTPBench's implementation of TPC-C, a popular transaction processing benchmark, while to extract the metrics we are going to leverage the Oracle Prometheus exporter.

Environment setup

Environment

For the purpose of this experiment we are going to use two dedicated machines:

  • oraxe.mycompany.com, hosting a single Oracle 18c XE instance running inside a Docker container (provisioned using the scripts from the official Oracle GitHub repository)

  • oltpbench.mycompany.com, which generates the workload using OLTPBench's TPC-C and also hosts the OracleDB Prometheus exporter instance

We assume both hosts are running Linux.

Prometheus and exporters

Install the OracleDB Prometheus exporter

The OracleDB Prometheus exporter publishes as Prometheus metrics the results of the queries defined in its configuration file. In our case, we'll use it to extract valuable performance metrics from Oracle's Dynamic Performance (V$) views.

We can spin up the exporter using the official Docker image with the following command, where cust-metrics.toml is our custom metrics file:

docker run -d --name orabench_exporter --restart always \
  -p 9161:9161 \
  -v ~/oracledb_exporter/cust-metrics.toml:/cust-metrics.toml \
  -e CUSTOM_METRICS=/cust-metrics.toml \
  -e DATA_SOURCE_NAME='system/passwd@//oraxe.mycompany.com:1521/XE' \
  iamseth/oracledb_exporter

The exporter will publish the metrics on port 9161.
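
As a quick sanity check (assuming the exporter host is reachable from where you run the command), you can verify that the custom metrics are exposed; this exporter prefixes metric names with oracledb_:

curl -s http://oltpbench.mycompany.com:9161/metrics | grep oracledb_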

Here’s the example metrics file (cust-metrics.toml) used to run the exporter:

[[metric]]
context= "memory"
labels= [ "component" ]
metricsdesc= { size="Component memory extracted from v$memory_dynamic_components in Oracle." }
request = '''
SELECT component, current_size as "size"
FROM V$MEMORY_DYNAMIC_COMPONENTS
UNION
SELECT name, bytes as "size"
FROM V$SGAINFO
WHERE name in ('Free SGA Memory Available', 'Redo Buffers', 'Maximum SGA Size')
'''

[[metric]]
context = "activity"
metricsdesc = { value="Generic counter metric from v$sysstat view in Oracle." }
fieldtoappend = "name"
request = '''
SELECT name, value
FROM V$SYSSTAT WHERE name IN (
  'execute count',
  'user commits', 'user rollbacks',
  'db block gets from cache', 'consistent gets from cache', 'physical reads cache', /* CACHE */
  'redo log space requests'
 )
 '''

[[metric]]
context = "system_event"
labels = [ "event", "wait_class" ]
request = '''
SELECT
  event, wait_class,
  total_waits, time_waited
FROM V$SYSTEM_EVENT
'''
[metric.metricsdesc]
  total_waits= "Total number of waits for the event as per V$SYSTEM_EVENT in Oracle."
  time_waited= "Total time waited for the event (in hundredths of seconds) as per V$SYSTEM_EVENT in Oracle."

Install and configure Prometheus

You can check how to install and configure Prometheus in the Prometheus provider documentation; by default, it will run on port 9090. To scrape the OracleDB exporter, add the following snippet to the Prometheus configuration file:

scrape_configs:
  - job_name: oraxe-exporter
    scrape_interval: 15s
    static_configs:
      - targets: ["oltpbench.mycompany.com:9161"]
    relabel_configs:
      - source_labels: [__address__]
        regex: (.*)
        target_label: instance
        replacement: oraxe
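
If you don't have a Prometheus instance running yet, a minimal sketch to start one with this scrape configuration (assuming the file above is saved as ~/prometheus/prometheus.yml on the Prometheus host) is:

docker run -d --name prometheus --restart always \
  -p 9090:9090 \
  -v ~/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus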

Optimization setup

System

To model the system composed of the tuned database and the workload generator we need two different components:

  • An oracle component that represents the Oracle Database instance and maps directly to oraxe.mycompany.com.

  • A tpcc component that represents the TPC-C workload from the OLTPBench suite and maps to oltpbench.mycompany.com.

For the tpcc component, we first need to define some custom metrics and a new component-type. The following is the definition of the metrics (tpcc-metrics.yaml):

metrics:
  - name: throughput
    description: throughput
    unit: requests/s

  - name: resp_time
    description: resp_time
    unit: milliseconds

  - name: resp_time_min
    description: resp_time_min
    unit: milliseconds

  - name: resp_time90th
    description: resp_time90th
    unit: milliseconds

  - name: resp_time_max
    description: resp_time_max
    unit: milliseconds

The following is the definition of the new component-type (tpcc-ctype.yaml):

name: TPC-C Benchmark
description: OLTP TPC-C Benchmark

parameters: []

metrics:
  - name: throughput
  - name: resp_time
  - name: resp_time_min
  - name: resp_time90th
  - name: resp_time_max

We can then create the metrics and the new component-type by running the following commands:

akamas create metrics tpcc-metrics.yaml
akamas create component-type tpcc-ctype.yaml

Next, we can proceed with the definition of our system (system.yaml):

name: oracle system
description: oracle system

Here’s the definition of our oracle component (oracle.yaml):

name: oracle
description: Oracle DB
componentType: Oracle Database 18c
properties:
  instance: oraxe

  connection:
    user: system
    password: passwd
    dsn: oraxe.mycompany.com:1521/XE

  hostname: oraxe.mycompany.com   # needed to run docker restart
  username: ubuntu
  sshPort: 22
  key: rsa_key_file

Here’s the definition of the tpcc component (tpcc.yaml):

name: tpcc
description: OLTP TPC-C load benchmark
componentType: TPC-C Benchmark
properties:
  hostname: oltpbench.mycompany.com
  username: ubuntu
  sshPort: 22
  key: rsa_key_file

We can create the system by running:

akamas create system system.yaml

We can then create the components by running:

akamas create component oracle.yaml 'oracle system'
akamas create component tpcc.yaml 'oracle system'

Telemetry

Prometheus

Since we are using Prometheus to extract the database metrics, we can leverage the Prometheus provider, which already includes the queries needed for the Oracle metrics we are interested in. To use the Prometheus provider we need to define a telemetry instance (prom.yaml):

provider: Prometheus
config:
  address: prometheus
  port: 9090

We can now create the telemetry instance and attach it to our system by running:

akamas create telemetry-instance prom.yaml 'oracle system'

CSV

Other than the telemetry of the Oracle instance, we also need the metrics contained in the output CSVs produced by the TPC-C workload runs. To ingest these metrics we can leverage the CSV provider, defining the following telemetry instance (csv.yaml):

provider: csv
config:
  address: oltpbench.mycompany.com
  port: 22
  username: ubuntu
  protocol: scp
  authType: key
  auth: rsa_key_file

  remoteFilePattern: /home/ubuntu/oltpbench/results/output.csv
  componentColumn: component
  timestampColumn: ts
  timestampFormat: yyyy-MM-dd HH:mm:ss

metrics:
  - metric: throughput
    datasourceMetric: throughput
    staticLabels: {}

  - metric: resp_time
    datasourceMetric: avg_lat
    staticLabels: {}

  - metric: resp_time_min
    datasourceMetric: min_lat
    staticLabels: {}

  - metric: resp_time90th
    datasourceMetric: 90th_lat
    staticLabels: {}

  - metric: resp_time_max
    datasourceMetric: max_lat
    staticLabels: {}

We can create the telemetry instance and attach it to our system by running:

akamas create telemetry-instance csv.yaml 'oracle system'

Workflow

Remove previous executions' data

Using an Executor operator we run a command to clean the results folder, which may contain files from previous executions:

name: Clean results
operator: Executor
arguments:
  command: rm -f ~/oltpbench/results/*
  component: tpcc

Configure the Oracle instance

We define a task that uses the OracleConfigurator operator to update the Oracle initialization parameters:

name: Update parameters
operator: OracleConfigurator
arguments:
  component: oracle

Restart the instance

We define a task that uses the Executor operator to restart the Oracle container, since some of the tuned parameters require a restart to take effect:

name: Restart Oracle container
operator: Executor
arguments:
  command: docker restart oraxe
  component: oracle

Run the workload

We define a task that uses the Executor operator to launch the TPC-C benchmark against the Oracle instance:

name: Execute load test
operator: Executor
arguments:
  command: cd ~/oltpbench ; ./oltpbenchmark --bench tpcc --config tpcc_conf.xml --execute=true -s 5 --output out
  component: tpcc

Prepare test results

We define a workflow task that runs a script that parses the TPC-C output files and generates a file compatible with the CSV provider:

name: Parse TPC-C results
operator: Executor
arguments:
  command: cd ~/oltpbench ; ./tpcc_parse_csv.sh
  component: tpcc

Where tpcc_parse_csv.sh is the following script:

#!/bin/bash

# File ingested by the CSV telemetry provider
OUTFILE=output.csv

# Name of the component the metrics refer to
COMP_NAME=tpcc

# Base timestamp (epoch seconds) of the benchmark, taken from the raw OLTPBench output
BASETS=`tail -n+2 results/out.csv | head -n1 | cut -d',' -f3`

# Header matching the metrics mapping defined in csv.yaml
echo 'component,ts,throughput,avg_lat,min_lat,90th_lat,max_lat' > $OUTFILE

# Prepend the component name, convert the relative timestamps in out.res to absolute ones,
# and keep only the throughput, average, min, 90th-percentile and max latency columns
awk -F, "BEGIN{OFS=\",\"} NR>1 {\$1=strftime(\"%F %T\", ${BASETS}+\$1); print \"${COMP_NAME}\",\$0}" < results/out.res | cut -d',' -f1-5,9,12 >> $OUTFILE
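
The resulting output.csv contains the component and ts columns referenced by the CSV telemetry instance defined above, plus one column per mapped metric. With purely illustrative values, it looks like this:

component,ts,throughput,avg_lat,min_lat,90th_lat,max_lat
tpcc,2023-03-01 10:00:05,105.3,45.1,3.2,80.5,210.7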

Complete workflow

By putting together all the tasks defined above we come up with the following workflow definition (workflow.yaml):

name: oracle workflow
tasks:
  - name: Clean results
    operator: Executor
    arguments:
      command: rm -f ~/oltpbench/results/*
      component: tpcc

  - name: Update parameters
    operator: OracleConfigurator
    arguments:
      component: oracle

  - name: Restart Oracle container
    operator: Executor
    arguments:
      command: docker restart oraxe
      component: oracle

  - name: Execute load test
    operator: Executor
    arguments:
      command: cd ~/oltpbench ; ./oltpbenchmark --bench tpcc --config tpcc_conf.xml --execute=true -s 5 --output out
      component: tpcc

  - name: Parse TPC-C results
    operator: Executor
    arguments:
      command: cd ~/oltpbench ; ./tpcc_parse_csv.sh
      component: tpcc

We can create the workflow by running:

akamas create workflow workflow.yaml

Study

The objective of this study is to maximize the transaction throughput of the Oracle instance while it is stressed by the TPC-C load generator; to achieve this goal, the study will tune the size of the most important memory areas of the instance.

Goal

Here’s the definition of the goal of our study, which is to maximize the tpcc.throughput metric:

goal:
  objective: maximize
  function:
    formula: tpcc.throughput

Windowing

We define a window to consider only the data points after the ramp-up time of the load test:

windowing:
  type: trim
  trim: [4m, 1m]
  task: Execute load test

Parameters to optimize

For this study, we are trying to achieve our goal by tuning the size of several memory areas of the database instance. In particular, we will tune the overall size of the Program Global Area (PGA, which contains the work areas of the active sessions) and the size of the components of the System Global Area (SGA).

For each parameter, the domain is configured so that the explored values lie around the default value.

parametersSelection:
  - name: oracle.pga_aggregate_target
    domain: [1128, 4512]
  - name: oracle.db_cache_size
    domain: [1024, 6144]
  - name: oracle.java_pool_size
    domain: [1, 1024]
  - name: oracle.large_pool_size
    domain: [1, 256]
  - name: oracle.log_buffer
    domain: [2, 256]
  - name: oracle.shared_pool_size
    domain: [128, 1024]
  - name: oracle.streams_pool_size
    domain: [1, 1024]

Constraints

The following constraint allows the study to explore different size configurations without exceeding the maximum overall memory available for the instance:

parameterConstraints:
  - name: Cap total memory to 10G
    formula: oracle.db_cache_size + oracle.java_pool_size + oracle.large_pool_size + oracle.log_buffer + oracle.shared_pool_size + oracle.streams_pool_size + oracle.pga_aggregate_target < 10240
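
For reference, assuming the parameter values are expressed in megabytes (as the 10G cap of 10240 suggests), the baseline configuration defined below sums to 2496 + 16 + 16 + 13 + 640 + 0 + 1128 = 4309, leaving the optimizer ample room to grow individual areas within the constraint.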

Steps

We are going to add two steps to our study:

  • A baseline step, in which we configure the default values for the memory parameters as discovered from previous manual executions.

  • An optimization step, where we perform 200 experiments to search for the configuration that best satisfies our goal.

The baseline step contains some additional parameters (oracle.memory_target, oracle.sga_target) that are required by Oracle in order to disable the automatic management of the SGA components.

Here’s what these steps look like:

steps:
  - name: baseline
    type: baseline
    values:
      oracle.pga_aggregate_target: 1128
      oracle.db_cache_size: 2496
      oracle.java_pool_size: 16
      oracle.large_pool_size: 16
      oracle.log_buffer: 13
      oracle.shared_pool_size: 640
      oracle.streams_pool_size: 0
      oracle.memory_target: 0
      oracle.sga_target: 0

  - name: optimization
    type: optimize
    numberOfExperiments: 200
    maxFailedExperiments: 200

Complete study

Here’s the study definition (study.yaml) for optimizing the Oracle instance:

name: 'Oracle: tune memory'
description: Tune memory to maximize throughput
system: oracle system
workflow: oracle workflow

goal:
  objective: maximize
  function:
    formula: throughput
    variables:
      throughput:
        metric: tpcc.throughput

windowing:
  type: trim
  trim: [4m, 1m]
  task: Execute load test

parametersSelection:
  - name: oracle.pga_aggregate_target
    domain: [1128, 4512]
  - name: oracle.db_cache_size
    domain: [1024, 6144]
  - name: oracle.java_pool_size
    domain: [1, 1024]
  - name: oracle.large_pool_size
    domain: [1, 256]
  - name: oracle.log_buffer
    domain: [2, 256]
  - name: oracle.shared_pool_size
    domain: [128, 1024]
  - name: oracle.streams_pool_size
    domain: [1, 1024]

parameterConstraints:
  - name: Cap total memory to 10G
    formula: oracle.db_cache_size + oracle.java_pool_size + oracle.large_pool_size + oracle.log_buffer + oracle.shared_pool_size + oracle.streams_pool_size + oracle.pga_aggregate_target < 10240

steps:
  - name: baseline
    type: baseline
    values:
      oracle.pga_aggregate_target: 1128
      oracle.db_cache_size: 2496
      oracle.java_pool_size: 16
      oracle.large_pool_size: 16
      oracle.log_buffer: 13
      oracle.shared_pool_size: 640
      oracle.streams_pool_size: 0
      oracle.memory_target: 0
      oracle.sga_target: 0

  - name: optimization
    type: optimize
    numberOfExperiments: 200
    maxFailedExperiments: 200

You can create the study by running:

akamas create study study.yaml

You can then start it by running:

akamas start study 'Oracle: tune memory'
