Guidelines for Oracle Database

This page provides a list of best practices when optimizing an Oracle database with Akamas.

Memory Allocation Sub-spaces

This section provides some guidelines on the most relevant memory-related parameters and how to configure them to perform a high-level optimization of a generic Oracle Database instance.

Depending on their needs and expertise, Oracle DBAs can choose the desired level of granularity when configuring the memory allocated to the database areas and components, and let the Oracle instance automatically manage the lower layers. In the same way, Akamas can tune a target instance at different levels of granularity.

In particular, we can configure an Akamas study so that it simply tunes the overall memory of the instance, letting Oracle automatically manage how to allocate it between shared memory (SGA) and program memory (PGA); alternatively, we can tune the target values of both of these areas and let Oracle take care of their components, or go even deeper and take full control of the sizing of every single component.

Notice: running the queries in this guide requires a user with the ALTER SYSTEM, SELECT ON V_$PARAMETER, and SELECT ON V_$OSSTAT privileges.
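
The exact grants depend on your environment, but a minimal sketch (assuming a hypothetical akamas user and a privileged account such as SYS issuing the grants) could look like this:

-- Minimal sketch: grant the privileges needed to run the queries in this guide
-- to a hypothetical "akamas" user (run as a privileged user, e.g. SYS).
GRANT ALTER SYSTEM TO akamas;
GRANT SELECT ON V_$PARAMETER TO akamas;
GRANT SELECT ON V_$OSSTAT TO akamas;
-- This guide also queries v$pgastat and v$sgainfo: grant SELECT on
-- V_$PGASTAT and V_$SGAINFO as well if your user cannot already read them.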

Also notice that, to define the domain of some of the parameters, you need to know the physical memory of the instance. You can find the value in MiB by running the query select round(value/1024/1024)||'M' "physical_memory" from v$osstat where stat_name='PHYSICAL_MEMORY_BYTES'. Alternatively, if you have access to the underlying machine, you can run the bash command free -m.

Tuning the Total Memory

This is the simplest set of memory-optimization parameters: the study configures only the overall memory available for the instance and lets Oracle's Automatic Memory Management (AMM) dynamically assign the space to the SGA and PGA. This is useful for simple studies where you want to minimize the overall used memory, usually coupled with constraints to make sure the performance of the overall system remains within acceptable values.

  • memory_target: this parameter specifies the total memory used by the Oracle instance. When AMM is enabled, you can find the default value with the query select display_value "memory_target" from v$parameter where name='memory_target'. Otherwise, you can get an estimate by summing the configured SGA size, found by running select display_value "sga_target" from v$parameter where name='sga_target', and the size of the PGA, found with select ceil(value/1024/1024)||'M' "max_pga_allocated" from v$pgastat where name='maximum PGA allocated'. The explored domain strongly depends on your application and hardware, but an acceptable range goes from 152M (the minimum configurable value) to the physical memory of your instance. Over time, Akamas automatically learns to avoid configurations that do not provide enough memory.

To configure Automatic Memory Management you also need to make sure that the parameters sga_target and pga_aggregate_target are set to 0, either by configuring them among the default values of the study or by manually running the configuration queries.
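
For reference, a minimal sketch of these manual configuration queries follows (assuming an spfile-based instance; scopes are illustrative). Note that AMM also requires memory_target (and possibly memory_max_target, which is static and requires a restart) to be set to a non-zero value.

-- Minimal sketch (spfile-based instance assumed): zero the SGA/PGA targets so that
-- AMM, driven by memory_target, manages both areas dynamically.
ALTER SYSTEM SET sga_target = 0 SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = BOTH;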

The following snippet shows the parameter selection to tune the total memory of the instance. The domain is configured to go from the minimum value to the maximum physical memory (7609M in our example).

parametersSelection:
- name: ora.memory_target
  domain: [152, 7609]

Tuning the Shared and Program Memory Global Areas

With the following set of parameters, Akamas tunes the individual sizes of the SGA and PGA, letting Oracle's Automatic Shared Memory Management (ASMM) dynamically size the underlying SGA components. You can leverage these parameters for studies where, as in the previous scenario, you want to find the configuration with the lowest memory allocation that still performs within your SLOs. Another possible scenario is to find the balance in the allocation of the available memory that best fits your optimization goals.

  • sga_target: this parameter specifies the target SGA size. When ASMM is configured, you can find the default value with the query select display_value "sga_target" from v$parameter where name='sga_target'. The explored domain strongly depends on your application and hardware, but an acceptable range goes from 64M (the minimum configurable value) to the physical memory of your instance minus a reasonable size for the PGA (usually up to 80% of the physical memory).

  • pga_aggregate_target: this parameter specifies the target PGA size. You can find the default value with the query select display_value "pga_aggregate_target" from v$parameter where name='pga_aggregate_target'. The explored domain strongly depends on your application and hardware, but an acceptable range goes from 10M (the minimum configurable value) to the physical size of your instance minus a reasonable size for the SGA.

To tune the SGA and PGA you also must set memory_target to 0 to disable AMM, either by configuring it among the default values of the study or by manually running the configuration queries. ASMM will dynamically tune all the SGA components whose size is not specified, so set all the component parameters (db_cache_size, log_buffer, java_pool_size, large_pool_size, shared_pool_size, and streams_pool_size) to 0 unless you have specific requirements.
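
A minimal sketch of the corresponding manual configuration queries follows (the scopes and the subset of components shown are illustrative; static parameters such as log_buffer can only be changed with SCOPE=SPFILE and an instance restart):

-- Minimal sketch: disable AMM and leave the SGA components to ASMM auto-tuning.
ALTER SYSTEM SET memory_target = 0 SCOPE = BOTH;
ALTER SYSTEM SET db_cache_size = 0 SCOPE = BOTH;
ALTER SYSTEM SET shared_pool_size = 0 SCOPE = BOTH;
-- ...and similarly for the other SGA components listed above; log_buffer is static
-- and requires SCOPE=SPFILE plus an instance restart.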

The following snippet shows the parameter selection to tune both SGA and PGA sizes. Each parameter is configured to go from the minimum value to 90% of the maximum physical memory (6848M in our example), allowing Akamas to explore all the possible ways to partition the space between the two areas and find the best configuration for our use case:

parametersSelection:
- name: ora.sga_target
  domain: [64, 6848]
- name: ora.pga_aggregate_target
  domain: [10, 6848]

The following code snippet constrains Akamas to explore only configurations where the total memory, expressed in MiB, does not exceed the total memory available (7609M in our example). This speeds up the optimization by avoiding configurations that would not work correctly.

parameterConstraints:
- name: Limit total memory
  formula: ora.sga_target + ora.pga_aggregate_target <= 7609

Tuning the Shared Memory

With the following set of parameters, Akamas tunes the space allocated to one or more of the components that make up the System Global Area, along with the size of the Program Global Area. This scenario is useful for studies where you want to find the memory distribution that best fits your optimization goals.

  • pga_aggregate_target: this parameter specifies the size of the PGA. You can find the default value with the query select display_value "pga_aggregate_target" from v$parameter where name='pga_aggregate_target'. The explored domain strongly depends on your application and hardware, but an acceptable range goes from 10M (the minimum configurable value) to the physical size of your instance.

  • db_cache_size: this parameter specifies the size of the default buffer pool. You can find the default value with the query select * from v$sgainfo where name='Buffer Cache Size'.

  • log_buffer: this parameter specifies the size of the log buffer. You can find the default value with the query select * from v$sgainfo where name='Redo Buffers'.

  • java_pool_size: this parameter specifies the size of the java pool. You can find the default value with the query select * from v$sgainfo where name='Java Pool Size'.

  • large_pool_size: this parameter specifies the size of the large pool. You can find the default value with the query select * from v$sgainfo where name='Large Pool Size'.

  • streams_pool_size: this parameter specifies the size of the streams pool. You can find the default value with the query select * from v$sgainfo where name='Streams Pool Size'.

  • shared_pool_size: this parameter specifies the size of the shared pool. You can find the default value with the query select * from v$sgainfo where name='Shared Pool Size'.

The explored domains of the SGA components strongly depend on your application and hardware; a common approach is to scale the baseline value both up and down by a reasonable factor to define the domain boundaries (e.g., from 20% to 500% of the baseline).
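
As an example of this approach, the following query (a sketch based on the 20% to 500% rule above) derives candidate domain boundaries in MiB for the main SGA components from their current sizes in v$sgainfo:

-- Sketch: candidate domain boundaries (20% to 500% of the current size, in MiB)
-- for the SGA components, taken from v$sgainfo.
select name,
       ceil(bytes/1024/1024 * 0.2) as domain_min_mib,
       ceil(bytes/1024/1024 * 5)   as domain_max_mib
  from v$sgainfo
 where name in ('Buffer Cache Size', 'Redo Buffers', 'Java Pool Size',
                'Large Pool Size', 'Shared Pool Size', 'Streams Pool Size');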

To tune all the components, set both the memory_target and sga_target parameters to 0, either by configuring them among the default values of the study or by manually running the configuration queries.

Notice: if your system leverages non-standard block-size buffers, you should also consider tuning the db_Nk_cache_size parameters.

The following snippet shows the parameter selection to tune the size of the PGA and the SGA components. The PGA parameter is configured to go from the minimum value to 90% of the maximum physical memory (6848M in our example), while the domains of the SGA components are configured by scaling their default values by approximately a factor of 10. Along with the constraint defined below, these domains give Akamas great flexibility while exploring how to distribute the available memory space:

parametersSelection:
- name: ora.pga_aggregate_target
  domain: [10, 6848]
- name: ora.db_cache_size
  domain: [128, 6848]
- name: ora.log_buffer
  domain: [1, 128]
- name: ora.java_pool_size
  domain: [4, 240]
- name: ora.large_pool_size
  domain: [12, 1024]
- name: ora.shared_pool_size
  domain: [12, 1024]

The following code snippet constrains Akamas to explore only configurations where the total memory, expressed in MiB, does not exceed the total memory available (7609M in our example).

parameterConstraints:
- name: Limit total memory
  formula: ora.db_cache_size + ora.log_buffer + ora.java_pool_size + ora.large_pool_size + ora.shared_pool_size + ora.pga_aggregate_target <= 7609

You should also add to the formula any db_Nk_cache_size parameter tuned in the study.
