The Linux optimization pack helps you optimize Linux-based systems. The optimization pack provides component types for the major Linux distributions, enabling performance improvements across a wide range of configurations.
Through this optimization pack, Akamas tackles the performance of Linux-based systems from the point of view of both cost savings and quality of service: the included component types provide parameters that act on the memory footprint of a system, on its ability to sustain higher levels of traffic, on its capacity to leverage all available resources, and on its potential for lower-latency transactions.
Each component type provides parameters that cover four main areas of tuning:
CPU task scheduling (for example, whether to auto-group similar tasks and schedule them together)
Memory (for example, the memory usage threshold at which to start swapping pages to disk)
Network (for example, the size of the buffers used to read/write network packets)
Storage (for example, the type of storage queue scheduler)
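Most of these tunables correspond to standard Linux kernel interfaces (sysctl keys and sysfs attributes). As an illustrative sketch, one representative kernel key per tuning area can be inspected on a target system as shown below; note that the mapping between pack parameter names and kernel keys given in the comments is an assumption based on the parameter names, not something stated on this page.

```shell
# CPU task scheduling: task auto-grouping (cf. os_CPUSchedAutogroupEnabled)
sysctl kernel.sched_autogroup_enabled

# Memory: swapping aggressiveness (cf. os_MemorySwappiness)
sysctl vm.swappiness

# Network: maximum receive buffer applications can request (cf. os_NetworkNetCoreRmemMax)
sysctl net.core.rmem_max

# Storage: active I/O scheduler for a block device (cf. os_StorageQueueScheduler);
# "sda" is a placeholder device name
cat /sys/block/sda/queue/scheduler
```

Setting a value (e.g., `sysctl -w vm.swappiness=10`) typically requires root privileges; Akamas applies these parameters automatically during optimization studies.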
Here’s the command to install the Linux optimization pack using the Akamas CLI:
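The actual command snippet was lost in extraction; based on the generic Akamas CLI pattern for installing optimization packs, it would look like the following (a best-effort reconstruction, where linux.json is a placeholder for the pack definition file):

```shell
# Best-effort reconstruction -- the exact command was not preserved on this page.
# "linux.json" is a placeholder file name for the Linux optimization pack definition.
akamas install optimization-pack linux.json
```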
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
The optimization pack provides component types for the following supported distributions:

Amazon Linux AMI
Amazon Linux 2 AMI
Amazon Linux 2022 AMI
CentOS Linux distribution version 7.x
CentOS Linux distribution version 8.x
Red Hat Enterprise Linux distribution version 7.x
Red Hat Enterprise Linux distribution version 8.x
Ubuntu Linux distribution by Canonical version 16.04 (LTS)
Ubuntu Linux distribution by Canonical version 18.04 (LTS)
Ubuntu Linux distribution by Canonical version 20.04 (LTS)
Metric | Unit | Description
---|---|---
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system)
cpu_num | CPUs | The number of CPUs available in the system (physical and logical)
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used | CPUs | The average number of CPUs used in the system (physical and logical)
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
mem_fault | faults/s | The number of memory faults (minor + major) per second
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_swapins | pages/s | The number of memory pages swapped in per second
mem_swapouts | pages/s | The number of memory pages swapped out per second
mem_total | bytes | The total amount of installed memory
mem_used | bytes | The total amount of memory used
mem_used_nocache | bytes | The total amount of memory used, excluding memory reserved for caching purposes
mem_util | percent | The memory utilization % (i.e., the % of memory used)
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used), excluding memory reserved for caching purposes
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_iops | ops/s | The average number of IO disk operations per second across all disks
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks
disk_read_bytes | bytes/s | The number of bytes per second read across all disks
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk (e.g., disk /dev/nvme01)
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_response_time_read | seconds | The average response time of disk-read operations
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk
disk_response_time_write | seconds | The average response time of disk-write operations
disk_swap_used | bytes | The total amount of space used by swap disks
disk_swap_util | percent | The average space utilization % of swap disks
disk_util_details | percent | The utilization % of each disk (i.e., how much time a disk is busy doing work), broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes | bytes/s | The number of bytes per second written across all disks
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 on device /dev/nvme01)
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
network_in_bytes_details | bytes/s | The number of inbound network bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details | bytes/s | The number of outbound network bytes per second broken down by network device (e.g., eth01)
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second
os_context_switch | switches/s | The number of context switches per second
proc_blocked | processes | The number of blocked processes (e.g., waiting for IO or swapping)
Parameter | Type | Unit | Default Value | Domain | Restart | Description
---|---|---|---|---|---|---
os_cpuSchedMinGranularity | integer | nanoseconds | 1500000 | 300000 → 30000000 | no | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks
os_cpuSchedWakeupGranularity | integer | nanoseconds | 2000000 | 400000 → 40000000 | no | Scheduler wakeup granularity (in nanoseconds)
os_CPUSchedMigrationCost | integer | nanoseconds | 500000 | 100000 → 5000000 | no | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst | integer | | 0 | 0, 1 | no | If enabled, a freshly forked child runs before the parent continues execution
os_CPUSchedLatency | integer | nanoseconds | 12000000 | 2400000 → 240000000 | no | Targeted preemption latency (in nanoseconds) for CPU-bound tasks
os_CPUSchedAutogroupEnabled | integer | | 0 | 0, 1 | no | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate | integer | | 32 | 3 → 320 | no | Number of tasks the scheduler migrates at most in a single load-balancing run
os_MemorySwappiness | integer | percent | 60 | 0 → 100 | no | Controls how aggressively the kernel swaps memory pages to disk (higher values increase swapping)
os_MemoryVmVfsCachePressure | integer | | 100 | 10 → 100 | no | Controls the tendency of the kernel to reclaim the memory used for caching directory and inode objects
os_MemoryVmCompactionProactiveness | integer | | 20 | 10 → 100 | no | Determines how aggressively compaction is done in the background
os_MemoryVmMinFree | integer | kilobytes | 67584 | 10240 → 1024000 | no | Minimum free memory (in kilobytes)
os_MemoryTransparentHugepageEnabled | categorical | | madvise | always, never, madvise | no | Transparent Hugepage enablement flag
os_MemoryTransparentHugepageDefrag | categorical | | madvise | always, never, defer+madvise, madvise, defer | no | Transparent Hugepage defrag mode
os_MemorySwap | categorical | | swapon | swapon, swapoff | no | Enables (swapon) or disables (swapoff) swap devices
os_MemoryVmDirtyRatio | integer | percent | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio | integer | percent | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | The age (in centiseconds) after which dirty memory pages become eligible to be written to disk by the kernel flusher threads
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | The interval (in centiseconds) at which the kernel flusher threads wake up to write dirty pages to disk
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Maximum number of connections queued on a listening socket
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Maximum number of packets queued on the input side when a network interface receives packets faster than the kernel can process them
os_NetworkNetIpv4TcpMaxSynBacklog | integer | connections | 256 | 52 → 5120 | no | Maximum number of half-open connections remembered in the TCP SYN backlog
os_NetworkNetCoreNetdevBudget | integer | packets | 300 | 30 → 30000 | no | Maximum number of packets processed in a single network polling cycle
os_NetworkNetCoreRmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | | 1 | 0, 1 | no | If enabled, TCP resets the congestion window after a connection has been idle
os_NetworkNetIpv4TcpFinTimeout | integer | seconds | 60 | 6 → 600 | no | Time a connection remains in the FIN-WAIT-2 state before being closed
os_NetworkRfs | integer | | 0 | 0 → 131072 | no | If enabled, increases data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests | integer | | 32 | 12 → 1280 | no | Number of IO requests that can be queued per block device
os_StorageRqAffinity | integer | | 1 | 1, 2 | no | Controls where block IO completions are processed: with 1, completions run on a CPU in the group that submitted the request; with 2, they are forced onto the CPU that submitted the request
os_StorageNomerges | integer | | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb | integer | kilobytes | 256 | 32 → 256 | no | The largest IO size that the OS can issue to a block device
This section documents Akamas out-of-the-box optimization packs.
This page describes the Optimization Pack for the component type CentOS 7.
Notice: you can use a device custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder on the Prometheus provider and Prometheus provider metrics mapping pages.
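As a sketch of what such a filtered query could look like (the metric name below is the node_exporter disk-write counter; the device value and the placement of the placeholder are illustrative assumptions, since this page does not show the actual query):

```promql
# Illustrative only: %FILTERS% is the placeholder that the Prometheus
# provider replaces with the configured custom filters,
# e.g. device="nvme0n1" to restrict the metric to a single disk.
rate(node_disk_written_bytes_total{%FILTERS%}[5m])
```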
There are no general constraints among CentOS 7 parameters.
Metric | Unit | Description
---|---|---
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system)
cpu_num | CPUs | The number of CPUs available in the system (physical and logical)
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used | CPUs | The average number of CPUs used in the system (physical and logical)
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
mem_fault | faults/s | The number of memory faults (minor + major) per second
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_swapins | pages/s | The number of memory pages swapped in per second
mem_swapouts | pages/s | The number of memory pages swapped out per second
mem_total | bytes | The total amount of installed memory
mem_used | bytes | The total amount of memory used
mem_used_nocache | bytes | The total amount of memory used, excluding memory reserved for caching purposes
mem_util | percent | The memory utilization % (i.e., the % of memory used)
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used), excluding memory reserved for caching purposes
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_iops | ops/s | The average number of IO disk operations per second across all disks
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks
disk_read_bytes | bytes/s | The number of bytes per second read across all disks
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk (e.g., disk /dev/nvme01)
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_response_time_read | seconds | The average response time of disk-read operations
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk
disk_response_time_write | seconds | The average response time of disk-write operations
disk_swap_used | bytes | The total amount of space used by swap disks
disk_swap_util | percent | The average space utilization % of swap disks
disk_util_details | percent | The utilization % of each disk (i.e., how much time a disk is busy doing work), broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes | bytes/s | The number of bytes per second written across all disks
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 on device /dev/nvme01)
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
network_in_bytes_details | bytes/s | The number of inbound network bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details | bytes/s | The number of outbound network bytes per second broken down by network device (e.g., eth01)
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second
os_context_switch | switches/s | The number of context switches per second
proc_blocked | processes | The number of blocked processes (e.g., waiting for IO or swapping)

Parameter | Type | Unit | Default Value | Domain | Restart | Description
---|---|---|---|---|---|---
os_cpuSchedMinGranularity | integer | nanoseconds | 1500000 | 300000 → 30000000 | no | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks
os_cpuSchedWakeupGranularity | integer | nanoseconds | 2000000 | 400000 → 40000000 | no | Scheduler wakeup granularity (in nanoseconds)
os_CPUSchedMigrationCost | integer | nanoseconds | 500000 | 100000 → 5000000 | no | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst | integer | | 0 | 0, 1 | no | If enabled, a freshly forked child runs before the parent continues execution
os_CPUSchedLatency | integer | nanoseconds | 12000000 | 2400000 → 240000000 | no | Targeted preemption latency (in nanoseconds) for CPU-bound tasks
os_CPUSchedAutogroupEnabled | integer | | 0 | 0, 1 | no | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate | integer | | 32 | 3 → 320 | no | Number of tasks the scheduler migrates at most in a single load-balancing run
os_MemorySwappiness | integer | percent | 60 | 0 → 100 | no | Controls how aggressively the kernel swaps memory pages to disk (higher values increase swapping)
os_MemoryVmVfsCachePressure | integer | | 100 | 10 → 100 | no | Controls the tendency of the kernel to reclaim the memory used for caching directory and inode objects
os_MemoryVmCompactionProactiveness | integer | | 20 | 10 → 100 | no | Determines how aggressively compaction is done in the background
os_MemoryVmPageLockUnfairness | integer | | 5 | 0 → 1000 | no | Sets the level of unfairness in the page lock queue
os_MemoryVmWatermarkScaleFactor | integer | | 10 | 0 → 1000 | no | The amount of memory, expressed as fractions of 10'000, left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep
os_MemoryVmWatermarkBoostFactor | integer | | 15000 | 0 → 30000 | no | The level of reclaim when memory is being fragmented, expressed as fractions of 10'000 of a zone's high watermark
os_MemoryVmMinFree | integer | kilobytes | 67584 | 10240 → 1024000 | no | Minimum free memory (in kilobytes)
os_MemoryTransparentHugepageEnabled | categorical | | madvise | always, never, madvise | no | Transparent Hugepage enablement flag
os_MemoryTransparentHugepageDefrag | categorical | | madvise | always, never, defer+madvise, madvise, defer | no | Transparent Hugepage defrag mode
os_MemorySwap | categorical | | swapon | swapon, swapoff | no | Enables (swapon) or disables (swapoff) swap devices
os_MemoryVmDirtyRatio | integer | percent | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio | integer | percent | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | The age (in centiseconds) after which dirty memory pages become eligible to be written to disk by the kernel flusher threads
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | The interval (in centiseconds) at which the kernel flusher threads wake up to write dirty pages to disk
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Maximum number of connections queued on a listening socket
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Maximum number of packets queued on the input side when a network interface receives packets faster than the kernel can process them
os_NetworkNetIpv4TcpMaxSynBacklog | integer | connections | 256 | 52 → 5120 | no | Maximum number of half-open connections remembered in the TCP SYN backlog
os_NetworkNetCoreNetdevBudget | integer | packets | 300 | 30 → 30000 | no | Maximum number of packets processed in a single network polling cycle
os_NetworkNetCoreRmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | | 1 | 0, 1 | no | If enabled, TCP resets the congestion window after a connection has been idle
os_NetworkNetIpv4TcpFinTimeout | integer | seconds | 60 | 6 → 600 | no | Time a connection remains in the FIN-WAIT-2 state before being closed
os_NetworkRfs | integer | | 0 | 0 → 131072 | no | If enabled, increases data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests | integer | | 32 | 12 → 1280 | no | Number of IO requests that can be queued per block device
os_StorageRqAffinity | integer | | 1 | 1, 2 | no | Controls where block IO completions are processed: with 1, completions run on a CPU in the group that submitted the request; with 2, they are forced onto the CPU that submitted the request
os_StorageQueueScheduler | categorical | | none | none, kyber, mq-deadline, bfq | no | Storage queue scheduler type
os_StorageNomerges | integer | | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb
integer
kilobytes
256
32 → 256
no
The largest IO size that the OS can issue to a block device
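Several of the parameters above correspond to standard Linux kernel sysctls. The sketch below is purely illustrative and is not how the optimization pack applies values internally: the `AKAMAS_TO_SYSCTL` mapping dictionary is an assumption made for this example, while the sysctl key names themselves are the standard Linux ones.

```python
# Illustrative sketch: render a sysctl.conf fragment from Akamas-style
# parameter values. The AKAMAS_TO_SYSCTL mapping is an assumption for
# illustration; the optimization pack applies these settings through its
# own operators, not via a file like this.
AKAMAS_TO_SYSCTL = {
    "os_MemorySwappiness": "vm.swappiness",
    "os_MemoryVmDirtyRatio": "vm.dirty_ratio",
    "os_NetworkNetCoreSomaxconn": "net.core.somaxconn",
    "os_NetworkNetIpv4TcpFinTimeout": "net.ipv4.tcp_fin_timeout",
}

def render_sysctl_conf(config: dict) -> str:
    """Render a sysctl.conf-style fragment from parameter name/value pairs."""
    lines = []
    for param, value in config.items():
        key = AKAMAS_TO_SYSCTL.get(key if (key := param) else param)
        if key is None or key not in AKAMAS_TO_SYSCTL.values():
            key = AKAMAS_TO_SYSCTL.get(param)
        if key is None:
            continue  # not a sysctl-backed parameter (e.g., block-device knobs)
        lines.append(f"{AKAMAS_TO_SYSCTL[param]} = {value}")
    return "\n".join(lines)

# Baseline defaults taken from the table above
baseline = {"os_MemorySwappiness": 60, "os_NetworkNetCoreSomaxconn": 128}
print(render_sysctl_conf(baseline))
# vm.swappiness = 60
# net.core.somaxconn = 128
```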
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of each disk (i.e., how much time the disk is busy doing work), broken down by disk (e.g., disk D://) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO disk-read operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO disk-write operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written to disk broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from disk broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
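The memory metrics above can be derived from `/proc/meminfo`. The sketch below shows the conventional formulas (used % = total minus free, optionally excluding buffers and page cache); the actual telemetry providers used by Akamas may compute them differently.

```python
# Sketch: derive mem_util and mem_util_nocache from /proc/meminfo-style text.
# Formulas are the conventional ones; Akamas telemetry providers may differ.
def parse_meminfo(text: str) -> dict:
    """Parse '/proc/meminfo'-style 'Key: value kB' lines into a dict of kB values."""
    values = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            values[key.strip()] = int(rest.split()[0])
    return values

def mem_util(info: dict) -> float:
    """mem_util: % of memory used (total - free)."""
    return 100.0 * (info["MemTotal"] - info["MemFree"]) / info["MemTotal"]

def mem_util_nocache(info: dict) -> float:
    """mem_util_nocache: % of memory used, excluding buffers and page cache."""
    used = info["MemTotal"] - info["MemFree"] - info["Buffers"] - info["Cached"]
    return 100.0 * used / info["MemTotal"]

sample = """MemTotal: 16384000 kB
MemFree: 4096000 kB
Buffers: 1024000 kB
Cached: 3072000 kB"""
info = parse_meminfo(sample)
print(round(mem_util(info), 1), round(mem_util_nocache(info), 1))  # 75.0 50.0
```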
os_cpuSchedMinGranularity | 2250000 ns | 300000 → 30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000 → 40000000 ns | Scheduler wakeup granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000 → 5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0 → 1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000 → 240000000 ns | Targeted preemption latency (in nanoseconds) for CPU-bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0 → 1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3 → 320 | The maximum number of tasks the scheduler migrates between CPUs in a single load-balancing iteration |
os_MemorySwappiness | 1 | 0 → 100 | Memory swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10 → 100 % | VFS cache pressure |
os_MemoryVmMinFree | 67584 KB | 10240 → 1024000 KB | Minimum free memory |
os_MemoryVmDirtyRatio | 20 % | 1 → 99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1 → 99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | always | always, never | Transparent Hugepage enablement |
os_MemoryTransparentHugepageDefrag | always | always, never | Transparent Hugepage defrag policy |
os_MemorySwap | swapon | swapon, swapoff | Memory swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300 → 30000 centisecs | Memory dirty expiration time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50 → 5000 centisecs | Memory dirty writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12 → 1200 connections | Network max connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100 → 10000 packets | Network max backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52 → 15120 packets | Network IPv4 max SYN backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30 → 3000 packets | Network budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299 → 2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299 → 2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0 → 1 | Network slow-start-after-idle flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6 → 600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0 → 131072 | If enabled, increases the data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0 → 1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of being read from disk |
os_StorageNrRequests | 32 | 12 → 1280 | The maximum number of read/write requests that can be queued at the block device |
os_StorageRqAffinity | 1 | 1 → 2 | Storage requests affinity |
os_StorageQueueScheduler | none | none, kyber | Storage queue scheduler type |
os_StorageNomerges | 0 | 0 → 2 | Enables the user to disable the lookup logic involved with IO merging of requests in the block layer. By default (0) all merges are enabled; with 1 only simple one-hit merges are tried; with 2 no merge algorithms are tried |
os_StorageMaxSectorsKb | 128 KB | 32 → 128 KB | The largest IO size that the OS can issue to a block device |
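To make the two dirty-page thresholds above concrete: `os_MemoryVmDirtyBackgroundRatio` and `os_MemoryVmDirtyRatio` are percentages of total memory, so the absolute amounts of dirty memory at which writeback kicks in depend on the machine's RAM. The arithmetic below is purely illustrative, using the defaults of 10% and 20% from the table.

```python
# Worked example: convert the dirty-page ratio defaults (background 10%,
# blocking 20%) into absolute byte thresholds for a given amount of RAM.
def dirty_thresholds(total_mem_bytes: int, background_ratio: int = 10, dirty_ratio: int = 20):
    background = total_mem_bytes * background_ratio // 100  # async writeback starts here
    blocking = total_mem_bytes * dirty_ratio // 100         # writers are throttled here
    return background, blocking

gib = 1024 ** 3
bg, blk = dirty_thresholds(16 * gib)
print(f"background writeback at {bg / gib:.1f} GiB dirty, "
      f"blocking writeback at {blk / gib:.1f} GiB dirty")
# background writeback at 1.6 GiB dirty, blocking writeback at 3.2 GiB dirty
```

Raising these ratios lets more dirty data accumulate before writeback (better write batching, longer flush bursts); lowering them smooths I/O at the cost of more frequent writes.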
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e, the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of disk, i.e how much time a disk is busy doing work broken down by disk (e.g., disk D://) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | ops/s | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01 ) |
disk_iops_details | ops/s | The number of IO disk-write operations of per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g, for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled |
|
| Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag |
|
| Transparent Hugepage Enablement Defrag |
os_MemorySwap |
|
| Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 512 packets | 52→15120 packets | Network IPV4 Max Sync Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 21299→2129920 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 | 6 →600 seconds | Network TCP timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled increases datacache hitrate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 packets | 100→10000 packets | Network Max Backlog |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler |
|
| Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size that the OS c |
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e, the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of disk, i.e how much time a disk is busy doing work broken down by disk (e.g., disk D://) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | ops/s | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01 ) |
disk_iops_details | ops/s | The number of IO disk-write operations of per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g, for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled |
|
| Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag |
|
| Transparent Hugepage Enablement Defrag |
os_MemorySwap |
|
| Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPV4 Max Sync Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 21299→2129920 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 | 6 →600 seconds | Network TCP timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled increases datacache hitrate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 packets | 100→10000 packets | Network Max Backlog |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler |
|
| Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size that the OS c |
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e, the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of disk, i.e how much time a disk is busy doing work broken down by disk (e.g., disk D://) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | ops/s | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01 ) |
disk_iops_details | ops/s | The number of IO disk-write operations of per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g, for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | madvise | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | madvise | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled increases datacache hitrate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | Storage Number of Requests (block-layer queue depth) |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, mq-deadline | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size that the OS can issue to a block device |
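The os_* parameters above correspond to standard Linux kernel tunables. As a minimal sketch of how a chosen configuration could be turned into operating-system commands, the snippet below renders a parameter dictionary as `sysctl -w` invocations; the mapping from Akamas parameter names to sysctl keys shown here is an assumption based on the parameter names in the table, not an official Akamas mapping.

```python
# Sketch: render a candidate configuration as sysctl commands.
# The os_* -> sysctl key mapping is assumed from the parameter names
# in the table above, not taken from Akamas internals.
SYSCTL_KEYS = {
    "os_MemorySwappiness": "vm.swappiness",
    "os_MemoryVmDirtyRatio": "vm.dirty_ratio",
    "os_NetworkNetCoreSomaxconn": "net.core.somaxconn",
    "os_NetworkNetCoreRmemMax": "net.core.rmem_max",
}

def render_sysctl(config: dict) -> list:
    """Turn {parameter: value} into `sysctl -w key=value` command lines."""
    return [
        f"sysctl -w {SYSCTL_KEYS[name]}={value}"
        for name, value in config.items()
        if name in SYSCTL_KEYS
    ]

print(render_sysctl({"os_MemorySwappiness": 10, "os_NetworkNetCoreSomaxconn": 1024}))
# → ['sysctl -w vm.swappiness=10', 'sysctl -w net.core.somaxconn=1024']
```

Applying the generated commands requires root privileges on the target system; the sketch only produces the command strings.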
This page describes the Optimization Pack for the component type RHEL 8.
Notice: you can use a device custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder here: Prometheus provider and here: Prometheus provider metrics mapping.
This page describes the Optimization Pack for the component type Ubuntu 18.04.
Notice: you can use a device custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder here: Prometheus provider and here: Prometheus provider metrics mapping.
This page describes the Optimization Pack for the component type Ubuntu 20.04.
Notice: you can use a device custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder here: Prometheus provider and here: Prometheus provider metrics mapping.
Metric | Unit | Description |
---|---|---|
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of a disk (i.e., how much time the disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO disk-read operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO disk-write operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |

Parameter | Default Value | Domain | Description |
---|---|---|---|
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler wakeup granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution during which a task is considered "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU-bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 30 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 30 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | never | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | always | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 512 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it is available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | Storage Number of Requests (block-layer queue depth) |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, kyber, mq-deadline, bfq | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled; with 1 only simple one-hit merges are tried; with 2 no merge algorithms are tried |
os_StorageMaxSectorsKb | 256 KB | 32→256 KB | The largest IO size that the OS can issue to a block device |
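Each parameter's domain above is a min→max range (or a set of categorical values) that Akamas explores during optimization. As a minimal sketch of how such a configuration can be sanity-checked before it is applied, the snippet below validates candidate values against a few of the numeric domains copied from the parameter table; the dictionary of domains is an illustrative subset, not the full pack definition.

```python
# Sketch: validate candidate parameter values against the min->max
# domains listed in the parameter table above (illustrative subset).
DOMAINS = {
    "os_cpuSchedMinGranularity": (300_000, 30_000_000),      # ns
    "os_CPUSchedLatency": (2_400_000, 240_000_000),          # ns
    "os_MemorySwappiness": (0, 100),
    "os_NetworkNetIpv4TcpFinTimeout": (6, 600),              # seconds
}

def in_domain(name: str, value: int) -> bool:
    """Return True if the value lies inside the parameter's domain."""
    lo, hi = DOMAINS[name]
    return lo <= value <= hi

print(in_domain("os_MemorySwappiness", 30))            # True
print(in_domain("os_NetworkNetIpv4TcpFinTimeout", 1000))  # False
```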
Metric | Unit | Description |
---|---|---|
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of a disk (i.e., how much time the disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO disk-read operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO disk-write operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |

Parameter | Default Value | Domain | Description |
---|---|---|---|
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler wakeup granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution during which a task is considered "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU-bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | madvise | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | madvise | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it is available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | Storage Number of Requests (block-layer queue depth) |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, mq-deadline | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled; with 1 only simple one-hit merges are tried; with 2 no merge algorithms are tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size that the OS can issue to a block device |
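Memory metrics such as mem_util_nocache are typically derived from /proc/meminfo. The sketch below shows one plausible derivation, mirroring the common "used minus buffers/cache" approach; the exact formula used by the optimization pack is an assumption, and the sample /proc/meminfo content is synthetic.

```python
# Sketch: derive mem_util_nocache from /proc/meminfo fields.
# The formula (MemTotal - MemFree - Buffers - Cached) is the common
# "used without caches" convention; synthetic sample data below.
SAMPLE = """\
MemTotal:       16384000 kB
MemFree:         4096000 kB
Buffers:         1024000 kB
Cached:          3072000 kB
"""

def meminfo(text: str) -> dict:
    """Parse /proc/meminfo-style lines into {field: kB value}."""
    return {line.split(":")[0]: int(line.split()[1]) for line in text.splitlines()}

def mem_util_nocache(fields: dict) -> float:
    """Memory utilization % excluding buffers and page cache."""
    used = fields["MemTotal"] - fields["MemFree"] - fields["Buffers"] - fields["Cached"]
    return 100.0 * used / fields["MemTotal"]

print(round(mem_util_nocache(meminfo(SAMPLE)), 1))  # → 50.0
```

On a live Linux host the same functions could be fed with `open("/proc/meminfo").read()` instead of the sample string.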
Metric | Unit | Description |
---|---|---|
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of a disk (i.e., how much time the disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO disk-read operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO disk-write operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |

Parameter | Default Value | Domain | Description |
---|---|---|---|
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler wakeup granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution during which a task is considered "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU-bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | madvise | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | madvise | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it is available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | Storage Number of Requests (block-layer queue depth) |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, mq-deadline | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled; with 1 only simple one-hit merges are tried; with 2 no merge algorithms are tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size that the OS can issue to a block device |
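For categorical parameters such as os_MemoryTransparentHugepageEnabled, the kernel reports the currently active value in sysfs by enclosing it in brackets, e.g. /sys/kernel/mm/transparent_hugepage/enabled contains "always madvise [never]". The sketch below parses that convention; it works on sample strings so it can run anywhere, not just on a Linux host.

```python
import re

# Sketch: extract the active transparent-hugepage mode from the
# bracketed-value format used by /sys/kernel/mm/transparent_hugepage/*.
def active_mode(raw: str) -> str:
    """Return the bracketed (active) value, e.g. 'never' from 'always madvise [never]'."""
    m = re.search(r"\[([^\]]+)\]", raw)
    if not m:
        raise ValueError(f"no active mode found in {raw!r}")
    return m.group(1)

print(active_mode("always madvise [never]"))   # → never
print(active_mode("[always] madvise never"))   # → always
```

On a live system the input would come from `open("/sys/kernel/mm/transparent_hugepage/enabled").read()`.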
gc_count | collections/s | The total number of garbage collections |
gc_duration | seconds | The garbage collection duration |
heap_hard_limit | bytes | The size of the heap |
csproj_System_GC_Server | categorical | CPUs |
|
| yes | The main flavor of the GC: set it to false for workstation GC or true for server GC. To be set in csproj file and requires rebuild. |
csproj_System_GC_Concurrent | categorical | boolean |
|
| yes | Configures whether background (concurrent) garbage collection is enabled (setting to true). To be set in csproj file and requires rebuild. |
runtime_System_GC_Server | categorical | boolean |
|
| yes | The main flavor of the GC: set it to false for workstation GC or true for server GC. To be set in csproj file and requires rebuild. |
runtime_System_GC_Concurrent | categorical | boolean | | | yes | Configures whether background (concurrent) garbage collection is enabled (true enables it). To be set in the csproj file; requires a rebuild. |
runtime_System_GC_HeapCount | integer | heapcount | | | no | Limits the number of heaps created by the garbage collector. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_CpuGroup | categorical | boolean | | | no | Configures whether the garbage collector uses CPU groups. Default is false. To be set in runtimeconfig.json |
runtime_System_GC_NoAffinitize | categorical | boolean | | | no | Specifies whether to affinitize garbage collection threads with processors; an affinitized GC thread runs only on its specific CPU. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_HeapHardLimit | integer | bytes | | | no | Specifies the maximum commit size, in bytes, for the GC heap and GC bookkeeping. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_HeapHardLimitPercent | real | percent | | | no | Specifies the allowable GC heap usage as a percentage of the total physical memory. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_HighMemoryPercent | integer | percent | | | no | Specifies the memory load threshold, as a percentage, that triggers a garbage collection. To be set in runtimeconfig.json |
runtime_System_GC_RetainVM | categorical | boolean | | | no | Configures whether segments that should be deleted are put on a standby list for future use or are released back to the operating system (OS). Default is false. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_LOHThreshold | integer | bytes | | | no | Specifies the threshold size, in bytes, that causes objects to go on the large object heap (LOH). To be set in runtimeconfig.json in runtimeOptions: configProperties |
webconf_maxconnection | integer | connections | | | no | Controls the maximum number of outgoing HTTP connections that can be initiated from a client. To be set in web.config (target app only) or machine.config (global) |
webconf_maxIoThreads | integer | threads | | | no | Controls the maximum number of I/O threads in the .NET thread pool, automatically multiplied by the number of available CPUs. To be set in web.config (target app only) or machine.config (global). Requires autoConfig=false |
webconf_minIoThreads | integer | threads | | | no | Configures the minimum number of I/O threads kept available for load conditions. To be set in web.config (target app only) or machine.config (global). Requires autoConfig=false |
webconf_maxWorkerThreads | integer | threads | | | no | Controls the maximum number of worker threads in the thread pool, automatically multiplied by the number of available CPUs. To be set in web.config (target app only) or machine.config (global). Requires autoConfig=false |
webconf_minWorkerThreads | integer | threads | | | no | Configures the minimum number of worker threads kept available for load conditions. To be set in web.config (target app only) or machine.config (global). Requires autoConfig=false |
webconf_minFreeThreads | integer | threads | | | no | Used by the worker process to queue all incoming requests when the number of available threads in the thread pool falls below this value. To be set in web.config (target app only) or machine.config (global). Requires autoConfig=false |
webconf_minLocalRequestFreeThreads | integer | threads | | | no | Used to queue requests from localhost (where a Web application sends requests to a local Web service) when the number of available threads falls below this value. To be set in web.config (target app only) or machine.config (global). Requires autoConfig=false |
webconf_autoConfig | categorical | boolean | | | no | Enables automatic configuration of the system.web settings. To be set in web.config (target app only) or machine.config (global) |
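As an illustration, the webconf_ parameters above map onto the standard ASP.NET configuration elements. A minimal web.config sketch follows; the specific values are examples only, not tuning recommendations:

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- webconf_maxconnection: max outgoing HTTP connections per endpoint -->
      <add address="*" maxconnection="24" />
    </connectionManagement>
  </system.net>
  <system.web>
    <!-- thread-pool settings take effect only with autoConfig="false" -->
    <processModel autoConfig="false"
                  maxWorkerThreads="100" minWorkerThreads="40"
                  maxIoThreads="100" minIoThreads="40" />
    <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
  </system.web>
</configuration>
```

Note that maxWorkerThreads and maxIoThreads are per-CPU values, as described in the table above.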
MS .NET 3.1 |
The Java OpenJDK optimization pack enables you to optimize Java applications running on the OpenJDK and Oracle HotSpot JVMs. Through this optimization pack, Akamas can tackle the performance of JVM-based applications from the standpoint of both cost savings and quality of service.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Garbage collection
Heap
JIT
Similarly, the bundled metrics provide visibility on the following aspects of tuned applications:
Heap and memory utilization
Garbage collection
Execution threads
The optimization pack supports the most used versions of OpenJDK and Oracle HotSpot JVM.
Here’s the command to install the Java OpenJDK optimization pack using the Akamas CLI:
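A sketch of the installation command, assuming the pack is named java-openjdk in your Akamas installation (check the pack name for your Akamas version):

```
akamas install optimization-pack java-openjdk
```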
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
This page describes the Optimization Pack for Java OpenJDK 8 JVM.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
This page describes the Optimization Pack for Java OpenJDK 11 JVM.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
This page describes the Optimization Pack for Java OpenJDK 17 JVM.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
This page describes the Optimization Pack for Eclipse OpenJ9 (formerly known as IBM J9) version 11.
The following parameters require their ranges or default values to be updated according to the described rules:
Notice that the value nocompressedreferences for j9vm_compressedReferences can only be specified for JVMs compiled with the proper --with-noncompressedrefs flag. If this is not the case you cannot actively disable compressed references, meaning:
for Xmx <= 57GB it is useless to tune this parameter, since compressed references are active by default and cannot be explicitly disabled
for Xmx > 57GB compressed references are disabled by default (blank value), so Akamas can try to enable them; this requires removing nocompressedreferences from the domain
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
Notice that:
j9vm_newSpaceFixed is mutually exclusive with j9vm_minNewSpace and j9vm_maxNewSpace
j9vm_oldSpaceFixed is mutually exclusive with j9vm_minOldSpace and j9vm_maxOldSpace
the sum of j9vm_minNewSpace and j9vm_minOldSpace must be equal to j9vm_minHeapSize, so it is useless to tune all of them together; the relations among the maximum values are more complex
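For context, these parameters correspond to the OpenJ9 nursery and tenure sizing options; the parameter-to-flag mapping sketched here is an assumption, not taken from the pack definition:

```
java -Xmn256m ...              # fixed nursery size; cannot be combined with -Xmns/-Xmnx
java -Xmns128m -Xmnx512m ...   # variable nursery bounds
java -Xmos256m -Xmox2g ...     # tenure (old) space bounds
```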
Component Type | Description |
---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Formula | Notes |
---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Formula | Notes |
---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Formula | Notes |
---|---|
Name | Unit | Description |
---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Default value | Domain |
---|---|---|
Formula | Notes |
---|---|
Component Type | Description |
---|---|
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
Java OpenJDK 8 JVM
Java OpenJDK 11 JVM
Java OpenJDK 17 JVM
mem_used
bytes
The total amount of memory used
jvm_heap_size
bytes
The size of the JVM heap memory
jvm_heap_used
bytes
The amount of heap memory used
jvm_heap_util
percent
The utilization % of heap memory
jvm_off_heap_used
bytes
The amount of non-heap memory used
jvm_heap_old_gen_used
bytes
The amount of heap memory used (old generation)
jvm_heap_young_gen_used
bytes
The amount of heap memory used (young generation)
jvm_heap_old_gen_size
bytes
The size of the JVM heap memory (old generation)
jvm_heap_young_gen_size
bytes
The size of the JVM heap memory (young generation)
jvm_memory_used
bytes
The total amount of memory used across all the JVM memory pools
jvm_heap_committed
bytes
The size of the JVM committed memory
jvm_memory_buffer_pool_used
bytes
The total amount of bytes used by buffers within the JVM buffer memory pool
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used
CPUs
The total amount of CPUs used
jvm_gc_time
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities
jvm_gc_count
collections/s
The total number of stop the world JVM garbage collections that have occurred per second
jvm_gc_duration
seconds
The average duration of a stop the world JVM garbage collection
jvm_threads_current
threads
The total number of active threads within the JVM
jvm_threads_deadlocked
threads
The total number of deadlocked threads within the JVM
jvm_compilation_time
milliseconds
The total time spent by the JVM JIT compiler compiling bytecode
jvm_minHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The minimum heap size.
jvm_maxHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum heap size.
jvm_maxRAM
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum amount of memory used by the JVM.
jvm_initialRAMPercentage
real
percent
1.563
0.1
→ 100
yes
The initial percentage of memory used by the JVM.
jvm_maxRAMPercentage
real
percent
25.0
0.1
→ 100.0
yes
The percentage of memory used for maximum heap size, on systems with large physical memory size (more than 512MB). Requires Java 10, Java 8 Update 191 or later.
jvm_alwaysPreTouch
categorical
-AlwaysPreTouch
+AlwaysPreTouch
, -AlwaysPreTouch
yes
Pretouch pages during initialization.
jvm_metaspaceSize
integer
megabytes
20
You should select your own domain within 1 and 1024
yes
The initial size of the allocated class metadata space.
jvm_maxMetaspaceSize
integer
megabytes
20
You should select your own domain within 1 and 1024
yes
The maximum size of the allocated class metadata space.
jvm_useTransparentHugePages
categorical
-UseTransparentHugePages
+UseTransparentHugePages
, -UseTransparentHugePages
yes
Enables the use of large pages that can dynamically grow or shrink.
jvm_allocatePrefetchInstr
integer
0
0
→ 3
yes
Prefetch ahead of the allocation pointer.
jvm_allocatePrefetchDistance
integer
bytes
0
0
→ 512
yes
Distance to prefetch ahead of the allocation pointer. A value of -1 uses the system-specific value (automatically determined).
jvm_allocatePrefetchLines
integer
lines
3
1
→ 64
yes
The number of lines to prefetch ahead of array allocation pointer.
jvm_allocatePrefetchStyle
integer
1
0
→ 3
yes
Selects the prefetch instruction to generate.
jvm_useLargePages
categorical
+UseLargePages
+UseLargePages
, -UseLargePages
yes
Enable the use of large page memory.
jvm_newRatio
integer
2
0
→ 2147483647
yes
The ratio of old/new generation sizes.
jvm_newSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Sets the initial and maximum size of the heap for the young generation (nursery).
jvm_maxNewSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Specifies the upper bound for the young generation size.
jvm_survivorRatio
integer
8
1
→ 100
yes
The ratio between the Eden and each Survivor-space within the JVM. For example, a jvm_survivorRatio of 6 would mean that the Eden-space is 6 times one Survivor-space.
jvm_useAdaptiveSizePolicy
categorical
+UseAdaptiveSizePolicy
+UseAdaptiveSizePolicy
, -UseAdaptiveSizePolicy
yes
Enable adaptive generation sizing. Disable it when tuning jvm_targetSurvivorRatio.
jvm_adaptiveSizePolicyWeight
integer
10
0
→ 100
yes
The weighting given to the current Garbage Collection time versus previous GC times when checking the timing goal.
jvm_targetSurvivorRatio
integer
50
1
→ 100
yes
The desired percentage of Survivor-space used after young garbage collection.
jvm_minHeapFreeRatio
integer
40
1
→ 99
yes
The minimum percentage of heap free after garbage collection to avoid shrinking.
jvm_maxHeapFreeRatio
integer
70
0
→ 100
yes
The maximum percentage of heap free after garbage collection to avoid shrinking.
jvm_maxTenuringThreshold
integer
15
0
→ 15
yes
The maximum value for the tenuring threshold.
jvm_gcType
categorical
Parallel
Serial
, Parallel
, ConcMarkSweep
, G1
, ParNew
yes
Type of the garbage collection algorithm.
jvm_concurrentGCThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of threads concurrent garbage collection will use.
jvm_parallelGCThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of threads garbage collection will use for parallel phases.
jvm_maxGCPauseMillis
integer
milliseconds
200
1
→ 1000
yes
Adaptive size policy maximum GC pause time goal in milliseconds.
jvm_resizePLAB
categorical
+ResizePLAB
+ResizePLAB
, -ResizePLAB
yes
Enables the dynamic resizing of promotion LABs.
jvm_GCTimeRatio
integer
99
0
→ 100
yes
The target fraction of time that can be spent in garbage collection before increasing the heap, computed as 1 / (1 + GCTimeRatio).
jvm_initiatingHeapOccupancyPercent
integer
45
0
→ 100
yes
Sets the percentage of the heap occupancy at which to start a concurrent GC cycle.
jvm_youngGenerationSizeIncrement
integer
20
0
→ 100
yes
The increment size for Young Generation adaptive resizing.
jvm_tenuredGenerationSizeIncrement
integer
20
0
→ 100
yes
The increment size for Old/Tenured Generation adaptive resizing.
jvm_adaptiveSizeDecrementScaleFactor
integer
4
1
→ 1024
yes
Specifies the scale factor for goal-driven generation resizing.
jvm_CMSTriggerRatio
integer
80
0
→ 100
yes
The percentage of MinHeapFreeRatio allocated before CMS GC starts
jvm_CMSInitiatingOccupancyFraction
integer
-1
-1
→ 99
yes
Configure oldgen occupancy fraction threshold for CMS GC. Negative values default to CMSTriggerRatio.
jvm_CMSClassUnloadingEnabled
categorical
+CMSClassUnloadingEnabled
+CMSClassUnloadingEnabled
, -CMSClassUnloadingEnabled
yes
Enables class unloading when using CMS.
jvm_useCMSInitiatingOccupancyOnly
categorical
-UseCMSInitiatingOccupancyOnly
+UseCMSInitiatingOccupancyOnly
, -UseCMSInitiatingOccupancyOnly
yes
Uses the occupancy value as the only criterion for initiating the CMS collector.
jvm_G1HeapRegionSize
integer
megabytes
8
1
→ 32
yes
Sets the size of the regions for G1.
jvm_G1ReservePercent
integer
10
0
→ 50
yes
Sets the percentage of the heap that is reserved as a false ceiling to reduce the possibility of promotion failure for the G1 collector.
jvm_G1NewSizePercent
integer
5
0
→ 100
yes
Sets the percentage of the heap to use as the minimum for the young generation size.
jvm_G1MaxNewSizePercent
integer
60
0
→ 100
yes
Sets the percentage of the heap size to use as the maximum for young generation size.
jvm_G1MixedGCLiveThresholdPercent
integer
85
0
→ 100
yes
Sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle.
jvm_G1HeapWastePercent
integer
5
0
→ 100
yes
The maximum percentage of the reclaimable heap before starting mixed GC.
jvm_G1MixedGCCountTarget
integer
collections
8
0
→ 100
yes
Sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data. The default is 8 mixed garbage collections.
jvm_G1OldCSetRegionThresholdPercent
integer
10
0
→ 100
yes
The upper limit on the number of old regions to be collected during mixed GC.
jvm_reservedCodeCacheSize
integer
megabytes
240
3
→ 2048
yes
The maximum size of the compiled code cache pool.
jvm_tieredCompilation
categorical
+TieredCompilation
+TieredCompilation
, -TieredCompilation
yes
Enables tiered compilation.
jvm_tieredCompilationStopAtLevel
integer
4
0
→ 4
yes
The highest tier used by tiered compilation.
jvm_compilationThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of compilation threads.
jvm_backgroundCompilation
categorical
+BackgroundCompilation
+BackgroundCompilation
, -BackgroundCompilation
yes
Allow async interpreted execution of a method while it is being compiled.
jvm_inline
categorical
+Inline
+Inline
, -Inline
yes
Enable inlining.
jvm_maxInlineSize
integer
bytes
35
1
→ 2097152
yes
The bytecode size limit (in bytes) of the inlined methods.
jvm_inlineSmallCode
integer
bytes
2000
1
→ 16384
yes
The maximum compiled code size limit (in bytes) of the inlined methods.
jvm_aggressiveOpts
categorical
-AggressiveOpts
+AggressiveOpts
, -AggressiveOpts
yes
Turn on point performance compiler optimizations.
jvm_usePerfData
categorical
+UsePerfData
+UsePerfData
, -UsePerfData
yes
Enable monitoring of performance data.
jvm_useNUMA
categorical
-UseNUMA
+UseNUMA
, -UseNUMA
yes
Enable NUMA.
jvm_useBiasedLocking
categorical
+UseBiasedLocking
+UseBiasedLocking
, -UseBiasedLocking
yes
Manage the use of biased locking.
jvm_activeProcessorCount
integer
CPUs
1
1
→ 512
yes
Overrides the number of detected CPUs that the VM will use to calculate the size of thread pools.
Parameter
Default value
Domain
jvm_minHeapSize
Depends on the instance available memory
jvm_maxHeapSize
Depends on the instance available memory
jvm_newSize
Depends on the configured heap
jvm_maxNewSize
Depends on the configured heap
jvm_concurrentGCThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm_parallelGCThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm_compilationThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm.jvm_minHeapSize <= jvm.jvm_maxHeapSize
jvm.jvm_minHeapFreeRatio <= jvm.jvm_maxHeapFreeRatio
jvm.jvm_maxNewSize < jvm.jvm_maxHeapSize
jvm.jvm_concurrentGCThreads <= jvm.jvm_parallelGCThreads
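The constraint formulas above can be checked mechanically before launching a study. A minimal sketch in Python follows; the parameter names come from the tables above, while the helper function itself is hypothetical:

```python
# Check the study constraints listed above on a dict of candidate JVM parameters.
def satisfies_constraints(p):
    return (
        p["jvm_minHeapSize"] <= p["jvm_maxHeapSize"]
        and p["jvm_minHeapFreeRatio"] <= p["jvm_maxHeapFreeRatio"]
        and p["jvm_maxNewSize"] < p["jvm_maxHeapSize"]
        and p["jvm_concurrentGCThreads"] <= p["jvm_parallelGCThreads"]
    )

candidate = {
    "jvm_minHeapSize": 512, "jvm_maxHeapSize": 2048,        # megabytes
    "jvm_minHeapFreeRatio": 40, "jvm_maxHeapFreeRatio": 70,  # percent
    "jvm_maxNewSize": 1024,                                  # megabytes
    "jvm_concurrentGCThreads": 2, "jvm_parallelGCThreads": 4,
}
print(satisfies_constraints(candidate))  # → True
```

Akamas enforces these constraints during the study; a pre-check like this is only useful when preparing baselines or fixed configurations by hand.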
mem_used
bytes
The total amount of memory used
jvm_heap_size
bytes
The size of the JVM heap memory
jvm_heap_used
bytes
The amount of heap memory used
jvm_heap_util
percent
The utilization % of heap memory
jvm_off_heap_used
bytes
The amount of non-heap memory used
jvm_heap_old_gen_used
bytes
The amount of heap memory used (old generation)
jvm_heap_young_gen_used
bytes
The amount of heap memory used (young generation)
jvm_heap_old_gen_size
bytes
The size of the JVM heap memory (old generation)
jvm_heap_young_gen_size
bytes
The size of the JVM heap memory (young generation)
jvm_memory_used
bytes
The total amount of memory used across all the JVM memory pools
jvm_heap_committed
bytes
The size of the JVM committed memory
jvm_memory_buffer_pool_used
bytes
The total amount of bytes used by buffers within the JVM buffer memory pool
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used
CPUs
The total amount of CPUs used
jvm_gc_time
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities
jvm_gc_count
collections/s
The total number of stop the world JVM garbage collections that have occurred per second
jvm_gc_duration
seconds
The average duration of a stop the world JVM garbage collection
jvm_threads_current
threads
The total number of active threads within the JVM
jvm_threads_deadlocked
threads
The total number of deadlocked threads within the JVM
jvm_compilation_time
milliseconds
The total time spent by the JVM JIT compiler compiling bytecode
jvm_minHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The minimum heap size.
jvm_maxHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum heap size.
jvm_maxRAM
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum amount of memory used by the JVM.
jvm_initialRAMPercentage
real
percent
1.563
0.1
→ 100
yes
The percentage of memory used for initial heap size.
jvm_maxRAMPercentage
real
percent
25.0
0.1
→ 100.0
yes
The percentage of memory used for maximum heap size, on systems with large physical memory size (more than 512MB).
jvm_alwaysPreTouch
categorical
-AlwaysPreTouch
+AlwaysPreTouch
, -AlwaysPreTouch
yes
Pretouch pages during initialization.
jvm_metaspaceSize
integer
megabytes
20
You should select your own domain within 1 and 1024
yes
The initial size of the allocated class metadata space.
jvm_maxMetaspaceSize
integer
megabytes
20
You should select your own domain within 1 and 1024
yes
The maximum size of the allocated class metadata space.
jvm_useTransparentHugePages
categorical
-UseTransparentHugePages
+UseTransparentHugePages
, -UseTransparentHugePages
yes
Enables the use of large pages that can dynamically grow or shrink.
jvm_allocatePrefetchInstr
integer
0
0
→ 3
yes
Prefetch ahead of the allocation pointer.
jvm_allocatePrefetchDistance
integer
bytes
0
0
→ 512
yes
Distance to prefetch ahead of the allocation pointer. A value of -1 uses the system-specific value (automatically determined).
jvm_allocatePrefetchLines
integer
lines
3
1
→ 64
yes
The number of lines to prefetch ahead of array allocation pointer.
jvm_allocatePrefetchStyle
integer
1
0
→ 3
yes
Selects the prefetch instruction to generate.
jvm_useLargePages
categorical
+UseLargePages
+UseLargePages
, -UseLargePages
yes
Enable the use of large page memory.
jvm_aggressiveHeap
categorical
-AggressiveHeap
-AggressiveHeap
, +AggressiveHeap
yes
Optimize heap options for long-running memory intensive apps.
jvm_newRatio
integer
2
0
→ 2147483647
yes
The ratio of old/new generation sizes.
jvm_newSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Sets the initial and maximum size of the heap for the young generation (nursery).
jvm_maxNewSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Specifies the upper bound for the young generation size.
jvm_survivorRatio
integer
8
1
→ 100
yes
The ratio between the Eden and each Survivor-space within the JVM. For example, a jvm_survivorRatio of 6 would mean that the Eden-space is 6 times one Survivor-space.
jvm_useAdaptiveSizePolicy
categorical
+UseAdaptiveSizePolicy
+UseAdaptiveSizePolicy
, -UseAdaptiveSizePolicy
yes
Enable adaptive generation sizing. Disable it when tuning jvm_targetSurvivorRatio.
jvm_adaptiveSizePolicyWeight
integer
10
0 → 100
yes
The weighting given to the current Garbage Collection time versus previous GC times when checking the timing goal.
jvm_targetSurvivorRatio
integer
50
1
→ 100
yes
The desired percentage of Survivor-space used after young garbage collection.
jvm_minHeapFreeRatio
integer
40
1
→ 99
yes
The minimum percentage of heap free after garbage collection to avoid shrinking.
jvm_maxHeapFreeRatio
integer
70
0
→ 100
yes
The maximum percentage of heap free after garbage collection to avoid shrinking.
jvm_maxTenuringThreshold
integer
15
0
→ 15
yes
The maximum value for the tenuring threshold.
jvm_gcType
categorical
G1
Serial
, Parallel
, ConcMarkSweep
, G1
yes
Type of the garbage collection algorithm.
jvm_concurrentGCThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of threads concurrent garbage collection will use.
jvm_parallelGCThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of threads garbage collection will use for parallel phases.
jvm_maxGCPauseMillis
integer
milliseconds
200
1
→ 1000
yes
Adaptive size policy maximum GC pause time goal in milliseconds.
jvm_resizePLAB
categorical
+ResizePLAB
+ResizePLAB
, -ResizePLAB
yes
Enables the dynamic resizing of promotion LABs.
jvm_GCTimeRatio
integer
99
2
→ 100
yes
The target fraction of time that can be spent in garbage collection before increasing the heap, computed as 1 / (1 + GCTimeRatio).
jvm_initiatingHeapOccupancyPercent
integer
45
5
→ 90
yes
Sets the percentage of the heap occupancy at which to start a concurrent GC cycle.
jvm_youngGenerationSizeIncrement
integer
20
0
→ 100
yes
The increment size for Young Generation adaptive resizing.
jvm_tenuredGenerationSizeIncrement
integer
20
0
→ 100
yes
The increment size for Old/Tenured Generation adaptive resizing.
jvm_adaptiveSizeDecrementScaleFactor
integer
4
1
→ 1024
yes
Specifies the scale factor for goal-driven generation resizing.
jvm_CMSTriggerRatio
integer
80
0
→ 100
yes
The percentage of MinHeapFreeRatio allocated before CMS GC starts
jvm_CMSInitiatingOccupancyFraction
integer
-1
-1
→ 99
yes
Configure oldgen occupancy fraction threshold for CMS GC. Negative values default to CMSTriggerRatio.
jvm_CMSClassUnloadingEnabled
categorical
+CMSClassUnloadingEnabled
+CMSClassUnloadingEnabled
, -CMSClassUnloadingEnabled
yes
Enables class unloading when using CMS.
jvm_useCMSInitiatingOccupancyOnly
categorical
-UseCMSInitiatingOccupancyOnly
+UseCMSInitiatingOccupancyOnly
, -UseCMSInitiatingOccupancyOnly
yes
Uses the occupancy value as the only criterion for initiating the CMS collector.
jvm_G1HeapRegionSize
integer
megabytes
8
1
→ 32
yes
Sets the size of the regions for G1.
jvm_G1ReservePercent
integer
10
0
→ 50
yes
Sets the percentage of the heap that is reserved as a false ceiling to reduce the possibility of promotion failure for the G1 collector.
jvm_G1NewSizePercent
integer
5
0
→ 100
yes
Sets the percentage of the heap to use as the minimum for the young generation size.
jvm_G1MaxNewSizePercent
integer
60
0
→ 100
yes
Sets the percentage of the heap size to use as the maximum for young generation size.
jvm_G1MixedGCLiveThresholdPercent
integer
85
0
→ 100
yes
Sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle.
jvm_G1HeapWastePercent
integer
5
0
→ 100
yes
The maximum percentage of the reclaimable heap before starting mixed GC.
jvm_G1MixedGCCountTarget
integer
collections
8
0
→ 100
yes
Sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data. The default is 8 mixed garbage collections.
jvm_G1OldCSetRegionThresholdPercent
integer
10
0
→ 100
yes
The upper limit on the number of old regions to be collected during mixed GC.
jvm_G1AdaptiveIHOPNumInitialSamples
integer
3
1
→ 2097152
yes
The number of completed time periods from initial mark to first mixed GC required to use the input values for prediction of the optimal occupancy to start marking.
jvm_G1UseAdaptiveIHOP
categorical
+G1UseAdaptiveIHOP
+G1UseAdaptiveIHOP
, -G1UseAdaptiveIHOP
yes
Adaptively adjust the initiating heap occupancy from the initial value of InitiatingHeapOccupancyPercent.
jvm_reservedCodeCacheSize
integer
megabytes
240
3
→ 2048
yes
The maximum size of the compiled code cache pool.
jvm_tieredCompilation
categorical
+TieredCompilation
+TieredCompilation
, -TieredCompilation
yes
Enables tiered compilation.
jvm_tieredCompilationStopAtLevel
integer
4
0
→ 4
yes
The highest tier used by tiered compilation.
jvm_compilationThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of compilation threads.
jvm_backgroundCompilation
categorical
+BackgroundCompilation
+BackgroundCompilation
, -BackgroundCompilation
yes
Allow async interpreted execution of a method while it is being compiled.
jvm_inline
categorical
+Inline
+Inline
, -Inline
yes
Enable inlining.
jvm_maxInlineSize
integer
bytes
35
1
→ 2097152
yes
The bytecode size limit (in bytes) of the inlined methods.
jvm_inlineSmallCode
integer
bytes
2000
1
→ 16384
yes
The maximum compiled code size limit (in bytes) of the inlined methods.
jvm_usePerfData
categorical
+UsePerfData
+UsePerfData
, -UsePerfData
yes
Enable monitoring of performance data.
jvm_useNUMA
categorical
-UseNUMA
+UseNUMA
, -UseNUMA
yes
Enable NUMA.
jvm_useBiasedLocking
categorical
+UseBiasedLocking
+UseBiasedLocking
, -UseBiasedLocking
yes
Manage the use of biased locking.
jvm_activeProcessorCount
integer
CPUs
1
1
→ 512
yes
Overrides the number of detected CPUs that the VM will use to calculate the size of thread pools.
Parameter
Default value
Domain
jvm_minHeapSize
Depends on the instance available memory
jvm_maxHeapSize
Depends on the instance available memory
jvm_newSize
Depends on the configured heap
jvm_maxNewSize
Depends on the configured heap
jvm_concurrentGCThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm_parallelGCThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm_compilationThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm.jvm_minHeapSize <= jvm.jvm_maxHeapSize
jvm.jvm_minHeapFreeRatio <= jvm.jvm_maxHeapFreeRatio
jvm.jvm_maxNewSize < jvm.jvm_maxHeapSize * 0.8
jvm.jvm_concurrentGCThreads <= jvm.jvm_parallelGCThreads
jvm_activeProcessorCount < container.cpu_limits/1000 + 1
mem_used
bytes
The total amount of memory used
jvm_heap_size
bytes
The size of the JVM heap memory
jvm_heap_used
bytes
The amount of heap memory used
jvm_heap_util
percent
The utilization % of heap memory
jvm_off_heap_used
bytes
The amount of non-heap memory used
jvm_heap_old_gen_used
bytes
The amount of heap memory used (old generation)
jvm_heap_young_gen_used
bytes
The amount of heap memory used (young generation)
jvm_heap_old_gen_size
bytes
The size of the JVM heap memory (old generation)
jvm_heap_young_gen_size
bytes
The size of the JVM heap memory (young generation)
jvm_memory_used
bytes
The total amount of memory used across all the JVM memory pools
jvm_heap_committed
bytes
The size of the JVM committed memory
jvm_memory_buffer_pool_used
bytes
The total amount of bytes used by buffers within the JVM buffer memory pool
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used
CPUs
The total amount of CPUs used
jvm_gc_time
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities
jvm_gc_count
collections/s
The total number of stop the world JVM garbage collections that have occurred per second
jvm_gc_duration
seconds
The average duration of a stop the world JVM garbage collection
jvm_threads_current
threads
The total number of active threads within the JVM
jvm_threads_deadlocked
threads
The total number of deadlocked threads within the JVM
jvm_compilation_time
milliseconds
The total time spent by the JVM JIT compiler compiling bytecode
jvm_minHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The minimum heap size.
jvm_maxHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum heap size.
jvm_maxRAM
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum amount of memory used by the JVM.
jvm_initialRAMPercentage
real
percent
2
1
→ 100
yes
The percentage of memory used for initial heap size.
jvm_maxRAMPercentage
integer
percent
25
1
→ 100
yes
The percentage of memory used for maximum heap size, on systems with large physical memory size (more than 512MB).
jvm_minRAMPercentage
integer
percent
25
1
→ 100
yes
The percentage of memory used for maximum heap size, on systems with small physical memory size (up to 256MB).
jvm_alwaysPreTouch
categorical
-AlwaysPreTouch
+AlwaysPreTouch
, -AlwaysPreTouch
yes
Pretouch pages during initialization.
jvm_metaspaceSize
integer
megabytes
20
You should select your own domain within 1 and 1024
yes
The initial size of the allocated class metadata space.
jvm_maxMetaspaceSize
integer
megabytes
20
You should select your own domain within 1 and 1024
yes
The maximum size of the allocated class metadata space.
jvm_useTransparentHugePages
categorical
-UseTransparentHugePages
+UseTransparentHugePages
, -UseTransparentHugePages
yes
Enables the use of large pages that can dynamically grow or shrink.
jvm_allocatePrefetchInstr
integer
0
0
→ 3
yes
The prefetch instruction used to prefetch ahead of the allocation pointer.
jvm_allocatePrefetchDistance
integer
bytes
0
0
→ 512
yes
The distance to prefetch ahead of the allocation pointer. A value of -1 selects a system-specific value (automatically determined).
jvm_allocatePrefetchLines
integer
lines
3
0
→ 64
yes
The number of lines to prefetch ahead of array allocation pointer.
jvm_allocatePrefetchStyle
integer
1
0
→ 3
yes
Selects the prefetch instruction to generate.
jvm_useLargePages
categorical
+UseLargePages
+UseLargePages
, -UseLargePages
yes
Enable the use of large page memory.
jvm_aggressiveHeap
categorical
-AggressiveHeap
-AggressiveHeap
, +AggressiveHeap
yes
Optimize heap options for long-running memory intensive apps.
jvm_newRatio
integer
2
0
→ 2147483647
yes
The ratio of old/new generation sizes.
jvm_newSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Sets the initial and maximum size of the heap for the young generation (nursery).
jvm_maxNewSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Specifies the upper bound for the young generation size.
jvm_survivorRatio
integer
8
1
→ 100
yes
The ratio between the Eden space and each Survivor space within the JVM. For example, a jvm_survivorRatio of 6 would mean that the Eden space is 6 times the size of one Survivor space.
jvm_useAdaptiveSizePolicy
categorical
+UseAdaptiveSizePolicy
+UseAdaptiveSizePolicy
, -UseAdaptiveSizePolicy
yes
Enable adaptive generation sizing. Disable it when tuning jvm_targetSurvivorRatio.
jvm_adaptiveSizePolicyWeight
integer
10
0 → 100
yes
The weighting given to the current Garbage Collection time versus previous GC times when checking the timing goal.
jvm_targetSurvivorRatio
integer
50
1
→ 100
yes
The desired percentage of Survivor-space used after young garbage collection.
jvm_minHeapFreeRatio
integer
40
1
→ 99
yes
The minimum percentage of heap free after garbage collection to avoid expansion.
jvm_maxHeapFreeRatio
integer
70
0
→ 100
yes
The maximum percentage of heap free after garbage collection to avoid shrinking.
jvm_maxTenuringThreshold
integer
15
0
→ 15
yes
The maximum value for the tenuring threshold.
jvm_gcType
categorical
G1
Serial
, Parallel
, G1
, Z
, Shenandoah
yes
Type of the garbage collection algorithm.
jvm_concurrentGCThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of threads concurrent garbage collection will use.
jvm_parallelGCThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of threads garbage collection will use for parallel phases.
jvm_maxGCPauseMillis
integer
milliseconds
200
1
→ 1000
yes
Adaptive size policy maximum GC pause time goal, in milliseconds.
jvm_resizePLAB
categorical
+ResizePLAB
+ResizePLAB
, -ResizePLAB
yes
Enables the dynamic resizing of promotion LABs.
jvm_GCTimeRatio
integer
99
0
→ 100
yes
The target fraction of time that can be spent in garbage collection before increasing the heap, computed as 1 / (1 + GCTimeRatio).
jvm_initiatingHeapOccupancyPercent
integer
45
0
→ 100
yes
Sets the percentage of the heap occupancy at which to start a concurrent GC cycle.
jvm_youngGenerationSizeIncrement
integer
20
0
→ 100
yes
The increment size for Young Generation adaptive resizing.
jvm_tenuredGenerationSizeIncrement
integer
20
0
→ 100
yes
The increment size for Old/Tenured Generation adaptive resizing.
jvm_adaptiveSizeDecrementScaleFactor
integer
4
1
→ 1024
yes
Specifies the scale factor for goal-driven generation resizing.
jvm_G1HeapRegionSize
integer
megabytes
8
1
→ 32
yes
Sets the size of the regions for G1.
jvm_G1ReservePercent
integer
10
0
→ 50
yes
Sets the percentage of the heap that is reserved as a false ceiling to reduce the possibility of promotion failure for the G1 collector.
jvm_G1NewSizePercent
integer
5
0
→ 100
yes
Sets the percentage of the heap to use as the minimum for the young generation size.
jvm_G1MaxNewSizePercent
integer
60
0
→ 100
yes
Sets the percentage of the heap size to use as the maximum for young generation size.
jvm_G1MixedGCLiveThresholdPercent
integer
85
0
→ 100
yes
Sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle.
jvm_G1HeapWastePercent
integer
5
0
→ 100
yes
The maximum percentage of the reclaimable heap before starting mixed GC.
jvm_G1MixedGCCountTarget
integer
collections
8
0
→ 100
yes
Sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data. The default is 8 mixed garbage collections.
jvm_G1OldCSetRegionThresholdPercent
integer
10
0
→ 100
yes
The upper limit on the number of old regions to be collected during mixed GC.
jvm_G1AdaptiveIHOPNumInitialSamples
integer
3
1
→ 2097152
yes
The number of completed time periods from initial mark to first mixed GC required to use the input values for prediction of the optimal occupancy to start marking.
jvm_G1UseAdaptiveIHOP
categorical
+G1UseAdaptiveIHOP
+G1UseAdaptiveIHOP
, -G1UseAdaptiveIHOP
yes
Adaptively adjust the initiating heap occupancy from the initial value of InitiatingHeapOccupancyPercent.
jvm_G1PeriodicGCInterval
integer
milliseconds
0
0
→ 3600000
yes
The number of milliseconds to wait after a previous GC before triggering a periodic GC. A value of zero disables periodically enforced GC cycles.
jvm_ZProactive
categorical
+ZProactive
+ZProactive
, -ZProactive
yes
Enable proactive GC cycles.
jvm_ZUncommit
categorical
+ZUncommit
+ZUncommit
, -ZUncommit
yes
Enable uncommit (free) of unused heap memory back to the OS.
jvm_ZAllocationSpikeTolerance
integer
2
1
→ 10
yes
The allocation spike tolerance factor for ZGC.
jvm_ZFragmentationLimit
integer
25
10
→ 90
yes
The maximum allowed heap fragmentation for ZGC.
jvm_ZCollectionInterval
integer
seconds
0
0
→ 3600
yes
Force GC at a fixed time interval (in seconds) for ZGC.
jvm_ZMarkStackSpaceLimit
integer
bytes
8589934592
33554432
→ 1099511627776
yes
The maximum number of bytes allocated for mark stacks for ZGC.
jvm_reservedCodeCacheSize
integer
megabytes
240
32
→ 2048
yes
The maximum size of the compiled code cache pool.
jvm_tieredCompilation
categorical
+TieredCompilation
+TieredCompilation
, -TieredCompilation
yes
Enable tiered compilation.
jvm_tieredCompilationStopAtLevel
integer
4
0
→ 4
yes
The highest tier of JIT compilation to use.
jvm_compilationThreads
integer
threads
You should select your own default value.
You should select your own domain.
yes
The number of compilation threads.
jvm_backgroundCompilation
categorical
+BackgroundCompilation
+BackgroundCompilation
, -BackgroundCompilation
yes
Allow async interpreted execution of a method while it is being compiled.
jvm_inline
categorical
+Inline
+Inline
, -Inline
yes
Enable inlining.
jvm_maxInlineSize
integer
bytes
35
1
→ 2097152
yes
The bytecode size limit (in bytes) of the inlined methods.
jvm_inlineSmallCode
integer
bytes
2000
500
→ 5000
yes
The maximum compiled code size limit (in bytes) of the inlined methods.
jvm_maxInlineLevel
integer
15
1
→ 64
yes
The maximum number of nested calls that are inlined by high tier compiler.
jvm_freqInlineSize
integer
bytes
325
1
→ 3250
yes
The maximum number of bytecode instructions to inline for a method.
jvm_compilationMode
categorical
default
default
, quick-only
, high-only
, high-only-quick-internal
yes
The JVM compilation mode.
jvm_typeProfileWidth
integer
2
1
→ 8
yes
The number of receiver types to record in call/cast profile.
jvm_usePerfData
categorical
+UsePerfData
+UsePerfData
, -UsePerfData
yes
Enable monitoring of performance data.
jvm_useNUMA
categorical
-UseNUMA
+UseNUMA
, -UseNUMA
yes
Enable NUMA.
jvm_useBiasedLocking
categorical
+UseBiasedLocking
+UseBiasedLocking
, -UseBiasedLocking
yes
Manage the use of biased locking.
jvm_activeProcessorCount
integer
CPUs
1
1
→ 512
yes
Overrides the number of detected CPUs that the VM will use to calculate the size of thread pools.
jvm_threadStackSize
integer
kilobytes
1024
128
→ 16384
yes
The thread stack size (in KBytes).
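Many of the jvm_* parameters above map directly onto standard HotSpot command-line flags. The sketch below illustrates that mapping for a handful of them; the flag names (-Xms, -Xmx, -XX:+UseG1GC, -XX:MaxGCPauseMillis) are standard HotSpot options, while the rendering helper itself is an illustration, not Akamas' actual configuration templating.

```python
def render_jvm_flags(params: dict) -> list[str]:
    """Render a few jvm_* parameters into HotSpot command-line flags (sketch)."""
    flags = []
    if "jvm_minHeapSize" in params:
        flags.append(f"-Xms{params['jvm_minHeapSize']}m")
    if "jvm_maxHeapSize" in params:
        flags.append(f"-Xmx{params['jvm_maxHeapSize']}m")
    if "jvm_gcType" in params:
        # e.g. "G1" -> -XX:+UseG1GC, "Parallel" -> -XX:+UseParallelGC
        flags.append(f"-XX:+Use{params['jvm_gcType']}GC")
    if "jvm_maxGCPauseMillis" in params:
        flags.append(f"-XX:MaxGCPauseMillis={params['jvm_maxGCPauseMillis']}")
    return flags

print(render_jvm_flags({"jvm_minHeapSize": 512, "jvm_maxHeapSize": 2048, "jvm_gcType": "G1"}))
# → ['-Xms512m', '-Xmx2048m', '-XX:+UseG1GC']
```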
Parameter
Default value
Domain
jvm_minHeapSize
Depends on the instance available memory
jvm_maxHeapSize
Depends on the instance available memory
jvm_newSize
Depends on the configured heap
jvm_maxNewSize
Depends on the configured heap
jvm_concurrentGCThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm_parallelGCThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm_compilationThreads
Depends on the available CPU cores
Depends on the available CPU cores
jvm.jvm_minHeapSize <= jvm.jvm_maxHeapSize
jvm.jvm_minHeapFreeRatio <= jvm.jvm_maxHeapFreeRatio
jvm.jvm_maxNewSize < jvm.jvm_maxHeapSize * 0.8
jvm.jvm_concurrentGCThreads <= jvm.jvm_parallelGCThreads
jvm_activeProcessorCount < container.cpu_limits/1000 + 1
jvm_heap_size
bytes
The size of the JVM heap memory
jvm_heap_used
bytes
The amount of heap memory used
jvm_heap_util
percent
The utilization % of heap memory
jvm_memory_used
bytes
The total amount of memory used across all the JVM memory pools
jvm_memory_used_details
bytes
The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space)
jvm_memory_buffer_pool_used
bytes
The total amount of bytes used by buffers within the JVM buffer memory pool
jvm_gc_time
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities
jvm_gc_time_details
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew)
jvm_gc_count
collections/s
The total number of stop the world JVM garbage collections that have occurred per second
jvm_gc_count_details
collections/s
The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS)
jvm_gc_duration
seconds
The average duration of a stop the world JVM garbage collection
jvm_gc_duration_details
seconds
The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS)
jvm_threads_current
threads
The total number of active threads within the JVM
jvm_threads_deadlocked
threads
The total number of deadlocked threads within the JVM
jvm_compilation_time
milliseconds
The total time spent by the JVM JIT compiler compiling bytecode
j9vm_minHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Minimum heap size (in megabytes)
j9vm_maxHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Maximum heap size (in megabytes)
j9vm_minFreeHeap
real
percent
0.3
0.1
→ 0.5
yes
Specify the minimum % free heap required after global GC
j9vm_maxFreeHeap
real
percent
0.6
0.4
→ 0.9
yes
Specify the maximum % free heap required after global GC
j9vm_gcPolicy
categorical
gencon
gencon
, subpool
, optavgpause
, optthruput
, nogc
yes
GC policy to use
j9vm_gcThreads
integer
threads
You should select your own default value.
1
→ 64
yes
Number of threads the garbage collector uses for parallel operations
j9vm_scvTenureAge
integer
10
1
→ 14
yes
Set the initial tenuring threshold for generational concurrent GC policy
j9vm_scvAdaptiveTenureAge
categorical
blank
blank, -Xgc:scvNoAdaptiveTenure
yes
Enable the adaptive tenure age for generational concurrent GC policy
j9vm_newSpaceFixed
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The fixed size of the new area when using the gencon GC policy. Must not be set alongside min or max
j9vm_minNewSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The initial size of the new area when using the gencon GC policy
j9vm_maxNewSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum size of the new area when using the gencon GC policy
j9vm_oldSpaceFixed
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The fixed size of the old area when using the gencon GC policy. Must not be set alongside min or max
j9vm_minOldSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The initial size of the old area when using the gencon GC policy
j9vm_maxOldSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum size of the old area when using the gencon GC policy
j9vm_concurrentScavenge
categorical
concurrentScavenge
concurrentScavenge
, noConcurrentScavenge
yes
Support pause-less garbage collection mode with gencon
j9vm_gcPartialCompact
categorical
nopartialcompactgc
nopartialcompactgc
, partialcompactgc
yes
Enable partial compaction
j9vm_concurrentMeter
categorical
soa
soa
, loa
, dynamic
yes
Determine which area is monitored by the concurrent mark
j9vm_concurrentBackground
integer
0
0
→ 128
yes
The number of background threads assisting the mutator threads in concurrent mark
j9vm_concurrentSlack
integer
megabytes
0
You should select your own domain.
yes
The target size of free heap space for concurrent collectors
j9vm_concurrentLevel
integer
percent
8
0
→ 100
yes
The ratio between the amount of heap allocated and the amount of heap marked
j9vm_gcCompact
categorical
blank
blank, -Xcompactgc
, -Xnocompactgc
yes
Enables full compaction on all garbage collections (system and global)
j9vm_minGcTime
real
percent
0.05
0.0
→ 1.0
yes
The minimum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values
j9vm_maxGcTime
real
percent
0.13
0.0
→ 1.0
yes
The maximum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values
j9vm_loa
categorical
loa
loa
, noloa
yes
Enable the allocation of the large area object during garbage collection
j9vm_loa_initial
real
0.05
0.0
→ 0.95
yes
The initial portion of the tenure area allocated to the large area object
j9vm_loa_minimum
real
0.01
0.0
→ 0.95
yes
The minimum portion of the tenure area allocated to the large area object
j9vm_loa_maximum
real
0.5
0.0
→ 0.95
yes
The maximum portion of the tenure area allocated to the large area object
j9vm_jitOptlevel
ordinal
noOpt
noOpt
, cold
, warm
, hot
, veryHot
, scorching
yes
Force the JIT compiler to compile all methods at a specific optimization level
j9vm_compilationThreads
integer
threads
You should select your own default value.
1
→ 7
yes
Number of JIT threads
j9vm_codeCacheTotal
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Maximum size limit in MB for the JIT code cache
j9vm_jit_count
integer
10000
0
→ 1000000
yes
The number of times a method is called before it is compiled
j9vm_lockReservation
categorical
blank
blank, -XlockReservation
no
Enables an optimization that presumes a monitor is owned by the thread that last acquired it
j9vm_compressedReferences
categorical
blank
blank, -Xcompressedrefs
, -Xnocompressedrefs
yes
Enable/disable the use of compressed references
j9vm_aggressiveOpts
categorical
blank
blank, -Xaggressive
yes
Enable the use of aggressive performance optimization features, which are expected to become default in upcoming releases
j9vm_virtualized
categorical
blank
blank, -Xtune:virtualized
yes
Optimize the VM for virtualized environments, reducing CPU usage when idle
j9vm_shareclasses
categorical
blank
blank, -Xshareclasses
yes
Enable class sharing
j9vm_quickstart
categorical
blank
blank, -Xquickstart
yes
Run JIT with only a subset of optimizations, improving the performance of short-running applications
j9vm_minimizeUserCpu
categorical
blank
blank, -Xthr:minimizeUserCPU
yes
Minimizes user-mode CPU usage in thread synchronization where possible
j9vm_minNewSpace
25% of j9vm_minHeapSize
must not exceed j9vm_minHeapSize
j9vm_maxNewSpace
25% of j9vm_maxHeapSize
must not exceed j9vm_maxHeapSize
j9vm_minOldSpace
75% of j9vm_minHeapSize
must not exceed j9vm_minHeapSize
j9vm_maxOldSpace
same as j9vm_maxHeapSize
must not exceed j9vm_maxHeapSize
j9vm_gcThreads
number of CPUs - 1, up to a maximum of 64
capped to default, no benefit in exceeding that value
j9vm_compressedReferences
enabled for j9vm_maxHeapSize <= 57 GB
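The gencon default sizing rules above can be expressed as a small helper. This is a sketch of the rules exactly as stated (25%/75% splits of the heap sizes, GC threads at CPUs - 1 capped at 64), not OpenJ9's actual internal logic.

```python
def j9_gencon_defaults(min_heap_mb: int, max_heap_mb: int, cpus: int) -> dict:
    """Default gencon area sizes and GC thread count, per the rules above (sketch)."""
    return {
        "j9vm_minNewSpace": min_heap_mb // 4,       # 25% of j9vm_minHeapSize
        "j9vm_maxNewSpace": max_heap_mb // 4,       # 25% of j9vm_maxHeapSize
        "j9vm_minOldSpace": min_heap_mb * 3 // 4,   # 75% of j9vm_minHeapSize
        "j9vm_maxOldSpace": max_heap_mb,            # same as j9vm_maxHeapSize
        "j9vm_gcThreads": min(cpus - 1, 64),        # CPUs - 1, capped at 64
    }

print(j9_gencon_defaults(1024, 4096, 8))
```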
jvm.j9vm_minHeapSize < jvm.j9vm_maxHeapSize
jvm.j9vm_minNewSpace < jvm.j9vm_maxNewSpace && jvm.j9vm_minNewSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxNewSpace < jvm.j9vm_maxHeapSize
jvm.j9vm_minOldSpace < jvm.j9vm_maxOldSpace && jvm.j9vm_minOldSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxOldSpace < jvm.j9vm_maxHeapSize
jvm.j9vm_loa_minimum <= jvm.j9vm_loa_initial && jvm.j9vm_loa_initial <= jvm.j9vm_loa_maximum
jvm.j9vm_minFreeHeap + 0.05 < jvm.j9vm_maxFreeHeap
jvm.j9vm_minGcTime < jvm.j9vm_maxGcTime
Node JS 18 runtime |
The Web Application optimization pack provides a component type suited to monitoring the performance of a generic web application from the end-user perspective, in order to evaluate the configuration of the technologies in the underlying stack.
The bundled component type provides Akamas with performance metrics representing concepts like throughput, response time, error rate, and user load, split into different levels of detail such as transaction, page, and single request.
Here’s the command to install the Web Application optimization pack using the Akamas CLI:
Metric | Unit | Description |
---|---|---|
Parameter | Type | Unit | Default Value | Domain | restart | Description |
---|---|---|---|---|---|---|
Component Type | Description |
---|
Component Type | Description |
---|
Metric | Unit | Description |
---|
Metric | Unit | Description |
---|
Unit | Description |
---|
cpu_used
CPUs
The total amount of CPUs used
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
memory_used
bytes
The total amount of memory used
memory_util
percent
The average memory utilization %
nodejs_gc_heap_used
bytes
GC heap used
nodejs_rss
bytes
Process Resident Set Size (RSS)
nodejs_v8_heap_total
bytes
V8 heap total
nodejs_v8_heap_used
bytes
V8 heap used
nodejs_number_active_threads
threads
Number of active threads
nodejs_suspension_time
percent
Suspension time %
nodejs_active_handles
handles
Number of active libuv handles grouped by handle type. Each handle type is a C++ class name
nodejs_active_handles_total
handles
Total number of active handles
nodejs_active_requests
requests
Number of active libuv requests grouped by request type. Each request type is a C++ class name
nodejs_active_requests_total
requests
Total number of active requests
nodejs_eventloop_lag_max_seconds
seconds
The maximum recorded event loop delay
nodejs_eventloop_lag_mean_seconds
seconds
The mean of the recorded event loop delays
nodejs_eventloop_lag_min_seconds
seconds
The minimum recorded event loop delay
nodejs_eventloop_lag_p50_seconds
seconds
The 50th percentile of the recorded event loop delays
nodejs_eventloop_lag_p90_seconds
seconds
The 90th percentile of the recorded event loop delays
nodejs_eventloop_lag_p99_seconds
seconds
The 99th percentile of the recorded event loop delays
nodejs_eventloop_lag_seconds
seconds
Lag of event loop in seconds
nodejs_external_memory_bytes
bytes
NodeJS external memory size in bytes
nodejs_gc_duration_seconds_bucket
seconds
The total count of observations for a bucket in the histogram. Garbage collection duration by kind, one of major, minor, incremental or weakcb
nodejs_gc_duration_seconds_count
seconds
The total number of observations for Garbage collection duration by kind, one of major, minor, incremental or weakcb
nodejs_gc_duration_seconds_sum
seconds
The total sum of observations for Garbage collection duration by kind, one of major, minor, incremental or weakcb
nodejs_heap_size_total_bytes
bytes
Process heap size from NodeJS in bytes
nodejs_heap_size_used_bytes
bytes
Process heap size used from NodeJS in bytes
nodejs_heap_space_size_available_bytes
bytes
Process heap size available from NodeJS in bytes
nodejs_heap_space_size_total_bytes
bytes
Process heap space size total from NodeJS in bytes
nodejs_heap_space_size_used_bytes
bytes
Process heap space size used from NodeJS in bytes
process_cpu_seconds_total
seconds
Total user and system CPU time spent in seconds
process_cpu_system_seconds_total
seconds
Total system CPU time spent in seconds
process_cpu_user_seconds_total
seconds
Total user CPU time spent in seconds
process_heap_bytes
bytes
Process heap size in bytes
process_max_fds
fds
Maximum number of open file descriptors
process_open_fds
fds
Number of open file descriptors
process_resident_memory_bytes
bytes
Resident memory size in bytes
process_virtual_memory_bytes
bytes
Virtual memory size in bytes
v8_allocation_size_pretenuring
categorical
--allocation-site-pretenuring
--allocation-site-pretenuring
, --no-allocation-site-pretenuring
yes
Pretenure with allocation sites
v8_min_semi_space_size
integer
megabytes
0
0
→ 1048576
yes
Min size of a semi-space (in MBytes); the new space consists of two semi-spaces
v8_max_semi_space_size
integer
megabytes
0
0
→ 1048576
yes
Max size of a semi-space (in MBytes); the new space consists of two semi-spaces. This parameter is equivalent to v8_max_semi_space_size_ordinal.
v8_max_semi_space_size_ordinal
ordinal
megabytes
16
2
, 4
, 6
, 8
, 16
, 32
, 64
, 128
, 256
, 512
, 1024
, 2048
, 4096
, 8192
, 16384
, 32768
yes
Max size of a semi-space (in MBytes); the new space consists of two semi-spaces. This parameter is equivalent to v8_max_semi_space_size but forces power-of-2 values.
v8_semi_space_grouth_factor
integer
2
0
→ 100
yes
Factor by which to grow the new space
v8_max_old_space_size
integer
megabytes
0
0
→ 1048576
yes
Max size of the old space (in Mbytes)
v8_max_heap_size
integer
megabytes
0
0
→ 1048576
yes
Max size of the heap (in MBytes); both max_semi_space_size and max_old_space_size take precedence over it. All three flags cannot be specified at the same time.
v8_initial_heap_size
integer
megabytes
0
0
→ 1048576
yes
Initial size of the heap (in Mbytes)
v8_initial_old_space_size
integer
megabytes
0
0
→ 1048576
yes
Initial old space size (in Mbytes)
v8_parallel_scavenge
categorical
--parallel-scavenge
--parallel-scavenge
, --no-parallel-scavenge
yes
Parallel scavenge
v8_scavenge_task_trigger
integer
80
1
→ 100
yes
Scavenge task trigger in percent of the current heap limit
v8_scavenge_separate_stack_scanning
categorical
--no-scavenge-separate-stack-scanning
--scavenge-separate-stack-scanning
, --no-scavenge-separate-stack-scanning
yes
Use a separate phase for stack scanning in scavenge
v8_concurrent_marking
categorical
--concurrent-marking
--concurrent-marking
, --no-concurrent-marking
yes
Use concurrent marking
v8_parallel_marking
categorical
--parallel-marking
--parallel-marking
, --no-parallel-marking
yes
Use parallel marking in atomic pause
v8_concurrent_sweeping
categorical
--concurrent-sweeping
--concurrent-sweeping
, --no-concurrent-sweeping
yes
Use concurrent sweeping
v8_heap_growing_percent
integer
0
0
→ 99
yes
Specifies heap growing factor as (1 + heap_growing_percent/100)
v8_os_page_size
integer
kilobytes
0
0
→ 1048576
yes
Override OS page size (in KBytes)
v8_stack_size
integer
kilobytes
984
16
→ 1048576
yes
Default size of stack region v8 is allowed to use (in kBytes)
v8_single_threaded
categorical
--no-single-threaded
--single-threaded
, --no-single-threaded
yes
Disable the use of background tasks
v8_single_threaded_gc
categorical
--no-single-threaded-gc
--single-threaded-gc
, --no-single-threaded-gc
yes
Disable the use of background gc tasks
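As with the JVM parameters, the v8_* parameters above correspond to V8 command-line flags (passed to Node.js directly or via NODE_OPTIONS). The sketch below renders a few of them; the flag names are real V8/Node.js flags, while the rendering helper is illustrative rather than Akamas' actual templating.

```python
def render_v8_flags(params: dict) -> list[str]:
    """Map a few v8_* parameters to their V8/Node.js command-line flags (sketch)."""
    mapping = {
        "v8_max_old_space_size": "--max-old-space-size={}",
        "v8_max_semi_space_size": "--max-semi-space-size={}",
        "v8_stack_size": "--stack-size={}",
    }
    flags = []
    for name, value in params.items():
        template = mapping.get(name)
        if template:
            flags.append(template.format(value))
    return flags

print(render_v8_flags({"v8_max_old_space_size": 4096}))
# → ['--max-old-space-size=4096']
```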
transactions_throughput | transactions/s | The number of transactions executed per second |
transactions_response_time | milliseconds | The average transaction response time |
transactions_response_time_max | milliseconds | The maximum recorded transaction response time |
transactions_response_time_min | milliseconds | The minimum recorded transaction response time |
pages_throughput | pages/s | The number of pages requested per second |
pages_response_time | milliseconds | The average page response time |
pages_response_time_max | milliseconds | The maximum recorded page response time |
pages_response_time_min | milliseconds | The minimum recorded page response time |
requests_throughput | requests/s | The number of requests performed per second |
requests_response_time | milliseconds | The average request response time |
requests_response_time_max | milliseconds | The maximum recorded request response time |
requests_response_time_min | milliseconds | The minimum recorded request response time |
transactions_error_rate | percent | The percentage of transactions flagged as error |
transactions_error_throughput | transactions/s | The number of transactions flagged as error per second |
pages_error_rate | percent | The percentage of pages flagged as error |
pages_error_throughput | pages/s | The number of pages flagged as error per second |
requests_error_rate | percent | The percentage of requests flagged as error |
requests_error_throughput | requests/s | The number of requests flagged as error per second |
users | users | The number of users performing requests on the web application |
The Golang runtime 1 |
Web Application |
The Kubernetes optimization pack allows optimizing containerized applications running on a Kubernetes cluster. Through this optimization pack, Akamas is able to tackle the problem of distributing resources to containerized applications in order to minimize waste and ensure the quality of service.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Memory allocation
CPU allocation
Number of replicas
Similarly, the bundled metrics provide visibility on the following aspects of tuned applications:
Memory utilization
CPU utilization
The component types provided in this optimization pack allow modeling the entities found in a Kubernetes-based application, optimizing their parameters, and monitoring the key performance metrics.
Here’s the command to install the Kubernetes optimization pack using the Akamas CLI:
Component Type | Description |
---|---|
Name | Unit | Description |
---|
Name | Unit | Description |
---|
Kubernetes Container
Kubernetes Pod
Kubernetes Workload
Kubernetes Namespace
Kubernetes Cluster
k8s_cluster_cpu | millicores | The CPUs in the cluster |
k8s_cluster_cpu_available | millicores | The CPUs available for additional pods in the cluster |
k8s_cluster_cpu_util | percent | The percentage of used CPUs in the cluster |
k8s_cluster_cpu_request | millicores | The total CPUs requested in the cluster |
k8s_cluster_memory | bytes | The overall memory in the cluster |
k8s_cluster_memory_available | bytes | The amount of memory available for additional pods in the cluster |
k8s_cluster_memory_util | percent | The percentage of used memory in the cluster |
k8s_cluster_memory_request | bytes | The total memory requested in the cluster |
k8s_cluster_nodes | nodes | The number of nodes in the cluster |
k8s_namespace_cpu_limit | millicores | The CPU limit for the namespace |
k8s_namespace_cpu_request | millicores | The CPUs requested for the namespace |
k8s_namespace_memory_limit | bytes | The memory limit for the namespace |
k8s_namespace_memory_request | bytes | Memory requested for the namespace |
k8s_namespace_running_pods | pods | The number of running pods in the namespace |
Component Type | Description |
---|
Amazon Web Services Elastic Compute Cloud |
Amazon Web Services Lambda |
This page describes the Optimization Pack for AWS EC2.
Notice: for the following parameters to take effect, the instance needs to be stopped and changes need to be applied before restarting the instance.
The following table shows a sample of constraints that are required in the definition of the study, depending on the tuned parameters.
Notice that AWS does not support all combinations of instance types and sizes, so it is better to specify them beforehand in your constraints to avoid unnecessary experiment failures.
To limit the combination of the instance type and sizes to those only supported by AWS or to those of interest for a particular study you can use a constraint such as the following:
This constraint is built by connecting multiple simpler constraints with the OR operator (||).
This constraint instructs Akamas to use only the large, xlarge, and 2xlarge sizes for instances of type c5.
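The same type/size whitelist can be sketched in Python to make the logic concrete. Each entry in the dict below corresponds to one ||-joined sub-constraint; the allowed-size sets are illustrative and should match the instance families your study actually targets.

```python
# Illustrative whitelist: each entry plays the role of one ||-joined sub-constraint.
ALLOWED_SIZES = {
    "c5": {"large", "xlarge", "2xlarge"},
    # extend with the other families your study targets, e.g. "m5": {...}
}

def is_allowed(instance_type: str, instance_size: str) -> bool:
    """True when the (type, size) pair would satisfy the whitelist constraint."""
    return instance_size in ALLOWED_SIZES.get(instance_type, set())

print(is_allowed("c5", "xlarge"))
# → True
```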
Name | Unit | Description |
---|---|---|
Name | Unit | Description |
---|---|---|
Name | Unit | Description |
---|---|---|
Name | Unit | Description |
---|---|---|
Name | Unit | Description |
---|---|---|
Name | Unit | Type | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Component Type | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Type | Default Value | Domain | Restart | Description |
---|
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
network_in_bytes_details
bytes/s
The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details
bytes/s
The number of outbound network packets in bytes per second broken down by network device (e.g., eth01)
disk_read_bytes
bytes/s
The number of bytes per second read across all disks
disk_write_bytes
bytes/s
The number of bytes per second written across all disks
aws_ec2_disk_iops_reads
ops/s
The per second average number of EBS IO disk-read operations summed across all disks
aws_ec2_disk_iops_writes
ops/s
The per second average number of EBS IO disk-write operations summed across all disks
aws_ec2_disk_iops
ops/s
The per second average number of EBS IO disk operations summed across all disks
aws_ec2_credits_cpu_available
credits
The number of earned CPU credits that an instance has accrued since it was launched or started. Credits are accrued in the credit balance after they are earned, and removed from the credit balance when they are spent
aws_ec2_credits_cpu_used
credits
The number of CPU credits spent by the instance for CPU utilization
aws_ec2_ebs_credits_io_util
percent
The percentage of I/O credits remaining in the burst bucket
aws_ec2_ebs_credits_bytes_util
percent
The percentage of throughput credits remaining in the burst bucket
aws_ec2_price
dollars
AWS EC2 hourly instance price (on-demand)
aws_ec2_instance_type
categorical
m5
c5
, c5d
, c5a
, c6g
, c6gd
, r5
, r5d
, r5a
, r5ad
, r6g
, r6gd
, m5
, m5d
, m5a
, m5ad
, m6g
, m6gd
, t3
, t3a
, a1
, z1d
yes
Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, optimized to fit different use cases
aws_ec2_instance_size
ordinal
large
nano
, micro
, small
, medium
, large
, xlarge
, 2xlarge
, 4xlarge
, 8xlarge
, 9xlarge
, 12xlarge
, 16xlarge
, 18xlarge
, 24xlarge
yes
k8s_workload_desired_pods | pods | Number of desired pods per workload |
k8s_workload_running_pods | pods | The number of running pods per workload |
k8s_workload_ready_pods | pods | The number of ready pods per workload |
k8s_workload_cpu_used | millicores | The total amount of CPU used by the entire workload |
k8s_workload_memory_used | bytes | The total amount of memory used by the entire workload |
k8s_workload_cpu_request | millicores | The total amount of CPU requests for the workload |
k8s_workload_cpu_limit | millicores | The total amount of CPU limits for the entire workload |
k8s_workload_memory_request | bytes | The total amount of memory requests for the workload |
k8s_workload_memory_limit | bytes | The total amount of memory limits for the entire workload |
k8s_workload_replicas | integer | pods | | | yes | Number of desired pods in the deployment |
IBM WebSphere Application Server 8.5 |
IBM WebSphere Liberty ND |
Name | Unit | Description |
---|---|---|
Name | Unit | Type | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
aws_lambda_duration | seconds | The duration of an AWS Lambda function execution |
aws_lambda_memory_size | megabytes | The memory size allocated for an AWS Lambda function |
aws_lambda_cost | dollars | The execution cost of an AWS Lambda function |
aws_lambda_reserved_concurrency | instances | The maximum number of concurrent instances for an AWS Lambda function |
aws_lambda_provisioned_concurrency | instances | The number of prepared environments for an AWS Lambda function |
aws_lambda_memory_size | megabytes | integer | 128 | 128 → 10240 | no | The memory size allocated for an AWS Lambda function |
aws_lambda_reserved_concurrency | instances | integer | 100 | 0 → 1000 | no | The maximum number of concurrent instances for an AWS Lambda function |
aws_lambda_provisioned_concurrency | instances | integer | 0 | 0 → 100 | no | The number of prepared environments for an AWS Lambda function |
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
cm_maxPoolSize | integer | | 50 | 0 → 1000 | yes | Maximum number of physical connections for a pool |
cm_minPoolSize | integer | | 0 | 0 → 1000 | yes | Minimum number of physical connections for a pool |
cm_maxConnectionsPerThread | integer | | 1 | 0 → 30 | yes | Maximum number of connections per thread |
cm_numConnectionsPerThreadLocal | integer | | 1 | 0 → 30 | yes | Maximum number of connections per local thread |
cm_purgePolicy | categorical | | EntirePool | EntirePool, FailingConnectionOnly, ValidateAllConnections | yes | Purge policy |
cm_connectionTimeout | categorical | | 30s | -1, 0, 5s, 10s, 30s, 60s, 90s, 120s | yes | Connection timeout |
cm_maxIdleTime | categorical | | 30m | -1, 1m, 5m, 10m, 15m, 30m | yes | Maximum idle time |
cm_reapTime | categorical | | 3m | -1, 30s, 1m, 3m, 5m | yes | Reap time |
exe_coreThreads | integer | | -1 | -1, 4, 6, 8, 10, 12, 14, 16, 18, 20 | yes | Number of core threads |
exe_maxThreads | integer | | -1 | -1 → 200 | yes | Maximum number of threads |
db_minPoolSize | integer | | 0 | 0 → 1000 | yes | Minimum pool size |
db_maxPoolSize | integer | | 50 | 0 → 1000 | yes | Maximum pool size |
db_connectionWaitTime | integer | | 180 | 0 → 3600 | yes | Connection wait time |
Parameter | Unit | Description |
---|---|---|
Parameter | Unit | Description |
---|---|---|
Parameter | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Restart | Description | |
---|---|---|---|---|---|---|
Metric | Description |
---|---|
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
pg_connections | connections | The number of connections in the database. |
pg_start_time | seconds | The total amount of time spent by PostgreSQL to boot up. |
pg_commits | commits/s | The number of transactions committed per second. |
pg_rollbacks | rollbacks/s | The number of transactions rolled back per second. |
pg_checkpoint_executed | checkpoints/s | The total number of checkpoint operations executed by PostgreSQL. |
pg_disk_used | bytes | The amount of disk space used by PostgreSQL. |
pg_blocks_read | blocks/s | The number of blocks read per second by PostgreSQL. |
pg_blocks_cache_hit | blocks/s | The number of blocks found in the buffer cache. |
pg_backend_fsync_count | syncs | The total number of times PostgreSQL executed a sync of data to disk. |
pg_effective_io_concurrency | integer | iops | 1 | 0 → 1000 | no | The number of simultaneous requests that can be handled efficiently by the disk subsystem. |
pg_bgwriter_delay | integer | milliseconds | 200 | 10 → 10000 | no | The delay between activity rounds for the background writer. |
pg_bgwriter_lru_maxpages | integer | buffers | 100 | 0 → 1073741823 | no | The maximum number of LRU pages to flush per round by the background writer. |
pg_checkpoint_completion_target | real | | 0.5 | 0.0 → 1.0 | no | The time spent flushing dirty buffers during checkpoint, as a fraction of the checkpoint interval. |
pg_effective_cache_size | integer | kilobytes | 524288 | 1 → 2147483647 | no | The planner's assumption about the effective size of the disk cache available to a single query. A higher value makes it more likely index scans will be used; a lower value makes it more likely sequential scans will be used. |
read_rate | ops/s | Read queries per second |
read_response_time_p99 | milliseconds | 99th percentile of read queries response time |
read_response_time_avg | milliseconds | Average response time of read queries |
write_rate | ops/s | Write queries per second |
write_response_time_p99 | milliseconds | 99th percentile of write queries response time |
read_response_time_max | milliseconds | Maximum response time of read queries |
total_rate | ops/s | Total queries per second |
write_response_time_avg | milliseconds | Average response time of write queries |
write_response_time_max | milliseconds | Maximum response time of write queries |
read_response_time_p90 | milliseconds | 90th percentile of read queries response time |
write_response_time_p90 | milliseconds | 90th percentile of write queries response time |
cassandra_compactionStrategy | categorical | | | | yes | Compaction strategy in use |
cassandra_concurrentReads | integer | | | | yes | Concurrent reads |
cassandra_concurrentWrites | integer | | | | yes | Concurrent writes |
cassandra_fileCacheSizeInMb | integer | megabytes | | | yes | Total memory to use for SSTable-reading buffers |
cassandra_memtableCleanupThreshold | real | | | | yes | Ratio used for automatic memtable flush |
cassandra_concurrentCompactors | integer | | | | yes | Sets the number of concurrent compaction processes allowed to run simultaneously on a node |
cassandra_commitlog_compression | categorical | | | | | Sets the compression of the commit log |
cassandra_commitlog_segment_size_in_mb | integer | megabytes | | | | Sets the segment size of the commit log |
cassandra_compaction_throughput_mb_per_sec | integer | megabytes/s | | | | Sets the throughput for compaction |
cassandra_commitlog_sync_period_in_ms | integer | milliseconds | | | | Sets the sync period of the commit log |
The MySQL optimization pack allows the user to monitor a MySQL instance and explore the configuration space of its parameters. The optimization pack provides parameters and metrics that can be leveraged to reach, among others, two main goals:
Throughput optimization - increasing the capacity of a MySQL deployment to serve clients
Cost optimization - decreasing the size of a MySQL deployment while guaranteeing the same service level
To reach the aforementioned goals, the optimization pack focuses on three key areas of tuning of InnoDB, the default storage engine for MySQL:
Buffer management
Threading
Paging
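These three areas map onto familiar InnoDB settings. The following my.cnf fragment is purely illustrative (the values are examples, not tuned recommendations) and shows one setting per area:

```ini
# Illustrative my.cnf fragment; values are examples, not recommendations
[mysqld]
# Buffer management
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 4
# Threading
innodb_thread_concurrency = 16
# Paging / flushing of dirty pages
innodb_max_dirty_pages_pct = 75
```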
The following table describes the supported component types by the MySQL optimization pack.
Here’s the command to install the MySQL optimization pack using the Akamas CLI:
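The command itself does not appear in this extract; assuming the standard Akamas CLI syntax for installing packs (check the CLI reference of your Akamas release), it looks like:

```shell
# Assumed Akamas CLI syntax; verify against your installation
akamas install optimization-pack MySQL
```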
The PostgreSQL optimization pack allows you to explore and tune the configuration space of PostgreSQL parameters. In this way, an Akamas study can increase transaction throughput or minimize resource consumption for your typical workload, cutting costs. The main tuning areas covered by the parameters provided in this optimization pack are:
Background writer management
VACUUM management
Deadlock and concurrency management
Write-ahead management
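These areas correspond to standard postgresql.conf settings. The fragment below is an illustrative example only (one representative setting per area; values are not recommendations):

```ini
# Illustrative postgresql.conf fragment, one setting per tuning area
bgwriter_delay = 200ms                 # background writer management
autovacuum_vacuum_scale_factor = 0.2   # VACUUM management
deadlock_timeout = 1s                  # deadlock and concurrency management
max_wal_size = 1GB                     # write-ahead log (WAL) management
```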
The optimization pack includes metrics to monitor:
Query executions
Concurrency and locks
Buffers and disk I/O
These component types model different PostgreSQL releases. They provide a subset of the available parameters selected for the best optimization results.
Here’s the command to install the PostgreSQL optimization pack using the Akamas CLI:
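The command itself is missing from this extract; assuming the same Akamas CLI pattern used for the other packs (to be verified against your CLI reference), it would be:

```shell
# Assumed Akamas CLI syntax; verify against your installation
akamas install optimization-pack PostgreSQL
```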
Component Type | Description |
---|---|
Component Type | Description |
---|---|
Component Type | Description |
---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Type | Unit | Default value | Domain | Description | Restart |
---|---|---|---|---|---|---|
Parameter | Unit | Description |
---|---|---|
Parameter | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
MySQL 8.0 Database, deployed on-premises.
PostgreSQL 11
PostgreSQL 12
mysql_aborted_connection | connections | The number of failed attempts to connect to MySQL |
mysql_connections_current | connections | The current number of connections opened towards MySQL |
mysql_connections_max | connections | The maximum number of connections that can be opened towards MySQL |
mysql_innodb_buffer_pool_size | bytes | The size of the memory area where InnoDB caches tables and indexes |
mysql_mem_usage | bytes | MySQL instance memory consumption divided by type (innodb_buffer_pool_data, innodb_log_buffer, query_cache, key_buffer_size) |
mysql_query_throughput | queries/s | The number of queries per second processed by MySQL |
mysql_slow_query_rate | queries/s | The rate of queries that are considered slow based on the parameters mysql_long_query_time and mysql_long_query_min_examined_row |
mysql_statements_rate | statements/s | The rate at which each type of statement (select, insert, update, delete) is executed per second |
mysql_threads_running | threads | The number of threads running in the MySQL instance |
mysql_transactions_rate | transactions/s | The rate at which each type of transaction (handler label) is executed (commit, rollback, prepare, savepoint) |
network_in_bytes_rate | bytes/s | The amount of inbound network data in bytes per second |
network_out_bytes_rate | bytes/s | The amount of outbound network data in bytes per second |
mysql_innodb_buffer_pool_size | integer | bytes | | | The size of the buffer pool used by InnoDB to cache tables and indexes in memory | no |
mysql_innodb_buffer_pool_instances | integer | regions | | | The number of regions that the InnoDB buffer pool is divided into | no |
mysql_innodb_thread_sleep_delay | integer | milliseconds | | | The number of milliseconds each InnoDB thread sleeps before joining the InnoDB queue | no |
mysql_innodb_flush_method | string | - | | | The method used to flush data to InnoDB's datafiles and log files | yes |
mysql_innodb_log_file_size | integer | bytes | | | The size of each log file in each log group maintained by InnoDB. The total size of log files cannot exceed 4GB. | yes |
mysql_innodb_thread_concurrency | integer | threads | | | The limit on the number of OS threads used by InnoDB to serve user requests | no |
mysql_innodb_max_dirty_pages_pct | real | percentage | | | The limit on the percentage of dirty pages in the buffer pool of InnoDB | no |
mysql_innodb_read_ahead_threshold | integer | pages | | | The number of sequentially read pages after which MySQL initiates an async read of the following extent (a group of pages within a tablespace) | no |
mysql_innodb_adaptive_hash_index | | - | | | Whether or not to enable the adaptive hash index optimization for InnoDB tables | no |
mysql_innodb_fill_factor | integer | percentage | | | The percentage of each B-tree page that is filled during a sorted index build | no |
pg_connections | connections | The number of connections in the db. |
pg_total_locks | locks | The total number of locks (of any type) performed. |
pg_conflicts | conflicts/s | The number of queries canceled due to conflicts with recovery in this database per second. |
pg_deadlocks | deadlocks/s | The number of deadlocks detected in this database per second. |
pg_commits | commits/s | The number of transactions committed per second. |
pg_rollbacks | rollbacks/s | The number of transactions rollbacked per second. |
pg_longest_transaction | seconds | The max duration in seconds any active transaction has been running. |
pg_fetched_rows | rows/s | The number of rows fetched by queries per second. |
pg_inserted_rows | rows/s | The number of rows inserted by queries per second. |
pg_updated_rows | rows/s | The number of rows updated by queries per second. |
pg_deleted_rows | rows/s | The number of rows deleted by queries per second. |
pg_returned_rows | rows/s | The number of rows returned by queries per second. |
pg_query_per_second | queries/s | The number of queries performed per second (both committed and rollbacked). |
pg_scheduled_checkpoints | checkpoints/s | The number of scheduled checkpoints performed in this database per second. |
pg_requested_checkpoints | checkpoints/s | The number of requested checkpoints performed in this database per second. |
pg_checkpoint_write_time | milliseconds | The total amount of time that has been spent in the portion of checkpoint processing where files are written to disk. |
pg_checkpoint_sync_time | milliseconds | The total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk. |
pg_written_buffers_backend | writes/s | The number of buffers written directly by a backend per second. |
pg_buffers_allocated | buffers/s | The number of buffers allocated per second. |
pg_written_buffers_background | writes/s | The number of buffers written by the background writer per second. |
pg_temp_files | bytes/s | The total amount of data written to temporary files by queries in this database per second. |
pg_maxwritten_cleaning | stops/s | The number of times the background writer stopped a cleaning scan because it had written too many buffers per second. |
pg_written_buffers_checkpoint | writes/s | The number of buffers written during checkpoints per second. |
pg_cache_hit_rate | percent | The cache hit rate of the db. |
pg_disks_reads | reads/s | The number of reads performed per second. |
pg_read_time | milliseconds | The time spent reading data file blocks by backends in this database. |
pg_write_time | milliseconds | The time spent writing data file blocks by backends in this database. |
pg_backend_fsync | fsyncs/s | The number of fsync calls executed by backends per second. |
pg_autovacuum | categorical | | | | no | Controls whether the server should run the autovacuum launcher daemon. |
pg_autovacuum_vacuum_cost_delay | real | milliseconds | | | no | The cost delay value that will be used in automatic VACUUM operations. |
pg_autovacuum_vacuum_cost_limit | integer | | | | no | The cost limit value that will be used in automatic VACUUM operations. |
pg_autovacuum_vacuum_threshold | integer | tuples | | | no | The minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. |
pg_autovacuum_vacuum_scale_factor | real | tuples | | | no | The fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. |
pg_statement_timeout | integer | milliseconds | | | no | The maximum allowed duration of any statement. |
pg_max_connections | integer | connections | | | yes | The maximum number of concurrent connections allowed. |
pg_effective_io_concurrency | integer | iops | | | no | The number of simultaneous requests that can be handled efficiently by the disk subsystem. |
pg_max_parallel_maintenance_workers | integer | workers | | | no | The maximum number of parallel processes that can be started by a single utility command. |
pg_max_parallel_workers | integer | workers | | | no | The maximum number of parallel workers that the system can support for parallel operations. |
pg_max_parallel_workers_per_gather | integer | workers | | | no | The maximum number of parallel processes that can be started by a single Gather or Gather Merge node. |
pg_deadlock_timeout | integer | milliseconds | | | no | The time to wait on a lock before checking for deadlock. |
pg_max_pred_locks_per_transaction | integer | predicate_locks | | | yes | The maximum number of predicate locks per transaction. |
pg_max_locks_per_transaction | integer | locks | | | yes | The maximum number of locks per transaction. |
pg_bgwriter_delay | integer | milliseconds | | | no | The delay between activity rounds for the background writer. |
pg_bgwriter_lru_maxpages | integer | buffers | | | no | The maximum number of LRU pages to flush per round by the background writer. |
pg_checkpoint_completion_target | real | | | | no | The time spent flushing dirty buffers during checkpoint, as a fraction of the checkpoint interval. |
pg_wal_level | categorical | category | | | yes | The level of information written to the WAL. |
pg_wal_buffers | integer | kilobytes | | | no | The number of disk-page buffers in shared memory for WAL. |
pg_max_wal_senders | integer | processes | | | yes | The maximum number of simultaneously running WAL sender processes. Zero disables replication. |
pg_wal_compression | categorical | | | | no | Sets the compression of full-page writes written to the WAL file. |
pg_max_wal_size | integer | megabytes | | | no | The maximum size to let the WAL grow to between automatic WAL checkpoints. |
pg_checkpoint_timeout | integer | seconds | | | no | The maximum time between automatic WAL checkpoints. |
pg_wal_sync_method | categorical | category | | | no | The method used for forcing WAL updates out to disk. |
pg_random_page_cost | real | | | | no | The planner's estimate of the cost of a non-sequentially fetched disk page. |
pg_shared_buffers | integer | kilobytes | | | yes | The amount of memory dedicated to PostgreSQL for caching data. |
pg_work_mem | integer | kilobytes | | | no | The maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files. |
pg_maintenance_work_mem | integer | kilobytes | | | no | The maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. |
pg_effective_cache_size | integer | kilobytes | | | no | The planner's assumption about the effective size of the disk cache available to a single query. A higher value makes it more likely index scans will be used; a lower value makes it more likely sequential scans will be used. |
pg_default_statistics_target | integer | | | | no | Sets the default statistics target for table columns. |
Cassandra NoSQL database version 3 |
The optimization pack for Oracle Database 12c on Amazon RDS.
The following parameters require their ranges or default values to be updated according to the described rules.
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
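As an illustrative sketch of how such a constraint and a domain override could appear in a study definition (the component name `oracle` and the exact field names are assumptions to be checked against your Akamas release):

```yaml
# Hypothetical study excerpt: override a parameter domain and
# constrain SGA sizing parameters jointly.
parametersSelection:
  - name: oracle.sga_target
    domain: [1024, 8192]        # example override, in megabytes
  - name: oracle.sga_max_size
parameterConstraints:
  - name: sga_within_max
    formula: oracle.sga_target <= oracle.sga_max_size
```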
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
Parameter | Default value | Domain |
---|---|---|
Parameter | Default value | Domain |
---|---|---|
Formula | Notes |
---|---|
Formula | Notes |
---|---|
oracle_sga_total_size | bytes | The current memory size of the SGA. |
oracle_sga_free_size | bytes | The amount of SGA currently available. |
oracle_sga_max_size | bytes | The configured maximum memory size for the SGA. |
oracle_pga_target_size | bytes | The configured target memory size for the PGA. |
oracle_redo_buffers_size | bytes | The memory size of the redo buffers. |
oracle_default_buffer_cache_size | bytes | The memory size for the DEFAULT buffer cache component. |
oracle_default_2k_buffer_cache_size | bytes | The memory size for the DEFAULT 2k buffer cache component. |
oracle_default_4k_buffer_cache_size | bytes | The memory size for the DEFAULT 4k buffer cache component. |
oracle_default_8k_buffer_cache_size | bytes | The memory size for the DEFAULT 8k buffer cache component. |
oracle_default_16k_buffer_cache_size | bytes | The memory size for the DEFAULT 16k buffer cache component. |
oracle_default_32k_buffer_cache_size | bytes | The memory size for the DEFAULT 32k buffer cache component. |
oracle_keep_buffer_cache_size | bytes | The memory size for the KEEP buffer cache component. |
oracle_recycle_buffer_cache_size | bytes | The memory size for the RECYCLE buffer cache component. |
oracle_asm_buffer_cache_size | bytes | The memory size for the ASM buffer cache component. |
oracle_shared_io_pool_size | bytes | The memory size for the shared IO pool component. |
oracle_java_pool_size | bytes | The memory size for the Java pool component. |
oracle_large_pool_size | bytes | The memory size for the large pool component. |
oracle_shared_pool_size | bytes | The memory size for the shared pool component. |
oracle_streams_pool_size | bytes | The memory size for the streams pool component. |
oracle_buffer_cache_hit_ratio | percent | How often a requested block has been found in the buffer cache without requiring disk access. |
oracle_wait_class_commit | percent | The percentage of time spent waiting on the events of class 'Commit'. |
oracle_wait_class_concurrency | percent | The percentage of time spent waiting on the events of class 'Concurrency'. |
oracle_wait_class_system_io | percent | The percentage of time spent waiting on the events of class 'System I/O'. |
oracle_wait_class_user_io | percent | The percentage of time spent waiting on the events of class 'User I/O'. |
oracle_wait_class_other | percent | The percentage of time spent waiting on the events of class 'Other'. |
oracle_wait_class_scheduler | percent | The percentage of time spent waiting on the events of class 'Scheduler'. |
oracle_wait_class_idle | percent | The percentage of time spent waiting on the events of class 'Idle'. |
oracle_wait_class_application | percent | The percentage of time spent waiting on the events of class 'Application'. |
oracle_wait_class_network | percent | The percentage of time spent waiting on the events of class 'Network'. |
oracle_wait_class_configuration | percent | The percentage of time spent waiting on the events of class 'Configuration'. |
oracle_wait_event_log_file_sync | percent | The percentage of time spent waiting on the 'log file sync' event. |
oracle_wait_event_log_file_parallel_write | percent | The percentage of time spent waiting on the 'log file parallel write' event. |
oracle_wait_event_log_file_sequential_read | percent | The percentage of time spent waiting on the 'log file sequential read' event. |
oracle_wait_event_enq_tx_contention | percent | The percentage of time spent waiting on the 'enq: TX - contention' event. |
oracle_wait_event_enq_tx_row_lock_contention | percent | The percentage of time spent waiting on the 'enq: TX - row lock contention' event. |
oracle_wait_event_latch_row_cache_objects | percent | The percentage of time spent waiting on the 'latch: row cache objects' event. |
oracle_wait_event_latch_shared_pool | percent | The percentage of time spent waiting on the 'latch: shared pool' event. |
oracle_wait_event_resmgr_cpu_quantum | percent | The percentage of time spent waiting on the 'resmgr:cpu quantum' event. |
oracle_wait_event_sql_net_message_from_client | percent | The percentage of time spent waiting on the 'SQL*Net message from client' event. |
oracle_wait_event_rdbms_ipc_message | percent | The percentage of time spent waiting on the 'rdbms ipc message' event. |
oracle_wait_event_db_file_sequential_read | percent | The percentage of time spent waiting on the 'db file sequential read' event. |
oracle_wait_event_log_file_switch_checkpoint_incomplete | percent | The percentage of time spent waiting on the 'log file switch (checkpoint incomplete)' event. |
oracle_wait_event_row_cache_lock | percent | The percentage of time spent waiting on the 'row cache lock' event. |
oracle_wait_event_buffer_busy_waits | percent | The percentage of time spent waiting on the 'buffer busy waits' event. |
oracle_wait_event_db_file_async_io_submit | percent | The percentage of time spent waiting on the 'db file async I/O submit' event. |
oracle_sessions_active_user | sessions | The number of active user sessions. |
oracle_sessions_inactive_user | sessions | The number of inactive user sessions. |
oracle_sessions_active_background | sessions | The number of active background sessions. |
oracle_sessions_inactive_background | sessions | The number of inactive background sessions. |
oracle_calls_execute_count | calls | Total number of calls (user and recursive) that executed SQL statements. |
oracle_tuned_undoretention | seconds | The amount of time for which undo will not be recycled from the time it was committed. |
oracle_max_query_length | seconds | The length of the longest query executed. |
oracle_transaction_count | transactions | The total number of transactions executed within the period. |
oracle_sso_errors | errors/s | The number of ORA-01555 (snapshot too old) errors raised per second. |
oracle_redo_log_space_requests | requests | The number of times a user process waits for space in the redo log file, usually caused by checkpointing or log switching. |
bitmap_merge_area_size
kilobytes
1024
0
→ 2097152
yes
The amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index.
create_bitmap_area_size
megabytes
8192
0
→ 2097152
yes
Size of create bitmap buffer for bitmap index. Relevant only for systems containing bitmap indexes.
db_cache_size
megabytes
48
0
→ 2097152
no
The size of the DEFAULT buffer pool for standard block size buffers. The value must be at least 4M * cpu number.
hash_area_size
kilobytes
128
0
→ 2097151
yes
Maximum size of in-memory hash work area maximum amount of memory.
java_pool_size
megabytes
24
0
→ 16384
no
The size of the Java pool. If SGA_TARGET is set, this value represents the minimum value for the memory pool.
large_pool_size
megabytes
0
0
→ 65536
no
The size of large pool allocation heap.
memory_max_target
megabytes
8192
152
→ 2097152
yes
The maximum value to which a DBA can set the MEMORY_TARGET initialization parameter.
memory_target
megabytes
6864
0
→ 2097152
no
Oracle systemwide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.
olap_page_pool_size
bytes
0
0
→ 2147483647
no
Size of the olap page pool.
pga_aggregate_limit
megabytes
2048
0
→ 2097152
no
The limit on the aggregate PGA memory consumed by the instance.
pga_aggregate_target
megabytes
1024
0
→ 2097152
no
The target aggregate PGA memory available to all server processes attached to the instance.
pre_page_sga
FALSE
TRUE
, FALSE
yes
Read the entire SGA into memory at instance startup.
result_cache_max_result
percent
5
0
→ 100
no
Maximum result size as a percent of the cache size.
result_cache_max_size
megabytes
0
0
→ 65536
no
The maximum amount of SGA memory that can be used by the Result Cache.
result_cache_mode
MANUAL
MANUAL
, FORCE
no
Specifies when a ResultCache operator is spliced into a query's execution plan.
result_cache_remote_expiration
minutes
0
0
→ 10000
no
The expiration in minutes of remote objects. High values may cause stale answers.
sga_max_size
megabytes
8192
0
→ 2097152
yes
The maximum size of the SGA for the lifetime of the instance.
sga_min_size
megabytes
2920
0
→ 1048576
no
The guaranteed SGA size for a pluggable database (PDB). When SGA_MIN_SIZE is set for a PDB, it guarantees the specified SGA size for the PDB.
sga_target
megabytes
5840
0
→ 2097152
no
The total size of all SGA components, acts as the minimum value for the size of the SGA.
shared_pool_reserved_size
megabytes
128
1
→ 2048
yes
The shared pool space reserved for large contiguous requests for shared pool memory.
shared_pool_size
megabytes
0
0
→ 65536
no
The size of the shared pool.
sort_area_retained_size
kilobytes
0
0
→ 2097151
no
The maximum amount of the User Global Area memory retained after a sort run completes.
sort_area_size
kilobytes
64
0
→ 2097151
no
The maximum amount of memory Oracle will use for a sort. If more space is required then temporary segments on disks are used.
streams_pool_size
megabytes
0
0
→ 2097152
no
Size of the streams pool.
use_large_pages
TRUE
ONLY
, FALSE
, TRUE
yes
Enable the use of large pages for SGA memory.
workarea_size_policy
AUTO
MANUAL
, AUTO
no
Policy used to size SQL working areas (MANUAL/AUTO).
commit_logging
BATCH
IMMEDIATE
, BATCH
no
Control how redo is batched by Log Writer.
commit_wait
WAIT
NOWAIT
, WAIT
, FORCE_WAIT
no
Control when the redo for a commit is flushed to the redo logs.
log_archive_max_processes
processes
4
1
→ 30
no
Maximum number of active ARCH processes.
log_buffer
megabytes
16
2
→ 256
yes
The amount of memory that Oracle uses when buffering redo entries to a redo log file.
log_checkpoint_interval
blocks
0
0
→ 2147483647
no
The maximum number of log file blocks between incremental checkpoints.
log_checkpoint_timeout
seconds
1800
0
→ 2147483647
no
Maximum time interval between checkpoints. Guarantees a no buffer remains dirty for more than the specified time.
db_flashback_retention_target
minutes
1440
30
→ 2147483647
no
Maximum Flashback Database log retention time.
undo_retention
seconds
900
0
→ 2147483647
no
Low threshold value of undo retention.
optimizer_capture_sql_plan_baselines
FALSE
TRUE
, FALSE
no
Automatic capture of SQL plan baselines for repeatable statements
optimizer_dynamic_sampling
2
0
→ 11
no
Controls both when the database gathers dynamic statistics, and the size of the sample that the optimizer uses to gather the statistics.
optimizer_features_enable
11.2.0.4
11.2.0.4.1
, 11.2.0.4
, 11.2.0.3
, 11.2.0.2
, 11.2.0.1
, 11.1.0.7
, 11.1.0.6
, 10.2.0.5
, 10.2.0.4
, 10.2.0.3
, 10.2.0.2
, 10.2.0.1
, 10.1.0.5
, 10.1.0.4
, 10.1.0.3
, 10.1.0
, 9.2.0.8
, 9.2.0
, 9.0.1
, 9.0.0
, 8.1.7
, 8.1.6
, 8.1.5
, 8.1.4
, 8.1.3
, 8.1.0
, 8.0.7
, 8.0.6
, 8.0.5
, 8.0.4
, 8.0.3
, 8.0.0
no
Enable a series of optimizer features based on an Oracle release number.
optimizer_index_caching
0
0
→ 100
no
Adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.
optimizer_index_cost_adj
100
1
→ 10000
no
Tune optimizer behavior for access path selection to be more or less index friendly.
optimizer_mode
ALL_ROWS
ALL_ROWS
, FIRST_ROWS
, FIRST_ROWS_1
, FIRST_ROWS_10
, FIRST_ROWS_100
, FIRST_ROWS_1000
no
The default behavior for choosing an optimization approach for the instance.
optimizer_secure_view_merging
TRUE
TRUE
, FALSE
no
Enables security checks when the optimizer uses view merging.
optimizer_use_invisible_indexes
FALSE
TRUE
, FALSE
no
Enable or disables the use of invisible indexes.
optimizer_use_pending_statistics
FALSE
TRUE
, FALSE
no
Controls whether the optimizer uses pending statistics when compiling SQL statements.
optimizer_use_sql_plan_baselines
TRUE
TRUE
, FALSE
no
Enables the use of SQL plan baselines stored in SQL Management Base.
approx_for_aggregation
FALSE
TRUE
, FALSE
no
Replace exact query processing for aggregation queries with approximate query processing.
approx_for_count_distinct
FALSE
TRUE
, FALSE
no
Automatically replace COUNT (DISTINCT expr) queries with APPROX_COUNT_DISTINCT queries.
approx_for_percentile
NONE
NONE
, PERCENTILE_CONT
, PERCENTILE_CONT DETERMINISTIC
, PERCENTILE_DISC
, PERCENTILE_DISC DETERMINISTIC
, ALL
, ALL DETERMINISTIC
no
Converts exact percentile functions to their approximate percentile function counterparts.
parallel_degree_policy
MANUAL
MANUAL
, LIMITED
, AUTO
no
Policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO).
parallel_execution_message_size
16384
2148
→ 32768
yes
Message buffer size for parallel execution.
parallel_force_local
FALSE
TRUE
, FALSE
no
Force single instance execution.
parallel_max_servers
processes
0
0
→ 3600
no
The maximum number of parallel execution processes and parallel recovery processes for an instance.
parallel_min_servers
processes
0
0
→ 2000
no
The minimum number of execution processes kept alive to service parallel statements.
parallel_min_percent
percent
0
0
→ 100
yes
The minimum percentage of parallel execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution.
circuits
circuits
10
0
→ 3000
no
The total number of virtual circuits that are available for inbound and outbound network sessions.
cpu_count
cpus
0
0
→ 512
no
Number of CPUs available for the Oracle instance to use.
cursor_bind_capture_destination
MEMORY+DISK
OFF
, MEMORY
, MEMORY+DISK
no
Allowed destination for captured bind variables.
cursor_sharing
EXACT
FORCE
, EXACT
, SIMILAR
no
Cursor sharing mode.
cursor_space_for_time
FALSE
TRUE
, FALSE
yes
Use more memory in order to get faster execution.
db_files
files
200
200
→ 20000
yes
The maximum number of database files that can be opened for this database. This may be subject to OS constraints.
open_cursors
cursors
300
0
→ 65535
no
The maximum number of open cursors (handles to private SQL areas) a session can have at once.
open_links
connections
4
0
→ 255
yes
The maximum number of concurrent open connections to remote databases in one session.
open_links_per_instance
connections
4
0
→ 2147483647
yes
Maximum number of migratable open connections globally for each database instance.
processes
processes
100
80
→ 20000
yes
The maximum number of OS user processes that can simultaneously connect to Oracle.
serial_reuse
DISABLE
DISABLE
, ALL
, SELECT
, DML
, PLSQL
, FORCE
yes
Types of cursors that make use of the serial-reusable memory feature.
session_cached_cursors
50
0
→ 65535
no
Number of session cursors to cache.
session_max_open_files
10
1
→ 50
yes
Maximum number of open files allowed per session.
sessions
sessions
1262
100
→ 65532
no
The maximum number of sessions that can be created in the system, effectively the maximum number of concurrent users in the system.
transactions
transactions
1388
4
→ 2147483647
yes
The maximum number of concurrent transactions.
aq_tm_processes
1
0
→ 40
no
Number of AQ Time Managers to start.
audit_sys_operations
FALSE
TRUE
, FALSE
yes
Enables SYS auditing.
audit_trail
NONE
NONE
, OS
, DB
, TRUE
, FALSE
, DB_EXTENDED
, XML
, EXTENDED
yes
Configure system auditing.
client_result_cache_lag
milliseconds
3000
0
→ 60000
yes
Maximum time before checking the database for changes related to the queries cached on the client.
client_result_cache_size
kilobytes
0
0
→ 2147483647
yes
The maximum size of the client per-process result set cache.
db_block_checking
MEDIUM
FALSE
, OFF
, LOW
, MEDIUM
, TRUE
, FULL
no
Header checking and data and index block checking.
db_block_checksum
TYPICAL
OFF
, FALSE
, TYPICAL
, TRUE
, FULL
no
Store checksum in db blocks and check during reads.
db_file_multiblock_read_count
128
0
→ 1024
no
Number of database blocks to read in each I/O operation.
db_keep_cache_size
megabytes
0
0
→ 2097152
no
Size of KEEP buffer pool for standard block size buffers.
db_lost_write_protect
NONE
NONE
, TYPICAL
, FULL
no
Enable lost write detection.
db_recovery_file_dest_size
megabytes
1024
1
→ 16777216
no
Database recovery files size limit.
db_recycle_cache_size
megabytes
0
0
→ 2097152
no
Size of RECYCLE buffer pool for standard block size buffers.
db_writer_processes
1
1
→ 36
yes
Number of background database writer processes to start.
ddl_lock_timeout
0
0
→ 1000000
no
Timeout to restrict the time that DDL statements wait for DML locks.
deferred_segment_creation
TRUE
TRUE
, FALSE
no
Defer segment creation to first insert.
distributed_lock_timeout
seconds
60
1
→ 2147483647
yes
Number of seconds a distributed transaction waits for a lock.
dml_locks
5552
0
→ 2000000
yes
The maximum number of DML locks - one for each table modified in a transaction.
enable_goldengate_replication
FALSE
TRUE
, FALSE
no
Enable GoldenGate replication.
fast_start_parallel_rollback
LOW
FALSE
, LOW
, HIGH
no
Max number of parallel recovery slaves that may be used.
hs_autoregister
TRUE
TRUE
, FALSE
no
Enable automatic server DD updates in HS agent self-registration.
java_jit_enabled
TRUE
TRUE
, FALSE
no
Enables the Just-in-Time (JIT) compiler for the Oracle Java Virtual Machine.
java_max_sessionspace_size
bytes
0
0
→ 2147483647
yes
Max allowed size in bytes of a Java sessionspace.
java_soft_sessionspace_limit
bytes
0
0
→ 2147483647
yes
Warning limit on size in bytes of a Java sessionspace.
job_queue_processes
1000
0
→ 1000
no
Maximum number of job queue slave processes.
object_cache_max_size_percent
percent
10
0
→ 100
no
Percentage of maximum size over optimal of the user sessions object cache.
object_cache_optimal_size
kilobytes
100
0
→ 67108864
no
Optimal size of the user sessions object cache.
plscope_settings
IDENTIFIERS:NONE
IDENTIFIERS:NONE
, IDENTIFIERS:ALL
no
Controls the compile-time collection, cross-reference, and storage of PL/SQL source code identifier data.
plsql_code_type
INTERPRETED
INTERPRETED
, NATIVE
no
PL/SQL code-type.
plsql_optimize_level
2
0
→ 3
no
PL/SQL optimize level.
query_rewrite_enabled
TRUE
FALSE
, TRUE
, FORCE
no
Allow rewrite of queries using materialized views if enabled.
query_rewrite_integrity
ENFORCED
ENFORCED
, TRUSTED
, STALE_TOLERATED
no
Perform rewrite using materialized views with desired integrity.
remote_dependencies_mode
TIMESTAMP
TIMESTAMP
, SIGNATURE
no
Remote-procedure-call dependencies mode parameter.
replication_dependency_tracking
TRUE
TRUE
, FALSE
yes
Tracking dependency for Replication parallel propagation.
resource_limit
FALSE
TRUE
, FALSE
no
Enforce resource limits in database profiles.
resourcemanager_cpu_allocation
2
0
→ 20
no
ResourceManager CPU allocation.
resumable_timeout
seconds
0
0
→ 2147483647
no
Enables resumable statements and specifies resumable timeout at the system level.
sql_trace
FALSE
TRUE
, FALSE
no
Enable SQL trace.
star_transformation_enabled
FALSE
FALSE
, TRUE
, TEMP_DISABLE
no
Enable the use of star transformation.
timed_os_statistics
0
0
→ 1000000
no
The interval at which Oracle collects operating system statistics.
timed_statistics
TRUE
TRUE
, FALSE
no
Maintain internal timing statistics.
trace_enabled
TRUE
TRUE
, FALSE
no
Enable in-memory tracing.
transactions_per_rollback_segment
5
1
→ 10000
yes
Expected number of active transactions per rollback segment.
Parameter | Default / Recommended value |
---|---|
db_cache_size | MAX(48MB, 4MB * cpu_num) |
java_pool_size | 24MB if SGA_TARGET is not set; 0 if SGA_TARGET is set, meaning the lower bound for the pool is automatically determined |
shared_pool_reserved_size | 5% of shared_pool_size; the upper bound can't exceed half the size of shared_pool_size |
shared_pool_size | 0 if sga_target is set, 128MB otherwise |
pga_aggregate_target | MAX(10MB, 0.2 * sga_target) |
pga_aggregate_limit | MEMORY_MAX_TARGET if MEMORY_TARGET is explicit, or 2 * PGA_AGGREGATE_TARGET if PGA_AGGREGATE_TARGET is explicit, or 0.9 * ({MEMORY_AVAILABLE} - SGA); at least MAX(2GB, 3MB * db.processes) |
hash_area_size | 2 * sort_area_size |
cpu_count | should match the available CPUs, or 0 to let the Oracle engine automatically determine the value; must not exceed the available CPUs |
gcs_server_processes | 0 if cluster_database=false; 1 for 1-3 CPUs, or if ASM; 2 for 4-15 CPUs; 2 + lower(CPUs/32) for 16+ CPUs |
parallel_min_servers | CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2 |
parallel_max_servers | PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5 |
sessions | 1.5 * processes + 22; must be at least equal to the default value |
transactions | 1.1 * sessions |
Constraint | When to apply |
---|---|
db.memory_target <= db.memory_max_target && db.memory_max_target < {MEMORY_AVAILABLE} | Add when tuning automatic memory management |
db.sga_max_size + db.pga_aggregate_limit <= db.memory_max_target | Add when tuning SGA and PGA |
db.sga_target + db.pga_aggregate_target <= db.memory_target | Add when tuning SGA and PGA |
db.sga_target <= db.sga_max_size | Add when tuning SGA |
db.db_cache_size + db.java_pool_size + db.large_pool_size + db.log_buffer + db.shared_pool_size + db.streams_pool_size < db.sga_max_size | Add when tuning SGA areas |
db.pga_aggregate_target <= db.pga_aggregate_limit | Add when tuning PGA |
db.shared_pool_reserved_size <= 0.5 * db.shared_pool_size | |
db.sort_area_retained_size <= db.sort_area_size | |
db.sessions < db.transactions | |
db.parallel_min_servers < db.parallel_max_servers | |
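The constraints above can be checked mechanically before a configuration is applied. The sketch below is a minimal illustration, not Akamas code: it assumes all memory values are plain integers in the same unit (megabytes here), and the dictionary keys simply mirror the `db.`-prefixed parameter names used in the constraints.

```python
# Minimal sketch: validating a candidate Oracle configuration against a
# subset of the memory constraints listed above. Values are assumed to be
# integers in megabytes; names mirror the "db."-prefixed parameters.
def check_constraints(cfg, memory_available):
    checks = {
        "sga_target <= sga_max_size":
            cfg["sga_target"] <= cfg["sga_max_size"],
        "sga_max_size + pga_aggregate_limit <= memory_max_target":
            cfg["sga_max_size"] + cfg["pga_aggregate_limit"] <= cfg["memory_max_target"],
        "memory_max_target < MEMORY_AVAILABLE":
            cfg["memory_max_target"] < memory_available,
        "pga_aggregate_target <= pga_aggregate_limit":
            cfg["pga_aggregate_target"] <= cfg["pga_aggregate_limit"],
        "shared_pool_reserved_size <= 0.5 * shared_pool_size":
            cfg["shared_pool_reserved_size"] <= 0.5 * cfg["shared_pool_size"],
        "parallel_min_servers < parallel_max_servers":
            cfg["parallel_min_servers"] < cfg["parallel_max_servers"],
    }
    # Return the names of the constraints that are violated
    return [name for name, ok in checks.items() if not ok]

cfg = {
    "sga_target": 4096, "sga_max_size": 4096, "memory_max_target": 8192,
    "pga_aggregate_target": 1024, "pga_aggregate_limit": 2048,
    "shared_pool_size": 512, "shared_pool_reserved_size": 128,
    "parallel_min_servers": 4, "parallel_max_servers": 16,
}
print(check_constraints(cfg, memory_available=16384))  # → [] (all satisfied)
```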
The MongoDB optimization pack helps you optimize instances of MongoDB to reach the desired performance goal. The optimization pack provides parameters and metrics specific to MongoDB that can be leveraged to reach, among others, two main goals:
Throughput optimization - increasing the capacity of a MongoDB deployment to serve clients
Cost optimization - decreasing the size of a MongoDB deployment while guaranteeing the same service level
To reach these goals the pack focuses mostly on the parameters managing the cache, one of the elements with the greatest impact on performance; in particular, the optimization pack provides parameters to control the lifecycle and the size of MongoDB's cache, which significantly affect performance.
Even though it is possible to evaluate performance improvements of MongoDB by looking at the business application that uses it as its database (for example, by measuring the end-to-end throughput or response time, or by running a performance test such as YCSB), the optimization pack also provides internal MongoDB metrics that shed light on how MongoDB is performing, in particular in terms of throughput, for example:
The number of documents inserted in the database per second
The number of active connections
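As an illustration, per-second throughput metrics such as documents inserted per second can be derived by sampling MongoDB's cumulative counters (as exposed by the serverStatus command, e.g. `metrics.document.inserted`) at two points in time. The sketch below uses hard-coded sample values rather than a live connection; the counter values are illustrative, not real measurements.

```python
# Minimal sketch: deriving a documents-inserted-per-second metric from two
# samples of a cumulative counter, as exposed by MongoDB's serverStatus
# command. The sample values below are illustrative.
def rate_per_second(counter_t0, counter_t1, elapsed_seconds):
    # Cumulative counters only grow, so the rate is the delta over time
    return (counter_t1 - counter_t0) / elapsed_seconds

inserted_t0 = 1_250_000   # cumulative documents inserted at time t0
inserted_t1 = 1_262_000   # cumulative documents inserted 60 seconds later
print(rate_per_second(inserted_t0, inserted_t1, 60))  # → 200.0
```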
The optimization pack supports the following versions of MongoDB.
Here’s the command to install the MongoDB optimization-pack using the Akamas CLI:
The Spark optimization pack allows tuning applications running on the Apache Spark framework. Through this optimization pack, Akamas is able to explore the space of the Spark parameters in order to find the configurations that best optimize the allocated resources or the execution time.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Driver and executor resource allocation
Parallelism
Shuffling
Spark SQL
Similarly, the bundled metrics provide visibility on the following statistics from the Spark History Server:
Execution time
Executors' resource usage
Garbage collection time
Here’s the command to install the Spark optimization pack using the Akamas CLI:
This page describes the Optimization Pack for Spark Application 2.3.0.
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
The overall resources allocated to the application should be constrained by a maximum and, sometimes, a minimum value:
the maximum value could be the sum of the resources physically available in the cluster, or a lower value to allow the concurrent execution of other applications
an optional minimum value can help avoid configurations that allocate executors that are both small and few in number
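Such a resource constraint can be sketched as a simple bound check. The example below is illustrative only, assuming a static allocation where the total demand is simply the executor count times the per-executor size; the function and parameter names are not part of the optimization pack.

```python
# Minimal sketch: bounding the overall resources a Spark application may
# allocate. Total demand is executor count times per-executor size; it must
# stay within the cluster capacity (upper bound) and, optionally, above a
# minimum that avoids executors that are both small and few.
def within_bounds(num_executors, executor_cores, executor_memory_gb,
                  max_cores, max_memory_gb, min_cores=0, min_memory_gb=0):
    total_cores = num_executors * executor_cores
    total_memory = num_executors * executor_memory_gb
    return (min_cores <= total_cores <= max_cores
            and min_memory_gb <= total_memory <= max_memory_gb)

# 8 executors x 4 cores x 8 GB on a 64-core / 256 GB cluster
print(within_bounds(8, 4, 8, max_cores=64, max_memory_gb=256))   # → True
print(within_bounds(20, 4, 8, max_cores=64, max_memory_gb=256))  # → False
```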
This page describes the Optimization Pack for Spark Application 2.2.0.
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
The overall resources allocated to the application should be constrained by a maximum and, sometimes, a minimum value:
the maximum value could be the sum of the resources physically available in the cluster, or a lower value to allow the concurrent execution of other applications
an optional minimum value can help avoid configurations that allocate executors that are both small and few in number
mongodb_document_deleted
documents/s
The average number of documents deleted per second
mongodb_documents_inserted
documents/s
The average number of documents inserted per second
mongodb_documents_updated
documents/s
The average number of documents updated per second
mongodb_documents_returned
documents/s
The average number of documents returned by queries per second
mongodb_connections_current
connections
The current number of opened connections
mongodb_heap_used
bytes
The total size of heap space used (only available in Linux/Unix systems)
mongodb_mem_used
bytes
The total amount of memory used
mongodb_page_faults_total
faults/s
The average number of page faults per second (i.e., operations that require MongoDB to access data on disk rather than on memory)
mongodb_global_lock_current_queue
ops
The current number of operations queued because of a lock
mongodb_cache_size
megabytes
Integer
You should select your own default value when you create a study, since it is highly dependent on your system (how much memory your system has)
You should select your own default value when you create a study, since it is highly dependent on your system (how much memory your system has)
No
The maximum size of the internal cache that MongoDB (WiredTiger) will use to operate
mongodb_eviction_trigger
percentage
Integer
95
1 → 99
No
The percentage threshold on the use of the MongoDB cache for which cache eviction will start and client threads will throttle
mongodb_eviction_target
percentage
Integer
80
1 → 99
No
The target percentage usage of the MongoDB cache to reach after evictions
mongodb_eviction_dirty_trigger
percentage
Integer
20
1 → 99
No
The percentage threshold on the use of MongoDB dirty cache for which cache eviction will start and client threads will throttle
mongodb_eviction_dirty_target
percentage
Integer
5
1 → 99
No
The target percentage usage of the MongoDB dirty cache to reach after evictions
mongodb_eviction_threads_min
threads
Integer
4
1 → 20
No
The minimum number of threads to use to perform cache eviction
mongodb_eviction_threads_max
threads
Integer
4
1 → 20
No
The maximum number of threads to use to perform cache eviction
mongodb_sync_delay
seconds
Integer
1min
1min → 6min
no
The interval between fsync operations where mongod flushes its working memory to disk
mongodb_eviction_threads_min <= mongodb_eviction_threads_max
mongodb_eviction_dirty_target <= mongodb_eviction_target
mongodb_eviction_dirty_trigger <= mongodb_eviction_trigger
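These eviction constraints can likewise be verified before submitting a configuration. The sketch below is a minimal illustration with made-up values; the dictionary keys mirror the parameter names above.

```python
# Minimal sketch: validating the MongoDB eviction-parameter constraints
# listed above for a candidate configuration (illustrative values).
def eviction_constraints_ok(cfg):
    return (cfg["mongodb_eviction_threads_min"] <= cfg["mongodb_eviction_threads_max"]
            and cfg["mongodb_eviction_dirty_target"] <= cfg["mongodb_eviction_target"]
            and cfg["mongodb_eviction_dirty_trigger"] <= cfg["mongodb_eviction_trigger"])

cfg = {
    "mongodb_eviction_threads_min": 2, "mongodb_eviction_threads_max": 8,
    "mongodb_eviction_dirty_target": 5, "mongodb_eviction_target": 80,
    "mongodb_eviction_dirty_trigger": 20, "mongodb_eviction_trigger": 95,
}
print(eviction_constraints_ok(cfg))  # → True
```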
MongoDB version 4.x
MongoDB version 5.x
es_cluster_search_query_time | milliseconds | The average search query time of elasticsearch |
es_cluster_search_query_throughput | queries/s | The throughput of elasticsearch in terms of search queries per second |
es_cluster_active_shards | shards | The total number of active shards (including replica shards) across all indices within the elasticsearch cluster |
es_cluster_status | flag | The status of the elasticsearch search cluster, red = 0, yellow = 1, green = 2 |
es_cluster_out_packets | packets/s | The number of packets per second transmitted outside of the elasticsearch cluster |
es_cluster_out_bytes | bytes/s | The number of bytes per second transmitted outside of the elasticsearch cluster |
es_cluster_in_packets | packets/s | The number of packets per second received by the elasticsearch cluster |
es_cluster_in_bytes | bytes/s | The number of bytes per second received by the elasticsearch cluster |
es_node_process_open_files | files | The total number of file descriptors opened by the elasticsearch process within the elasticsearch node |
es_node_process_cpu_util | percentage | The CPU utilization % of the elasticsearch process within the elasticsearch node |
es_node_process_jvm_gc_duration | milliseconds | The average duration of JVM garbage collection for the elasticsearch process |
es_node_process_jvm_gc_count | gcs | The total number of jvm garbage collections that have occurred for the elasticsearch process in the node |
index_merge_scheduler_max_thread_count | integer | threads | | ElasticSearch max number of threads for merge operations. |
indices_store_throttle_max_bytes_per_sec | integer | bytes/s | | ElasticSearch max bandwidth for store operations. |
index_translog_flush_threshold_size | integer | megabytes | | ElasticSearch flush threshold size. |
index_refresh_interval | integer | seconds | | ElasticSearch refresh interval. |
index_number_of_shards | integer | shards | | ElasticSearch number of shards. |
index_number_of_replicas | integer | replicas | | ElasticSearch number of replicas. |
mongodb_opcounters_insert | operations/s | The number of insert operations received per second. |
mongodb_opcounters_query | operations/s | The number of queries received per second. |
mongodb_opcounters_update | operations/s | The number of update operations received per second |
mongodb_opcounters_delete | operations/s | The number of delete operations received per second. |
mongodb_opcounters_getmore | operations/s | The number of getMore operations received per second. This counter can be high even if the query count is low. Secondary nodes send getMore operations as part of the replication process. |
mongodb_opcounters_command | operations/s | The number of command operations received per second. It counts all commands except the write commands (insert, update, and delete). |
mongodb_documents_deleted | documents/s | The number of documents deleted per second. |
mongodb_documents_inserted | documents/s | The number of documents inserted per second. |
mongodb_documents_returned | documents/s | The number of documents returned per second. |
mongodb_documents_updated | documents/s | The number of documents updated per second. |
mongodb_wt_concurrentTransactions_read_out | tickets | Number of read tickets in use |
mongodb_wt_concurrentTransactions_write_out | tickets | Number of write tickets in use |
mongodb_wt_concurrentTransactions_read_available | tickets | Number of read tickets remaining. When it reaches 0, read requests will be queued. The maximum number of read operations is controlled with wiredTigerConcurrentReadTransactions (or by adding more shards) |
mongodb_wt_concurrentTransactions_write_available | tickets | Number of write tickets remaining. When it reaches 0, write requests will be queued. The maximum number of read operations is controlled with wiredTigerConcurrentWriteTransactions (or by adding more shards) |
mongodb_wt_transaction_most_recent_time | milliseconds | Amount of time, in milliseconds, to create the most recent checkpoint. An increase in this value under steady write load may indicate saturation of the I/O subsystem. |
mongodb_metrics_cursor_open_total | cursors | The number of cursors that MongoDB is maintaining for clients. Because MongoDB exhausts unused cursors, this value is typically small or zero. However, if there is a queue, stale tailable cursors, or a large number of operations, this value may rise. |
mongodb_metrics_cursor_timedOut | cursors | The total number of cursors that have timed out since the server process started. If this number is large or growing at a regular rate, this may indicate an application error. |
mongodb_connections_current | connections | The number of incoming connections from clients to the database server, including connections from other servers such as replica set members or instances |
mongodb_connections_available | connections | The number of unused incoming connections available. |
mongodb_connections_active | connections | The number of active client connections to the server. This refers to client connections that currently have operations in progress. |
mongodb_mem_resident | megabytes | The amount of resident memory (RAM) currently used by the database process |
mongodb_mem_virtual | megabytes | The amount of virtual memory used by the mongod process |
mongodb_extra_info_page_faults | faults/s | The number of page faults per second (i.e., operations that require MongoDB to access data on disk rather than in memory) |
mongodb_wt_cache_bytes_currently_in_the_cache | bytes | Size in byte of the data currently in WiredTiger internal cache. |
mongodb_wt_cache_maximum_bytes_configured | bytes | Maximum WiredTiger internal cache size. |
mongodb_wt_cache_unmodified_pages_evicted | pages/s | Number of unmodified pages evicted from the cache per second |
mongodb_wt_cache_modified_pages_evicted | pages/s | Number of modified pages evicted from the cache per second |
mongodb_wt_cache_tracked_dirty_bytes_in_the_cache | bytes/s | Size in bytes of the dirty data in the cache. |
mongodb_wt_cache_pages_read_into_cache | pages/s | The number of pages read into the cache per second. Together with the write metric, it can provide an overview of the I/O activity. |
mongodb_wt_cache_pages_written_from_cache | pages/s | The number of pages written from the cache. Together with the read metric, it can provide an overview of the I/O activity. |
mongodb_globalLock_currentQueue_total | operations | The total number of operations queued waiting for the lock (sum of readers and writers). A consistently small queue, particularly of shorter operations, should cause no concern. |
mongodb_globalLock_currentQueue_readers | operations | The number of operations that are currently queued and waiting for the read lock. A consistently small read-queue, particularly of shorter operations, should cause no concern. |
mongodb_globalLock_currentQueue_writers | operations | The number of operations that are currently queued and waiting for the write lock. A consistently small write-queue, particularly of shorter operations, is no cause for concern. |
mongodb_globalLock_activeClients_total | operations | The total number of internal client connections to the database, including system threads as well as queued readers and writers. This metric will be higher than the total of readers and writers due to the inclusion of system threads. |
mongodb_globalLock_activeClients_readers | operations | The number of the active client connections performing read operations. |
mongodb_globalLock_activeClients_writers | operations | The number of the active client connections performing write operations. |
mongodb_cursorTimeoutMillis | milliseconds | integer | 600000 | 0 → 1200000 | no | Sets the expiration threshold in milliseconds for idle cursors before MongoDB removes them |
mongodb_notablescan | categorical | | no | Return an error when executing queries that don't use indices. |
mongodb_ttlMonitorEnabled | categorical | | no | Disables the TTL monitor, preventing the removal of TTL documents |
mongodb_disableJavaScriptJIT | categorical | | no | The MongoDB JavaScript engine uses SpiderMonkey, which implements Just-in-Time (JIT) compilation for improved performance when running scripts |
mongodb_maxIndexBuildMemoryUsageMegabytes | megabytes | Integer | 200 | 50 → 2000 | no | Limits the amount of memory that simultaneous index builds on one collection may consume for the duration of the builds. The memory consumed by an index build is separate from the WiredTiger cache memory. |
mongodb_tcmallocReleaseRate | real | 1.0 | 0.0 → 10.0 | no | Rate at which we release unused memory to the system, via madvise(MADV_DONTNEED), on systems that support it. Zero means we never release memory back to the system. |
mongodb_journalCommitInterval | milliseconds | integer | 100 | 1 → 500 | no | The number of milliseconds (ms) between journal commits. |
mongodb_syncdelay | integer | 60 | 0 → 300 | no | The interval in seconds between fsync operations where mongod flushes its working memory to disk. By default, mongod flushes memory to disk every 60 seconds. In almost every situation you should not set this value and use the default setting. |
mongodb_internalQueryEnableSlotBasedExecutionEngine | categorical | | no | Use enhanced query execution when possible. |
mongodb_planCacheSize | percent | integer | 5 | 0 → 99 | no | The size of the plan cache for the enhanced query execution engine. |
mongodb_wterc_cache_overhead | integer | 8 | 0 → 30 | no | Amount of additional heap to allocate expressed as a percentage of the heap. |
mongodb_wterc_cache_size | megabytes | integer | 100 | 0 → 10000000 | no | Maximum heap memory to allocate for the cache. A database should configure either cache_size or shared_cache but not both. This should correspond to the value used in MongoDB, so set it to a minimum of 256 MB or to 50% of (RAM - 1 GB) |
mongodb_wterc_checkpoint_log_size | bytes | integer | 0 | 0 → 2000000000 | no | Minimum number of bytes to be written between checkpoints. Setting the value to 0 configures periodic checkpoints. |
mongodb_wterc_checkpoint_wait | seconds | integer | 0 | 0 → 10000000 | no | Seconds to wait between periodic checkpoints. |
mongodb_wterc_eviction_threads_max | integer | 8 | 1 → 20 | no | Maximum number of threads WiredTiger will start to help evict pages from cache. |
mongodb_wterc_eviction_threads_min | integer | 1 | 1 → 20 | no | Minimum number of threads WiredTiger will start to help evict pages from cache. |
mongodb_wterc_eviction_checkpoint_target | percent | integer | 1 | 0 → 99 | no | Perform eviction at the beginning of checkpoints to bring the dirty content in cache to this level, expressed as a percentage of the total cache size. Ignored if set to zero or in_memory is true. |
mongodb_wterc_eviction_dirty_target | percent | integer | 5 | 0 → 99 | no | Perform eviction in worker threads when the cache contains at least this much dirty content, expressed as a percentage of the total cache size. |
mongodb_wterc_eviction_dirty_trigger | percent | integer | 20 | 0 → 99 | no | Trigger application threads to perform eviction when the cache contains at least this much dirty content, expressed as a percentage of the total cache size. This setting only alters behavior if it is lower than eviction_trigger. |
mongodb_wterc_eviction_target | percent | integer | 80 | 0 → 99 | no | Perform eviction in worker threads when the cache contains at least this much content, expressed as a percentage of the total cache size. Must be less than eviction_trigger. |
mongodb_wterc_eviction_trigger | percent | integer | 95 | 0 → 99 | no | Trigger application threads to perform eviction when the cache contains at least this much content, expressed as a percentage of the total cache size. |
mongodb_wterc_file_manager_close_handle_minimum | integer | 250 | 0 → 1000 | no | Number of handles open before the file manager will look for handles to close. |
mongodb_wterc_file_manager_close_idle_time | seconds | integer | 30 | 0 → 100000 | no | Amount of time in seconds a file handle needs to be idle before attempting to close it. A setting of 0 means that idle handles are not closed. |
mongodb_wterc_file_manager_close_scan_interval | seconds | integer | 10 | 0 → 100000 | no | Interval in seconds at which to check for files that are inactive and close them. |
mongodb_wterc_log_archive | categorical | | no | Automatically archive unneeded log files. |
mongodb_wterc_log_prealloc | categorical | | no | Pre-allocate log files. |
mongodb_wterc_log_zero_fill | categorical | | no | Manually write zeroes into log files. |
mongodb_wterc_lsm_manager_merge | categorical | | no | Merge LSM chunks where possible. |
mongodb_wterc_lsm_manager_worker_threads_max | integer | 4 | 3 → 20 | no | Configure a set of threads to manage merging LSM trees in the database. |
mongodb_wterc_concurrent_read_transactions | transactions | integer | 128 | 1 → 8192 | no | Configure the number of concurrent read transactions allowed into the WiredTiger storage engine. |
mongodb_wterc_concurrent_write_transactions | transactions | integer | 128 | 1 → 8192 | no | Configure the number of concurrent write transactions allowed into the WiredTiger storage engine. |
mongodb_wterc_cursor_cache_size | cursors | integer | -100 | -100000 → 100000 | no | The absolute value of this parameter sets the maximum number of cursors cached at levels above the WiredTiger storage engine. Zero or negative values also enable the caching at the WiredTiger level. |
mongodb_wterc_session_close_idle_time | seconds | integer | 300 | 0 → 3600 | no | Idle time in seconds before WiredTiger sessions are removed from the session cache. |
spark_application_duration | milliseconds | The duration of the Spark application |
spark_job_duration | milliseconds | The duration of the job |
spark_stage_duration | milliseconds | The duration of the stage |
spark_task_duration | milliseconds | The duration of the task |
spark_driver_rdd_blocks | blocks | The total number of persisted RDD blocks for the driver |
spark_driver_mem_used | bytes | The total amount of memory used by the driver |
spark_driver_disk_used | bytes | The total amount of disk used for RDDs by the driver |
spark_driver_cores | cores | The total number of concurrent tasks that can be run by the driver |
spark_driver_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by the driver |
spark_driver_total_tasks | tasks | The total number of tasks run by the driver |
spark_driver_total_duration | milliseconds | The total amount of time spent by the driver running tasks |
spark_driver_max_mem_used | bytes | The maximum amount of memory used by the driver |
spark_driver_total_jvm_gc_duration | milliseconds | The total amount of time spent by the driver's JVM doing garbage collection across all tasks |
spark_driver_total_shuffle_read | bytes | The total number of bytes read during a shuffle by the driver |
spark_driver_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by the driver |
spark_driver_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by the driver |
spark_driver_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by the driver |
spark_driver_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for the driver |
spark_driver_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for the driver |
spark_executor_max_count | executors | The maximum number of executors used for the application |
spark_executor_rdd_blocks | blocks | The total number of persisted RDD blocks for each executor |
spark_executor_mem_used | bytes | The total amount of memory used by each executor |
spark_executor_disk_used | bytes | The total amount of disk used for RDDs by each executor |
spark_executor_cores | cores | The number of cores used by each executor |
spark_executor_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by each executor |
spark_executor_total_tasks | tasks | The total number of tasks run by each executor |
spark_executor_total_duration | milliseconds | The total amount of time spent by each executor running tasks |
spark_executor_max_mem_used | bytes | The maximum amount of memory used by each executor |
spark_executor_total_jvm_gc_duration | milliseconds | The total amount of time spent by each executor's JVM doing garbage collection across all tasks |
spark_executor_total_shuffle_read | bytes | The total number of bytes read during a shuffle by each executor |
spark_executor_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by each executor |
spark_executor_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by each executor |
spark_executor_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by each executor |
spark_executor_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for each executor |
spark_executor_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for each executor |
spark_stage_shuffle_read_bytes | bytes | The total number of bytes read in shuffle operations by each stage |
spark_task_jvm_gc_duration | milliseconds | The total duration of JVM garbage collections for each task |
spark_task_peak_execution_memory | bytes | The sum of the peak sizes across internal data structures created for each task |
spark_task_result_size | bytes | The size of the result of the computation of each task |
spark_task_result_serialization_time | milliseconds | The time spent by each task serializing the computation result |
spark_task_shuffle_read_fetch_wait_time | milliseconds | The time spent by each task waiting for remote shuffle blocks |
spark_task_shuffle_read_local_blocks_fetched | blocks | The total number of local blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_local_bytes | bytes | The total number of bytes read in shuffle operations from local disk by each task |
spark_task_shuffle_read_remote_blocks_fetched | blocks | The total number of remote blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes | bytes | The total number of remote bytes read in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes_to_disk | bytes | The total number of remote bytes read to disk in shuffle operations by each task |
spark_task_shuffle_write_time | nanoseconds | The time spent by each task writing data on disk or on buffer caches during shuffle operations |
spark_task_executor_deserialize_time | nanoseconds | The time spent by the executor deserializing the task |
spark_task_executor_deserialize_cpu_time | nanoseconds | The CPU time spent by the executor deserializing the task |
spark_task_stage_shuffle_write_records | records | The total number of records written in shuffle operations broken down by task and stage |
spark_task_stage_shuffle_write_bytes | bytes | The total number of bytes written in shuffle operations broken down by task and stage |
spark_task_stage_shuffle_read_records | records | The total number of records read in shuffle operations broken down by task and stage |
spark_task_stage_disk_bytes_spilled | bytes | The total number of bytes spilled on disk broken down by task and stage |
spark_task_stage_memory_bytes_spilled | bytes | The total number of bytes spilled on memory broken down by task and stage |
spark_task_stage_input_bytes_read | bytes | The total number of bytes read, broken down by task and stage |
spark_task_stage_input_records_read | records | The total number of records read, broken down by task and stage |
spark_task_stage_output_bytes_written | bytes | The total number of bytes written, broken down by task and stage |
spark_task_stage_output_records_written | records | The total number of records written, broken down by task and stage |
spark_task_stage_executor_run_time | nanoseconds | The time spent by each executor actually running tasks (including fetching shuffle data) broken down by task, stage and executor |
spark_task_stage_executor_cpu_time | nanoseconds | The CPU time spent by each executor actually running each task (including fetching shuffle data) broken down by task and stage |
driverCores | integer | cores | You should select your own default | You should select your own domain | yes | The number of CPU cores assigned to the driver in cluster deploy mode. |
numExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Number of executors to use. YARN only. |
totalExecutorCores | integer | cores | You should select your own default | You should select your own domain | yes | Total number of cores for the application. Spark standalone and Mesos only. |
executorCores | integer | cores | You should select your own default | You should select your own domain | yes | Number of CPU cores for an executor. Spark standalone and YARN only. |
defaultParallelism | integer | partitions | You should select your own default | You should select your own domain | yes | Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by user. |
broadcastBlockSize | integer | kilobytes | | | yes | Size of each piece of a block for TorrentBroadcastFactory. |
schedulerMode | categorical | | | | yes | Define the scheduling strategy across jobs. |
driverMemory | integer | megabytes | You should select your own default | You should select your own domain | yes | Amount of memory to use for the driver process. |
yarnDriverMemoryOverhead | integer | megabytes | | | yes | Off-heap memory to be allocated per driver in cluster mode. Currently supported in YARN and Kubernetes. |
executorMemory | integer | megabytes | You should select your own default | You should select your own domain | yes | Amount of memory to use per executor. |
yarnExecutorMemoryOverhead | integer | megabytes | | | yes | Off-heap memory to be allocated per executor. Currently supported in YARN and Kubernetes. |
memoryOffHeapEnabled | categorical | | | | yes | If true, Spark will attempt to use off-heap memory for certain operations. |
memoryOffHeapSize | integer | megabytes | | | yes | The absolute amount of memory which can be used for off-heap allocation. |
reducerMaxSizeInFlight | integer | megabytes | | | yes | Maximum size of map outputs to fetch simultaneously from each reduce task in MB. |
shuffleFileBuffer | integer | kilobytes | | | yes | Size of the in-memory buffer for each shuffle file output stream in KB. |
shuffleCompress | categorical | | | | yes | Whether to compress map output files. |
shuffleServiceEnabled | categorical | | | | yes | Enables the external shuffle service. This service preserves the shuffle files written by executors so the executors can be safely removed. |
dynamicAllocationEnabled | categorical | | | | yes | Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. Requires spark.shuffle.service.enabled to be set. |
dynamicAllocationExecutorIdleTimeout | integer | seconds | | | yes | If dynamic allocation is enabled and an executor has been idle for more than this duration, the executor will be removed. |
dynamicAllocationInitialExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Initial number of executors to run if dynamic allocation is enabled. |
dynamicAllocationMinExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Lower bound for the number of executors if dynamic allocation is enabled. |
dynamicAllocationMaxExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Upper bound for the number of executors if dynamic allocation is enabled. |
sqlInMemoryColumnarStorageCompressed | categorical | | | | yes | When set to true Spark SQL will automatically select a compression codec for each column based on statistics of the data. |
sqlInMemoryColumnarStorageBatchSize | integer | records | | | yes | Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization and compression, but risk OOMs when caching data. |
sqlFilesMaxPartitionBytes | integer | bytes | | | yes | The maximum number of bytes to pack into a single partition when reading files. |
sqlFilesOpenCostInBytes | integer | bytes | | | yes | The estimated cost to open a file, measured by the number of bytes that could be scanned in the same time. This is used when putting multiple files into a partition. |
compressionLz4BlockSize | integer | bytes | | | yes | Block size in bytes used in LZ4 compression. |
serializer | categorical | | | | yes | Class to use for serializing objects that will be sent over the network or need to be cached in serialized form. |
kryoserializerBuffer | integer | bytes | | | yes | Initial size of Kryo's serialization buffer. Note that there will be one buffer per core on each worker. |
| The overall allocated memory should not exceed the specified limit |
| The overall allocated CPUs should not exceed the specified limit |
| The overall allocated memory should not exceed the specified limit |
| The overall allocated CPUs should not exceed the specified limit |
spark_application_duration | milliseconds | The duration of the Spark application |
spark_job_duration | milliseconds | The duration of the job |
spark_stage_duration | milliseconds | The duration of the stage |
spark_task_duration | milliseconds | The duration of the task |
spark_driver_rdd_blocks | blocks | The total number of persisted RDD blocks for the driver |
spark_driver_mem_used | bytes | The total amount of memory used by the driver |
spark_driver_disk_used | bytes | The total amount of disk used for RDDs by the driver |
spark_driver_cores | cores | The total number of concurrent tasks that can be run by the driver |
spark_driver_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by the driver |
spark_driver_total_tasks | tasks | The total number of tasks run by the driver |
spark_driver_total_duration | milliseconds | The total amount of time spent by the driver running tasks |
spark_driver_max_mem_used | bytes | The maximum amount of memory used by the driver |
spark_driver_total_jvm_gc_duration | milliseconds | The total amount of time spent by the driver's JVM doing garbage collection across all tasks |
spark_driver_total_shuffle_read | bytes | The total number of bytes read during a shuffle by the driver |
spark_driver_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by the driver |
spark_driver_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by the driver |
spark_driver_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by the driver |
spark_driver_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for the driver |
spark_driver_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for the driver |
spark_executor_max_count | executors | The maximum number of executors used for the application |
spark_executor_rdd_blocks | blocks | The total number of persisted RDD blocks for each executor |
spark_executor_mem_used | bytes | The total amount of memory used by each executor |
spark_executor_disk_used | bytes | The total amount of disk used for RDDs by each executor |
spark_executor_cores | cores | The number of cores used by each executor |
spark_executor_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by each executor |
spark_executor_total_tasks | tasks | The total number of tasks run by each executor |
spark_executor_total_duration | milliseconds | The total amount of time spent by each executor running tasks |
spark_executor_max_mem_used | bytes | The maximum amount of memory used by each executor |
spark_executor_total_jvm_gc_duration | milliseconds | The total amount of time spent by each executor's JVM doing garbage collection across all tasks |
spark_executor_total_shuffle_read | bytes | The total number of bytes read during a shuffle by each executor |
spark_executor_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by each executor |
spark_executor_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by each executor |
spark_executor_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by each executor |
spark_executor_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for each executor |
spark_executor_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for each executor |
spark_stage_shuffle_read_bytes | bytes | The total number of bytes read in shuffle operations by each stage |
spark_task_jvm_gc_duration | milliseconds | The total duration of JVM garbage collections for each task |
spark_task_peak_execution_memory | bytes | The sum of the peak sizes across internal data structures created for each task |
spark_task_result_size | bytes | The size of the result of the computation of each task |
spark_task_result_serialization_time | milliseconds | The time spent by each task serializing the computation result |
spark_task_shuffle_read_fetch_wait_time | milliseconds | The time spent by each task waiting for remote shuffle blocks |
spark_task_shuffle_read_local_blocks_fetched | blocks | The total number of local blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_local_bytes | bytes | The total number of bytes read in shuffle operations from local disk by each task |
spark_task_shuffle_read_remote_blocks_fetched | blocks | The total number of remote blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes | bytes | The total number of remote bytes read in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes_to_disk | bytes | The total number of remote bytes read to disk in shuffle operations by each task |
spark_task_shuffle_write_time | nanoseconds | The time spent by each task writing data on disk or on buffer caches during shuffle operations |
spark_task_executor_deserialize_time | nanoseconds | The time spent by the executor deserializing the task |
spark_task_executor_deserialize_cpu_time | nanoseconds | The CPU time spent by the executor deserializing the task |
spark_task_stage_shuffle_write_records | records | The total number of records written in shuffle operations broken down by task and stage |
spark_task_stage_shuffle_write_bytes | bytes | The total number of bytes written in shuffle operations broken down by task and stage |
spark_task_stage_shuffle_read_records | records | The total number of records read in shuffle operations broken down by task and stage |
spark_task_stage_disk_bytes_spilled | bytes | The total number of bytes spilled on disk broken down by task and stage |
spark_task_stage_memory_bytes_spilled | bytes | The total number of bytes spilled on memory broken down by task and stage |
spark_task_stage_input_bytes_read | bytes | The total number of bytes read, broken down by task and stage |
spark_task_stage_input_records_read | records | The total number of records read, broken down by task and stage |
spark_task_stage_output_bytes_written | bytes | The total number of bytes written, broken down by task and stage |
spark_task_stage_output_records_written | records | The total number of records written, broken down by task and stage |
spark_task_stage_executor_run_time | nanoseconds | The time spent by each executor actually running tasks (including fetching shuffle data) broken down by task, stage and executor |
spark_task_stage_executor_cpu_time | nanoseconds | The CPU time spent by each executor actually running each task (including fetching shuffle data) broken down by task and stage |
driverCores | integer | cores | You should select your own default | You should select your own domain | yes | The number of CPU cores assigned to the driver in cluster deploy mode. |
numExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Number of executors to use. YARN only. |
totalExecutorCores | integer | cores | You should select your own default | You should select your own domain | yes | Total number of cores for the application. Spark standalone and Mesos only. |
executorCores | integer | cores | You should select your own default | You should select your own domain | yes | Number of CPU cores for an executor. Spark standalone and YARN only. |
defaultParallelism | integer | partitions | You should select your own default | You should select your own domain | yes | Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by user. |
broadcastBlockSize | integer | kilobytes | | | yes | Size of each piece of a block for TorrentBroadcastFactory. |
schedulerMode | categorical | | | | yes | Define the scheduling strategy across jobs. |
driverMemory | integer | megabytes | You should select your own default | You should select your own domain | yes | Amount of memory to use for the driver process. |
yarnDriverMemoryOverhead | integer | megabytes | | | yes | Off-heap memory to be allocated per driver in cluster mode. Currently supported in YARN and Kubernetes. |
executorMemory | integer | megabytes | You should select your own default | You should select your own domain | yes | Amount of memory to use per executor. |
yarnExecutorMemoryOverhead | integer | megabytes | | | yes | Off-heap memory to be allocated per executor. Currently supported in YARN and Kubernetes. |
memoryOffHeapEnabled | categorical | | | | yes | If true, Spark will attempt to use off-heap memory for certain operations. |
memoryOffHeapSize | integer | megabytes | | | yes | The absolute amount of memory which can be used for off-heap allocation. |
reducerMaxSizeInFlight | integer | megabytes | | | yes | Maximum size of map outputs to fetch simultaneously from each reduce task in MB. |
shuffleFileBuffer | integer | kilobytes | | | yes | Size of the in-memory buffer for each shuffle file output stream in KB. |
shuffleCompress | categorical | | | | yes | Whether to compress map output files. |
shuffleServiceEnabled | categorical | | | | yes | Enables the external shuffle service. This service preserves the shuffle files written by executors so the executors can be safely removed. |
dynamicAllocationEnabled | categorical | | | | yes | Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. Requires spark.shuffle.service.enabled to be set. |
dynamicAllocationExecutorIdleTimeout | integer | seconds | | | yes | If dynamic allocation is enabled and an executor has been idle for more than this duration, the executor will be removed. |
dynamicAllocationInitialExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Initial number of executors to run if dynamic allocation is enabled. |
dynamicAllocationMinExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Lower bound for the number of executors if dynamic allocation is enabled. |
dynamicAllocationMaxExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Upper bound for the number of executors if dynamic allocation is enabled. |
sqlInMemoryColumnarStorageCompressed | categorical | | | | yes | When set to true Spark SQL will automatically select a compression codec for each column based on statistics of the data. |
sqlInMemoryColumnarStorageBatchSize | integer | records | | | yes | Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization and compression, but risk OOMs when caching data. |
sqlFilesMaxPartitionBytes | integer | bytes | | | yes | The maximum number of bytes to pack into a single partition when reading files. |
sqlFilesOpenCostInBytes | integer | bytes | | | yes | The estimated cost to open a file, measured by the number of bytes that could be scanned in the same time. This is used when putting multiple files into a partition. |
compressionLz4BlockSize | integer | bytes | | | yes | Block size in bytes used in LZ4 compression. |
serializer | categorical | | | | yes | Class to use for serializing objects that will be sent over the network or need to be cached in serialized form. |
kryoserializerBuffer | integer | bytes | | | yes | Initial size of Kryo's serialization buffer. Note that there will be one buffer per core on each worker. |
| The overall allocated memory should not exceed the specified limit |
| The overall allocated CPUs should not exceed the specified limit |
| The overall allocated memory should not exceed the specified limit |
| The overall allocated CPUs should not exceed the specified limit |
ElasticSearch NoSQL database version 6 |
Spark Application 2.2.0 |
Spark Application 2.3.0 |
Spark Application 2.4.0 |
The OpenJ9 optimization pack enables the optimization of Java applications running on the Eclipse OpenJ9 VM, formerly known as IBM J9. Through this optimization pack, Akamas is able to tackle the problem of performance of JVM-based applications from both the point of view of cost savings and quality of service.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Garbage collection
Heap
JIT
Similarly, the bundled metrics provide visibility on the following aspects of tuned applications:
Heap and memory utilization
Garbage Collection
Execution threads
The optimization pack supports the most widely used JVM versions.
Here’s the command to install the Eclipse OpenJ9 optimization pack using the Akamas CLI:
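The snippet below is a sketch of the installation command; the pack file name shown is an assumption and may differ in your release:

```shell
# Install the Eclipse OpenJ9 optimization pack from its definition file
# (the file name below is an assumption; use the one shipped with your release)
akamas install optimization-pack eclipse-openj9.json
```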
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
This page describes the Optimization Pack for Spark Application 2.4.0.
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
The overall resources allocated to the application should be constrained by a maximum and, sometimes, a minimum value:
the maximum value could be the sum of resources physically available in the cluster, or a lower limit to allow the concurrent execution of other applications
an optional minimum value can be useful to avoid configurations that allocate executors that are both undersized and too few
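As a concrete illustration, the two resource constraints can be checked mechanically; the helper below is a hypothetical sketch, and the numbers are examples rather than recommendations:

```python
# Sketch of the two study constraints for a Spark application
# (all values are hypothetical examples, not recommendations).

def within_limits(num_executors, executor_memory_mb, executor_cores,
                  memory_limit_mb, cpu_limit_cores):
    """Check that the overall allocated memory and CPUs stay within the limits."""
    total_memory = num_executors * executor_memory_mb
    total_cores = num_executors * executor_cores
    return total_memory <= memory_limit_mb and total_cores <= cpu_limit_cores

# 10 executors x 4096 MB x 2 cores against a 64 GB / 32-core budget
print(within_limits(10, 4096, 2, 65536, 32))  # True: 40960 MB and 20 cores fit
```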
This page describes the Optimization Pack for Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 6.
The following parameters require their ranges or default values to be updated according to the described rules:
Notice that the value nocompressedreferences for j9vm_compressedReferences can only be specified for JVMs compiled with the --with-noncompressedrefs flag. If this is not the case, compressed references cannot be actively disabled, meaning:

for Xmx <= 57GB it is useless to tune this parameter, since compressed references are enabled by default and cannot be explicitly disabled

for Xmx > 57GB, since compressed references are disabled by default (blank value), Akamas can try to enable them; this requires removing the nocompressedreferences value from the domain
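The two cases above can be made concrete with a small sketch; the 57 GB threshold comes from the text, while the helper itself is made up for illustration and is not an Akamas API:

```python
# Sketch of when tuning j9vm_compressedReferences makes sense,
# based on the 57 GB threshold described above (illustrative helper only).

GB = 1024 ** 3
COMPRESSED_REFS_LIMIT = 57 * GB  # above this Xmx, compressed references are off by default

def compressed_refs_domain(xmx_bytes, built_with_noncompressedrefs):
    """Return the values Akamas could explore for j9vm_compressedReferences."""
    if xmx_bytes <= COMPRESSED_REFS_LIMIT:
        # Compressed references are already on by default; disabling them
        # additionally requires a JVM built with --with-noncompressedrefs.
        if built_with_noncompressedrefs:
            return ["compressedreferences", "nocompressedreferences"]
        return []  # nothing useful to tune
    # Above the threshold they are off by default: Akamas can only try to enable them.
    return ["compressedreferences"]

print(compressed_refs_domain(8 * GB, False))   # [] -> useless to tune
print(compressed_refs_domain(64 * GB, False))  # ['compressedreferences']
```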
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
Notice that:

j9vm_newSpaceFixed is mutually exclusive with j9vm_minNewSpace and j9vm_maxNewSpace

j9vm_oldSpaceFixed is mutually exclusive with j9vm_minOldSpace and j9vm_maxOldSpace

the sum of j9vm_minNewSpace and j9vm_minOldSpace must be equal to j9vm_minHeapSize, so it is useless to tune all three together (the relationship among the maximum values is more complex)
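A configuration check for these rules might look like the following sketch (parameter names come from the tables above; the helper itself is hypothetical, since Akamas expresses these as study constraints rather than code):

```python
# Sketch of a validity check for the OpenJ9 heap-sizing constraints described above
# (illustrative only; not an Akamas API).

def valid_heap_config(params):
    """params: dict mapping parameter names to values; untuned parameters are absent."""
    def is_set(name):
        return name in params

    # j9vm_newSpaceFixed is mutually exclusive with j9vm_minNewSpace / j9vm_maxNewSpace
    if is_set("j9vm_newSpaceFixed") and (is_set("j9vm_minNewSpace") or is_set("j9vm_maxNewSpace")):
        return False
    # j9vm_oldSpaceFixed is mutually exclusive with j9vm_minOldSpace / j9vm_maxOldSpace
    if is_set("j9vm_oldSpaceFixed") and (is_set("j9vm_minOldSpace") or is_set("j9vm_maxOldSpace")):
        return False
    # min new + min old must equal the minimum heap size when all three are tuned
    if all(map(is_set, ("j9vm_minNewSpace", "j9vm_minOldSpace", "j9vm_minHeapSize"))):
        return params["j9vm_minNewSpace"] + params["j9vm_minOldSpace"] == params["j9vm_minHeapSize"]
    return True

print(valid_heap_config({"j9vm_newSpaceFixed": 256, "j9vm_minNewSpace": 128}))  # False
```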
This page describes the Optimization Pack for Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 8.
The following parameters require their ranges or default values to be updated according to the described rules:
Notice that the value nocompressedreferences for j9vm_compressedReferences can only be specified for JVMs compiled with the --with-noncompressedrefs flag. If this is not the case, compressed references cannot be actively disabled, meaning:

for Xmx <= 57GB it is useless to tune this parameter, since compressed references are enabled by default and cannot be explicitly disabled

for Xmx > 57GB, since compressed references are disabled by default (blank value), Akamas can try to enable them; this requires removing the nocompressedreferences value from the domain
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
Notice that:

j9vm_newSpaceFixed is mutually exclusive with j9vm_minNewSpace and j9vm_maxNewSpace

j9vm_oldSpaceFixed is mutually exclusive with j9vm_minOldSpace and j9vm_maxOldSpace

the sum of j9vm_minNewSpace and j9vm_minOldSpace must be equal to j9vm_minHeapSize, so it is useless to tune all three together (the relationship among the maximum values is more complex)
Component Type | Description |
---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Unit | Type | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Unit | Type | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Unit | Type | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Unit | Type | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Unit | Type | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Unit | Type | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Unit | Description |
---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Default value | Domain |
---|---|---|
Formula | Notes |
---|---|
Name | Unit | Description |
---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Default value | Domain |
---|---|---|
Formula | Notes |
---|---|
Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 6
Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 8
Eclipse OpenJ9 (formerly known as IBM J9) version 11
spark_application_duration | milliseconds | The duration of the Spark application |
spark_job_duration | milliseconds | The duration of the job |
spark_stage_duration | milliseconds | The duration of the stage |
spark_task_duration | milliseconds | The duration of the task |
spark_driver_rdd_blocks | blocks | The total number of persisted RDD blocks for the driver |
spark_driver_mem_used | bytes | The total amount of memory used by the driver |
spark_driver_disk_used | bytes | The total amount of disk used for RDDs by the driver |
spark_driver_cores | cores | The total number of concurrent tasks that can be run by the driver |
spark_driver_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by the driver |
spark_driver_total_tasks | tasks | The total number of tasks run by the driver |
spark_driver_total_duration | milliseconds | The total amount of time spent by the driver running tasks |
spark_driver_max_mem_used | bytes | The maximum amount of memory used by the driver |
spark_driver_total_jvm_gc_duration | milliseconds | The total amount of time spent by the driver's JVM doing garbage collection across all tasks |
spark_driver_total_shuffle_read | bytes | The total number of bytes read during a shuffle by the driver |
spark_driver_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by the driver |
spark_driver_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by the driver |
spark_driver_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by the driver |
spark_driver_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for the driver |
spark_driver_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for the driver |
spark_executor_max_count | executors | The maximum number of executors used for the application |
spark_executor_rdd_blocks | blocks | The total number of persisted RDD blocks for each executor |
spark_executor_mem_used | bytes | The total amount of memory used by each executor |
spark_executor_disk_used | bytes | The total amount of disk used for RDDs by each executor |
spark_executor_cores | cores | The number of cores used by each executor |
spark_executor_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by each executor |
spark_executor_total_tasks | tasks | The total number of tasks run by each executor |
spark_executor_total_duration | milliseconds | The total amount of time spent by each executor running tasks |
spark_executor_max_mem_used | bytes | The maximum amount of memory used by each executor |
spark_executor_total_jvm_gc_duration | milliseconds | The total amount of time spent by each executor's JVM doing garbage collection across all tasks |
spark_executor_total_shuffle_read | bytes | The total number of bytes read during a shuffle by each executor |
spark_executor_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by each executor |
spark_executor_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by each executor |
spark_executor_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by each executor |
spark_executor_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for each executor |
spark_executor_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for each executor |
spark_stage_shuffle_read_bytes | bytes | The total number of bytes read in shuffle operations by each stage |
spark_task_jvm_gc_duration | milliseconds | The total duration of JVM garbage collections for each task |
spark_task_peak_execution_memory | bytes | The sum of the peak sizes across internal data structures created for each task |
spark_task_result_size | bytes | The size of the result of the computation of each task |
spark_task_result_serialization_time | milliseconds | The time spent by each task serializing the computation result |
spark_task_shuffle_read_fetch_wait_time | milliseconds | The time spent by each task waiting for remote shuffle blocks |
spark_task_shuffle_read_local_blocks_fetched | blocks | The total number of local blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_local_bytes | bytes | The total number of bytes read in shuffle operations from local disk by each task |
spark_task_shuffle_read_remote_blocks_fetched | blocks | The total number of remote blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes | bytes | The total number of remote bytes read in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes_to_disk | bytes | The total number of remote bytes read to disk in shuffle operations by each task |
spark_task_shuffle_write_time | nanoseconds | The time spent by each task writing data on disk or on buffer caches during shuffle operations |
spark_task_executor_deserialize_time | nanoseconds | The time spent by the executor deserializing the task |
spark_task_executor_deserialize_cpu_time | nanoseconds | The CPU time spent by the executor deserializing the task |
spark_task_stage_shuffle_write_records
records
The total number of records written in shuffle operations broken down by task and stage
spark_task_stage_shuffle_write_bytes
bytes
The total number of bytes written in shuffle operations broken down by task and stage
spark_task_stage_shuffle_read_records
records
The total number of records read in shuffle operations broken down by task and stage
spark_task_stage_disk_bytes_spilled
bytes
The total number of bytes spilled on disk broken down by task and stage
spark_task_stage_memory_bytes_spilled
bytes
The total number of bytes spilled on memory broken down by task and stage
spark_task_stage_input_bytes_read
bytes
The total number of bytes read, broken down by task and stage
spark_task_stage_input_records_read
records
The total number of records read, broken down by task and stage
spark_task_stage_output_bytes_written
bytes
The total number of bytes written, broken down by task and stage
spark_task_stage_output_records_written
records
The total number of records written, broken down by task and stage
spark_task_stage_executor_run_time
nanoseconds
The time spent by each executor actually running tasks (including fetching shuffle data) broken down by task, stage and executor
spark_task_stage_executor_cpu_time
nanoseconds
The CPU time spent by each executor actually running each task (including fetching shuffle data) broken down by task and stage
driverCores
integer
cores
You should select your own default
You should select your own domain
yes
The number of CPU cores assigned to the driver in cluster deploy mode.
numExecutors
integer
executors
You should select your own default
You should select your own domain
yes
Number of executors to use. YARN only.
totalExecutorCores
integer
cores
You should select your own default
You should select your own domain
yes
Total number of cores for the application. Spark standalone and Mesos only.
executorCores
integer
cores
You should select your own default
You should select your own domain
yes
Number of CPU cores for an executor. Spark standalone and YARN only.
defaultParallelism
integer
partitions
You should select your own default
You should select your own domain
yes
Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by user.
broadcastBlockSize
integer
kilobytes
4096
256
→ 131072
yes
Size of each piece of a block for TorrentBroadcastFactory.
schedulerMode
categorical
FIFO
FIFO
, FAIR
yes
Define the scheduling strategy across jobs.
driverMemory
integer
megabytes
You should select your own default
You should select your own domain
yes
Amount of memory to use for the driver process.
yarnDriverMemoryOverhead
integer
megabytes
384
384
→ 65536
yes
Off-heap memory to be allocated per driver in cluster mode. Currently supported in YARN and Kubernetes.
executorMemory
integer
megabytes
You should select your own default
You should select your own domain
yes
Amount of memory to use per executor.
executorPySparkMemory
integer
megabytes
You should select your own default
You should select your own domain
yes
The amount of memory to be allocated to PySpark in each executor.
yarnExecutorMemoryOverhead
integer
megabytes
384
384
→ 65536
yes
Off-heap memory to be allocated per executor. Currently supported in YARN and Kubernetes.
memoryOffHeapEnabled
categorical
false
true
, false
yes
If true, Spark will attempt to use off-heap memory for certain operations.
memoryOffHeapSize
integer
megabytes
0
0
→ 16384
yes
The absolute amount of memory which can be used for off-heap allocation.
reducerMaxSizeInFlight
integer
megabytes
48
1
→ 1024
yes
Maximum size of map outputs to fetch simultaneously from each reduce task in MB.
shuffleFileBuffer
integer
kilobytes
32
1
→ 2048
yes
Size of the in-memory buffer for each shuffle file output stream in KB.
shuffleCompress
categorical
true
true
, false
yes
Whether to compress map output files.
shuffleServiceEnabled
categorical
true
true
, false
yes
Enables the external shuffle service. This service preserves the shuffle files written by executors so the executors can be safely removed.
dynamicAllocationEnabled
categorical
true
true
, false
yes
Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. Requires spark.shuffle.service.enabled to be set.
dynamicAllocationExecutorIdleTimeout
integer
seconds
60
1
→ 3600
yes
If dynamic allocation is enabled and an executor has been idle for more than this duration, the executor will be removed.
dynamicAllocationInitialExecutors
integer
executors
You should select your own default
You should select your own domain
yes
Initial number of executors to run if dynamic allocation is enabled.
dynamicAllocationMinExecutors
integer
executors
You should select your own default
You should select your own domain
yes
Lower bound for the number of executors if dynamic allocation is enabled.
dynamicAllocationMaxExecutors
integer
executors
You should select your own default
You should select your own domain
yes
Upper bound for the number of executors if dynamic allocation is enabled.
sqlInMemoryColumnarStorageCompressed
categorical
true
true
, false
yes
When set to true, Spark SQL will automatically select a compression codec for each column based on statistics of the data.
sqlInMemoryColumnarStorageBatchSize
integer
records
1000
1
→ 100000
yes
Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization and compression, but risk OOMs when caching data.
sqlFilesMaxPartitionBytes
integer
bytes
134217728
1024
→ 1073741824
yes
The maximum number of bytes to pack into a single partition when reading files.
sqlFilesOpenCostInBytes
integer
bytes
4194304
262144
→ 67108864
yes
The estimated cost to open a file, measured by the number of bytes that could be scanned in the same time. This is used when putting multiple files into a partition.
compressionLz4BlockSize
integer
kilobytes
32
8
→ 1024
yes
Block size used in LZ4 compression, in KB.
serializer
categorical
org.apache.spark.serializer.KryoSerializer
org.apache.spark.serializer.JavaSerializer
, org.apache.spark.serializer.KryoSerializer
yes
Class to use for serializing objects that will be sent over the network or need to be cached in serialized form.
kryoserializerBuffer
integer
kilobytes
64
8
→ 1024
yes
Initial size of Kryo's serialization buffer. Note that there will be one buffer per core on each worker.
driverMemory + executorMemory * numExecutors < MEMORY_CAP
The overall allocated memory should not exceed the specified limit
driverCores + executorCores * numExecutors < CPU_CAP
The overall allocated CPUs should not exceed the specified limit
driverMemory + executorMemory * numExecutors > MIN_MEMORY
The overall allocated memory should not fall below the specified minimum
driverCores + executorCores * numExecutors > MIN_CPUS
The overall allocated CPUs should not fall below the specified minimum
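The cap and floor formulas above can be expressed directly in a study definition. The sketch below is a hypothetical excerpt: the component name `spark` and the numeric bounds (64000 MB, 64 cores) are assumptions, to be replaced with your own component name and capacity limits.

```yaml
# Hypothetical study excerpt: "spark" is an assumed component name,
# and the numeric caps are example values, not recommendations.
parameterConstraints:
  - name: memory_cap
    formula: spark.driverMemory + spark.executorMemory * spark.numExecutors < 64000
  - name: cpu_cap
    formula: spark.driverCores + spark.executorCores * spark.numExecutors < 64
```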
cpu_used | CPUs | The total amount of CPUs used |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
go_heap_size | bytes | The largest size reached by the Go heap memory |
go_heap_used | bytes | The amount of heap memory used |
go_heap_util | percent | The utilization % of heap memory |
go_memory_used | bytes | The total amount of memory used by Go |
go_gc_time | percent | The % of wall clock time the Go runtime spent doing stop the world garbage collection activities |
go_gc_duration | seconds | The average duration of a stop the world Go garbage collection |
go_gc_count | collections/s | The total number of stop the world Go garbage collections that have occurred per second |
go_threads_current | threads | The total number of active Go threads |
go_goroutines_current | goroutines | The total number of active Goroutines |
go_gcTargetPercentage | integer |
|
| yes | Sets the GOGC variable which controls the aggressiveness of the garbage collector |
go_maxProcs | integer | threads |
|
| yes | Limits the number of operating system threads that can execute user-level code simultaneously |
go_memLimit | integer | megabytes | 100 |
| yes | Sets a soft memory limit for the runtime. Available since Go 1.19 |
jvm_heap_size | bytes | The size of the JVM heap memory |
jvm_heap_used | bytes | The amount of heap memory used |
jvm_heap_util | percent | The utilization % of heap memory |
jvm_memory_used | bytes | The total amount of memory used across all the JVM memory pools |
jvm_memory_used_details | bytes | The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space) |
jvm_memory_buffer_pool_used | bytes | The total amount of bytes used by buffers within the JVM buffer memory pool |
jvm_gc_time | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities |
jvm_gc_time_details | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew) |
jvm_gc_count | collections/s | The total number of stop the world JVM garbage collections that have occurred per second |
jvm_gc_count_details | collections/s | The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_gc_duration | seconds | The average duration of a stop the world JVM garbage collection |
jvm_gc_duration_details | seconds | The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_threads_current | threads | The total number of active threads within the JVM |
jvm_threads_deadlocked | threads | The total number of deadlocked threads within the JVM |
jvm_compilation_time | milliseconds | The total time spent by the JVM JIT compiler compiling bytecode |
j9vm_minHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Minimum heap size (in megabytes) |
j9vm_maxHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Maximum heap size (in megabytes) |
j9vm_minFreeHeap | real | percent |
|
| yes | Specify the minimum % free heap required after global GC |
j9vm_maxFreeHeap | real | percent |
|
| yes | Specify the maximum % free heap required after global GC |
j9vm_gcPolicy | categorical |
|
| yes | GC policy to use |
j9vm_gcThreads | integer | threads | You should select your own default value. |
| yes | Number of threads the garbage collector uses for parallel operations |
j9vm_scvTenureAge | integer |
|
| yes | Set the initial tenuring threshold for generational concurrent GC policy |
j9vm_scvAdaptiveTenureAge | categorical | blank | blank, | yes | Enable the adaptive tenure age for generational concurrent GC policy |
j9vm_newSpaceFixed | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The fixed size of the new area when using the gencon GC policy. Must not be set alongside min or max |
j9vm_minNewSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The initial size of the new area when using the gencon GC policy |
j9vm_maxNewSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum size of the new area when using the gencon GC policy |
j9vm_oldSpaceFixed | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The fixed size of the old area when using the gencon GC policy. Must not be set alongside min or max |
j9vm_minOldSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The initial size of the old area when using the gencon GC policy |
j9vm_maxOldSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum size of the old area when using the gencon GC policy |
j9vm_concurrentScavenge | categorical |
|
| yes | Support pause-less garbage collection mode with gencon |
j9vm_gcPartialCompact | categorical |
|
| yes | Enable partial compaction |
j9vm_concurrentMeter | categorical |
|
| yes | Determine which area is monitored by the concurrent mark |
j9vm_concurrentBackground | integer |
|
| yes | The number of background threads assisting the mutator threads in concurrent mark |
j9vm_concurrentSlack | integer | megabytes |
| You should select your own domain. | yes | The target size of free heap space for concurrent collectors |
j9vm_concurrentLevel | integer | percent |
|
| yes | The ratio between the amount of heap allocated and the amount of heap marked |
j9vm_gcCompact | categorical | blank | blank, | yes | Enables full compaction on all garbage collections (system and global) |
j9vm_minGcTime | real | percent |
|
| yes | The minimum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values |
j9vm_maxGcTime | real | percent |
|
| yes | The maximum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values |
j9vm_loa | categorical |
|
| yes | Enable the allocation of the large area object during garbage collection |
j9vm_loa_initial | real |
|
| yes | The initial portion of the tenure area allocated to the large area object |
j9vm_loa_minimum | real |
|
| yes | The minimum portion of the tenure area allocated to the large area object |
j9vm_loa_maximum | real |
|
| yes | The maximum portion of the tenure area allocated to the large area object |
j9vm_jitOptlevel | ordinal |
|
| yes | Force the JIT compiler to compile all methods at a specific optimization level |
j9vm_codeCacheTotal | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Maximum size limit in MB for the JIT code cache |
j9vm_jit_count | integer |
|
| yes | The number of times a method is called before it is compiled |
j9vm_compressedReferences | categorical | blank | blank, | yes | Enable/disable the use of compressed references |
j9vm_aggressiveOpts | categorical | blank | blank, | yes | Enable the use of aggressive performance optimization features, which are expected to become default in upcoming releases |
j9vm_virtualized | categorical | blank | blank, | yes | Optimize the VM for virtualized environment, reducing CPU usage when idle |
j9vm_shareclasses | categorical | blank | blank, | yes | Enable class sharing |
j9vm_quickstart | categorical | blank | blank, | yes | Run JIT with only a subset of optimizations, improving the performance of short-running applications |
j9vm_minimizeUserCpu | categorical | blank | blank, | yes | Minimizes user-mode CPU usage in thread synchronization where possible |
j9vm_minNewSpace | 25% of j9vm_minHeapSize | must not exceed |
j9vm_maxNewSpace | 25% of j9vm_maxHeapSize | must not exceed |
j9vm_minOldSpace | 75% of j9vm_minHeapSize | must not exceed |
j9vm_maxOldSpace | same as j9vm_maxHeapSize | must not exceed |
j9vm_gcThreads | number of CPUs - 1, up to a maximum of 64 | capped to default, no benefit in exceeding that value |
j9vm_compressedReferences | enabled for j9vm_maxHeapSize<= 57 GB |
jvm.j9vm_minHeapSize < jvm.j9vm_maxHeapSize |
jvm.j9vm_minNewSpace < jvm.j9vm_maxNewSpace && jvm.j9vm_minNewSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxNewSpace < jvm.j9vm_maxHeapSize |
jvm.j9vm_minOldSpace < jvm.j9vm_maxOldSpace && jvm.j9vm_minOldSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxOldSpace < jvm.j9vm_maxHeapSize |
jvm.j9vm_loa_minimum <= jvm.j9vm_loa_initial && jvm.j9vm_loa_initial <= jvm.j9vm_loa_maximum |
jvm.j9vm_minFreeHeap + 0.05 < jvm.j9vm_maxFreeHeap |
jvm.j9vm_minGcTime < jvm.j9vm_maxGcTime |
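As a sketch, two of the constraints listed above could appear in a study definition as follows; the component name `jvm` matches the prefix used in the formulas, while the constraint names are illustrative.

```yaml
# Hypothetical study excerpt pairing two of the constraints above.
parameterConstraints:
  - name: heap_bounds
    formula: jvm.j9vm_minHeapSize < jvm.j9vm_maxHeapSize
  - name: free_heap_gap
    formula: jvm.j9vm_minFreeHeap + 0.05 < jvm.j9vm_maxFreeHeap
```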
jvm_heap_size | bytes | The size of the JVM heap memory |
jvm_heap_used | bytes | The amount of heap memory used |
jvm_heap_util | percent | The utilization % of heap memory |
jvm_memory_used | bytes | The total amount of memory used across all the JVM memory pools |
jvm_memory_used_details | bytes | The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space) |
jvm_memory_buffer_pool_used | bytes | The total amount of bytes used by buffers within the JVM buffer memory pool |
jvm_gc_time | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities |
jvm_gc_time_details | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew) |
jvm_gc_count | collections/s | The total number of stop the world JVM garbage collections that have occurred per second |
jvm_gc_count_details | collections/s | The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_gc_duration | seconds | The average duration of a stop the world JVM garbage collection |
jvm_gc_duration_details | seconds | The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_threads_current | threads | The total number of active threads within the JVM |
jvm_threads_deadlocked | threads | The total number of deadlocked threads within the JVM |
jvm_compilation_time | milliseconds | The total time spent by the JVM JIT compiler compiling bytecode |
j9vm_minHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Minimum heap size (in megabytes) |
j9vm_maxHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Maximum heap size (in megabytes) |
j9vm_minFreeHeap | real | percent |
|
| yes | Specify the minimum % free heap required after global GC |
j9vm_maxFreeHeap | real | percent |
|
| yes | Specify the maximum % free heap required after global GC |
j9vm_gcPolicy | categorical |
|
| yes | GC policy to use |
j9vm_gcThreads | integer | threads | You should select your own default value. |
| yes | Number of threads the garbage collector uses for parallel operations |
j9vm_scvTenureAge | integer |
|
| yes | Set the initial tenuring threshold for generational concurrent GC policy |
j9vm_scvAdaptiveTenureAge | categorical | blank | blank, | yes | Enable the adaptive tenure age for generational concurrent GC policy |
j9vm_newSpaceFixed | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The fixed size of the new area when using the gencon GC policy. Must not be set alongside min or max |
j9vm_minNewSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The initial size of the new area when using the gencon GC policy |
j9vm_maxNewSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum size of the new area when using the gencon GC policy |
j9vm_oldSpaceFixed | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The fixed size of the old area when using the gencon GC policy. Must not be set alongside min or max |
j9vm_minOldSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The initial size of the old area when using the gencon GC policy |
j9vm_maxOldSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum size of the old area when using the gencon GC policy |
j9vm_concurrentScavenge | categorical |
|
| yes | Support pause-less garbage collection mode with gencon |
j9vm_gcPartialCompact | categorical |
|
| yes | Enable partial compaction |
j9vm_concurrentMeter | categorical |
|
| yes | Determine which area is monitored by the concurrent mark |
j9vm_concurrentBackground | integer |
|
| yes | The number of background threads assisting the mutator threads in concurrent mark |
j9vm_concurrentSlack | integer | megabytes |
| You should select your own domain. | yes | The target size of free heap space for concurrent collectors |
j9vm_concurrentLevel | integer | percent |
|
| yes | The ratio between the amount of heap allocated and the amount of heap marked |
j9vm_gcCompact | categorical | blank | blank, | yes | Enables full compaction on all garbage collections (system and global) |
j9vm_minGcTime | real | percent |
|
| yes | The minimum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values |
j9vm_maxGcTime | real | percent |
|
| yes | The maximum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values |
j9vm_loa | categorical |
|
| yes | Enable the allocation of the large area object during garbage collection |
j9vm_loa_initial | real |
|
| yes | The initial portion of the tenure area allocated to the large area object |
j9vm_loa_minimum | real |
|
| yes | The minimum portion of the tenure area allocated to the large area object |
j9vm_loa_maximum | real |
|
| yes | The maximum portion of the tenure area allocated to the large area object |
j9vm_jitOptlevel | ordinal |
|
| yes | Force the JIT compiler to compile all methods at a specific optimization level |
j9vm_compilationThreads | integer | threads | You should select your own default value. |
| yes | Number of JIT threads |
j9vm_codeCacheTotal | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Maximum size limit in MB for the JIT code cache |
j9vm_jit_count | integer |
|
| yes | The number of times a method is called before it is compiled |
j9vm_lockReservation | categorical | blank | blank, | no | Enables an optimization that presumes a monitor is owned by the thread that last acquired it |
j9vm_compressedReferences | categorical | blank | blank, | yes | Enable/disable the use of compressed references |
j9vm_aggressiveOpts | categorical | blank | blank, | yes | Enable the use of aggressive performance optimization features, which are expected to become default in upcoming releases |
j9vm_virtualized | categorical | blank | blank, | yes | Optimize the VM for virtualized environment, reducing CPU usage when idle |
j9vm_shareclasses | categorical | blank | blank, | yes | Enable class sharing |
j9vm_quickstart | categorical | blank | blank, | yes | Run JIT with only a subset of optimizations, improving the performance of short-running applications |
j9vm_minimizeUserCpu | categorical | blank | blank, | yes | Minimizes user-mode CPU usage in thread synchronization where possible |
j9vm_minNewSpace | 25% of j9vm_minHeapSize | must not exceed |
j9vm_maxNewSpace | 25% of j9vm_maxHeapSize | must not exceed |
j9vm_minOldSpace | 75% of j9vm_minHeapSize | must not exceed |
j9vm_maxOldSpace | same as j9vm_maxHeapSize | must not exceed |
j9vm_gcThreads | number of CPUs - 1, up to a maximum of 64 | capped to default, no benefit in exceeding that value |
j9vm_compressedReferences | enabled for j9vm_maxHeapSize<= 57 GB |
jvm.j9vm_minHeapSize < jvm.j9vm_maxHeapSize |
jvm.j9vm_minNewSpace < jvm.j9vm_maxNewSpace && jvm.j9vm_minNewSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxNewSpace < jvm.j9vm_maxHeapSize |
jvm.j9vm_minOldSpace < jvm.j9vm_maxOldSpace && jvm.j9vm_minOldSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxOldSpace < jvm.j9vm_maxHeapSize |
jvm.j9vm_loa_minimum <= jvm.j9vm_loa_initial && jvm.j9vm_loa_initial <= jvm.j9vm_loa_maximum |
jvm.j9vm_minFreeHeap + 0.05 < jvm.j9vm_maxFreeHeap |
jvm.j9vm_minGcTime < jvm.j9vm_maxGcTime |
k8s_pod_cpu_used
millicores
The CPUs used by all containers of the pod
k8s_pod_memory_used
bytes
The total amount of memory used as sum of all containers in a pod
k8s_pod_cpu_request
millicores
The CPUs requested for the pod as sum of all container cpu requests
k8s_pod_cpu_limit
millicores
The CPUs allowed for the pod as sum of all container cpu limits
k8s_pod_memory_request
bytes
The memory requested for the pod as sum of all container memory requests
k8s_pod_memory_limit
bytes
The memory limit for the pod as sum of all container memory limits
k8s_pod_restarts
events
The number of container restarts in a pod
Docker container
container_cpu_util
percent
The percentage of CPU used by the container with respect to the limit
container_cpu_used
CPUs
The number of CPUs (or fraction of CPUs) used by the container per second
container_cpu_throttle_time
percent
The amount of time the container's CPU has been throttled
container_cpu_limit
CPUs
The number of CPUs (or fraction of CPUs) allowed for the container
container_mem_util_nocache
percent
Percentage of memory used with respect to the limit, excluding the file system cache
container_mem_util
percent
Percentage of working set memory used with respect to the limit
container_mem_used
bytes
The total amount of memory used by the container. Memory used includes all types of memory, including file system cache
container_mem_limit
bytes
Memory limit for the container
container_mem_working_set
bytes
Current working set in bytes
container_mem_limit_hits
hits/s
Number of times memory usage hits memory limit per second
limits_cpu
real
CPUs
0.7
0.1
→ 100.0
Limits on the amount of CPU resources usage in CPU units
requests_cpu
real
CPUs
0.7
0.1
→ 100.0
Amount of CPU resources requests in CPU units
limits_memory
integer
megabytes
128
64
→ 64000
Limits on the amount of memory resources usage
requests_memory
integer
megabytes
128
64
→ 64000
Amount of memory resources requests
container_cpu_used
millicores
The CPUs used by the container
container_cpu_used_max
millicores
The maximum CPUs used by the container among all container replicas
container_cpu_util
percent
The percentage of CPUs used with respect to the limit
container_cpu_util_max
percent
The maximum percentage of CPUs used with respect to the limit among all container replicas
container_cpu_throttle_time
percent
The amount of time the CPU has been throttled
container_cpu_throttled_millicores
millicores
The CPUs throttling per container in millicores
container_cpu_request
millicores
The CPUs requested for the container
container_cpu_limit
millicores
The CPUs limit for the container
container_memory_used
bytes
The total amount of memory used by the container
container_memory_used_max
bytes
The maximum memory used by the container among all container replicas
container_memory_util
percent
The percentage of memory used with respect to the limit
container_memory_util_max
percent
The maximum percentage of memory used with respect to the limit among all container replicas
container_memory_working_set
bytes
The working set usage in bytes
container_memory_resident_set
bytes
The resident set usage in bytes
container_memory_cache
bytes
The memory cache usage in bytes
container_memory_request
bytes
The memory requested for the container
container_memory_limit
bytes
The memory limit for the container
cpu_request
integer
millicores
You should select your own default value.
You should select your own domain.
yes
Amount of CPU resources requests in CPU units (millicores)
cpu_limit
integer
millicores
You should select your own default value.
You should select your own domain.
yes
Limits on the amount of CPU resources usage in CPU units (millicores)
memory_request
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Amount of memory resources requests in megabytes
memory_limit
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Limits on the amount of memory resources usage in megabytes
component_name.cpu_request <= component_name.cpu_limit
component_name.memory_request <= component_name.memory_limit
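In a study definition these two relations might look like the following sketch, where `myapp` is a hypothetical component name standing in for `component_name`:

```yaml
# Hypothetical study excerpt: replace "myapp" with your component name.
parameterConstraints:
  - name: cpu_request_within_limit
    formula: myapp.cpu_request <= myapp.cpu_limit
  - name: memory_request_within_limit
    formula: myapp.memory_request <= myapp.memory_limit
```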
The Oracle Database optimization pack allows monitoring of an Oracle instance and exploring the configuration space of its initialization parameters. In this way, an Akamas study can achieve goals such as maximizing the throughput of an Oracle-backed application or minimizing its resource consumption, thus reducing costs.
The main tuning areas covered by the parameters provided in this optimization pack are:
SGA memory management
PGA memory management
SQL plan optimization
Approximate query execution
The optimization pack also includes metrics to monitor:
Memory allocation and utilization
Sessions
Query executions
Wait events
These component types model different Oracle Database releases, deployed either on-premises or in the cloud. They provide the initialization parameters that the workflow can apply through the OracleConfigurator operator, and a set of metrics to monitor the instance performance.
Note that for Oracle Database instances hosted on Amazon RDS, the workflow can apply a subset of the initialization parameters by interacting with the RDS API.
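As an illustration, a workflow task applying parameters through the OracleConfigurator operator might look like the sketch below; the workflow, task, and component names (`oracledb`) are assumptions.

```yaml
# Hypothetical workflow excerpt: names are placeholders.
name: configure-oracle
tasks:
  - name: apply-initialization-parameters
    operator: OracleConfigurator
    arguments:
      component: oracledb
```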
Here’s the command to install the Oracle Database optimization pack using the Akamas CLI:
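A typical invocation looks like the sketch below; the pack file name is an assumption, so use the artifact shipped with your Akamas release.

```shell
# Hypothetical pack file name; adjust to your release.
akamas install optimization-pack oracle-database.json
```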
The optimization pack for Oracle Database 12c.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters.
The optimization pack for Oracle Database 11g on Amazon RDS.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters.
The optimization pack for Oracle Database 19c.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters.
| Component Type | Description |
|---|---|
| Oracle Database 12c | The optimization pack for Oracle Database 12c. |
| Oracle Database 18c | The optimization pack for Oracle Database 18c. |
| Oracle Database 19c | The optimization pack for Oracle Database 19c. |
| Oracle Database 11g on Amazon RDS | The optimization pack for Oracle Database 11g on Amazon RDS. |
| Oracle Database 12c on Amazon RDS | The optimization pack for Oracle Database 12c on Amazon RDS. |
| Parameter | Type | Unit | Default | Domain | Restart | Description |
|---|---|---|---|---|---|---|
| was_tcp_maxppenconnections_tcp2 | integer | connections | 20000 | 1 → 128000 | yes | Maximum number of connections that are available for a server to use (TCP_2) |
| was_tcp_listenBacklog_tcp2 | integer | connections | 511 | 1 → 1024 | yes | Maximum number of outstanding connection requests that the operating system can buffer while it waits for the application server to accept the connections (TCP_2) |
| was_tcp_maxppenconnections_tcp4 | integer | connections | 20000 | 1 → 128000 | yes | Maximum number of connections that are available for a server to use (TCP_4) |
| was_tcp_listenBacklog_tcp4 | integer | connections | 511 | 1 → 1024 | yes | Maximum number of outstanding connection requests that the operating system can buffer while it waits for the application server to accept the connections (TCP_4) |
| was_http_maximumPersistentRequests_http2 | integer | requests | 10000 | 1 → 20000 | yes | Maximum number of persistent requests that are allowed on a single HTTP connection (HTTP_2) |
| was_http_maximumPersistentRequests_http4 | integer | requests | 10000 | 1 → 20000 | yes | Maximum number of persistent requests that are allowed on a single HTTP connection (HTTP_4) |
| was_threadpools_minimumSize_webcontainer | integer | threads | 50 | 1 → 100 | yes | Minimum number of threads to allow in the pool (Web Container) |
| was_threadpools_maximumsize_webcontainer | integer | threads | 50 | 1 → 500 | yes | Maximum number of threads to maintain in the thread pool (Web Container) |
| was_threadpools_minimumSize_default | integer | threads | 20 | 1 → 100 | yes | Minimum number of threads to allow in the pool (default) |
| was_threadpools_maximumsize_default | integer | threads | 20 | 1 → 500 | yes | Maximum number of threads to maintain in the default thread pool (default) |
| was_threadpools_minimumSize_threadpoolmanager_orb | integer | threads | 10 | 1 → 100 | yes | Minimum number of threads to allow in the pool (ThreadPoolManager ORB) |
| was_threadpools_maximumsize_threadpoolmanager_orb | integer | threads | 50 | 1 → 500 | yes | Maximum number of threads to maintain in the thread pool (ThreadPoolManager ORB) |
| was_threadpools_minimumSize_objectrequestbroker_orb | integer | threads | 10 | 1 → 100 | yes | Minimum number of threads to allow in the pool (ObjectRequestBroker ORB) |
| was_threadpools_maximumsize_objectrequestbroker_orb | integer | threads | 50 | 1 → 500 | yes | Maximum number of threads to maintain in the thread pool (ObjectRequestBroker ORB) |
| was_threadpools_minimumSize_custom_TCPChannel_DCS | integer | threads | 20 | 1 → 100 | yes | Minimum number of threads to allow in the pool (TCPChannel.DCS) |
| was_threadpools_maximumsize_custom_TCPChannel_DCS | integer | threads | 100 | 1 → 500 | yes | Maximum number of threads to maintain in the thread pool (TCPChannel.DCS) |
| was_auth_cacheTimeout | integer | seconds | 600 | 0 → 7200 | yes | The time period after which an authenticated credential in the cache expires |
| was_webserverplugin_serverIOtimeout | integer | seconds | 900 | -1 → 1800 | yes | How long the plug-in should wait for a response from the application |
| was_Server_provisionComponents | categorical | | false | true, false | yes | Select this property if you want the server components started as they are needed by an application that is running on this server |
| was_ObjectRequestBroker_noLocalCopies | categorical | | false | true, false | yes | Specifies how the ORB passes parameters. If enabled, the ORB passes parameters by reference instead of by value, to avoid making an object copy. If disabled, a copy of the parameter is passed rather than the parameter object itself. |
| was_PMIService_statisticSet | categorical | | basic | none, basic, extended, all | yes | When the PMI service is enabled, the monitoring of individual components can be enabled or disabled dynamically. PMI provides four predefined statistic sets that can be used to enable a set of statistics. |
| Metric | Unit | Description |
|---|---|---|
| oracle_sga_total_size | bytes | The current memory size of the SGA. |
| oracle_sga_free_size | bytes | The amount of SGA currently available. |
| oracle_sga_max_size | bytes | The configured maximum memory size for the SGA. |
| oracle_pga_target_size | bytes | The configured target memory size for the PGA. |
| oracle_redo_buffers_size | bytes | The memory size of the redo buffers. |
| oracle_default_buffer_cache_size | bytes | The memory size for the DEFAULT buffer cache component. |
| oracle_default_2k_buffer_cache_size | bytes | The memory size for the DEFAULT 2k buffer cache component. |
| oracle_default_4k_buffer_cache_size | bytes | The memory size for the DEFAULT 4k buffer cache component. |
| oracle_default_8k_buffer_cache_size | bytes | The memory size for the DEFAULT 8k buffer cache component. |
| oracle_default_16k_buffer_cache_size | bytes | The memory size for the DEFAULT 16k buffer cache component. |
| oracle_default_32k_buffer_cache_size | bytes | The memory size for the DEFAULT 32k buffer cache component. |
| oracle_keep_buffer_cache_size | bytes | The memory size for the KEEP buffer cache component. |
| oracle_recycle_buffer_cache_size | bytes | The memory size for the RECYCLE buffer cache component. |
| oracle_asm_buffer_cache_size | bytes | The memory size for the ASM buffer cache component. |
| oracle_shared_io_pool_size | bytes | The memory size for the IO pool component. |
| oracle_java_pool_size | bytes | The memory size for the Java pool component. |
| oracle_large_pool_size | bytes | The memory size for the large pool component. |
| oracle_shared_pool_size | bytes | The memory size for the shared pool component. |
| oracle_streams_pool_size | bytes | The memory size for the streams pool component. |
| oracle_buffer_cache_hit_ratio | percent | How often a requested block has been found in the buffer cache without requiring disk access. |
| oracle_wait_class_commit | percent | The percentage of time spent waiting on the events of class 'Commit'. |
| oracle_wait_class_concurrency | percent | The percentage of time spent waiting on the events of class 'Concurrency'. |
| oracle_wait_class_system_io | percent | The percentage of time spent waiting on the events of class 'System I/O'. |
| oracle_wait_class_user_io | percent | The percentage of time spent waiting on the events of class 'User I/O'. |
| oracle_wait_class_other | percent | The percentage of time spent waiting on the events of class 'Other'. |
| oracle_wait_class_scheduler | percent | The percentage of time spent waiting on the events of class 'Scheduler'. |
| oracle_wait_class_idle | percent | The percentage of time spent waiting on the events of class 'Idle'. |
| oracle_wait_class_application | percent | The percentage of time spent waiting on the events of class 'Application'. |
| oracle_wait_class_network | percent | The percentage of time spent waiting on the events of class 'Network'. |
| oracle_wait_class_configuration | percent | The percentage of time spent waiting on the events of class 'Configuration'. |
| oracle_wait_event_log_file_sync | percent | The percentage of time spent waiting on the 'log file sync' event. |
| oracle_wait_event_log_file_parallel_write | percent | The percentage of time spent waiting on the 'log file parallel write' event. |
| oracle_wait_event_log_file_sequential_read | percent | The percentage of time spent waiting on the 'log file sequential read' event. |
| oracle_wait_event_enq_tx_contention | percent | The percentage of time spent waiting on the 'enq: TX - contention' event. |
| oracle_wait_event_enq_tx_row_lock_contention | percent | The percentage of time spent waiting on the 'enq: TX - row lock contention' event. |
| oracle_wait_event_latch_row_cache_objects | percent | The percentage of time spent waiting on the 'latch: row cache objects' event. |
| oracle_wait_event_latch_shared_pool | percent | The percentage of time spent waiting on the 'latch: shared pool' event. |
| oracle_wait_event_resmgr_cpu_quantum | percent | The percentage of time spent waiting on the 'resmgr:cpu quantum' event. |
| oracle_wait_event_sql_net_message_from_client | percent | The percentage of time spent waiting on the 'SQL*Net message from client' event. |
| oracle_wait_event_rdbms_ipc_message | percent | The percentage of time spent waiting on the 'rdbms ipc message' event. |
| oracle_wait_event_db_file_sequential_read | percent | The percentage of time spent waiting on the 'db file sequential read' event. |
| oracle_wait_event_log_file_switch_checkpoint_incomplete | percent | The percentage of time spent waiting on the 'log file switch (checkpoint incomplete)' event. |
| oracle_wait_event_row_cache_lock | percent | The percentage of time spent waiting on the 'row cache lock' event. |
| oracle_wait_event_buffer_busy_waits | percent | The percentage of time spent waiting on the 'buffer busy waits' event. |
| oracle_wait_event_db_file_async_io_submit | percent | The percentage of time spent waiting on the 'db file async I/O submit' event. |
| oracle_sessions_active_user | sessions | The number of active user sessions. |
| oracle_sessions_inactive_user | sessions | The number of inactive user sessions. |
| oracle_sessions_active_background | sessions | The number of active background sessions. |
| oracle_sessions_inactive_background | sessions | The number of inactive background sessions. |
| oracle_calls_execute_count | calls | Total number of calls (user and recursive) that executed SQL statements. |
| oracle_tuned_undoretention | seconds | The amount of time for which undo will not be recycled from the time it was committed. |
| oracle_max_query_length | seconds | The length of the longest query executed. |
| oracle_transaction_count | transactions | The total number of transactions executed within the period. |
| oracle_sso_errors | errors/s | The number of ORA-01555 (snapshot too old) errors raised per second. |
| oracle_redo_log_space_requests | requests | The number of times a user process waits for space in the redo log file, usually caused by checkpointing or log switching. |
| Parameter | Unit | Default value | Domain | Restart | Description |
|---|---|---|---|---|---|
| bitmap_merge_area_size | bytes | 1048576 | 0 → 2147483647 | yes | The amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index. |
| create_bitmap_area_size | bytes | 8388608 | 0 → 1073741824 | yes | Size of the create bitmap buffer for bitmap indexes. Relevant only for systems containing bitmap indexes. |
| db_block_size | bytes | 8192 | 2048 → 32768 | yes | The size of Oracle database blocks. The value of this parameter can be changed only when the database is first created. |
| db_cache_size | megabytes | 48 | 0 → 2097152 | no | The size of the DEFAULT buffer pool for standard block size buffers. The value must be at least 4 MB times the number of CPUs. |
| db_2k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 2K buffers. |
| db_4k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 4K buffers. |
| db_8k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 8K buffers. |
| db_16k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 16K buffers. |
| db_32k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 32K buffers. |
| hash_area_size | bytes | 131072 | 0 → 2147483647 | yes | Maximum size of the in-memory hash work area. |
| java_pool_size | megabytes | 24 | 0 → 65536 | no | The size of the Java pool. If SGA_TARGET is set, this value represents the minimum value for the memory pool. |
| large_pool_size | megabytes | 0 | 0 → 65536 | no | The size of the large pool allocation heap. |
| lock_sga | | FALSE | TRUE, FALSE | yes | Lock the entire SGA in physical memory. |
| memory_max_target | megabytes | 8192 | 152 → 2097152 | yes | The maximum value to which a DBA can set the MEMORY_TARGET initialization parameter. |
| memory_target | megabytes | 6144 | 0 → 2097152 | no | Oracle systemwide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed. |
| pga_aggregate_limit | megabytes | 2048 | 0 → 4194304 | no | The limit on the aggregate PGA memory consumed by the instance. |
| pga_aggregate_target | megabytes | 1024 | 0 → 4194304 | no | The target aggregate PGA memory available to all server processes attached to the instance. |
| result_cache_max_result | percent | 5 | 0 → 100 | no | Maximum result size as a percentage of the cache size. |
| result_cache_max_size | megabytes | 0 | 0 → 65536 | no | The maximum amount of SGA memory that can be used by the Result Cache. |
| result_cache_remote_expiration | minutes | 0 | 0 → 10000 | no | The expiration in minutes of remote objects. High values may cause stale answers. |
| sga_max_size | megabytes | 8192 | 0 → 2097152 | yes | The maximum size of the SGA for the lifetime of the instance. |
| sga_min_size | megabytes | 2920 | 0 → 1048576 | no | The guaranteed SGA size for a pluggable database (PDB). When SGA_MIN_SIZE is set for a PDB, it guarantees the specified SGA size for the PDB. |
| sga_target | megabytes | 5840 | 0 → 2097152 | no | The total size of all SGA components; acts as the minimum value for the size of the SGA. |
| shared_pool_reserved_size | megabytes | 128 | 1 → 2048 | yes | The shared pool space reserved for large contiguous requests for shared pool memory. |
| shared_pool_size | megabytes | 0 | 0 → 65536 | no | The size of the shared pool. |
| sort_area_retained_size | kilobytes | 0 | 0 → 2147483647 | no | The maximum amount of the User Global Area memory retained after a sort run completes. |
| sort_area_size | kilobytes | 64 | 0 → 8388608 | no | The maximum amount of memory Oracle will use for a sort. If more space is required, temporary segments on disk are used. |
| streams_pool_size | megabytes | 0 | 0 → 65536 | no | Size of the streams pool. |
| use_large_pages | | TRUE | ONLY, FALSE, TRUE | yes | Enable the use of large pages for SGA memory. |
| commit_logging | | BATCH | IMMEDIATE, BATCH | no | Control how redo is batched by the Log Writer. |
| log_archive_max_processes | processes | 4 | 1 → 30 | no | Maximum number of active ARCH processes. |
| log_buffer | megabytes | 16 | 2 → 8192 | yes | The amount of memory that Oracle uses when buffering redo entries to a redo log file. |
| log_checkpoint_interval | blocks | 0 | 0 → 2147483647 | no | The maximum number of log file blocks between incremental checkpoints. |
| log_checkpoint_timeout | seconds | 1800 | 0 → 2147483647 | no | Maximum time interval between checkpoints. Guarantees that no buffer remains dirty for more than the specified time. |
| undo_retention | seconds | 900 | 0 → 2147483647 | no | Low threshold value of undo retention. |
| undo_management | | AUTO | MANUAL, AUTO | yes | The instance runs in automatic undo management (SMU) mode if set to AUTO, otherwise in rollback undo (RBU) mode. |
| temp_undo_enabled | | FALSE | TRUE, FALSE | no | Split the undo log into temporary (temporary objects) and permanent (persistent objects) undo logs. |
| optimizer_adaptive_plans | | FALSE | TRUE, FALSE | no | Controls adaptive plans: execution plans built with alternative choices based on collected statistics. |
| optimizer_adaptive_statistics | | FALSE | TRUE, FALSE | no | Enable the optimizer to use adaptive statistics for complex queries. |
| optimizer_capture_sql_plan_baselines | | FALSE | TRUE, FALSE | no | Automatic capture of SQL plan baselines for repeatable statements. |
| optimizer_dynamic_sampling | | 2 | 0 → 11 | no | Controls both when the database gathers dynamic statistics, and the size of the sample that the optimizer uses to gather the statistics. |
| optimizer_features_enable | | 11.2.0.4 | 11.2.0.4.1, 11.2.0.4, 11.2.0.3, 11.2.0.2, 11.2.0.1, 11.1.0.7, 11.1.0.6, 10.2.0.5, 10.2.0.4, 10.2.0.3, 10.2.0.2, 10.2.0.1, 10.1.0.5, 10.1.0.4, 10.1.0.3, 10.1.0, 9.2.0.8, 9.2.0, 9.0.1, 9.0.0, 8.1.7, 8.1.6, 8.1.5, 8.1.4, 8.1.3, 8.1.0, 8.0.7, 8.0.6, 8.0.5, 8.0.4, 8.0.3, 8.0.0 | no | Enable a series of optimizer features based on an Oracle release number. |
| optimizer_index_caching | | 0 | 0 → 100 | no | Adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators. |
| optimizer_index_cost_adj | | 100 | 1 → 10000 | no | Tune optimizer behavior for access path selection to be more or less index friendly. |
| optimizer_inmemory_aware | | TRUE | TRUE, FALSE | no | Enables all of the optimizer cost model enhancements for in-memory. |
| optimizer_mode | | ALL_ROWS | ALL_ROWS, FIRST_ROWS, FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, FIRST_ROWS_1000 | no | The default behavior for choosing an optimization approach for the instance. |
| optimizer_use_invisible_indexes | | FALSE | TRUE, FALSE | no | Enables or disables the use of invisible indexes. |
| optimizer_use_pending_statistics | | FALSE | TRUE, FALSE | no | Control whether the optimizer uses pending statistics when compiling SQL statements. |
| optimizer_use_sql_plan_baselines | | TRUE | TRUE, FALSE | no | Enables the use of SQL plan baselines stored in SQL Management Base. |
| approx_for_aggregation | | FALSE | TRUE, FALSE | no | Replace exact query processing for aggregation queries with approximate query processing. |
| approx_for_count_distinct | | FALSE | TRUE, FALSE | no | Automatically replace COUNT (DISTINCT expr) queries with APPROX_COUNT_DISTINCT queries. |
| approx_for_percentile | | NONE | NONE, PERCENTILE_CONT, PERCENTILE_CONT DETERMINISTIC, PERCENTILE_DISC, PERCENTILE_DISC DETERMINISTIC, ALL, ALL DETERMINISTIC | no | Converts exact percentile functions to their approximate percentile function counterparts. |
| parallel_max_servers | processes | 0 | 0 → 32767 | no | The maximum number of parallel execution processes and parallel recovery processes for an instance. |
| parallel_min_servers | processes | 0 | 0 → 2000 | no | The minimum number of execution processes kept alive to service parallel statements. |
| parallel_threads_per_cpu | | 2 | 1 → 128 | no | Number of parallel execution threads per CPU. |
| cpu_count | cpus | 0 | 0 → 512 | no | Number of CPUs available for the Oracle instance to use. |
| db_files | files | 200 | 100 → 20000 | yes | The maximum number of database files that can be opened for this database. This may be subject to OS constraints. |
| open_cursors | cursors | 50 | 0 → 65535 | no | The maximum number of open cursors (handles to private SQL areas) a session can have at once. |
| open_links | connections | 4 | 0 → 32768 | yes | The maximum number of concurrent open connections to remote databases in one session. |
| open_links_per_instance | connections | 4 | 0 → 2147483647 | yes | Maximum number of migratable open connections globally for each database instance. |
| processes | processes | 800 | 6 → 20000 | yes | The maximum number of OS user processes that can simultaneously connect to Oracle. |
| read_only_open_delayed | | FALSE | TRUE, FALSE | yes | Delay opening of read-only files until first access. |
| sessions | sessions | 1262 | 1 → 65536 | no | The maximum number of sessions that can be created in the system; effectively the maximum number of concurrent users in the system. |
| transactions | transactions | 1388 | 4 → 2147483647 | yes | The maximum number of concurrent transactions. |
| audit_sys_operations | | FALSE | TRUE, FALSE | yes | Enable SYS auditing. |
| audit_trail | | NONE | NONE, OS, DB, "DB, EXTENDED", XML, "XML, EXTENDED" | yes | Configure system auditing. |
| gcs_server_processes | processes | 0 | 0 → 100 | yes | The number of background GCS server processes to serve the inter-instance traffic among Oracle RAC instances. |
| java_jit_enabled | | TRUE | TRUE, FALSE | no | Enables the Just-in-Time (JIT) compiler for the Oracle Java Virtual Machine. |
| fast_start_mttr_target | seconds | 0 | 0 → 3600 | no | Number of seconds the database should take to perform crash recovery of a single instance. This parameter impacts the time between checkpoints. |
| recyclebin | | ON | ON, OFF | no | Allows recovering dropped tables. |
| statistics_level | | TYPICAL | BASIC, TYPICAL, ALL | no | Level of collection for database and operating system statistics. |
| transactions_per_rollback_segment | | 5 | 1 → 10000 | yes | Expected number of active transactions per rollback segment. |
| filesystemio_options | | asynch | none, setall, directIO, asynch | yes | Specifies I/O operations for file system files. |
| Parameter | Default value | Domain |
|---|---|---|
| db_cache_size | MAX(48MB, 4MB * cpu_num) | |
| java_pool_size | 24MB if SGA_TARGET is not set; 0 if SGA_TARGET is set, meaning the lower bound for the pool is automatically determined | |
| shared_pool_reserved_size | 5% of shared_pool_size | Upper bound can't exceed half the size of shared_pool_size |
| shared_pool_size | 0 if sga_target is set, 128MB otherwise | |
| pga_aggregate_target | MAX(10MB, 0.2 * sga_target) | |
| pga_aggregate_limit | MEMORY_MAX_TARGET if MEMORY_TARGET is explicit, or 2 * PGA_AGGREGATE_TARGET if PGA_AGGREGATE_TARGET is explicit, or 0.9 * ({MEMORY_AVAILABLE} - SGA); at least MAX(2GB, 3MB * db.processes) | |
| hash_area_size | 2 * sort_area_size | |

| Parameter | Default value | Domain |
|---|---|---|
| cpu_count | Should match the available CPUs; 0 to let the Oracle engine automatically determine the value | Must not exceed the available CPUs |
| gcs_server_processes | 0 if cluster_database=false; 1 for 1-3 CPUs, or if ASM; 2 for 4-15 CPUs; 2+lower(CPUs/32) for 16+ CPUs | |
| parallel_min_servers | CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2 | |
| parallel_max_servers | PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5 | |
| sessions | 1.1 * processes + 5 | Must be at least equal to the default value |
| transactions | 1.1 * sessions | |
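The derived defaults above are simple arithmetic; as a quick sanity check, here is a small sketch (function names are illustrative, values are in the same units as the tables) that reproduces a few of them:

```python
def default_sessions(processes: int) -> int:
    # sessions defaults to 1.1 * processes + 5, and must be at least this value
    return int(1.1 * processes + 5)

def default_pga_aggregate_target_mb(sga_target_mb: int) -> int:
    # pga_aggregate_target defaults to MAX(10MB, 0.2 * sga_target)
    return max(10, int(0.2 * sga_target_mb))

def default_parallel_max_servers(cpu_count: int, threads_per_cpu: int,
                                 concurrent_parallel_users: int) -> int:
    # PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5
    return threads_per_cpu * cpu_count * concurrent_parallel_users * 5

# With the pack's own defaults (processes = 800, sga_target = 5840 MB):
print(default_sessions(800))                  # → 885
print(default_pga_aggregate_target_mb(5840))  # → 1168
```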
| Formula | Notes |
|---|---|
| db.memory_target <= db.memory_max_target && db.memory_max_target < {MEMORY_AVAILABLE} | Add when tuning automatic memory management |
| db.sga_max_size + db.pga_aggregate_limit <= db.memory_max_target | Add when tuning SGA and PGA |
| db.sga_target + db.pga_aggregate_target <= db.memory_target | Add when tuning SGA and PGA |
| db.sga_target <= db.sga_max_size | Add when tuning SGA |
| db.db_cache_size + db.java_pool_size + db.large_pool_size + db.log_buffer + db.shared_pool_size + db.streams_pool_size < db.sga_max_size | Add when tuning SGA areas |
| db.pga_aggregate_target <= db.pga_aggregate_limit | Add when tuning PGA |
| db.shared_pool_reserved_size <= 0.5 * db.shared_pool_size | |
| db.sort_area_retained_size <= db.sort_area_size | |
| db.sessions < db.transactions | |
| db.parallel_min_servers < db.parallel_max_servers | |
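When defining a study, relationships like these are expressed as constraints over the component's parameters. A minimal sketch, assuming the Oracle component is named `db` and that the study schema accepts a `parameterConstraints` list of named formulas (both are assumptions for illustration):

```yaml
# Sketch of a study fragment; the exact schema keys are assumptions.
parameterConstraints:
  - name: sga_fits_in_max
    formula: db.sga_target <= db.sga_max_size
  - name: pga_target_within_limit
    formula: db.pga_aggregate_target <= db.pga_aggregate_limit
```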
| Metric | Unit | Description |
|---|---|---|
| oracle_sga_total_size | bytes | The current memory size of the SGA. |
| oracle_sga_free_size | bytes | The amount of SGA currently available. |
| oracle_sga_max_size | bytes | The configured maximum memory size for the SGA. |
| oracle_pga_target_size | bytes | The configured target memory size for the PGA. |
| oracle_redo_buffers_size | bytes | The memory size of the redo buffers. |
| oracle_default_buffer_cache_size | bytes | The memory size for the DEFAULT buffer cache component. |
| oracle_default_2k_buffer_cache_size | bytes | The memory size for the DEFAULT 2k buffer cache component. |
| oracle_default_4k_buffer_cache_size | bytes | The memory size for the DEFAULT 4k buffer cache component. |
| oracle_default_8k_buffer_cache_size | bytes | The memory size for the DEFAULT 8k buffer cache component. |
| oracle_default_16k_buffer_cache_size | bytes | The memory size for the DEFAULT 16k buffer cache component. |
| oracle_default_32k_buffer_cache_size | bytes | The memory size for the DEFAULT 32k buffer cache component. |
| oracle_keep_buffer_cache_size | bytes | The memory size for the KEEP buffer cache component. |
| oracle_recycle_buffer_cache_size | bytes | The memory size for the RECYCLE buffer cache component. |
| oracle_asm_buffer_cache_size | bytes | The memory size for the ASM buffer cache component. |
| oracle_shared_io_pool_size | bytes | The memory size for the IO pool component. |
| oracle_java_pool_size | bytes | The memory size for the Java pool component. |
| oracle_large_pool_size | bytes | The memory size for the large pool component. |
| oracle_shared_pool_size | bytes | The memory size for the shared pool component. |
| oracle_streams_pool_size | bytes | The memory size for the streams pool component. |
| oracle_buffer_cache_hit_ratio | percent | How often a requested block has been found in the buffer cache without requiring disk access. |
| oracle_wait_class_commit | percent | The percentage of time spent waiting on the events of class 'Commit'. |
| oracle_wait_class_concurrency | percent | The percentage of time spent waiting on the events of class 'Concurrency'. |
| oracle_wait_class_system_io | percent | The percentage of time spent waiting on the events of class 'System I/O'. |
| oracle_wait_class_user_io | percent | The percentage of time spent waiting on the events of class 'User I/O'. |
| oracle_wait_class_other | percent | The percentage of time spent waiting on the events of class 'Other'. |
| oracle_wait_class_scheduler | percent | The percentage of time spent waiting on the events of class 'Scheduler'. |
| oracle_wait_class_idle | percent | The percentage of time spent waiting on the events of class 'Idle'. |
| oracle_wait_class_application | percent | The percentage of time spent waiting on the events of class 'Application'. |
| oracle_wait_class_network | percent | The percentage of time spent waiting on the events of class 'Network'. |
| oracle_wait_class_configuration | percent | The percentage of time spent waiting on the events of class 'Configuration'. |
| oracle_wait_event_log_file_sync | percent | The percentage of time spent waiting on the 'log file sync' event. |
| oracle_wait_event_log_file_parallel_write | percent | The percentage of time spent waiting on the 'log file parallel write' event. |
| oracle_wait_event_log_file_sequential_read | percent | The percentage of time spent waiting on the 'log file sequential read' event. |
| oracle_wait_event_enq_tx_contention | percent | The percentage of time spent waiting on the 'enq: TX - contention' event. |
| oracle_wait_event_enq_tx_row_lock_contention | percent | The percentage of time spent waiting on the 'enq: TX - row lock contention' event. |
| oracle_wait_event_latch_row_cache_objects | percent | The percentage of time spent waiting on the 'latch: row cache objects' event. |
| oracle_wait_event_latch_shared_pool | percent | The percentage of time spent waiting on the 'latch: shared pool' event. |
| oracle_wait_event_resmgr_cpu_quantum | percent | The percentage of time spent waiting on the 'resmgr:cpu quantum' event. |
| oracle_wait_event_sql_net_message_from_client | percent | The percentage of time spent waiting on the 'SQL*Net message from client' event. |
| oracle_wait_event_rdbms_ipc_message | percent | The percentage of time spent waiting on the 'rdbms ipc message' event. |
| oracle_wait_event_db_file_sequential_read | percent | The percentage of time spent waiting on the 'db file sequential read' event. |
| oracle_wait_event_log_file_switch_checkpoint_incomplete | percent | The percentage of time spent waiting on the 'log file switch (checkpoint incomplete)' event. |
| oracle_wait_event_row_cache_lock | percent | The percentage of time spent waiting on the 'row cache lock' event. |
| oracle_wait_event_buffer_busy_waits | percent | The percentage of time spent waiting on the 'buffer busy waits' event. |
| oracle_wait_event_db_file_async_io_submit | percent | The percentage of time spent waiting on the 'db file async I/O submit' event. |
| oracle_sessions_active_user | sessions | The number of active user sessions. |
| oracle_sessions_inactive_user | sessions | The number of inactive user sessions. |
| oracle_sessions_active_background | sessions | The number of active background sessions. |
| oracle_sessions_inactive_background | sessions | The number of inactive background sessions. |
| oracle_calls_execute_count | calls | Total number of calls (user and recursive) that executed SQL statements. |
| oracle_tuned_undoretention | seconds | The amount of time for which undo will not be recycled from the time it was committed. |
| oracle_max_query_length | seconds | The length of the longest query executed. |
| oracle_transaction_count | transactions | The total number of transactions executed within the period. |
| oracle_sso_errors | errors/s | The number of ORA-01555 (snapshot too old) errors raised per second. |
| oracle_redo_log_space_requests | requests | The number of times a user process waits for space in the redo log file, usually caused by checkpointing or log switching. |
bitmap_merge_area_size
kilobytes
1024
0
→ 2097152
yes
The amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index.
create_bitmap_area_size
megabytes
8192
0
→ 2097152
yes
Size of create bitmap buffer for bitmap index. Relevant only for systems containing bitmap indexes.
db_cache_size
megabytes
48
0
→ 2097152
no
The size of the DEFAULT buffer pool for standard block size buffers. The value must be at least 4M * cpu number.
hash_area_size
kilobytes
128
0
→ 2097151
yes
Maximum size of in-memory hash work area maximum amount of memory.
java_pool_size
megabytes
24
0
→ 16384
no
The size of the Java pool. If SGA_TARGET is set, this value represents the minimum value for the memory pool.
large_pool_size
megabytes
0
0
→ 65536
no
The size of large pool allocation heap.
memory_max_target
megabytes
8192
152
→ 2097152
yes
The maximum value to which a DBA can set the MEMORY_TARGET initialization parameter.
memory_target
megabytes
6864
0
→ 2097152
no
Oracle systemwide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.
olap_page_pool_size
bytes
0
0
→ 2147483647
no
Size of the olap page pool.
pga_aggregate_limit
megabytes
2048
0
→ 2097152
no
The limit on the aggregate PGA memory consumed by the instance.
pga_aggregate_target
megabytes
1024
0
→ 2097152
no
The target aggregate PGA memory available to all server processes attached to the instance.
pre_page_sga
FALSE
TRUE
, FALSE
yes
Read the entire SGA into memory at instance startup.
result_cache_max_result
percent
5
0
→ 100
no
Maximum result size as a percent of the cache size.
result_cache_max_size
megabytes
0
0
→ 65536
no
The maximum amount of SGA memory that can be used by the Result Cache.
result_cache_mode
MANUAL
MANUAL
, FORCE
no
Specifies when a ResultCache operator is spliced into a query's execution plan.
result_cache_remote_expiration
minutes
0
0
→ 10000
no
The expiration in minutes of remote objects. High values may cause stale answers.
sga_max_size
megabytes
8192
0
→ 2097152
yes
The maximum size of the SGA for the lifetime of the instance.
sga_min_size
megabytes
2920
0
→ 1048576
no
The guaranteed SGA size for a pluggable database (PDB); when set for a PDB, the specified amount of SGA memory is guaranteed to it.
sga_target
megabytes
5840
0
→ 2097152
no
The total size of all SGA components; acts as the minimum value for the size of the SGA.
shared_pool_reserved_size
megabytes
128
1
→ 2048
yes
The shared pool space reserved for large contiguous requests for shared pool memory.
shared_pool_size
megabytes
0
0
→ 65536
no
The size of the shared pool.
sort_area_retained_size
kilobytes
0
0
→ 2097151
no
The maximum amount of the User Global Area memory retained after a sort run completes.
sort_area_size
kilobytes
64
0
→ 2097151
no
The maximum amount of memory Oracle will use for a sort. If more space is required, temporary segments on disk are used.
streams_pool_size
megabytes
0
0
→ 2097152
no
Size of the streams pool.
use_large_pages
TRUE
ONLY
, FALSE
, TRUE
yes
Enable the use of large pages for SGA memory.
workarea_size_policy
AUTO
MANUAL
, AUTO
no
Policy used to size SQL working areas (MANUAL/AUTO).
commit_logging
BATCH
IMMEDIATE
, BATCH
no
Control how redo is batched by Log Writer.
log_archive_max_processes
processes
4
1
→ 30
no
Maximum number of active ARCH processes.
log_buffer
megabytes
16
2
→ 256
yes
The amount of memory that Oracle uses when buffering redo entries to a redo log file.
log_checkpoint_interval
blocks
0
0
→ 2147483647
no
The maximum number of log file blocks between incremental checkpoints.
log_checkpoint_timeout
seconds
1800
0
→ 2147483647
no
Maximum time interval between checkpoints. Guarantees that no buffer remains dirty for more than the specified time.
db_flashback_retention_target
minutes
1440
30
→ 2147483647
no
Maximum Flashback Database log retention time.
undo_retention
seconds
900
0
→ 2147483647
no
Low threshold value of undo retention.
optimizer_capture_sql_plan_baselines
FALSE
TRUE
, FALSE
no
Automatic capture of SQL plan baselines for repeatable statements.
optimizer_dynamic_sampling
2
0
→ 11
no
Controls both when the database gathers dynamic statistics, and the size of the sample that the optimizer uses to gather the statistics.
optimizer_features_enable
11.2.0.4
11.2.0.4.1
, 11.2.0.4
, 11.2.0.3
, 11.2.0.2
, 11.2.0.1
, 11.1.0.7
, 11.1.0.6
, 10.2.0.5
, 10.2.0.4
, 10.2.0.3
, 10.2.0.2
, 10.2.0.1
, 10.1.0.5
, 10.1.0.4
, 10.1.0.3
, 10.1.0
, 9.2.0.8
, 9.2.0
, 9.0.1
, 9.0.0
, 8.1.7
, 8.1.6
, 8.1.5
, 8.1.4
, 8.1.3
, 8.1.0
, 8.0.7
, 8.0.6
, 8.0.5
, 8.0.4
, 8.0.3
, 8.0.0
no
Enable a series of optimizer features based on an Oracle release number.
optimizer_index_caching
0
0
→ 100
no
Adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.
optimizer_index_cost_adj
100
1
→ 10000
no
Tune optimizer behavior for access path selection to be more or less index friendly.
optimizer_mode
ALL_ROWS
ALL_ROWS
, FIRST_ROWS
, FIRST_ROWS_1
, FIRST_ROWS_10
, FIRST_ROWS_100
, FIRST_ROWS_1000
no
The default behavior for choosing an optimization approach for the instance.
optimizer_secure_view_merging
TRUE
TRUE
, FALSE
no
Enables security checks when the optimizer uses view merging.
optimizer_use_invisible_indexes
FALSE
TRUE
, FALSE
no
Enables or disables the use of invisible indexes.
optimizer_use_pending_statistics
FALSE
TRUE
, FALSE
no
Control whether the optimizer uses pending statistics when compiling SQL statements.
optimizer_use_sql_plan_baselines
TRUE
TRUE
, FALSE
no
Enables the use of SQL plan baselines stored in SQL Management Base.
parallel_degree_policy
MANUAL
MANUAL
, LIMITED
, AUTO
no
Policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO).
parallel_execution_message_size
16384
2148
→ 32768
yes
Message buffer size for parallel execution.
parallel_force_local
FALSE
TRUE
, FALSE
no
Force single instance execution.
parallel_max_servers
processes
0
0
→ 3600
no
The maximum number of parallel execution processes and parallel recovery processes for an instance.
parallel_min_servers
processes
0
0
→ 2000
no
The minimum number of execution processes kept alive to service parallel statements.
parallel_min_percent
percent
0
0
→ 100
yes
The minimum percentage of parallel execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution.
circuits
circuits
10
0
→ 3000
no
The total number of virtual circuits that are available for inbound and outbound network sessions.
cpu_count
cpus
0
0
→ 512
no
Number of CPUs available for the Oracle instance to use.
cursor_bind_capture_destination
MEMORY+DISK
OFF
, MEMORY
, MEMORY+DISK
no
Allowed destination for captured bind variables.
cursor_sharing
EXACT
FORCE
, EXACT
, SIMILAR
no
Cursor sharing mode.
cursor_space_for_time
FALSE
TRUE
, FALSE
yes
Use more memory in order to get faster execution.
db_files
files
200
200
→ 20000
yes
The maximum number of database files that can be opened for this database. This may be subject to OS constraints.
open_cursors
cursors
300
0
→ 65535
no
The maximum number of open cursors (handles to private SQL areas) a session can have at once.
open_links
connections
4
0
→ 255
yes
The maximum number of concurrent open connections to remote databases in one session.
open_links_per_instance
connections
4
0
→ 2147483647
yes
Maximum number of migratable open connections globally for each database instance.
processes
processes
100
80
→ 20000
yes
The maximum number of OS user processes that can simultaneously connect to Oracle.
serial_reuse
DISABLE
DISABLE
, ALL
, SELECT
, DML
, PLSQL
, FORCE
yes
Types of cursors that make use of the serial-reusable memory feature.
session_cached_cursors
50
0
→ 65535
no
Number of session cursors to cache.
session_max_open_files
10
1
→ 50
yes
Maximum number of open files allowed per session.
sessions
sessions
1262
100
→ 65532
no
The maximum number of sessions that can be created in the system, effectively the maximum number of concurrent users in the system.
transactions
transactions
1388
4
→ 2147483647
yes
The maximum number of concurrent transactions.
aq_tm_processes
1
0
→ 40
no
Number of AQ Time Managers to start.
audit_sys_operations
FALSE
TRUE
, FALSE
yes
Enables SYS auditing.
audit_trail
NONE
NONE
, OS
, DB
, TRUE
, FALSE
, DB_EXTENDED
, XML
, EXTENDED
yes
Configure system auditing.
client_result_cache_lag
milliseconds
3000
0
→ 60000
yes
Maximum time before checking the database for changes related to the queries cached on the client.
client_result_cache_size
kilobytes
0
0
→ 2147483647
yes
The maximum size of the client per-process result set cache.
db_block_checking
MEDIUM
FALSE
, OFF
, LOW
, MEDIUM
, TRUE
, FULL
no
Header checking and data and index block checking.
db_block_checksum
TYPICAL
OFF
, FALSE
, TYPICAL
, TRUE
, FULL
no
Store checksum in db blocks and check during reads.
db_file_multiblock_read_count
128
0
→ 1024
no
Number of database blocks to read with each I/O.
db_keep_cache_size
megabytes
0
0
→ 2097152
no
Size of KEEP buffer pool for standard block size buffers.
db_lost_write_protect
NONE
NONE
, TYPICAL
, FULL
no
Enable lost write detection.
db_recovery_file_dest_size
megabytes
1024
1
→ 16777216
no
Database recovery files size limit.
db_recycle_cache_size
megabytes
0
0
→ 2097152
no
Size of RECYCLE buffer pool for standard block size buffers.
db_writer_processes
1
1
→ 36
yes
Number of background database writer processes to start.
ddl_lock_timeout
0
0
→ 1000000
no
Timeout to restrict the time that ddls wait for dml lock.
deferred_segment_creation
TRUE
TRUE
, FALSE
no
Defer segment creation to first insert.
distributed_lock_timeout
seconds
60
1
→ 2147483647
yes
Number of seconds a distributed transaction waits for a lock.
dml_locks
5552
0
→ 2000000
yes
The maximum number of DML locks - one for each table modified in a transaction.
enable_goldengate_replication
FALSE
TRUE
, FALSE
no
Enable GoldenGate replication.
fast_start_parallel_rollback
LOW
FALSE
, LOW
, HIGH
no
Max number of parallel recovery slaves that may be used.
hs_autoregister
TRUE
TRUE
, FALSE
no
Enable automatic server DD updates in HS agent self-registration.
java_jit_enabled
TRUE
TRUE
, FALSE
no
Enables the Just-in-Time (JIT) compiler for the Oracle Java Virtual Machine.
java_max_sessionspace_size
bytes
0
0
→ 2147483647
yes
Max allowed size in bytes of a Java sessionspace.
java_soft_sessionspace_limit
bytes
0
0
→ 2147483647
yes
Warning limit on size in bytes of a Java sessionspace.
job_queue_processes
1000
0
→ 1000
no
Maximum number of job queue slave processes.
object_cache_max_size_percent
percent
10
0
→ 100
no
Percentage of maximum size over optimal of the user sessions object cache.
object_cache_optimal_size
kilobytes
100
0
→ 67108864
no
Optimal size of the user sessions object cache.
plscope_settings
IDENTIFIERS:NONE
IDENTIFIERS:NONE
, IDENTIFIERS:ALL
no
Controls the compile-time collection, cross-referencing, and storage of PL/SQL source code identifier data.
plsql_code_type
INTERPRETED
INTERPRETED
, NATIVE
no
PL/SQL code-type.
plsql_optimize_level
2
0
→ 3
no
PL/SQL optimize level.
query_rewrite_enabled
TRUE
FALSE
, TRUE
, FORCE
no
Allow rewrite of queries using materialized views if enabled.
query_rewrite_integrity
ENFORCED
ENFORCED
, TRUSTED
, STALE_TOLERATED
no
Perform rewrite using materialized views with desired integrity.
remote_dependencies_mode
TIMESTAMP
TIMESTAMP
, SIGNATURE
no
Remote-procedure-call dependencies mode parameter.
replication_dependency_tracking
TRUE
TRUE
, FALSE
yes
Tracking dependency for Replication parallel propagation.
resource_limit
FALSE
TRUE
, FALSE
no
Enforce resource limits in database profiles.
resourcemanager_cpu_allocation
2
0
→ 20
no
ResourceManager CPU allocation.
resumable_timeout
seconds
0
0
→ 2147483647
no
Enables resumable statements and specifies resumable timeout at the system level.
sql_trace
FALSE
TRUE
, FALSE
no
Enable SQL trace.
star_transformation_enabled
FALSE
FALSE
, TRUE
, TEMP_DISABLE
no
Enable the use of star transformation.
timed_os_statistics
0
0
→ 1000000
no
The interval at which Oracle collects operating system statistics.
timed_statistics
TRUE
TRUE
, FALSE
no
Maintain internal timing statistics.
trace_enabled
TRUE
TRUE
, FALSE
no
Enable in-memory tracing.
transactions_per_rollback_segment
5
1
→ 10000
yes
Expected number of active transactions per rollback segment.
Parameter | Default value | Domain |
---|---|---|
db_cache_size | MAX(48MB, 4MB * cpu_num) | |
java_pool_size | 24MB if SGA_TARGET is not set; 0 if SGA_TARGET is set, meaning the lower bound for the pool is automatically determined | |
shared_pool_reserved_size | 5% of shared_pool_size | upper bound can’t exceed half the size of shared_pool_size |
shared_pool_size | 0 if sga_target is set, 128MB otherwise | |
pga_aggregate_target | MAX(10MB, 0.2 * sga_target) | |
pga_aggregate_limit | MEMORY_MAX_TARGET if MEMORY_TARGET is explicit, or 2 * PGA_AGGREGATE_TARGET if PGA_AGGREGATE_TARGET is explicit, or 0.9 * ({MEMORY_AVAILABLE} - SGA); at least MAX(2GB, 3MB * db.processes) | |
hash_area_size | 2 * sort_area_size | |

Parameter | Default value | Domain |
---|---|---|
cpu_count | should match the available CPUs; 0 lets the Oracle engine automatically determine the value | must not exceed the available CPUs |
gcs_server_processes | 0 if cluster_database=false; 1 for 1-3 CPUs, or if ASM; 2 for 4-15 CPUs; 2 + lower(CPUs/32) for 16+ CPUs | |
parallel_min_servers | CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2 | |
parallel_max_servers | PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5 | |
sessions | 1.5 * processes + 22 | must be at least equal to the default value |
transactions | 1.1 * sessions | |
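As a sanity check, the derivation rules above can be sketched in Python. This is an illustrative sketch only: the function name, the truncation to integers, and the sample inputs are our assumptions, not part of the optimization pack.

```python
# Illustrative sketch of the derived-default rules listed above.
# Assumptions: integer truncation for sessions/transactions, sizes in MB.

def derived_defaults(processes, cpu_count,
                     parallel_threads_per_cpu=2, concurrent_parallel_users=1):
    """Compute derived defaults following the rules in the tables above."""
    sessions = int(1.5 * processes + 22)        # sessions = 1.5 * processes + 22
    return {
        "sessions": sessions,
        "transactions": int(1.1 * sessions),    # transactions = 1.1 * sessions
        "parallel_min_servers": cpu_count * parallel_threads_per_cpu * 2,
        "parallel_max_servers": (parallel_threads_per_cpu * cpu_count
                                 * concurrent_parallel_users * 5),
        # pga_aggregate_limit must be at least MAX(2GB, 3MB * processes)
        "pga_aggregate_limit_floor_mb": max(2048, 3 * processes),
    }

print(derived_defaults(processes=100, cpu_count=8))
```

With processes=100 and cpu_count=8 this yields sessions=172 and transactions=189, following the 1.5x and 1.1x chain.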
Constraint | Notes |
---|---|
db.memory_target <= db.memory_max_target && db.memory_max_target < {MEMORY_AVAILABLE} | Add when tuning automatic memory management |
db.sga_max_size + db.pga_aggregate_limit <= db.memory_max_target | Add when tuning SGA and PGA |
db.sga_target + db.pga_aggregate_target <= db.memory_target | Add when tuning SGA and PGA |
db.sga_target <= db.sga_max_size | Add when tuning SGA |
db.db_cache_size + db.java_pool_size + db.large_pool_size + db.log_buffer + db.shared_pool_size + db.streams_pool_size < db.sga_max_size | Add when tuning SGA areas |
db.pga_aggregate_target <= db.pga_aggregate_limit | Add when tuning PGA |
db.shared_pool_reserved_size <= 0.5 * db.shared_pool_size | |
db.sort_area_retained_size <= db.sort_area_size | |
db.sessions < db.transactions | |
db.parallel_min_servers < db.parallel_max_servers | |
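To make these constraints concrete, here is a minimal validation sketch in Python. The dict-based representation, the function name, and the sample values (loosely based on the defaults in this section) are illustrative assumptions, not an Akamas API.

```python
# Minimal sketch: check a candidate configuration against a subset of the
# study constraints listed above. All memory values are in megabytes; the
# dict keys mirror the db.* parameter names (representation is illustrative).

def violated_constraints(cfg):
    checks = {
        "db.sga_target <= db.sga_max_size":
            cfg["sga_target"] <= cfg["sga_max_size"],
        "db.sga_target + db.pga_aggregate_target <= db.memory_target":
            cfg["sga_target"] + cfg["pga_aggregate_target"] <= cfg["memory_target"],
        "db.pga_aggregate_target <= db.pga_aggregate_limit":
            cfg["pga_aggregate_target"] <= cfg["pga_aggregate_limit"],
        "db.shared_pool_reserved_size <= 0.5 * db.shared_pool_size":
            cfg["shared_pool_reserved_size"] <= 0.5 * cfg["shared_pool_size"],
        "db.sessions < db.transactions":
            cfg["sessions"] < cfg["transactions"],
    }
    return [name for name, ok in checks.items() if not ok]

example = {"sga_target": 5840, "sga_max_size": 8192, "memory_target": 6864,
           "pga_aggregate_target": 1024, "pga_aggregate_limit": 2048,
           "shared_pool_reserved_size": 128, "shared_pool_size": 512,
           "sessions": 1262, "transactions": 1388}
print(violated_constraints(example))  # no violations for these values
```

Embedding such a check before applying a configuration helps catch combinations the optimizer should never try, which is exactly what declaring these constraints in the study achieves.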
oracle_sga_total_size
bytes
The current memory size of the SGA.
oracle_sga_free_size
bytes
The amount of SGA currently available.
oracle_sga_max_size
bytes
The configured maximum memory size for the SGA.
oracle_pga_target_size
bytes
The configured target memory size for the PGA.
oracle_redo_buffers_size
bytes
The memory size of the redo buffers.
oracle_default_buffer_cache_size
bytes
The memory size for the DEFAULT buffer cache component.
oracle_default_2k_buffer_cache_size
bytes
The memory size for the DEFAULT 2k buffer cache component.
oracle_default_4k_buffer_cache_size
bytes
The memory size for the DEFAULT 4k buffer cache component.
oracle_default_8k_buffer_cache_size
bytes
The memory size for the DEFAULT 8k buffer cache component.
oracle_default_16k_buffer_cache_size
bytes
The memory size for the DEFAULT 16k buffer cache component.
oracle_default_32k_buffer_cache_size
bytes
The memory size for the DEFAULT 32k buffer cache component.
oracle_keep_buffer_cache_size
bytes
The memory size for the KEEP buffer cache component.
oracle_recycle_buffer_cache_size
bytes
The memory size for the RECYCLE buffer cache component.
oracle_asm_buffer_cache_size
bytes
The memory size for the ASM buffer cache component.
oracle_shared_io_pool_size
bytes
The memory size for the IO pool component.
oracle_java_pool_size
bytes
The memory size for the Java pool component.
oracle_large_pool_size
bytes
The memory size for the large pool component.
oracle_shared_pool_size
bytes
The memory size for the shared pool component.
oracle_streams_pool_size
bytes
The memory size for the streams pool component.
oracle_buffer_cache_hit_ratio
percent
How often a requested block has been found in the buffer cache without requiring disk access.
oracle_wait_class_commit
percent
The percentage of time spent waiting on the events of class 'Commit'.
oracle_wait_class_concurrency
percent
The percentage of time spent waiting on the events of class 'Concurrency'.
oracle_wait_class_system_io
percent
The percentage of time spent waiting on the events of class 'System I/O'.
oracle_wait_class_user_io
percent
The percentage of time spent waiting on the events of class 'User I/O'.
oracle_wait_class_other
percent
The percentage of time spent waiting on the events of class 'Other'.
oracle_wait_class_scheduler
percent
The percentage of time spent waiting on the events of class 'Scheduler'.
oracle_wait_class_idle
percent
The percentage of time spent waiting on the events of class 'Idle'.
oracle_wait_class_application
percent
The percentage of time spent waiting on the events of class 'Application'.
oracle_wait_class_network
percent
The percentage of time spent waiting on the events of class 'Network'.
oracle_wait_class_configuration
percent
The percentage of time spent waiting on the events of class 'Configuration'.
oracle_wait_event_log_file_sync
percent
The percentage of time spent waiting on the 'log file sync' event.
oracle_wait_event_log_file_parallel_write
percent
The percentage of time spent waiting on the 'log file parallel write' event.
oracle_wait_event_log_file_sequential_read
percent
The percentage of time spent waiting on the 'log file sequential read' event.
oracle_wait_event_enq_tx_contention
percent
The percentage of time spent waiting on the 'enq: TX - contention' event.
oracle_wait_event_enq_tx_row_lock_contention
percent
The percentage of time spent waiting on the 'enq: TX - row lock contention' event.
oracle_wait_event_latch_row_cache_objects
percent
The percentage of time spent waiting on the 'latch: row cache objects' event.
oracle_wait_event_latch_shared_pool
percent
The percentage of time spent waiting on the 'latch: shared pool' event.
oracle_wait_event_resmgr_cpu_quantum
percent
The percentage of time spent waiting on the 'resmgr:cpu quantum' event.
oracle_wait_event_sql_net_message_from_client
percent
The percentage of time spent waiting on the 'SQL*Net message from client' event.
oracle_wait_event_rdbms_ipc_message
percent
The percentage of time spent waiting on the 'rdbms ipc message' event.
oracle_wait_event_db_file_sequential_read
percent
The percentage of time spent waiting on the 'db file sequential read' event.
oracle_wait_event_log_file_switch_checkpoint_incomplete
percent
The percentage of time spent waiting on the 'log file switch (checkpoint incomplete)' event.
oracle_wait_event_row_cache_lock
percent
The percentage of time spent waiting on the 'row cache lock' event.
oracle_wait_event_buffer_busy_waits
percent
The percentage of time spent waiting on the 'buffer busy waits' event.
oracle_wait_event_db_file_async_io_submit
percent
The percentage of time spent waiting on the 'db file async I/O submit' event.
oracle_sessions_active_user
sessions
The number of active user sessions.
oracle_sessions_inactive_user
sessions
The number of inactive user sessions.
oracle_sessions_active_background
sessions
The number of active background sessions.
oracle_sessions_inactive_background
sessions
The number of inactive background sessions.
oracle_calls_execute_count
calls
Total number of calls (user and recursive) that executed SQL statements.
oracle_tuned_undoretention
seconds
The amount of time for which undo will not be recycled from the time it was committed.
oracle_max_query_length
seconds
The length of the longest query executed.
oracle_transaction_count
transactions
The total number of transactions executed within the period.
oracle_sso_errors
errors/s
The number of ORA-01555 (snapshot too old) errors raised per second.
oracle_redo_log_space_requests
requests
The number of times a user process waits for space in the redo log file, usually caused by checkpointing or log switching.
bitmap_merge_area_size
kilobytes
1048576
0
→ 2147483647
yes
The amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index.
create_bitmap_area_size
megabytes
8388608
0
→ 1073741824
yes
Size of create bitmap buffer for bitmap index. Relevant only for systems containing bitmap indexes.
db_cache_size
megabytes
48
0
→ 2097152
no
The size of the DEFAULT buffer pool for standard block size buffers. The value must be at least 4 MB times the number of CPUs.
db_2k_cache_size
megabytes
0
0
→ 2097152
no
Size of cache for 2K buffers.
db_4k_cache_size
megabytes
0
0
→ 2097152
no
Size of cache for 4K buffers.
db_8k_cache_size
megabytes
0
0
→ 2097152
no
Size of cache for 8K buffers.
db_16k_cache_size
megabytes
0
0
→ 2097152
no
Size of cache for 16K buffers.
db_32k_cache_size
megabytes
0
0
→ 2097152
no
Size of cache for 32K buffers.
hash_area_size
kilobytes
131072
0
→ 2147483647
yes
Maximum amount of memory to be used for the in-memory hash work area.
java_pool_size
megabytes
24
0
→ 16384
no
The size of the Java pool. If SGA_TARGET is set, this value represents the minimum value for the memory pool.
large_pool_size
megabytes
0
0
→ 65536
no
The size of large pool allocation heap.
lock_sga
FALSE
TRUE
, FALSE
yes
Lock the entire SGA in physical memory.
memory_max_target
megabytes
8192
152
→ 2097152
yes
The maximum value to which a DBA can set the MEMORY_TARGET initialization parameter.
memory_target
megabytes
6864
0
→ 2097152
no
Oracle systemwide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.
olap_page_pool_size
bytes
0
0
→ 2147483647
no
Size of the OLAP page pool.
pga_aggregate_limit
megabytes
2048
0
→ 2097152
no
The limit on the aggregate PGA memory consumed by the instance.
pga_aggregate_target
megabytes
1024
0
→ 2097152
no
The target aggregate PGA memory available to all server processes attached to the instance.
pre_page_sga
FALSE
TRUE
, FALSE
yes
Read the entire SGA into memory at instance startup.
result_cache_max_result
percent
5
0
→ 100
no
Maximum result size as a percent of the cache size.
result_cache_max_size
megabytes
0
0
→ 65536
no
The maximum amount of SGA memory that can be used by the Result Cache.
result_cache_mode
MANUAL
MANUAL
, FORCE
no
Specifies when a ResultCache operator is spliced into a query's execution plan.
result_cache_remote_expiration
minutes
0
0
→ 10000
no
The expiration in minutes of remote objects. High values may cause stale answers.
sga_max_size
megabytes
8192
0
→ 2097152
yes
The maximum size of the SGA for the lifetime of the instance.
sga_min_size
megabytes
2920
0
→ 1048576
no
The guaranteed SGA size for a pluggable database (PDB); when set for a PDB, the specified amount of SGA memory is guaranteed to it.
sga_target
megabytes
5840
0
→ 2097152
no
The total size of all SGA components; acts as the minimum value for the size of the SGA.
shared_pool_reserved_size
megabytes
128
1
→ 2048
yes
The shared pool space reserved for large contiguous requests for shared pool memory.
shared_pool_size
megabytes
0
0
→ 65536
no
The size of the shared pool.
sort_area_retained_size
kilobytes
0
0
→ 2147483647
no
The maximum amount of the User Global Area memory retained after a sort run completes.
sort_area_size
kilobytes
64
0
→ 2097151
no
The maximum amount of memory Oracle will use for a sort. If more space is required, temporary segments on disk are used.
streams_pool_size
megabytes
0
0
→ 2097152
no
Size of the streams pool.
use_large_pages
TRUE
ONLY
, FALSE
, TRUE
yes
Enable the use of large pages for SGA memory.
workarea_size_policy
AUTO
MANUAL
, AUTO
no
Policy used to size SQL working areas (MANUAL/AUTO).
commit_logging
BATCH
IMMEDIATE
, BATCH
no
Control how redo is batched by Log Writer.
commit_wait
WAIT
NOWAIT
, WAIT
, FORCE_WAIT
no
Control when the redo for a commit is flushed to the redo logs.
log_archive_max_processes
processes
4
1
→ 30
no
Maximum number of active ARCH processes.
log_buffer
megabytes
16
2
→ 256
yes
The amount of memory that Oracle uses when buffering redo entries to a redo log file.
log_checkpoint_interval
blocks
0
0
→ 2147483647
no
The maximum number of log file blocks between incremental checkpoints.
log_checkpoint_timeout
seconds
1800
0
→ 2147483647
no
Maximum time interval between checkpoints. Guarantees that no buffer remains dirty for more than the specified time.
db_flashback_retention_target
minutes
1440
30
→ 2147483647
no
Maximum Flashback Database log retention time.
undo_retention
seconds
900
0
→ 2147483647
no
Low threshold value of undo retention.
optimizer_adaptive_plans
FALSE
TRUE
, FALSE
no
Controls adaptive plans: execution plans built with alternative choices that are selected based on collected statistics.
optimizer_adaptive_statistics
FALSE
TRUE
, FALSE
no
Enable the optimizer to use adaptive statistics for complex queries.
optimizer_capture_sql_plan_baselines
FALSE
TRUE
, FALSE
no
Automatic capture of SQL plan baselines for repeatable statements.
optimizer_dynamic_sampling
2
0
→ 11
no
Controls both when the database gathers dynamic statistics, and the size of the sample that the optimizer uses to gather the statistics.
optimizer_features_enable
19.1.0
19.1.0
, 18.1.0
, 12.2.0.1
, 12.1.0.2
, 12.1.0.1
, 11.2.0.4
no
Enable a series of optimizer features based on an Oracle release number.
optimizer_index_caching
0
0
→ 100
no
Adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators.
optimizer_index_cost_adj
100
1
→ 10000
no
Tune optimizer behavior for access path selection to be more or less index friendly.
optimizer_inmemory_aware
TRUE
TRUE
, FALSE
no
Enables all of the optimizer cost model enhancements for in-memory.
optimizer_mode
ALL_ROWS
ALL_ROWS
, FIRST_ROWS
, FIRST_ROWS_1
, FIRST_ROWS_10
, FIRST_ROWS_100
, FIRST_ROWS_1000
no
The default behavior for choosing an optimization approach for the instance.
optimizer_use_invisible_indexes
FALSE
TRUE
, FALSE
no
Enables or disables the use of invisible indexes.
optimizer_use_pending_statistics
FALSE
TRUE
, FALSE
no
Control whether the optimizer uses pending statistics when compiling SQL statements.
optimizer_use_sql_plan_baselines
TRUE
TRUE
, FALSE
no
Enables the use of SQL plan baselines stored in SQL Management Base.
approx_for_aggregation
FALSE
TRUE
, FALSE
no
Replace exact query processing for aggregation queries with approximate query processing.
approx_for_count_distinct
FALSE
TRUE
, FALSE
no
Automatically replace COUNT (DISTINCT expr) queries with APPROX_COUNT_DISTINCT queries.
approx_for_percentile
NONE
NONE
, PERCENTILE_CONT
, PERCENTILE_CONT DETERMINISTIC
, PERCENTILE_DISC
, PERCENTILE_DISC DETERMINISTIC
, ALL
, ALL DETERMINISTIC
no
Converts exact percentile functions to their approximate percentile function counterparts.
parallel_degree_policy
MANUAL
MANUAL
, LIMITED
, AUTO
no
Policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO).
parallel_execution_message_size
16384
2148
→ 32768
yes
Message buffer size for parallel execution.
parallel_force_local
FALSE
TRUE
, FALSE
no
Force single instance execution.
parallel_max_servers
processes
0
0
→ 3600
no
The maximum number of parallel execution processes and parallel recovery processes for an instance.
parallel_min_servers
processes
0
0
→ 2000
no
The minimum number of execution processes kept alive to service parallel statements.
parallel_min_percent
percent
0
0
→ 100
yes
The minimum percentage of parallel execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution.
parallel_threads_per_cpu
2
1
→ 128
no
Number of parallel execution threads per CPU.
circuits
circuits
10
0
→ 3000
no
The total number of virtual circuits that are available for inbound and outbound network sessions.
cpu_count
cpus
0
0
→ 2048
no
Number of CPUs available for the Oracle instance to use.
cursor_bind_capture_destination
MEMORY+DISK
OFF
, MEMORY
, MEMORY+DISK
no
Allowed destination for captured bind variables.
cursor_invalidation
IMMEDIATE
DEFERRED
, IMMEDIATE
no
Whether deferred cursor invalidation or immediate cursor invalidation is used for DDL statements by default.
cursor_sharing
EXACT
FORCE
, EXACT
, SIMILAR
no
Cursor sharing mode.
cursor_space_for_time
FALSE
TRUE
, FALSE
yes
Use more memory in order to get faster execution.
db_files
files
200
200
→ 20000
yes
The maximum number of database files that can be opened for this database. This may be subject to OS constraints.
open_cursors
cursors
300
0
→ 65535
no
The maximum number of open cursors (handles to private SQL areas) a session can have at once.
open_links
connections
4
0
→ 255
yes
The maximum number of concurrent open connections to remote databases in one session.
open_links_per_instance
connections
4
0
→ 2147483647
yes
Maximum number of migratable open connections globally for each database instance.
processes
processes
100
80
→ 20000
yes
The maximum number of OS user processes that can simultaneously connect to Oracle.
read_only_open_delayed
FALSE
TRUE
, FALSE
yes
Delay opening of read only files until first access.
serial_reuse
DISABLE
DISABLE
, ALL
, SELECT
, DML
, PLSQL
, FORCE
yes
Types of cursors that make use of the serial-reusable memory feature.
session_cached_cursors
50
0
→ 65535
no
Number of session cursors to cache.
session_max_open_files
10
1
→ 50
yes
Maximum number of open files allowed per session.
sessions
sessions
1262
1
→ 65536
no
The maximum number of sessions that can be created in the system, effectively the maximum number of concurrent users in the system.
transactions
transactions
1388
4
→ 2147483647
yes
The maximum number of concurrent transactions.
audit_trail
NONE
NONE
, OS
, DB
, XML
, EXTENDED
yes
Configure system auditing.
client_result_cache_lag
milliseconds
3000
0
→ 60000
yes
Maximum time before checking the database for changes related to the queries cached on the client.
client_result_cache_size
kilobytes
0
0
→ 2147483647
yes
The maximum size of the client per-process result set cache.
db_block_checking
MEDIUM
FALSE
, OFF
, LOW
, MEDIUM
, TRUE
, FULL
no
Header checking and data and index block checking.
db_block_checksum
TYPICAL
OFF
, FALSE
, TYPICAL
, TRUE
, FULL
no
Store checksum in db blocks and check during reads.
db_file_multiblock_read_count
128
0
→ 1024
no
Number of database blocks to read with each I/O.
db_keep_cache_size
megabytes
0
0
→ 2097152
no
Size of KEEP buffer pool for standard block size buffers.
db_lost_write_protect
NONE
NONE
, TYPICAL
, FULL
no
Enable lost write detection.
db_recycle_cache_size
megabytes
0
0
→ 2097152
no
Size of RECYCLE buffer pool for standard block size buffers.
db_writer_processes
1
1
→ 256
yes
Number of background database writer processes to start.
dbwr_io_slaves
0
0
→ 50
yes
The number of I/O server processes used by the DBW0 process.
ddl_lock_timeout
0
0
→ 1000000
no
Timeout to restrict the time that ddls wait for dml lock.
deferred_segment_creation
TRUE
TRUE
, FALSE
no
Defer segment creation to first insert.
distributed_lock_timeout
seconds
60
1
→ 2147483647
yes
Number of seconds a distributed transaction waits for a lock.
dml_locks
5552
0
→ 2000000
yes
The maximum number of DML locks - one for each table modified in a transaction.
fast_start_parallel_rollback
LOW
FALSE
, LOW
, HIGH
no
Max number of parallel recovery slaves that may be used.
gcs_server_processes
processes
0
0
→ 100
yes
The number of background GCS server processes to serve the inter-instance traffic among Oracle RAC instances.
java_jit_enabled
TRUE
TRUE
, FALSE
no
Enables the Just-in-Time (JIT) compiler for the Oracle Java Virtual Machine.
java_max_sessionspace_size
bytes
0
0
→ 2147483647
yes
Max allowed size in bytes of a Java sessionspace.
job_queue_processes
1000
0
→ 1000
no
Maximum number of job queue slave processes.
object_cache_max_size_percent
percent
10
0
→ 100
no
Percentage of maximum size over optimal of the user sessions object cache.
object_cache_optimal_size
kilobytes
100
0
→ 67108864
no
Optimal size of the user sessions object cache.
plsql_code_type
INTERPRETED
INTERPRETED
, NATIVE
no
PL/SQL code-type.
plsql_optimize_level
2
0
→ 3
no
PL/SQL optimize level.
query_rewrite_enabled
TRUE
FALSE
, TRUE
, FORCE
no
Allow rewrite of queries using materialized views if enabled.
query_rewrite_integrity
ENFORCED
ENFORCED
, TRUSTED
, STALE_TOLERATED
no
Perform rewrite using materialized views with desired integrity.
recyclebin
ON
ON
, OFF
no
Allows recovery of dropped tables.
replication_dependency_tracking
TRUE
TRUE
, FALSE
yes
Tracking dependency for Replication parallel propagation.
resourcemanager_cpu_allocation
2
0
→ 20
no
ResourceManager CPU allocation.
sql_trace
FALSE
TRUE
, FALSE
no
Enable SQL trace.
star_transformation_enabled
FALSE
FALSE
, TRUE
, TEMP_DISABLE
no
Enable the use of star transformation.
statistics_level
TYPICAL
BASIC
, TYPICAL
, ALL
no
Level of collection for database and operating system statistics.
transactions_per_rollback_segment
5
1
→ 10000
yes
Expected number of active transactions per rollback segment.
filesystemio_options
asynch
none
, setall
, directIO
, asynch
yes
Specifies I/O operations for file system files.
Parameter | Default value | Domain |
---|---|---|
db_cache_size | MAX(48MB, 4MB * cpu_num) | |
java_pool_size | 24MB if SGA_TARGET is not set; 0 if SGA_TARGET is set, meaning the lower bound for the pool is automatically determined | |
shared_pool_reserved_size | 5% of shared_pool_size | upper bound can’t exceed half the size of shared_pool_size |
shared_pool_size | 0 if sga_target is set, 128MB otherwise | |
pga_aggregate_target | MAX(10MB, 0.2 * sga_target) | |
pga_aggregate_limit | MEMORY_MAX_TARGET if MEMORY_TARGET is explicit, or 2 * PGA_AGGREGATE_TARGET if PGA_AGGREGATE_TARGET is explicit, or 0.9 * ({MEMORY_AVAILABLE} - SGA); at least MAX(2GB, 3MB * db.processes) | |
hash_area_size | 2 * sort_area_size | |

Parameter | Default value | Domain |
---|---|---|
cpu_count | should match the available CPUs; 0 lets the Oracle engine automatically determine the value | must not exceed the available CPUs |
gcs_server_processes | 0 if cluster_database=false; 1 for 1-3 CPUs, or if ASM; 2 for 4-15 CPUs; 2 + lower(CPUs/32) for 16+ CPUs | |
parallel_min_servers | CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2 | |
parallel_max_servers | PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5 | |
sessions | 1.1 * processes + 5 | must be at least equal to the default value |
transactions | 1.1 * sessions | |
db.memory_target <= db.memory_max_target && db.memory_max_target < {MEMORY_AVAILABLE}
Add when tuning automatic memory management
db.sga_max_size + db.pga_aggregate_limit <= db.memory_max_target
Add when tuning SGA and PGA
db.sga_target + db.pga_aggregate_target <= db.memory_target
Add when tuning SGA and PGA
db.sga_target <= db.sga_max_size
Add when tuning SGA
db.db_cache_size + db.java_pool_size + db.large_pool_size + db.log_buffer + db.shared_pool_size + db.streams_pool_size < db.sga_max_size
Add when tuning SGA areas
db.pga_aggregate_target <= db.pga_aggregate_limit
Add when tuning PGA
db.shared_pool_reserved_size <= 0.5 * db.shared_pool_size
db.sort_area_retained_size <= db.sort_area_size
db.sessions < db.transactions
db.parallel_min_servers < db.parallel_max_servers
The optimization pack for Oracle Database 18c.
Some parameters, listed in dedicated tables further below, require their ranges or default values to be updated according to the described rules.
The tables at the end of this section list constraints that may be required in the definition of the study, depending on the tuned parameters.
Metric | Unit | Description |
---|---|---|
oracle_sga_total_size | bytes | The current memory size of the SGA. |
oracle_sga_free_size | bytes | The amount of SGA currently available. |
oracle_sga_max_size | bytes | The configured maximum memory size for the SGA. |
oracle_pga_target_size | bytes | The configured target memory size for the PGA. |
oracle_redo_buffers_size | bytes | The memory size of the redo buffers. |
oracle_default_buffer_cache_size | bytes | The memory size for the DEFAULT buffer cache component. |
oracle_default_2k_buffer_cache_size | bytes | The memory size for the DEFAULT 2K buffer cache component. |
oracle_default_4k_buffer_cache_size | bytes | The memory size for the DEFAULT 4K buffer cache component. |
oracle_default_8k_buffer_cache_size | bytes | The memory size for the DEFAULT 8K buffer cache component. |
oracle_default_16k_buffer_cache_size | bytes | The memory size for the DEFAULT 16K buffer cache component. |
oracle_default_32k_buffer_cache_size | bytes | The memory size for the DEFAULT 32K buffer cache component. |
oracle_keep_buffer_cache_size | bytes | The memory size for the KEEP buffer cache component. |
oracle_recycle_buffer_cache_size | bytes | The memory size for the RECYCLE buffer cache component. |
oracle_asm_buffer_cache_size | bytes | The memory size for the ASM buffer cache component. |
oracle_shared_io_pool_size | bytes | The memory size for the IO pool component. |
oracle_java_pool_size | bytes | The memory size for the Java pool component. |
oracle_large_pool_size | bytes | The memory size for the large pool component. |
oracle_shared_pool_size | bytes | The memory size for the shared pool component. |
oracle_streams_pool_size | bytes | The memory size for the streams pool component. |
oracle_buffer_cache_hit_ratio | percent | How often a requested block has been found in the buffer cache without requiring disk access. |
oracle_wait_class_commit | percent | The percentage of time spent waiting on the events of class 'Commit'. |
oracle_wait_class_concurrency | percent | The percentage of time spent waiting on the events of class 'Concurrency'. |
oracle_wait_class_system_io | percent | The percentage of time spent waiting on the events of class 'System I/O'. |
oracle_wait_class_user_io | percent | The percentage of time spent waiting on the events of class 'User I/O'. |
oracle_wait_class_other | percent | The percentage of time spent waiting on the events of class 'Other'. |
oracle_wait_class_scheduler | percent | The percentage of time spent waiting on the events of class 'Scheduler'. |
oracle_wait_class_idle | percent | The percentage of time spent waiting on the events of class 'Idle'. |
oracle_wait_class_application | percent | The percentage of time spent waiting on the events of class 'Application'. |
oracle_wait_class_network | percent | The percentage of time spent waiting on the events of class 'Network'. |
oracle_wait_class_configuration | percent | The percentage of time spent waiting on the events of class 'Configuration'. |
oracle_wait_event_log_file_sync | percent | The percentage of time spent waiting on the 'log file sync' event. |
oracle_wait_event_log_file_parallel_write | percent | The percentage of time spent waiting on the 'log file parallel write' event. |
oracle_wait_event_log_file_sequential_read | percent | The percentage of time spent waiting on the 'log file sequential read' event. |
oracle_wait_event_enq_tx_contention | percent | The percentage of time spent waiting on the 'enq: TX - contention' event. |
oracle_wait_event_enq_tx_row_lock_contention | percent | The percentage of time spent waiting on the 'enq: TX - row lock contention' event. |
oracle_wait_event_latch_row_cache_objects | percent | The percentage of time spent waiting on the 'latch: row cache objects' event. |
oracle_wait_event_latch_shared_pool | percent | The percentage of time spent waiting on the 'latch: shared pool' event. |
oracle_wait_event_resmgr_cpu_quantum | percent | The percentage of time spent waiting on the 'resmgr:cpu quantum' event. |
oracle_wait_event_sql_net_message_from_client | percent | The percentage of time spent waiting on the 'SQL*Net message from client' event. |
oracle_wait_event_rdbms_ipc_message | percent | The percentage of time spent waiting on the 'rdbms ipc message' event. |
oracle_wait_event_db_file_sequential_read | percent | The percentage of time spent waiting on the 'db file sequential read' event. |
oracle_wait_event_log_file_switch_checkpoint_incomplete | percent | The percentage of time spent waiting on the 'log file switch (checkpoint incomplete)' event. |
oracle_wait_event_row_cache_lock | percent | The percentage of time spent waiting on the 'row cache lock' event. |
oracle_wait_event_buffer_busy_waits | percent | The percentage of time spent waiting on the 'buffer busy waits' event. |
oracle_wait_event_db_file_async_io_submit | percent | The percentage of time spent waiting on the 'db file async I/O submit' event. |
oracle_sessions_active_user | sessions | The number of active user sessions. |
oracle_sessions_inactive_user | sessions | The number of inactive user sessions. |
oracle_sessions_active_background | sessions | The number of active background sessions. |
oracle_sessions_inactive_background | sessions | The number of inactive background sessions. |
oracle_calls_execute_count | calls | Total number of calls (user and recursive) that executed SQL statements. |
oracle_tuned_undoretention | seconds | The amount of time for which undo will not be recycled from the time it was committed. |
oracle_max_query_length | seconds | The length of the longest query executed. |
oracle_transaction_count | transactions | The total number of transactions executed within the period. |
oracle_sso_errors | errors/s | The number of ORA-01555 (snapshot too old) errors raised per second. |
oracle_redo_log_space_requests | requests | The number of times a user process waits for space in the redo log file, usually caused by checkpointing or log switching. |
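The wait metrics above are reported as percentages of time rather than raw counters. As an illustration (plain Python, not part of the optimization pack; the snapshot values and class names are made up for the example), this is how two snapshots of cumulative per-class wait times can be turned into the percentage breakdown reported by metrics such as `oracle_wait_class_user_io`:

```python
# Sketch: derive wait-class percentages from two snapshots of cumulative
# per-class wait time (e.g. the TIME_WAITED counters exposed by Oracle).
# All values below are illustrative.

def wait_class_percentages(before: dict, after: dict) -> dict:
    """Turn two snapshots of cumulative wait time into a percentage breakdown."""
    deltas = {cls: after[cls] - before[cls] for cls in after}
    total = sum(deltas.values())
    if total == 0:
        return {cls: 0.0 for cls in deltas}
    return {cls: 100.0 * d / total for cls, d in deltas.items()}

before = {"User I/O": 1000, "System I/O": 400, "Commit": 100}
after = {"User I/O": 1600, "System I/O": 700, "Commit": 200}
print(wait_class_percentages(before, after))
# {'User I/O': 60.0, 'System I/O': 30.0, 'Commit': 10.0}
```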
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
bitmap_merge_area_size | kilobytes | 1048576 | 0 → 2147483647 | yes | The amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index. |
create_bitmap_area_size | megabytes | 8388608 | 0 → 1073741824 | yes | Size of the create bitmap buffer for bitmap indexes. Relevant only for systems containing bitmap indexes. |
db_cache_size | megabytes | 48 | 0 → 2097152 | no | The size of the DEFAULT buffer pool for standard block size buffers. The value must be at least 4MB * number of CPUs. |
db_2k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 2K buffers. |
db_4k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 4K buffers. |
db_8k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 8K buffers. |
db_16k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 16K buffers. |
db_32k_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of cache for 32K buffers. |
hash_area_size | kilobytes | 131072 | 0 → 2147483647 | yes | Maximum amount of memory to be used for the in-memory hash work area. |
java_pool_size | megabytes | 24 | 0 → 16384 | no | The size of the Java pool. If SGA_TARGET is set, this value represents the minimum value for the memory pool. |
large_pool_size | megabytes | 0 | 0 → 65536 | no | The size of the large pool allocation heap. |
lock_sga | | FALSE | TRUE, FALSE | yes | Lock the entire SGA in physical memory. |
memory_max_target | megabytes | 8192 | 152 → 2097152 | yes | The maximum value to which a DBA can set the MEMORY_TARGET initialization parameter. |
memory_target | megabytes | 6864 | 0 → 2097152 | no | Oracle systemwide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed. |
olap_page_pool_size | bytes | 0 | 0 → 2147483647 | no | Size of the OLAP page pool. |
pga_aggregate_limit | megabytes | 2048 | 0 → 2097152 | no | The limit on the aggregate PGA memory consumed by the instance. |
pga_aggregate_target | megabytes | 1024 | 0 → 2097152 | no | The target aggregate PGA memory available to all server processes attached to the instance. |
pre_page_sga | | FALSE | TRUE, FALSE | yes | Read the entire SGA into memory at instance startup. |
result_cache_max_result | percent | 5 | 0 → 100 | no | Maximum result size as a percent of the cache size. |
result_cache_max_size | megabytes | 0 | 0 → 65536 | no | The maximum amount of SGA memory that can be used by the Result Cache. |
result_cache_mode | | MANUAL | MANUAL, FORCE | no | Specifies when a ResultCache operator is spliced into a query's execution plan. |
result_cache_remote_expiration | minutes | 0 | 0 → 10000 | no | The expiration in minutes of remote objects. High values may cause stale answers. |
sga_max_size | megabytes | 8192 | 0 → 2097152 | yes | The maximum size of the SGA for the lifetime of the instance. |
sga_min_size | megabytes | 2920 | 0 → 1048576 | no | The guaranteed SGA size for a pluggable database (PDB). When SGA_MIN_SIZE is set for a PDB, it guarantees the specified SGA size for the PDB. |
sga_target | megabytes | 5840 | 0 → 2097152 | no | The total size of all SGA components; acts as the minimum value for the size of the SGA. |
shared_pool_reserved_size | megabytes | 128 | 1 → 2048 | yes | The shared pool space reserved for large contiguous requests for shared pool memory. |
shared_pool_size | megabytes | 0 | 0 → 65536 | no | The size of the shared pool. |
sort_area_retained_size | kilobytes | 0 | 0 → 2147483647 | no | The maximum amount of the User Global Area memory retained after a sort run completes. |
sort_area_size | kilobytes | 64 | 0 → 2097151 | no | The maximum amount of memory Oracle will use for a sort. If more space is required, temporary segments on disk are used. |
streams_pool_size | megabytes | 0 | 0 → 2097152 | no | Size of the streams pool. |
use_large_pages | | TRUE | ONLY, FALSE, TRUE | yes | Enable the use of large pages for SGA memory. |
workarea_size_policy | | AUTO | MANUAL, AUTO | no | Policy used to size SQL working areas (MANUAL/AUTO). |
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
commit_logging | | BATCH | IMMEDIATE, BATCH | no | Control how redo is batched by the Log Writer. |
commit_wait | | WAIT | NOWAIT, WAIT, FORCE_WAIT | no | Control when the redo for a commit is flushed to the redo logs. |
log_archive_max_processes | processes | 4 | 1 → 30 | no | Maximum number of active ARCH processes. |
log_buffer | megabytes | 16 | 2 → 256 | yes | The amount of memory that Oracle uses when buffering redo entries to a redo log file. |
log_checkpoint_interval | blocks | 0 | 0 → 2147483647 | no | The maximum number of log file blocks between incremental checkpoints. |
log_checkpoint_timeout | seconds | 1800 | 0 → 2147483647 | no | Maximum time interval between checkpoints. Guarantees that no buffer remains dirty for more than the specified time. |
db_flashback_retention_target | minutes | 1440 | 30 → 2147483647 | no | Maximum Flashback Database log retention time. |
undo_retention | seconds | 900 | 0 → 2147483647 | no | Low threshold value of undo retention. |
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
optimizer_adaptive_plans | | FALSE | TRUE, FALSE | no | Controls adaptive plans, i.e. execution plans built with alternative choices based on collected statistics. |
optimizer_adaptive_statistics | | FALSE | TRUE, FALSE | no | Enable the optimizer to use adaptive statistics for complex queries. |
optimizer_capture_sql_plan_baselines | | FALSE | TRUE, FALSE | no | Automatic capture of SQL plan baselines for repeatable statements. |
optimizer_dynamic_sampling | | 2 | 0 → 11 | no | Controls both when the database gathers dynamic statistics and the size of the sample the optimizer uses to gather them. |
optimizer_features_enable | | 18.1.0 | 18.1.0, 12.2.0.1, 12.1.0.2, 12.1.0.1, 11.2.0.4, 11.2.0.3, 11.2.0.2, 11.2.0.1, 11.1.0.7, 11.1.0.6 | no | Enable a series of optimizer features based on an Oracle release number. |
optimizer_index_caching | | 0 | 0 → 100 | no | Adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators. |
optimizer_index_cost_adj | | 100 | 1 → 10000 | no | Tune optimizer behavior for access path selection to be more or less index friendly. |
optimizer_inmemory_aware | | TRUE | TRUE, FALSE | no | Enables all of the optimizer cost model enhancements for in-memory. |
optimizer_mode | | ALL_ROWS | ALL_ROWS, FIRST_ROWS, FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, FIRST_ROWS_1000 | no | The default behavior for choosing an optimization approach for the instance. |
optimizer_use_invisible_indexes | | FALSE | TRUE, FALSE | no | Enable or disable the use of invisible indexes. |
optimizer_use_pending_statistics | | FALSE | TRUE, FALSE | no | Control whether the optimizer uses pending statistics when compiling SQL statements. |
optimizer_use_sql_plan_baselines | | TRUE | TRUE, FALSE | no | Enables the use of SQL plan baselines stored in SQL Management Base. |
approx_for_aggregation | | FALSE | TRUE, FALSE | no | Replace exact query processing for aggregation queries with approximate query processing. |
approx_for_count_distinct | | FALSE | TRUE, FALSE | no | Automatically replace COUNT (DISTINCT expr) queries with APPROX_COUNT_DISTINCT queries. |
approx_for_percentile | | NONE | NONE, PERCENTILE_CONT, PERCENTILE_CONT DETERMINISTIC, PERCENTILE_DISC, PERCENTILE_DISC DETERMINISTIC, ALL, ALL DETERMINISTIC | no | Converts exact percentile functions to their approximate counterparts. |
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
parallel_degree_policy | | MANUAL | MANUAL, LIMITED, AUTO | no | Policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO). |
parallel_execution_message_size | | 16384 | 2148 → 32768 | yes | Message buffer size for parallel execution. |
parallel_force_local | | FALSE | TRUE, FALSE | no | Force single instance execution. |
parallel_max_servers | processes | 0 | 0 → 3600 | no | The maximum number of parallel execution processes and parallel recovery processes for an instance. |
parallel_min_servers | processes | 0 | 0 → 2000 | no | The minimum number of execution processes kept alive to service parallel statements. |
parallel_min_percent | percent | 0 | 0 → 100 | yes | The minimum percentage of parallel execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution. |
parallel_threads_per_cpu | | 2 | 1 → 128 | no | Number of parallel execution threads per CPU. |
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
circuits | circuits | 10 | 0 → 3000 | no | The total number of virtual circuits available for inbound and outbound network sessions. |
cpu_count | cpus | 0 | 0 → 2048 | no | Number of CPUs available for the Oracle instance to use. |
cursor_bind_capture_destination | | MEMORY+DISK | OFF, MEMORY, MEMORY+DISK | no | Allowed destination for captured bind variables. |
cursor_invalidation | | IMMEDIATE | DEFERRED, IMMEDIATE | no | Whether deferred or immediate cursor invalidation is used for DDL statements by default. |
cursor_sharing | | EXACT | FORCE, EXACT, SIMILAR | no | Cursor sharing mode. |
cursor_space_for_time | | FALSE | TRUE, FALSE | yes | Use more memory in order to get faster execution. |
db_files | files | 200 | 200 → 20000 | yes | The maximum number of database files that can be opened for this database. May be subject to OS constraints. |
open_cursors | cursors | 300 | 0 → 65535 | no | The maximum number of open cursors (handles to private SQL areas) a session can have at once. |
open_links | connections | 4 | 0 → 255 | yes | The maximum number of concurrent open connections to remote databases in one session. |
open_links_per_instance | connections | 4 | 0 → 2147483647 | yes | Maximum number of migratable open connections globally for each database instance. |
processes | processes | 100 | 80 → 20000 | yes | The maximum number of OS user processes that can simultaneously connect to Oracle. |
read_only_open_delayed | | FALSE | TRUE, FALSE | yes | Delay opening of read-only files until first access. |
serial_reuse | | DISABLE | DISABLE, ALL, SELECT, DML, PLSQL, FORCE | yes | Types of cursors that make use of the serial-reusable memory feature. |
session_cached_cursors | | 50 | 0 → 65535 | no | Number of session cursors to cache. |
session_max_open_files | | 10 | 1 → 50 | yes | Maximum number of open files allowed per session. |
sessions | sessions | 1262 | 1 → 65536 | no | The maximum number of sessions that can be created in the system, effectively the maximum number of concurrent users in the system. |
transactions | transactions | 1388 | 4 → 2147483647 | yes | The maximum number of concurrent transactions. |
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
audit_trail | | NONE | NONE, OS, DB, XML, EXTENDED | yes | Configure system auditing. |
client_result_cache_lag | milliseconds | 3000 | 0 → 60000 | yes | Maximum time before checking the database for changes related to the queries cached on the client. |
client_result_cache_size | kilobytes | 0 | 0 → 2147483647 | yes | The maximum size of the client per-process result set cache. |
db_block_checking | | MEDIUM | FALSE, OFF, LOW, MEDIUM, TRUE, FULL | no | Header checking and data and index block checking. |
db_block_checksum | | TYPICAL | OFF, FALSE, TYPICAL, TRUE, FULL | no | Store checksum in db blocks and check during reads. |
db_file_multiblock_read_count | | 128 | 0 → 1024 | no | Number of database blocks to read with each I/O operation. |
db_keep_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of the KEEP buffer pool for standard block size buffers. |
db_lost_write_protect | | NONE | NONE, TYPICAL, FULL | no | Enable lost write detection. |
db_recycle_cache_size | megabytes | 0 | 0 → 2097152 | no | Size of the RECYCLE buffer pool for standard block size buffers. |
db_writer_processes | | 1 | 1 → 256 | yes | Number of background database writer processes to start. |
dbwr_io_slaves | | 0 | 0 → 50 | yes | The number of I/O server processes used by the DBW0 process. |
ddl_lock_timeout | | 0 | 0 → 1000000 | no | Timeout restricting the time that DDL statements wait for a DML lock. |
deferred_segment_creation | | TRUE | TRUE, FALSE | no | Defer segment creation to the first insert. |
distributed_lock_timeout | seconds | 60 | 1 → 2147483647 | yes | Number of seconds a distributed transaction waits for a lock. |
dml_locks | | 5552 | 0 → 2000000 | yes | The maximum number of DML locks, one for each table modified in a transaction. |
fast_start_parallel_rollback | | LOW | FALSE, LOW, HIGH | no | Maximum number of parallel recovery slaves that may be used. |
gcs_server_processes | processes | 0 | 0 → 100 | yes | The number of background GCS server processes to serve the inter-instance traffic among Oracle RAC instances. |
java_jit_enabled | | TRUE | TRUE, FALSE | no | Enables the Just-in-Time (JIT) compiler for the Oracle Java Virtual Machine. |
java_max_sessionspace_size | bytes | 0 | 0 → 2147483647 | yes | Maximum allowed size in bytes of a Java sessionspace. |
job_queue_processes | | 1000 | 0 → 1000 | no | Maximum number of job queue slave processes. |
object_cache_max_size_percent | percent | 10 | 0 → 100 | no | Percentage of maximum size over optimal of the user sessions object cache. |
object_cache_optimal_size | kilobytes | 100 | 0 → 67108864 | no | Optimal size of the user sessions object cache. |
plsql_code_type | | INTERPRETED | INTERPRETED, NATIVE | no | PL/SQL code type. |
plsql_optimize_level | | 2 | 0 → 3 | no | PL/SQL optimize level. |
query_rewrite_enabled | | TRUE | FALSE, TRUE, FORCE | no | Allow rewrite of queries using materialized views if enabled. |
query_rewrite_integrity | | ENFORCED | ENFORCED, TRUSTED, STALE_TOLERATED | no | Perform rewrite using materialized views with desired integrity. |
recyclebin | | ON | ON, OFF | no | Allow recovering dropped tables. |
replication_dependency_tracking | | TRUE | TRUE, FALSE | yes | Tracking dependency for Replication parallel propagation. |
resourcemanager_cpu_allocation | | 2 | 0 → 20 | no | ResourceManager CPU allocation. |
sql_trace | | FALSE | TRUE, FALSE | no | Enable SQL trace. |
star_transformation_enabled | | FALSE | FALSE, TRUE, TEMP_DISABLE | no | Enable the use of star transformation. |
statistics_level | | TYPICAL | BASIC, TYPICAL, ALL | no | Level of collection for database and operating system statistics. |
transactions_per_rollback_segment | | 5 | 1 → 10000 | yes | Expected number of active transactions per rollback segment. |
filesystemio_options | | asynch | none, setall, directIO, asynch | yes | Specifies I/O operations for file system files. |
The following parameters require their ranges or default values to be updated according to the described rules:

Parameter | Default value | Domain |
---|---|---|
db_cache_size | MAX(48MB, 4MB * cpu_num) | |
java_pool_size | 24MB if SGA_TARGET is not set; 0 if SGA_TARGET is set, meaning the lower bound for the pool is automatically determined | |
shared_pool_reserved_size | 5% of shared_pool_size | upper bound can’t exceed half the size of shared_pool_size |
shared_pool_size | 0 if sga_target is set, 128MB otherwise | |
pga_aggregate_target | MAX(10MB, 0.2*sga_target) | |
pga_aggregate_limit | MEMORY_MAX_TARGET if MEMORY_TARGET is explicit, or 2 * PGA_AGGREGATE_TARGET if PGA_AGGREGATE_TARGET is explicit, or 0.9 * ({MEMORY_AVAILABLE} - SGA) | at least MAX(2GB, 3MB * db.processes) |
hash_area_size | 2 * sort_area_size | |
Parameter | Default value | Domain |
---|---|---|
cpu_count | should match the available CPUs; 0 to let the Oracle engine automatically determine the value | must not exceed the available CPUs |
gcs_server_processes | 0 if cluster_database=false; 1 for 1-3 CPUs, or if ASM; 2 for 4-15 CPUs; 2+lower(CPUs/32) for 16+ CPUs | |
parallel_min_servers | CPU_COUNT * PARALLEL_THREADS_PER_CPU * 2 | |
parallel_max_servers | PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5 | |
sessions | 1.1 * processes + 5 | must be at least equal to the default value |
transactions | 1.1 * sessions | |
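The derived defaults in the tables above can be computed mechanically. The sketch below (plain Python, illustrative only; the function and key names are not part of Akamas) applies the formulas for a few of the listed parameters:

```python
# Sketch: apply the default-value formulas from the tables above.
# Sizes are expressed in MB unless noted otherwise; names are illustrative.

def derived_defaults(processes: int, sga_target_mb: int, cpu_count: int,
                     parallel_threads_per_cpu: int,
                     concurrent_parallel_users: int) -> dict:
    sessions = 1.1 * processes + 5                      # sessions = 1.1 * processes + 5
    return {
        "db_cache_size_mb": max(48, 4 * cpu_count),     # MAX(48MB, 4MB * cpu_num)
        "pga_aggregate_target_mb": max(10, 0.2 * sga_target_mb),  # MAX(10MB, 0.2*sga_target)
        "parallel_min_servers": cpu_count * parallel_threads_per_cpu * 2,
        "parallel_max_servers": (parallel_threads_per_cpu * cpu_count
                                 * concurrent_parallel_users * 5),
        "sessions": sessions,
        "transactions": 1.1 * sessions,                 # transactions = 1.1 * sessions
    }

# Example with the pack's default processes=100 and sga_target=5840MB on 4 CPUs.
print(derived_defaults(processes=100, sga_target_mb=5840, cpu_count=4,
                       parallel_threads_per_cpu=2, concurrent_parallel_users=1))
```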
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters.

Formula | Notes |
---|---|
db.memory_target <= db.memory_max_target && db.memory_max_target < {MEMORY_AVAILABLE} | Add when tuning automatic memory management |
db.sga_max_size + db.pga_aggregate_limit <= db.memory_max_target | Add when tuning SGA and PGA |
db.sga_target + db.pga_aggregate_target <= db.memory_target | Add when tuning SGA and PGA |
db.sga_target <= db.sga_max_size | Add when tuning SGA |
db.db_cache_size + db.java_pool_size + db.large_pool_size + db.log_buffer + db.shared_pool_size + db.streams_pool_size < db.sga_max_size | Add when tuning SGA areas |
db.pga_aggregate_target <= db.pga_aggregate_limit | Add when tuning PGA |

Formula | Notes |
---|---|
db.shared_pool_reserved_size <= 0.5 * db.shared_pool_size | |
db.sort_area_retained_size <= db.sort_area_size | |
db.sessions < db.transactions | |
db.parallel_min_servers < db.parallel_max_servers | |
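As a sanity check before running a study, the constraint formulas above can be evaluated against a candidate configuration. This is a minimal illustrative sketch in plain Python, not Akamas study syntax; the dictionary keys mirror the parameter names used in the formulas, and the sample values are the pack's defaults (in MB where applicable):

```python
# Sketch: evaluate a subset of the study constraints against a candidate
# configuration. Key names mirror the formulas above; values are illustrative.

def violated_constraints(cfg: dict, memory_available_mb: int) -> list:
    checks = {
        "memory_target <= memory_max_target < MEMORY_AVAILABLE":
            cfg["memory_target"] <= cfg["memory_max_target"] < memory_available_mb,
        "sga_max_size + pga_aggregate_limit <= memory_max_target":
            cfg["sga_max_size"] + cfg["pga_aggregate_limit"] <= cfg["memory_max_target"],
        "sga_target + pga_aggregate_target <= memory_target":
            cfg["sga_target"] + cfg["pga_aggregate_target"] <= cfg["memory_target"],
        "sga_target <= sga_max_size":
            cfg["sga_target"] <= cfg["sga_max_size"],
        "pga_aggregate_target <= pga_aggregate_limit":
            cfg["pga_aggregate_target"] <= cfg["pga_aggregate_limit"],
        "sessions < transactions":
            cfg["sessions"] < cfg["transactions"],
    }
    return [name for name, ok in checks.items() if not ok]

cfg = {"memory_target": 6864, "memory_max_target": 8192, "sga_max_size": 8192,
       "pga_aggregate_limit": 2048, "sga_target": 5840, "pga_aggregate_target": 1024,
       "sessions": 1262, "transactions": 1388}
print(violated_constraints(cfg, memory_available_mb=16384))
# ['sga_max_size + pga_aggregate_limit <= memory_max_target']
```

Note that even the default values violate one constraint (SGA max plus the PGA limit exceeds MEMORY_MAX_TARGET), which is exactly why these constraints should be added to the study when tuning SGA and PGA together.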