The Linux optimization pack helps you optimize Linux-based systems. The optimization pack provides component types for various Linux distributions, enabling performance improvements across a wide range of configurations.
Through this optimization pack, Akamas tackles the performance of Linux-based systems both from the point of view of cost savings and from that of quality and level of service: the included component types expose parameters that act on the memory footprint of systems, on their ability to sustain higher levels of traffic, on their capacity to leverage all the available resources, and on their potential for lower-latency transactions.
Each component type provides parameters that cover four main areas of tuning:
CPU task scheduling (for example, whether to auto-group similar tasks and schedule them together)
Memory (for example, the memory-usage threshold at which the system starts swapping pages to disk)
Network (for example, the size of the buffers used to read/write network packets)
Storage (for example, the type of storage scheduler)
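These four areas map onto standard kernel tunables. As an illustrative sketch (the parameter-to-tunable mapping below is an assumption based on common kernel interfaces, not taken from this page), you can inspect the current values on a Linux host like this:

```shell
# Inspect current values of kernel tunables in the four tuning areas
# (illustrative mapping; the pack's os_* parameters wrap tunables like these)
sysctl kernel.sched_autogroup_enabled        # CPU scheduling: task auto-grouping on/off
sysctl vm.swappiness                         # Memory: tendency to swap pages to disk
sysctl net.core.rmem_max net.core.wmem_max   # Network: max socket buffer sizes
cat /sys/block/sda/queue/scheduler           # Storage: active IO scheduler (device name assumed)
```

The same keys can be written with `sysctl -w` (or via `/etc/sysctl.d/`) when tuning by hand; Akamas applies them automatically during an optimization study.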
Here’s the command to install the Linux optimization pack using the Akamas CLI:
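The command block itself was not captured in this export; a sketch based on the generic Akamas CLI install flow (the pack name below is an assumption, so check the Install Optimization Packs page for the exact artifact name):

```shell
# Install the Linux optimization pack via the Akamas CLI
# ("Linux" as the optimization-pack name is an assumption; verify for your Akamas version)
akamas install optimization-pack Linux
```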
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
This page describes the Optimization Pack for the component type RHEL 7.
Notice: you can use a custom `device` filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder in the Prometheus provider and Prometheus provider metrics mapping pages.
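As an example, such a device filter boils down to a PromQL label matcher like the following sketch (the metric name comes from the standard Prometheus node exporter, which this integration typically scrapes; the `device` label value is an assumption about your host):

```
rate(node_disk_io_time_seconds_total{device="nvme0n1"}[5m])
```

In the telemetry configuration, the %FILTERS% placeholder would expand to the `device="nvme0n1"` matcher.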
This section documents Akamas out-of-the-box optimization packs.
This page describes the Optimization Pack for the component type RHEL 8.
Notice: you can use a custom `device` filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder in the Prometheus provider and Prometheus provider metrics mapping pages.
Metric | Unit | Description
---|---|---
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system)
cpu_num | CPUs | The number of CPUs available in the system (physical and logical)
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used | CPUs | The average number of CPUs used in the system (physical and logical)
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
mem_fault | faults/s | The number of memory faults (minor + major) per second
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_swapins | pages/s | The number of memory pages swapped in per second
mem_swapouts | pages/s | The number of memory pages swapped out per second
mem_total | bytes | The total amount of installed memory
mem_used | bytes | The total amount of memory used
mem_used_nocache | bytes | The total amount of memory used, not counting memory reserved for caching purposes
mem_util | percent | The memory utilization % (i.e., the % of memory used)
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used), not counting memory reserved for caching purposes
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_iops | ops/s | The average number of IO disk operations per second across all disks
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks
disk_read_bytes | bytes/s | The number of bytes per second read across all disks
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk (e.g., disk /dev/nvme01)
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_response_time_read | seconds | The average response time of read disk operations
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk
disk_response_time_write | seconds | The average response time of write disk operations
disk_swap_used | bytes | The total amount of space used by swap disks
disk_swap_util | percent | The average space utilization % of swap disks
disk_util_details | percent | The utilization % of each disk (i.e., how much time the disk is busy doing work), broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes | bytes/s | The number of bytes per second written across all disks
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 on device /dev/nvme01)
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01)
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second
os_context_switch | switches/s | The number of context switches per second
proc_blocked | processes | The number of blocked processes (e.g., blocked for IO or swapping reasons)
Parameter | Type | Unit | Default Value | Domain | Restart | Description
---|---|---|---|---|---|---
os_cpuSchedMinGranularity | integer | nanoseconds | 1500000 | 300000 → 30000000 | no | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks
os_cpuSchedWakeupGranularity | integer | nanoseconds | 2000000 | 400000 → 40000000 | no | Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost | integer | nanoseconds | 500000 | 100000 → 5000000 | no | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst | integer | | 0 | 0, 1 | no | A freshly forked child runs before the parent continues execution
os_CPUSchedLatency | integer | nanoseconds | 12000000 | 2400000 → 240000000 | no | Targeted preemption latency (in nanoseconds) for CPU-bound tasks
os_CPUSchedAutogroupEnabled | integer | | 0 | 0, 1 | no | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate | integer | | 32 | 3 → 320 | no | Scheduler NR Migrate
os_MemorySwappiness | integer | percent | 60 | 0 → 100 | no | The kernel's tendency to swap memory pages out to disk (higher values make the kernel swap more aggressively)
os_MemoryVmVfsCachePressure | integer | | 100 | 10 → 100 | no | VFS Cache Pressure
os_MemoryVmCompactionProactiveness | integer | | | | | Determines how aggressively compaction is done in the background
os_MemoryVmMinFree | integer | | 67584 | 10240 → 1024000 | no | Minimum Free Memory (in kbytes)
os_MemoryTransparentHugepageEnabled | categorical | | madvise | always, never, madvise | no | Transparent Hugepage Enablement Flag
os_MemoryTransparentHugepageDefrag | categorical | | madvise | always, never, defer+madvise, madvise, defer | no | Transparent Hugepage Defrag Flag
os_MemorySwap | categorical | | swapon | swapon, swapoff | no | Memory Swap
os_MemoryVmDirtyRatio | integer | | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio | integer | | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | The age (in centiseconds) after which dirty memory pages become eligible for writeback by the kernel flusher threads
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | Memory Dirty Writeback (in centisecs)
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog | integer | connections | 256 | 52 → 5120 | no | Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget | integer | | 300 | 30 → 30000 | no | Network Budget
os_NetworkNetCoreRmemMax | integer | | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax | integer | | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | | 1 | 0, 1 | no | Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout | integer | | 60 | 6 → 600 | no | Network TCP FIN timeout
os_NetworkRfs | integer | | 0 | 0 → 131072 | no | If enabled, increases data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests | integer | | 32 | 12 → 1280 | no | Storage Number of Requests
os_StorageRqAffinity | integer | | 1 | 1, 2 | no | Storage Requests Affinity
os_StorageNomerges | integer | | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb | integer | kilobytes | 256 | 32 → 256 | no | The largest IO size that the OS can issue to a block device
The optimization pack provides component types for the following Linux distributions:
- Amazon Linux AMI
- Amazon Linux 2 AMI
- Amazon Linux 2022 AMI
- CentOS Linux distribution, version 7.x
- CentOS Linux distribution, version 8.x
- Red Hat Enterprise Linux distribution, version 7.x
- Red Hat Enterprise Linux distribution, version 8.x
- Ubuntu Linux distribution by Canonical, version 16.04 (LTS)
- Ubuntu Linux distribution by Canonical, version 18.04 (LTS)
- Ubuntu Linux distribution by Canonical, version 20.04 (LTS)
Metric | Unit | Description
---|---|---
cpu_num | CPUs | The number of CPUs available in the system (physical and logical)
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system)
mem_util | percent | The memory utilization % (i.e., the % of memory used)
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used), not counting memory reserved for caching purposes
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_used | bytes | The total amount of memory used
mem_used_nocache | bytes | The total amount of memory used, not counting memory reserved for caching purposes
mem_total | bytes | The total amount of installed memory
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault | faults/s | The number of memory faults (major + minor) per second
mem_swapins | pages/s | The number of memory pages swapped in per second
mem_swapouts | pages/s | The number of memory pages swapped out per second
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01)
disk_swap_util | percent | The average space utilization % of swap disks
disk_swap_used | bytes | The total amount of space used by swap disks
disk_util_details | percent | The utilization % of each disk (i.e., how much time the disk is busy doing work), broken down by disk (e.g., disk /dev/nvme01)
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks
disk_iops | ops/s | The average number of IO disk operations per second across all disks
disk_response_time_read | seconds | The average response time of IO read-disk operations
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk
disk_response_time_write | seconds | The average response time of IO write-disk operations
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes | bytes/s | The number of bytes per second written across all disks
disk_read_bytes | bytes/s | The number of bytes per second read across all disks
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ)
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 on device /dev/nvme01)
proc_blocked | processes | The number of blocked processes (e.g., blocked for IO or swapping reasons)
os_context_switch | switches/s | The number of context switches per second
Parameter | Default Value | Domain | Description
---|---|---|---
os_cpuSchedMinGranularity | 2250000 ns | 300000 → 30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks
os_cpuSchedWakeupGranularity | 3000000 ns | 400000 → 40000000 ns | Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost | 500000 ns | 100000 → 5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst | 0 | 0 → 1 | A freshly forked child runs before the parent continues execution
os_CPUSchedLatency | 18000000 ns | 2400000 → 240000000 ns | Targeted preemption latency (in nanoseconds) for CPU-bound tasks
os_CPUSchedAutogroupEnabled | 1 | 0 → 1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate | 32 | 3 → 320 | Scheduler NR Migrate
os_MemorySwappiness | 1 | 0 → 100 | Memory Swappiness
os_MemoryVmVfsCachePressure | 100 % | 10 → 100 % | VFS Cache Pressure
os_MemoryVmMinFree | 67584 KB | 10240 → 1024000 KB | Minimum Free Memory
os_MemoryVmDirtyRatio | 20 % | 1 → 99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio | 10 % | 1 → 99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryTransparentHugepageEnabled | always | always, never | Transparent Hugepage Enablement
os_MemoryTransparentHugepageDefrag | always | always, never | Transparent Hugepage Defrag
os_MemorySwap | swapon | swapon, swapoff | Memory Swap
os_MemoryVmDirtyExpire | 3000 centisecs | 300 → 30000 centisecs | Memory Dirty Expiration Time
os_MemoryVmDirtyWriteback | 500 centisecs | 50 → 5000 centisecs | Memory Dirty Writeback
os_NetworkNetCoreSomaxconn | 128 connections | 12 → 1200 connections | Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100 → 10000 packets | Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52 → 15120 packets | Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget | 300 packets | 30 → 3000 packets | Network Budget
os_NetworkNetCoreRmemMax | 212992 bytes | 21299 → 2129920 bytes | Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax | 212992 bytes | 21299 → 2129920 bytes | Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0 → 1 | Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6 → 600 seconds | Network TCP FIN timeout
os_NetworkRfs | 0 | 0 → 131072 | If enabled, increases data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead | 128 KB | 0 → 1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests | 32 | 12 → 1280 | Storage Number of Requests
os_StorageRqAffinity | 1 | 1 → 2 | Storage Requests Affinity
os_StorageQueueScheduler | none | none, kyber | Storage Queue Scheduler Type
os_StorageNomerges | 0 | 0 → 2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb | 128 KB | 32 → 128 KB | The largest IO size that the OS can issue to a block device
Metric | Unit | Description
---|---|---
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system)
cpu_num | CPUs | The number of CPUs available in the system (physical and logical)
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used | CPUs | The average number of CPUs used in the system (physical and logical)
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
mem_fault | faults/s | The number of memory faults (minor + major) per second
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_swapins | pages/s | The number of memory pages swapped in per second
mem_swapouts | pages/s | The number of memory pages swapped out per second
mem_total | bytes | The total amount of installed memory
mem_used | bytes | The total amount of memory used
mem_used_nocache | bytes | The total amount of memory used, not counting memory reserved for caching purposes
mem_util | percent | The memory utilization % (i.e., the % of memory used)
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used), not counting memory reserved for caching purposes
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_iops | ops/s | The average number of IO disk operations per second across all disks
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks
disk_read_bytes | bytes/s | The number of bytes per second read across all disks
disk_read_bytes_details | bytes/s | The number of bytes per second read broken down by disk (e.g., disk /dev/nvme01)
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_response_time_read | seconds | The average response time of read disk operations
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk
disk_response_time_write | seconds | The average response time of write disk operations
disk_swap_used | bytes | The total amount of space used by swap disks
disk_swap_util | percent | The average space utilization % of swap disks
disk_util_details | percent | The utilization % of each disk (i.e., how much time the disk is busy doing work), broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes | bytes/s | The number of bytes per second written across all disks
disk_write_bytes_details | bytes/s | The number of bytes per second written broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 on device /dev/nvme01)
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01)
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second
os_context_switch | switches/s | The number of context switches per second
proc_blocked | processes | The number of blocked processes (e.g., blocked for IO or swapping reasons)
Parameter | Type | Unit | Default Value | Domain | Restart | Description
---|---|---|---|---|---|---
os_cpuSchedMinGranularity | integer | nanoseconds | 1500000 | 300000 → 30000000 | no | Minimal preemption granularity (in nanoseconds) for CPU-bound tasks
os_cpuSchedWakeupGranularity | integer | nanoseconds | 2000000 | 400000 → 40000000 | no | Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost | integer | nanoseconds | 500000 | 100000 → 5000000 | no | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst | integer | | 0 | 0, 1 | no | A freshly forked child runs before the parent continues execution
os_CPUSchedLatency | integer | nanoseconds | 12000000 | 2400000 → 240000000 | no | Targeted preemption latency (in nanoseconds) for CPU-bound tasks
os_CPUSchedAutogroupEnabled | integer | | 0 | 0, 1 | no | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate | integer | | 32 | 3 → 320 | no | Scheduler NR Migrate
os_MemorySwappiness | integer | percent | 60 | 0 → 100 | no | The kernel's tendency to swap memory pages out to disk (higher values make the kernel swap more aggressively)
os_MemoryVmVfsCachePressure | integer | | 100 | 10 → 100 | no | VFS Cache Pressure
os_MemoryVmCompactionProactiveness | integer | | 20 | 10 → 100 | no | Determines how aggressively compaction is done in the background
os_MemoryVmPageLockUnfairness | integer | | 5 | 0 → 1000 | no | Sets the level of unfairness in the page lock queue
os_MemoryVmWatermarkScaleFactor | integer | | 10 | 0 → 1000 | no | The amount of memory, expressed as fractions of 10,000, left in a node/system before kswapd is woken up, and how much memory needs to be free before kswapd goes back to sleep
os_MemoryVmWatermarkBoostFactor | integer | | 15000 | 0 → 30000 | no | The level of reclaim when memory is being fragmented, expressed as fractions of 10,000 of a zone's high watermark
os_MemoryVmMinFree | integer | | 67584 | 10240 → 1024000 | no | Minimum Free Memory (in kbytes)
os_MemoryTransparentHugepageEnabled | categorical | | madvise | always, never, madvise | no | Transparent Hugepage Enablement Flag
os_MemoryTransparentHugepageDefrag | categorical | | madvise | always, never, defer+madvise, madvise, defer | no | Transparent Hugepage Defrag Flag
os_MemorySwap | categorical | | swapon | swapon, swapoff | no | Memory Swap
os_MemoryVmDirtyRatio | integer | | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio | integer | | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | The age (in centiseconds) after which dirty memory pages become eligible for writeback by the kernel flusher threads
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | Memory Dirty Writeback (in centisecs)
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog | integer | connections | 256 | 52 → 5120 | no | Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget | integer | | 300 | 30 → 30000 | no | Network Budget
os_NetworkNetCoreRmemMax | integer | | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax | integer | | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | | 1 | 0, 1 | no | Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout | integer | | 60 | 6 → 600 | no | Network TCP FIN timeout
os_NetworkRfs | integer | | 0 | 0 → 131072 | no | If enabled, increases data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests | integer | | 32 | 12 → 1280 | no | Storage Number of Requests
os_StorageRqAffinity | integer | | 1 | 1, 2 | no | Storage Requests Affinity
os_StorageQueueScheduler | categorical | | none | none, kyber, mq-deadline, bfq | no | Storage Queue Scheduler Type
os_StorageNomerges | integer | | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb | integer | kilobytes | 256 | 32 → 256 | no | The largest IO size that the OS can issue to a block device
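The storage parameters above correspond to per-device files under sysfs. As an illustrative sketch (the device name `sda` is an assumption about your host; the mapping to the pack's parameter names is for orientation only), you can inspect them directly:

```shell
# Per-device storage tunables live under /sys/block/<dev>/queue
# (device name "sda" is an assumption; parameter mapping is illustrative)
cat /sys/block/sda/queue/scheduler        # os_StorageQueueScheduler: active scheduler shown in brackets
cat /sys/block/sda/queue/read_ahead_kb    # os_StorageReadAhead (kilobytes)
cat /sys/block/sda/queue/nr_requests      # os_StorageNrRequests
cat /sys/block/sda/queue/max_sectors_kb   # os_StorageMaxSectorsKb
```

Writing to these files (e.g., `echo kyber > /sys/block/sda/queue/scheduler`) takes effect immediately, which is why these parameters are marked as not requiring a restart.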
Out-of-the-box optimization packs provide support for applications:
- based on the Linux operating system
- based on MS .NET technology
- based on OpenJDK and Oracle HotSpot JVM
- based on the Eclipse OpenJ9 VM (formerly known as IBM J9)
- based on Node.js
- based on the Go runtime (aka Golang)
- exposed as web applications
- based on Docker containers
- based on Kubernetes containers
- based on WebSphere middleware
- based on Apache Spark middleware
- based on the PostgreSQL database
- based on the Cassandra database
- based on the MySQL database
- based on the Oracle database
- based on the MongoDB database
- based on the Elasticsearch database
- based on AWS EC2 or Lambda resources
cpu_load_avg
tasks
The system load average (i.e., the number of active tasks in the system)
cpu_num
CPUs
The number of CPUs available in the system (physical and logical)
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used
CPUs
The average number of CPUs used in the system (physical and logical)
cpu_util_details
percent
The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
mem_fault
faults/s
The number of memory faults (minor+major)
mem_fault_major
faults/s
The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault_minor
faults/s
The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_swapins
pages/s
The number of memory pages swapped in per second
mem_swapouts
pages/s
The number of memory pages swapped out per second
mem_total
bytes
The total amount of installed memory
mem_used
bytes
The total amount of memory used
mem_used_nocache
bytes
The total amount of memory used without considering memory reserved for caching purposes
mem_util
percent
The memory utilization % (i.e., the % of memory used)
mem_util_details
percent
The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_util_nocache
percent
The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes
disk_io_inflight_details
ops
The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_iops
ops/s
The average number of IO disk operations per second across all disks
disk_iops_details
ops/s
The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_iops_reads
ops/s
The average number of IO disk-read operations per second across all disks
disk_iops_writes
ops/s
The average number of IO disk-write operations per second across all disks
disk_read_bytes
bytes/s
The number of bytes per second read across all disks
disk_read_bytes_details
bytes/s
The number of bytes per second read from the disks broken down by disk (e.g., disk /dev/nvme01)
disk_read_write_bytes
bytes/s
The number of bytes per second read and written across all disks
disk_response_time_details
seconds
The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_response_time_read
seconds
The average response time of read disk operations
disk_response_time_worst
seconds
The average response time of IO disk operations of the slowest disk
disk_response_time_write
seconds
The average response time of write on disk operations
disk_swap_used
bytes
The total amount of space used by swap disks
disk_swap_util
percent
The average space utilization % of swap disks
disk_util_details
percent
The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes
bytes/s
The number of bytes per second written across all disks
disk_write_bytes_details
bytes/s
The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
filesystem_size
bytes
The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01)
filesystem_used
bytes
The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_util
percent
The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
network_in_bytes_details
bytes/s
The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details
bytes/s
The number of outbound network packets in bytes per second broken down by network device (e.g., eth01)
network_tcp_retrans
retrans/s
The number of network TCP retransmissions per second
os_context_switch
switches/s
The number of context switches per second
proc_blocked
processes
The number of processes blocked (e.g., for IO or swapping reasons)
os_cpuSchedMinGranularity
integer
nanoseconds
1500000
300000 → 30000000
no
Minimal preemption granularity (in nanoseconds) for CPU bound tasks
os_cpuSchedWakeupGranularity
integer
nanoseconds
2000000
400000 → 40000000
no
Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost
integer
nanoseconds
500000
100000 → 5000000
no
Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
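On kernels that expose this knob via sysctl (as RHEL 7 era kernels do), it can be adjusted at runtime; the value below is illustrative:

```shell
# Current "cache hot" threshold in nanoseconds
sysctl kernel.sched_migration_cost_ns

# Double the typical default to reduce task migrations (requires root)
sudo sysctl -w kernel.sched_migration_cost_ns=1000000
```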
os_CPUSchedChildRunsFirst
integer
0
0
, 1
no
A freshly forked child runs before the parent continues execution
os_CPUSchedLatency
integer
nanoseconds
12000000
2400000 → 240000000
no
Targeted preemption latency (in nanoseconds) for CPU bound tasks
os_CPUSchedAutogroupEnabled
integer
0
0
, 1
no
Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
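A minimal sketch of toggling auto-grouping at runtime on kernels that expose it via sysctl (requires root):

```shell
# Read the current setting (0 = disabled, 1 = enabled)
sysctl kernel.sched_autogroup_enabled

# Enable task auto-grouping
sudo sysctl -w kernel.sched_autogroup_enabled=1
```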
os_CPUSchedNrMigrate
integer
32
3 → 320
no
Scheduler NR Migrate
os_MemorySwappiness
integer
percent
60
0 → 100
no
Controls how aggressively the kernel swaps memory pages to disk: higher values increase swapping activity, lower values make the kernel favor keeping pages in memory
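For example, swappiness can be read and set through sysctl; the value 10 and the drop-in file name below are illustrative, not recommendations:

```shell
# Read the current swappiness value
sysctl vm.swappiness

# Apply a lower value for the current boot (requires root)
sudo sysctl -w vm.swappiness=10

# Persist across reboots via a sysctl drop-in file (file name is arbitrary)
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-tuning.conf
```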
os_MemoryVmVfsCachePressure
integer
100
10 → 100
no
VFS Cache Pressure
os_MemoryVmCompactionProactiveness
integer
20
0 → 100
no
Determines how aggressively compaction is done in the background
os_MemoryVmPageLockUnfairness
integer
5
0 → 1000
no
Sets the level of unfairness in the page lock queue
os_MemoryVmWatermarkScaleFactor
integer
10
0 → 1000
no
The amount of memory, expressed as fractions of 10'000, left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep
os_MemoryVmWatermarkBoostFactor
integer
15000
0 → 30000
no
The level of reclaim when the memory is being fragmented, expressed as fractions of 10'000 of a zone's high watermark
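Both watermark parameters are regular vm.* sysctls on kernels that provide them (watermark_scale_factor since 4.6, watermark_boost_factor since 5.0); the value below is illustrative:

```shell
# Read both watermark settings
sysctl vm.watermark_scale_factor vm.watermark_boost_factor

# Wake kswapd earlier: 50/10000 = 0.5% of memory instead of the 0.1% default (requires root)
sudo sysctl -w vm.watermark_scale_factor=50
```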
os_MemoryVmMinFree
integer
67584
10240 → 1024000
no
Minimum Free Memory (in kbytes)
os_MemoryTransparentHugepageEnabled
categorical
madvise
always
, never
, madvise
no
Transparent Hugepage Enablement Flag
os_MemoryTransparentHugepageDefrag
categorical
madvise
always
, never
, defer+madvise
, madvise
, defer
no
Transparent Hugepage Enablement Defrag
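Both Transparent Hugepage settings live under sysfs; when reading, the active value is shown in square brackets:

```shell
# Show the current THP mode, e.g. "always [madvise] never"
cat /sys/kernel/mm/transparent_hugepage/enabled

# Switch enablement and defrag to madvise (requires root)
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
```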
os_MemorySwap
categorical
swapon
swapon
, swapoff
no
Memory Swap
os_MemoryVmDirtyRatio
integer
20
1 → 99
no
When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio
integer
10
1 → 99
no
When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryVmDirtyExpire
integer
centiseconds
3000
300 → 30000
no
The age (in centiseconds) after which dirty memory pages are old enough to be written to disk by the kernel flusher threads
os_MemoryVmDirtyWriteback
integer
centiseconds
500
50 → 5000
no
Memory Dirty Writeback (in centisecs)
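The four dirty-page parameters map directly to vm.* sysctls; note that dirty_background_ratio should stay below dirty_ratio so that background writeback kicks in before processes are forced to write synchronously. Values below are illustrative:

```shell
# Read all four dirty-page settings
sysctl vm.dirty_background_ratio vm.dirty_ratio \
       vm.dirty_expire_centisecs vm.dirty_writeback_centisecs

# Start background writeback earlier and force writes sooner (requires root)
sudo sysctl -w vm.dirty_background_ratio=5 vm.dirty_ratio=15
```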
os_NetworkNetCoreSomaxconn
integer
connections
128
12 → 8192
no
Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog
integer
packets
1000
100 → 10000
no
Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog
integer
connections
256
52 → 5120
no
Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget
integer
300
30 → 30000
no
Network Budget
os_NetworkNetCoreRmemMax
integer
212992
21299 → 2129920
no
Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax
integer
212992
21299 → 2129920
no
Maximum network transmit buffer size that applications can request
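These two parameters map to net.core.rmem_max and net.core.wmem_max; note they only raise the ceiling that applications may request via setsockopt(SO_RCVBUF/SO_SNDBUF), without changing the default buffer sizes. Values below are illustrative:

```shell
# Read the current ceilings
sysctl net.core.rmem_max net.core.wmem_max

# Raise both ceilings to ~2 MB (requires root)
sudo sysctl -w net.core.rmem_max=2129920 net.core.wmem_max=2129920
```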
os_NetworkNetIpv4TcpSlowStartAfterIdle
integer
1
0
, 1
no
Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout
integer
60
6 → 600
no
Network TCP timeout
os_NetworkRfs
integer
0
0 → 131072
no
If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
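Enabling RFS takes two steps: size the global socket flow table, then distribute that budget over each receive queue. The device name eth0 and the sizes below are examples:

```shell
# Global flow table: total number of flows tracked system-wide
echo 32768 | sudo tee /proc/sys/net/core/rps_sock_flow_entries

# Per receive queue: typically rps_sock_flow_entries / number of RX queues
echo 2048 | sudo tee /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```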
os_StorageReadAhead
integer
kilobytes
128
0 → 4096
no
Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests
integer
32
12 → 1280
no
Storage Number of Requests
os_StorageRqAffinity
integer
1
1
, 2
no
Storage Requests Affinity
os_StorageQueueScheduler
categorical
none
none
, kyber
, mq-deadline
, bfq
no
Storage Queue Scheduler Type
os_StorageNomerges
integer
0
0 → 2
no
Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb
integer
kilobytes
256
32 → 256
no
The largest IO size that the OS can issue to a block device
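Both max_sectors_kb and the queue scheduler are per-device sysfs attributes; `sda` is an example device, and the active scheduler is printed in square brackets when reading:

```shell
# Read and lower the maximum IO size in KB (requires root)
cat /sys/block/sda/queue/max_sectors_kb
echo 128 | sudo tee /sys/block/sda/queue/max_sectors_kb

# Show available schedulers, e.g. "[none] mq-deadline kyber bfq", then switch
cat /sys/block/sda/queue/scheduler
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
```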
cpu_num
CPUs
The number of CPUs available in the system (physical and logical)
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_util_details
percent
The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
cpu_load_avg
tasks
The system load average (i.e., the number of active tasks in the system)
mem_util
percent
The memory utilization % (i.e., the % of memory used)
mem_util_nocache
percent
The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes
mem_util_details
percent
The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_used
bytes
The total amount of memory used
mem_used_nocache
bytes
The total amount of memory used without considering memory reserved for caching purposes
mem_total
bytes
The total amount of installed memory
mem_fault_minor
faults/s
The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_fault_major
faults/s
The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault
faults/s
The number of memory faults (major + minor)
mem_swapins
pages/s
The number of memory pages swapped in per second
mem_swapouts
pages/s
The number of memory pages swapped out per second
network_tcp_retrans
retrans/s
The number of network TCP retransmissions per second
network_in_bytes_details
bytes/s
The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details
bytes/s
The number of outbound network packets in bytes per second broken down by network device (e.g., eth01)
disk_swap_util
percent
The average space utilization % of swap disks
disk_swap_used
bytes
The total amount of space used by swap disks
disk_util_details
percent
The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01)
disk_iops_writes
ops/s
The average number of IO disk-write operations per second across all disks
disk_iops_reads
ops/s
The average number of IO disk-read operations per second across all disks
disk_iops
ops/s
The average number of IO disk operations per second across all disks
disk_response_time_read
seconds
The average response time of IO read-disk operations
disk_response_time_worst
seconds
The average response time of IO disk operations of the slowest disk
disk_response_time_write
seconds
The average response time of IO write-disk operations
disk_response_time_details
seconds
The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_iops_details
ops/s
The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_io_inflight_details
ops
The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes
bytes/s
The number of bytes per second written across all disks
disk_read_bytes
bytes/s
The number of bytes per second read across all disks
disk_read_write_bytes
bytes/s
The number of bytes per second read and written across all disks
disk_write_bytes_details
bytes/s
The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
disk_read_bytes_details
bytes/s
The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ)
filesystem_util
percent
The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
filesystem_used
bytes
The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_size
bytes
The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01)
proc_blocked
processes
The number of processes blocked (e.g., for IO or swapping reasons)
os_context_switch
switches/s
The number of context switches per second
os_cpuSchedMinGranularity
2250000 ns
300000→30000000 ns
Minimal preemption granularity (in nanoseconds) for CPU bound tasks
os_cpuSchedWakeupGranularity
3000000 ns
400000→40000000 ns
Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost
500000 ns
100000→5000000 ns
Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst
0
0→1
A freshly forked child runs before the parent continues execution
os_CPUSchedLatency
18000000 ns
2400000→240000000 ns
Targeted preemption latency (in nanoseconds) for CPU bound tasks
os_CPUSchedAutogroupEnabled
1
0→1
Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate
32
3→320
Scheduler NR Migrate
os_MemorySwappiness
30
0→100
Memory Swappiness
os_MemoryVmVfsCachePressure
100 %
10→100 %
VFS Cache Pressure
os_MemoryVmMinFree
67584 KB
10240→1024000 KB
Minimum Free Memory
os_MemoryVmDirtyRatio
30 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio
10 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryTransparentHugepageEnabled
never
always
never
madvise
Transparent Hugepage Enablement
os_MemoryTransparentHugepageDefrag
always
always
never
madvise
defer
defer+madvise
Transparent Hugepage Enablement Defrag
os_MemorySwap
swapon
swapon
swapoff
Memory Swap
os_MemoryVmDirtyExpire
3000 centisecs
300→30000 centisecs
Memory Dirty Expiration Time
os_MemoryVmDirtyWriteback
500 centisecs
50→5000 centisecs
Memory Dirty Writeback
os_NetworkNetCoreSomaxconn
128 connections
12→1200 connections
Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog
1000 packets
100→10000 packets
Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog
512 packets
52→15120 packets
Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget
300 packets
30→3000 packets
Network Budget
os_NetworkNetCoreRmemMax
212992 bytes
21299→2129920 bytes
Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax
212992 bytes
21299→2129920 bytes
Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle
1
0→1
Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout
60
6→600 seconds
Network TCP timeout
os_NetworkRfs
0
0→131072
If enabled increases datacache hitrate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead
128 KB
0→1024 KB
Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests
32 requests
12→1280 requests
Storage Number of Requests
os_StorageRqAffinity
1
1→2
Storage Requests Affinity
os_StorageQueueScheduler
none
none
kyber
mq-deadline
bfq
Storage Queue Scheduler Type
os_StorageNomerges
0
0→2
Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb
256 KB
32→256 KB
The largest IO size that the OS can issue to a block device
This page describes the Optimization Pack for the component type CentOS 7.
Notice: you can use a device
custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS%
placeholder here: Prometheus provider and here: Prometheus provider metrics mapping.
There are no general constraints among CentOS 7 parameters.
This page describes the Optimization Pack for the component type Ubuntu 16.04.
Notice: you can use a device
custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS%
placeholder here: Prometheus provider and here: Prometheus provider metrics mapping.
This page describes the Optimization Pack for the component type CentOS 8.
Notice: you can use a device
custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS%
placeholder here: Prometheus provider and here: Prometheus provider metrics mapping.
There are no general constraints among CentOS 8 parameters.
The OpenJ9 optimization pack enables you to optimize Java applications based on the Eclipse OpenJ9 VM, formerly known as IBM J9. Through this optimization pack, Akamas is able to tackle the problem of performance of JVM-based applications from both the point of view of cost savings and quality of service.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Garbage collection
Heap
JIT
Similarly, the bundled metrics provide visibility on the following aspects of tuned applications:
Heap and memory utilization
Garbage Collection
Execution threads
The optimization pack supports the most commonly used JVM versions.
Here’s the command to install the Eclipse OpenJ9 optimization pack using the Akamas CLI:
This page describes the Optimization Pack for Java OpenJDK 8 JVM.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
This page describes the Optimization Pack for Java OpenJDK 11 JVM.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
This page describes the Optimization Pack for Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 6.
The following parameters require their ranges or default values to be updated according to the described rules:
Notice that the value nocompressedreferences
for j9vm_compressedReferences
can only be specified for JVMs compiled with the proper --with-noncompressedrefs
flag. If this is not the case you cannot actively disable compressed references, meaning:
for Xmx <= 57GB it is useless to tune this parameter, since compressed references are active by default and cannot be explicitly disabled
for Xmx > 57GB compressed references are disabled by default (blank value), so Akamas can try to enable them. This requires removing the nocompressedreferences
value from the domain
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
Notice that
j9vm_newSpaceFixed
is mutually exclusive with j9vm_minNewSpace
and j9vm_maxNewSpace
j9vm_oldSpaceFixed
is mutually exclusive with j9vm_minOldSpace
and j9vm_maxOldSpace
the sum of j9vm_minNewSpace
and j9vm_minOldSpace
must be equal to j9vm_minHeapSize
, so it is redundant to tune all three together. The relationships among the corresponding max values are more complex.
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
MS .NET 3.1
cpu_num
CPUs
The number of CPUs available in the system (physical and logical)
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_util_details
percent
The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
cpu_load_avg
tasks
The system load average (i.e., the number of active tasks in the system)
mem_util
percent
The memory utilization % (i.e., the % of memory used)
mem_util_nocache
percent
The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes
mem_util_details
percent
The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_used
bytes
The total amount of memory used
mem_used_nocache
bytes
The total amount of memory used without considering memory reserved for caching purposes
mem_total
bytes
The total amount of installed memory
mem_fault_minor
faults/s
The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_fault_major
faults/s
The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault
faults/s
The number of memory faults (major + minor)
mem_swapins
pages/s
The number of memory pages swapped in per second
mem_swapouts
pages/s
The number of memory pages swapped out per second
network_tcp_retrans
retrans/s
The number of network TCP retransmissions per second
network_in_bytes_details
bytes/s
The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details
bytes/s
The number of outbound network packets in bytes per second broken down by network device (e.g., eth01)
disk_swap_util
percent
The average space utilization % of swap disks
disk_swap_used
bytes
The total amount of space used by swap disks
disk_util_details
percent
The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01)
disk_iops_writes
ops/s
The average number of IO disk-write operations per second across all disks
disk_iops_reads
ops/s
The average number of IO disk-read operations per second across all disks
disk_iops
ops/s
The average number of IO disk operations per second across all disks
disk_response_time_read
seconds
The average response time of IO read-disk operations
disk_response_time_worst
seconds
The average response time of IO disk operations of the slowest disk
disk_response_time_write
seconds
The average response time of IO write-disk operations
disk_response_time_details
seconds
The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_iops_details
ops/s
The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_io_inflight_details
ops
The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes
bytes/s
The number of bytes per second written across all disks
disk_read_bytes
bytes/s
The number of bytes per second read across all disks
disk_read_write_bytes
bytes/s
The number of bytes per second read and written across all disks
disk_write_bytes_details
bytes/s
The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
disk_read_bytes_details
bytes/s
The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ)
filesystem_util
percent
The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
filesystem_used
bytes
The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_size
bytes
The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01)
proc_blocked
processes
The number of processes blocked (e.g., for IO or swapping reasons)
os_context_switch
switches/s
The number of context switches per second
os_cpuSchedMinGranularity
2250000 ns
300000→30000000 ns
Minimal preemption granularity (in nanoseconds) for CPU bound tasks
os_cpuSchedWakeupGranularity
3000000 ns
400000→40000000 ns
Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost
500000 ns
100000→5000000 ns
Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst
0
0→1
A freshly forked child runs before the parent continues execution
os_CPUSchedLatency
18000000 ns
2400000→240000000 ns
Targeted preemption latency (in nanoseconds) for CPU bound tasks
os_CPUSchedAutogroupEnabled
1
0→1
Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate
32
3→320
Scheduler NR Migrate
os_MemorySwappiness
1
0→100
Memory Swappiness
os_MemoryVmVfsCachePressure
100 %
10→100 %
VFS Cache Pressure
os_MemoryVmMinFree
67584 KB
10240→1024000 KB
Minimum Free Memory
os_MemoryVmDirtyRatio
20 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio
10 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryTransparentHugepageEnabled
always
always
never
Transparent Hugepage Enablement
os_MemoryTransparentHugepageDefrag
always
always
never
Transparent Hugepage Enablement Defrag
os_MemorySwap
swapon
swapon
swapoff
Memory Swap
os_MemoryVmDirtyExpire
3000 centisecs
300→30000 centisecs
Memory Dirty Expiration Time
os_MemoryVmDirtyWriteback
500 centisecs
50→5000 centisecs
Memory Dirty Writeback
os_NetworkNetCoreSomaxconn
128 connections
12→1200 connections
Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog
1000 packets
100→10000 packets
Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog
1024 packets
52→15120 packets
Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget
300 packets
30→3000 packets
Network Budget
os_NetworkNetCoreRmemMax
212992 bytes
21299→2129920 bytes
Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax
212992 bytes
21299→2129920 bytes
Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle
1
0→1
Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout
60
6→600 seconds
Network TCP timeout
os_NetworkRfs
0
0→131072
If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead
128 KB
0→1024 KB
Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests
32 requests
12→1280 requests
Storage Number of Requests
os_StorageRqAffinity
1
1→2
Storage Requests Affinity
os_StorageQueueScheduler
none
none
kyber
mq-deadline
bfq
Storage Queue Scheduler Type
os_StorageNomerges
0
0→2
Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb
128 KB
32→128 KB
The largest IO size that the OS can issue to a block device
cpu_num
CPUs
The number of CPUs available in the system (physical and logical)
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_util_details
percent
The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
cpu_load_avg
tasks
The system load average (i.e., the number of active tasks in the system)
mem_util
percent
The memory utilization % (i.e., the % of memory used)
mem_util_nocache
percent
The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes
mem_util_details
percent
The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_used
bytes
The total amount of memory used
mem_used_nocache
bytes
The total amount of memory used without considering memory reserved for caching purposes
mem_total
bytes
The total amount of installed memory
mem_fault_minor
faults/s
The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_fault_major
faults/s
The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault
faults/s
The number of memory faults (major + minor)
mem_swapins
pages/s
The number of memory pages swapped in per second
mem_swapouts
pages/s
The number of memory pages swapped out per second
network_tcp_retrans
retrans/s
The number of network TCP retransmissions per second
network_in_bytes_details
bytes/s
The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details
bytes/s
The outbound network traffic in bytes per second broken down by network device (e.g., eth01)
disk_swap_util
percent
The average space utilization % of swap disks
disk_swap_used
bytes
The total amount of space used by swap disks
disk_util_details
percent
The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/sda)
disk_iops_writes
ops/s
The average number of IO disk-write operations per second across all disks
disk_iops_reads
ops/s
The average number of IO disk-read operations per second across all disks
disk_iops
ops/s
The average number of IO disk operations per second across all disks
disk_response_time_read
seconds
The average response time of IO read-disk operations
disk_response_time_worst
seconds
The average response time of IO disk operations of the slowest disk
disk_response_time_write
seconds
The average response time of IO write-disk operations
disk_response_time_details
seconds
The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01 )
disk_iops_details
ops/s
The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_io_inflight_details
ops
The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes
bytes/s
The number of bytes per second written across all disks
disk_read_bytes
bytes/s
The number of bytes per second read across all disks
disk_read_write_bytes
bytes/s
The number of bytes per second read and written across all disks
disk_write_bytes_details
bytes/s
The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
disk_read_bytes_details
bytes/s
The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ)
filesystem_util
percent
The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
filesystem_used
bytes
The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_size
bytes
The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01)
proc_blocked
processes
The number of processes blocked (e.g., for IO or swapping reasons)
os_context_switch
switches/s
The number of context switches per second
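As a concrete reading of the memory metrics above, mem_util_nocache can be approximated directly from /proc/meminfo. This is a sketch assuming a Linux host; the formula (MemTotal - MemFree - Buffers - Cached) is an approximation, not the exact exporter implementation.

```shell
# Approximate mem_util_nocache from /proc/meminfo (Linux only).
if [ -r /proc/meminfo ]; then
  awk '
    /^MemTotal:/ { total = $2 }
    /^MemFree:/  { free  = $2 }
    /^Buffers:/  { buf   = $2 }
    /^Cached:/   { cache = $2 }
    END { printf "mem_util_nocache: %.1f%%\n", 100 * (total - free - buf - cache) / total }
  ' /proc/meminfo | tee /tmp/mem_util_nocache.txt
else
  echo "mem_util_nocache: n/a (no /proc/meminfo)" | tee /tmp/mem_util_nocache.txt
fi
```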
os_cpuSchedMinGranularity
2250000 ns
300000→30000000 ns
Minimal preemption granularity (in nanoseconds) for CPU bound tasks
os_cpuSchedWakeupGranularity
3000000 ns
400000→40000000 ns
Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost
500000 ns
100000→5000000 ns
Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst
0
0→1
A freshly forked child runs before the parent continues execution
os_CPUSchedLatency
18000000 ns
2400000→240000000 ns
Targeted preemption latency (in nanoseconds) for CPU bound tasks
os_CPUSchedAutogroupEnabled
1
0→1
Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate
32
3→320
Scheduler NR Migrate
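On RHEL 7 kernels, the CPU scheduler parameters above are exposed as kernel.sched_* sysctls (newer kernels moved some of them under debugfs). A hedged sketch that writes a sysctl fragment for review rather than applying it; the values mirror the defaults listed above.

```shell
# Sketch only: review, then apply with "sudo sysctl -p /tmp/sched-tuning.conf"
# on a RHEL 7 kernel.
cat > /tmp/sched-tuning.conf <<'EOF'
kernel.sched_min_granularity_ns = 2250000    # os_cpuSchedMinGranularity
kernel.sched_wakeup_granularity_ns = 3000000 # os_cpuSchedWakeupGranularity
kernel.sched_migration_cost_ns = 500000      # os_CPUSchedMigrationCost
kernel.sched_child_runs_first = 0            # os_CPUSchedChildRunsFirst
kernel.sched_latency_ns = 18000000           # os_CPUSchedLatency
kernel.sched_autogroup_enabled = 1           # os_CPUSchedAutogroupEnabled
kernel.sched_nr_migrate = 32                 # os_CPUSchedNrMigrate
EOF
cat /tmp/sched-tuning.conf
```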
os_MemorySwappiness
1
0→100
Memory Swappiness
os_MemoryVmVfsCachePressure
100 %
10→100 %
VFS Cache Pressure
os_MemoryVmMinFree
67584 KB
10240→1024000 KB
Minimum Free Memory
os_MemoryVmDirtyRatio
20 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio
10 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
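A worked example of what these two ratios mean in absolute terms, assuming for illustration a host with 8 GiB of RAM:

```shell
# With vm.dirty_ratio=20 and vm.dirty_background_ratio=10 on 8 GiB of RAM:
total_kb=$(( 8 * 1024 * 1024 ))   # 8 GiB expressed in kB
dirty_ratio=20
background_ratio=10
echo "background flush starts at $(( total_kb * background_ratio / 100 )) kB of dirty pages"
echo "process writes throttle at $(( total_kb * dirty_ratio / 100 )) kB of dirty pages"
```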
os_MemoryTransparentHugepageEnabled
always
always
never
Transparent Hugepage Enablement
os_MemoryTransparentHugepageDefrag
always
always
never
Transparent Hugepage Enablement Defrag
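The two Transparent Hugepage parameters above correspond to runtime knobs under /sys/kernel/mm/transparent_hugepage. The sketch below only generates the commands for review (applying them needs root); the chosen values are illustrative.

```shell
# Sketch only: write the THP commands to a review file; values illustrative.
cat > /tmp/thp-tuning.sh <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/enabled  # os_MemoryTransparentHugepageEnabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag   # os_MemoryTransparentHugepageDefrag
EOF
cat /tmp/thp-tuning.sh
```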
os_MemorySwap
swapon
swapon
swapoff
Memory Swap
os_MemoryVmDirtyExpire
3000 centisecs
300→30000 centisecs
Memory Dirty Expiration Time
os_MemoryVmDirtyWriteback
500 centisecs
50→5000 centisecs
Memory Dirty Writeback
os_NetworkNetCoreSomaxconn
128 connections
12→1200 connections
Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog
1000 packets
100→10000 packets
Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog
1024 packets
52→15120 packets
Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget
300 packets
30→3000 packets
Network Budget
os_NetworkNetCoreRmemMax
212992 bytes
21299→2129920 bytes
Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax
212992 bytes
21299→2129920 bytes
Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle
1
0→1
Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout
60 seconds
6→600 seconds
Network TCP FIN timeout
os_NetworkRfs
0
0→131072
If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
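Enabling RFS as described above takes two steps: size the global flow table, then set a per-receive-queue flow count. The sketch below generates the commands without executing them; the interface name eth0 and both sizes are illustrative assumptions.

```shell
# Sketch only: RFS needs a global flow table plus a per-rx-queue flow count.
cat > /tmp/rfs-tuning.sh <<'EOF'
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries  # global table (os_NetworkRfs)
for q in /sys/class/net/eth0/queues/rx-*; do
  echo 2048 > "$q/rps_flow_cnt"                        # per-queue share of the table
done
EOF
cat /tmp/rfs-tuning.sh
```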
os_StorageReadAhead
128 KB
0→1024 KB
Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests
1000 requests
100→10000 requests
Maximum number of I/O requests that can be queued at the storage device
os_StorageRqAffinity
1
1→2
Storage Requests Affinity
os_StorageQueueScheduler
none
none
kyber
Storage Queue Scheduler Type
os_StorageNomerges
0
0→2
Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb
128 KB
32→128 KB
The largest IO size (in KB) that the OS can issue to the storage device
cpu_num
CPUs
The number of CPUs available in the system (physical and logical)
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_util_details
percent
The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
cpu_load_avg
tasks
The system load average (i.e., the number of active tasks in the system)
mem_util
percent
The memory utilization % (i.e., the % of memory used)
mem_util_nocache
percent
The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes
mem_util_details
percent
The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_used
bytes
The total amount of memory used
mem_used_nocache
bytes
The total amount of memory used without considering memory reserved for caching purposes
mem_total
bytes
The total amount of installed memory
mem_fault_minor
faults/s
The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_fault_major
faults/s
The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault
faults/s
The number of memory faults (major + minor)
mem_swapins
pages/s
The number of memory pages swapped in per second
mem_swapouts
pages/s
The number of memory pages swapped out per second
network_tcp_retrans
retrans/s
The number of network TCP retransmissions per second
network_in_bytes_details
bytes/s
The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details
bytes/s
The outbound network traffic in bytes per second broken down by network device (e.g., eth01)
disk_swap_util
percent
The average space utilization % of swap disks
disk_swap_used
bytes
The total amount of space used by swap disks
disk_util_details
percent
The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/sda)
disk_iops_writes
ops/s
The average number of IO disk-write operations per second across all disks
disk_iops_reads
ops/s
The average number of IO disk-read operations per second across all disks
disk_iops
ops/s
The average number of IO disk operations per second across all disks
disk_response_time_read
seconds
The average response time of IO read-disk operations
disk_response_time_worst
seconds
The average response time of IO disk operations of the slowest disk
disk_response_time_write
seconds
The average response time of IO write-disk operations
disk_response_time_details
seconds
The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01 )
disk_iops_details
ops/s
The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_io_inflight_details
ops
The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_write_bytes
bytes/s
The number of bytes per second written across all disks
disk_read_bytes
bytes/s
The number of bytes per second read across all disks
disk_read_write_bytes
bytes/s
The number of bytes per second read and written across all disks
disk_write_bytes_details
bytes/s
The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
disk_read_bytes_details
bytes/s
The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ)
filesystem_util
percent
The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
filesystem_used
bytes
The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_size
bytes
The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01)
proc_blocked
processes
The number of processes blocked (e.g., for IO or swapping reasons)
os_context_switch
switches/s
The number of context switches per second
os_cpuSchedMinGranularity
2250000 ns
300000→30000000 ns
Minimal preemption granularity (in nanoseconds) for CPU bound tasks
os_cpuSchedWakeupGranularity
3000000 ns
400000→40000000 ns
Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost
500000 ns
100000→5000000 ns
Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst
0
0→1
A freshly forked child runs before the parent continues execution
os_CPUSchedLatency
18000000 ns
2400000→240000000 ns
Targeted preemption latency (in nanoseconds) for CPU bound tasks
os_CPUSchedAutogroupEnabled
1
0→1
Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate
32
3→320
Scheduler NR Migrate
os_MemorySwappiness
1
0→100
Memory Swappiness
os_MemoryVmVfsCachePressure
100 %
10→100 %
VFS Cache Pressure
os_MemoryVmMinFree
67584 KB
10240→1024000 KB
Minimum Free Memory
os_MemoryVmDirtyRatio
20 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write
os_MemoryVmDirtyBackgroundRatio
10 %
1→99 %
When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background
os_MemoryTransparentHugepageEnabled
never
always
never
madvise
Transparent Hugepage Enablement
os_MemoryTransparentHugepageDefrag
always
always
never
madvise
defer
defer+madvise
Transparent Hugepage Enablement Defrag
os_MemorySwap
swapon
swapon
swapoff
Memory Swap
os_MemoryVmDirtyExpire
3000 centisecs
300→30000 centisecs
Memory Dirty Expiration Time
os_MemoryVmDirtyWriteback
500 centisecs
50→5000 centisecs
Memory Dirty Writeback
os_NetworkNetCoreSomaxconn
128 connections
12→1200 connections
Network Max Connections
os_NetworkNetCoreNetdevMaxBacklog
1000 packets
100→10000 packets
Network Max Backlog
os_NetworkNetIpv4TcpMaxSynBacklog
512 packets
52→15120 packets
Network IPv4 Max SYN Backlog
os_NetworkNetCoreNetdevBudget
300 packets
30→3000 packets
Network Budget
os_NetworkNetCoreRmemMax
212992 bytes
21299→2129920 bytes
Maximum network receive buffer size that applications can request
os_NetworkNetCoreWmemMax
212992 bytes
21299→2129920 bytes
Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle
1
0→1
Network Slow Start After Idle Flag
os_NetworkNetIpv4TcpFinTimeout
60 seconds
6→600 seconds
Network TCP FIN timeout
os_NetworkRfs
0
0→131072
If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running
os_StorageReadAhead
128 KB
0→1024 KB
Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk
os_StorageNrRequests
1000 requests
100→10000 requests
Maximum number of I/O requests that can be queued at the storage device
os_StorageRqAffinity
1
1→2
Storage Requests Affinity
os_StorageQueueScheduler
none
none
kyber
mq-deadline
bfq
Storage Queue Scheduler Type
os_StorageNomerges
0
0→2
Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried
os_StorageMaxSectorsKb
128 KB
32→128 KB
The largest IO size (in KB) that the OS can issue to the storage device
gc_count | collections/s | The total number of garbage collections |
gc_duration | seconds | The garbage collection duration |
heap_hard_limit | bytes | The size of the heap |
csproj_System_GC_Server | categorical | boolean |
|
| yes | The main flavor of the GC: set it to false for workstation GC or true for server GC. To be set in csproj file and requires rebuild. |
csproj_System_GC_Concurrent | categorical | boolean |
|
| yes | Configures whether background (concurrent) garbage collection is enabled (setting to true). To be set in csproj file and requires rebuild. |
runtime_System_GC_Server | categorical | boolean |
|
| yes | The main flavor of the GC: set it to false for workstation GC or true for server GC. To be set in csproj file and requires rebuild. |
runtime_System_GC_Concurrent | categorical | boolean |
|
| yes | Configures whether background (concurrent) garbage collection is enabled (setting to true). To be set in csproj file and requires rebuild. |
runtime_System_GC_HeapCount | integer | heapcount |
|
| no | Limits the number of heaps created by the garbage collector. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_CpuGroup | categorical | boolean |
|
| no | Configures whether the garbage collector uses CPU groups or not. Default is false. To be set in runtimeconfig.json |
runtime_System_GC_NoAffinitize | categorical | boolean |
|
| no | Specifies whether to affinitize garbage collection threads with processors. To affinitize a GC thread means that it can only run on its specific CPU. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_HeapHardLimit | integer | bytes |
|
| no | Specifies the maximum commit size, in bytes, for the GC heap and GC bookkeeping. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_HeapHardLimitPercent | real | percent |
|
| no | Specifies the allowable GC heap usage as a percentage of the total physical memory. To be set in runtimeconfig.json in runtimeOptions: configProperties. |
runtime_System_GC_HighMemoryPercent | integer | percent |
|
| no | Specifies the memory load threshold that triggers the execution of a garbage collection. To be set in runtimeconfig.json. |
runtime_System_GC_RetainVM | categorical | boolean |
|
| no | Configures whether segments that should be deleted are put on a standby list for future use or are released back to the operating system (OS). Default is false. To be set in runtimeconfig.json in runtimeOptions: configProperties |
runtime_System_GC_LOHThreshold | integer | bytes |
|
| no | Specifies the threshold size, in bytes, that causes objects to go on the large object heap (LOH). To be set in runtimeconfig.json in runtimeOptions: configProperties |
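The runtime_System_GC_* parameters above all land in the runtimeOptions: configProperties section of runtimeconfig.json. A hedged sketch of such a fragment follows; the values are illustrative, not recommendations.

```shell
# Sketch only: generate an illustrative runtimeconfig.json fragment.
cat > /tmp/runtimeconfig.json <<'EOF'
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.Concurrent": true,
      "System.GC.RetainVM": false,
      "System.GC.HeapHardLimitPercent": 75,
      "System.GC.LOHThreshold": 120000
    }
  }
}
EOF
cat /tmp/runtimeconfig.json
```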
webconf_maxconnection | integer | connections |
|
| no | This setting controls the maximum number of outgoing HTTP connections that you can initiate from a client. To be set in web.config (target app only) or machine.config (global) |
webconf_maxIoThreads | integer | threads |
|
| no | Controls the maximum number of I/O threads in the .NET thread pool. Automatically multiplied by the number of available CPUs. To be set in web.config (target app only) or machine.config (global). It requires autoConfig=false |
webconf_minIoThreads | integer | threads |
|
| no | The minIoThreads setting enables you to configure a minimum number of worker threads and I/O threads for load conditions. To be set in web.config (target app only) or machine.config (global). It requires autoConfig=false |
webconf_maxWorkerThreads | integer | threads |
|
| no | This setting controls the maximum number of worker threads in the thread pool. This number is then automatically multiplied by the number of available CPUs. To be set in web.config (target app only) or machine.config (global). It requires autoConfig=false |
webconf_minWorkerThreads | integer | threads |
|
| no | The minWorkerThreads setting enables you to configure a minimum number of worker threads and I/O threads for load conditions. To be set in web.config (target app only) or machine.config (global). It requires autoConfig=false |
webconf_minFreeThreads | integer | threads |
|
| no | Used by the worker process to queue all the incoming requests if the number of available threads in the thread pool falls below its value. To be set in web.config (target app only) or machine.config (global). It requires autoConfig=false |
webconf_minLocalRequestFreeThreads | integer | threads |
|
| no | Used to queue requests from localhost (where a Web application sends requests to a local Web service) if the number of available threads falls below it. To be set in web.config (target app only) or machine.config (global). It requires autoConfig=false |
webconf_autoConfig | categorical | boolean |
|
| no | Enables automatic setting of the system.web configuration parameters. To be set in web.config (target app only) or machine.config (global) |
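The webconf_* thread parameters above combine into the processModel and httpRuntime elements, and all of them require autoConfig="false". A sketch of the resulting web.config fragment; the values are illustrative assumptions.

```shell
# Sketch only: generate an illustrative system.web fragment for review.
cat > /tmp/web-config-fragment.xml <<'EOF'
<system.web>
  <processModel autoConfig="false"
                maxWorkerThreads="100" minWorkerThreads="50"
                maxIoThreads="100" minIoThreads="50" />
  <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
</system.web>
EOF
cat /tmp/web-config-fragment.xml
```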
mem_used | bytes | The total amount of memory used |
requests_throughput | requests/s | The number of requests performed per second |
requests_response_time | milliseconds | The average request response time |
jvm_heap_size | bytes | The size of the JVM heap memory |
jvm_heap_used | bytes | The amount of heap memory used |
jvm_heap_util | percent | The utilization % of heap memory |
jvm_memory_used | bytes | The total amount of memory used across all the JVM memory pools |
jvm_memory_used_details | bytes | The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space) |
jvm_memory_buffer_pool_used | bytes | The total amount bytes used by buffers within the JVM buffer memory pool |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_used | CPUs | The total amount of CPUs used |
jvm_gc_time | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities |
jvm_gc_time_details | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew) |
jvm_gc_count | collections/s | The total number of stop the world JVM garbage collections that have occurred per second |
jvm_gc_count_details | collections/s | The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_gc_duration | seconds | The average duration of a stop the world JVM garbage collection |
jvm_gc_duration_details | seconds | The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_threads_current | threads | The total number of active threads within the JVM |
jvm_threads_deadlocked | threads | The total number of deadlocked threads within the JVM |
jvm_compilation_time | milliseconds | The total time spent by the JVM JIT compiler compiling bytecode |
jvm_minHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The minimum heap size. |
jvm_maxHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum heap size. |
jvm_maxRAM | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum amount of memory used by the JVM. |
jvm_initialRAMPercentage | real | percent |
|
| yes | The initial percentage of memory used by the JVM. |
jvm_maxRAMPercentage | real | percent |
|
| yes | The percentage of memory used for maximum heap size. Requires Java 10, Java 8 Update 191 or later. |
jvm_alwaysPreTouch | categorical |
|
|
| yes | Pretouch pages during initialization. |
jvm_metaspaceSize | integer | megabytes |
| You should select your own domain. | yes | The initial size of the allocated class metadata space. |
jvm_maxMetaspaceSize | integer | megabytes |
| You should select your own domain. | yes | The maximum size of the allocated class metadata space. |
jvm_useTransparentHugePages | categorical |
|
|
| yes | Enables the use of large pages that can dynamically grow or shrink. |
jvm_allocatePrefetchInstr | integer |
|
|
| yes | Prefetch ahead of the allocation pointer. |
jvm_allocatePrefetchDistance | integer | bytes |
|
| yes | Distance to prefetch ahead of the allocation pointer. -1 uses a system-specific value (automatically determined). |
jvm_allocatePrefetchLines | integer | lines |
|
| yes | The number of lines to prefetch ahead of array allocation pointer. |
jvm_allocatePrefetchStyle | integer |
|
|
| yes | Selects the prefetch instruction to generate. |
jvm_useLargePages | categorical |
|
|
| yes | Enable the use of large page memory. |
jvm_newSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Sets the initial and maximum size of the heap for the young generation (nursery). |
jvm_maxNewSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Specifies the upper bound for the young generation size. |
jvm_survivorRatio | integer |
|
|
| yes | The ratio between the Eden and each Survivor-space within the JVM. For example, a jvm_survivorRatio of 6 would mean that the Eden-space is 6 times one Survivor-space. |
jvm_useAdaptiveSizePolicy | categorical |
|
|
| yes | Enables adaptive generation sizing. Disable it when tuning jvm_targetSurvivorRatio. |
jvm_adaptiveSizePolicyWeight | integer | percent |
|
| yes | The weighting given to the current Garbage Collection time versus previous GC times when checking the timing goal. |
jvm_targetSurvivorRatio | integer |
|
|
| yes | The desired percentage of Survivor-space used after young garbage collection. |
jvm_minHeapFreeRatio | integer | percent |
|
| yes | The minimum percentage of heap free after garbage collection to avoid shrinking. |
jvm_maxHeapFreeRatio | integer | percent |
|
| yes | The maximum percentage of heap free after garbage collection to avoid shrinking. |
jvm_maxTenuringThreshold | integer |
|
|
| yes | The maximum value for the tenuring threshold. |
jvm_gcType | categorical |
|
|
| yes | Type of the garbage collection algorithm. |
jvm_useParallelOldGC | categorical |
|
|
| yes | Enables Parallel Mark and Compact Garbage Collection in Old/Tenured generations. |
jvm_concurrentGCThreads | integer | threads | You should select your own default value. | You should select your own domain. | yes | The number of threads concurrent garbage collection will use. |
jvm_parallelGCThreads | integer | threads | You should select your own default value. | You should select your own domain. | yes | The number of threads garbage collection will use for parallel phases. |
jvm_maxGCPauseMillis | integer | milliseconds |
|
| yes | Adaptive size policy maximum GC pause time goal in millisecond. |
jvm_resizePLAB | categorical |
|
|
| yes | Enables the dynamic resizing of promotion LABs. |
jvm_GCTimeRatio | integer |
|
|
| yes | The target fraction of time that can be spent in garbage collection before increasing the heap, computed as 1 / (1 + GCTimeRatio). |
jvm_initiatingHeapOccupancyPercent | integer |
|
|
| yes | Sets the percentage of the heap occupancy at which to start a concurrent GC cycle. |
jvm_youngGenerationSizeIncrement | integer | percent |
|
| yes | The increment size for Young Generation adaptive resizing. |
jvm_tenuredGenerationSizeIncrement | integer | percent |
|
| yes | The increment size for Old/Tenured Generation adaptive resizing. |
jvm_adaptiveSizeDecrementScaleFactor | integer | percent |
|
| yes | Specifies the scale factor for goal-driven generation resizing. |
jvm_CMSTriggerRatio | integer |
|
|
| yes | The percentage of MinHeapFreeRatio allocated before CMS GC starts |
jvm_CMSInitiatingOccupancyFraction | integer |
|
|
| yes | Configure oldgen occupancy fraction threshold for CMS GC. Negative values default to CMSTriggerRatio. |
jvm_CMSClassUnloadingEnabled | categorical |
|
|
| yes | Enables class unloading when using CMS. |
jvm_useCMSInitiatingOccupancyOnly | categorical |
|
|
| yes | Uses the occupancy value as the only criterion for initiating the CMS collector. |
jvm_G1HeapRegionSize | integer | megabytes |
|
| yes | Sets the size of the regions for G1. |
jvm_G1ReservePercent | integer |
|
|
| yes | Sets the percentage of the heap that is reserved as a false ceiling to reduce the possibility of promotion failure for the G1 collector. |
jvm_G1NewSizePercent | integer |
|
|
| yes | Sets the percentage of the heap to use as the minimum for the young generation size. |
jvm_G1MaxNewSizePercent | integer |
|
|
| yes | Sets the percentage of the heap size to use as the maximum for young generation size. |
jvm_G1MixedGCLiveThresholdPercent | integer |
|
|
| yes | Sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle. |
jvm_G1HeapWastePercent | integer |
|
|
| yes | The maximum percentage of the reclaimable heap before starting mixed GC. |
jvm_G1MixedGCCountTarget | integer | collections |
|
| yes | Sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data. The default is 8 mixed garbage collections. |
jvm_G1OldCSetRegionThresholdPercent | integer |
|
|
| yes | The upper limit on the number of old regions to be collected during mixed GC. |
jvm_reservedCodeCacheSize | integer | megabytes |
|
| yes | The maximum size of the compiled code cache pool. |
jvm_tieredCompilation | categorical |
|
|
| yes | Enables tiered compilation. |
jvm_tieredCompilationStopAtLevel | integer |
|
|
| yes | Stops tiered compilation at the specified level. |
jvm_compilationThreads | integer | threads | You should select your own default value. | You should select your own domain. | yes | The number of compilation threads. |
jvm_backgroundCompilation | categorical |
|
|
| yes | Allow async interpreted execution of a method while it is being compiled. |
jvm_inline | categorical |
|
|
| yes | Enable inlining. |
jvm_maxInlineSize | integer | bytes |
|
| yes | The bytecode size limit (in bytes) of the inlined methods. |
jvm_inlineSmallCode | integer | bytes |
|
| yes | The maximum compiled code size limit (in bytes) of the inlined methods. |
jvm_aggressiveOpts | categorical |
|
|
| yes | Turn on point performance compiler optimizations. |
jvm_usePerfData | categorical |
|
|
| yes | Enable monitoring of performance data. |
jvm_useNUMA | categorical |
|
|
| yes | Enable NUMA. |
jvm_useBiasedLocking | categorical |
|
|
| yes | Manage the use of biased locking. |
jvm_activeProcessorCount | integer | CPUs |
|
| yes | Overrides the number of detected CPUs that the VM will use to calculate the size of thread pools. |
Parameter | Default value | Domain |
jvm_minHeapSize |
| Depends on the instance available memory |
jvm_maxHeapSize |
| Depends on the instance available memory |
jvm_newSize |
| Depends on the configured heap |
jvm_maxNewSize |
| Depends on the configured heap |
jvm_concurrentGCThreads | Depends on the available CPU cores | Depends on the available CPU cores |
jvm_parallelGCThreads | Depends on the available CPU cores | Depends on the available CPU cores |
jvm_compilationThreads | Depends on the available CPU cores | Depends on the available CPU cores |
jvm.jvm_minHeapSize <= jvm.jvm_maxHeapSize |
|
jvm.jvm_minHeapFreeRatio <= jvm.jvm_maxHeapFreeRatio |
|
jvm.jvm_maxNewSize < jvm.jvm_maxHeapSize |
|
jvm.jvm_concurrentGCThreads <= jvm.jvm_parallelGCThreads |
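The constraints above can be checked mechanically before launching the JVM. A minimal sketch with illustrative values; all sizes and thread counts are assumptions, not defaults.

```shell
# Illustrative values; the checks mirror the constraints listed above.
MIN_HEAP=512; MAX_HEAP=1024; MAX_NEW=256   # megabytes
CONC_GC_THREADS=2; PARALLEL_GC_THREADS=4   # threads
[ "$MIN_HEAP" -le "$MAX_HEAP" ] || { echo "jvm_minHeapSize must be <= jvm_maxHeapSize"; exit 1; }
[ "$MAX_NEW" -lt "$MAX_HEAP" ]  || { echo "jvm_maxNewSize must be < jvm_maxHeapSize"; exit 1; }
[ "$CONC_GC_THREADS" -le "$PARALLEL_GC_THREADS" ] || { echo "GC thread constraint violated"; exit 1; }
echo "-Xms${MIN_HEAP}m -Xmx${MAX_HEAP}m -XX:MaxNewSize=${MAX_NEW}m -XX:ConcGCThreads=${CONC_GC_THREADS} -XX:ParallelGCThreads=${PARALLEL_GC_THREADS}" \
  | tee /tmp/java-opts.txt
```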
mem_used | bytes | The total amount of memory used |
requests_throughput | requests/s | The number of requests performed per second |
requests_response_time | milliseconds | The average request response time |
jvm_heap_size | bytes | The size of the JVM heap memory |
jvm_heap_used | bytes | The amount of heap memory used |
jvm_heap_util | percent | The utilization % of heap memory |
jvm_memory_used | bytes | The total amount of memory used across all the JVM memory pools |
jvm_memory_used_details | bytes | The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space) |
jvm_memory_buffer_pool_used | bytes | The total amount bytes used by buffers within the JVM buffer memory pool |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_used | CPUs | The total amount of CPUs used |
jvm_gc_time | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities |
jvm_gc_time_details | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew) |
jvm_gc_count | collections/s | The total number of stop the world JVM garbage collections that have occurred per second |
jvm_gc_count_details | collections/s | The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_gc_duration | seconds | The average duration of a stop the world JVM garbage collection |
jvm_gc_duration_details | seconds | The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_threads_current | threads | The total number of active threads within the JVM |
jvm_threads_deadlocked | threads | The total number of deadlocked threads within the JVM |
jvm_compilation_time | milliseconds | The total time spent by the JVM JIT compiler compiling bytecode |
jvm_minHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The minimum heap size. |
jvm_maxHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum heap size. |
jvm_maxRAM | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum amount of memory used by the JVM. |
jvm_initialRAMPercentage | real | percent |
|
| yes | The initial percentage of memory used by the JVM. |
jvm_maxRAMPercentage | real | percent |
|
| yes | The percentage of memory used for maximum heap size. Requires Java 10, or Java 8 Update 191 or later. |
jvm_alwaysPreTouch | categorical |
|
|
| yes | Pretouch pages during initialization. |
jvm_metaspaceSize | integer | megabytes |
| You should select your own domain. | yes | The initial size of the allocated class metadata space. |
jvm_maxMetaspaceSize | integer | megabytes |
| You should select your own domain. | yes | The maximum size of the allocated class metadata space. |
jvm_useTransparentHugePages | categorical |
|
|
| yes | Enables the use of large pages that can dynamically grow or shrink. |
jvm_allocatePrefetchInstr | integer |
|
|
| yes | Prefetch ahead of the allocation pointer. |
jvm_allocatePrefetchDistance | integer | bytes |
|
| yes | Distance to prefetch ahead of the allocation pointer. -1 uses a system-specific value (automatically determined). |
jvm_allocatePrefetchLines | integer | lines |
|
| yes | The number of lines to prefetch ahead of array allocation pointer. |
jvm_allocatePrefetchStyle | integer |
|
|
| yes | Selects the prefetch instruction to generate. |
jvm_useLargePages | categorical |
|
|
| yes | Enable the use of large page memory. |
jvm_aggressiveHeap | categorical |
|
| yes | Optimize heap options for long-running, memory-intensive applications. |
jvm_newSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Sets the initial and maximum size of the heap for the young generation (nursery). |
jvm_maxNewSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Specifies the upper bound for the young generation size. |
jvm_survivorRatio | integer |
|
|
| yes | The ratio between the Eden space and each Survivor space within the JVM. For example, a jvm_survivorRatio of 6 means that the Eden space is 6 times the size of one Survivor space. |
jvm_useAdaptiveSizePolicy | categorical |
|
|
| yes | Enable adaptive generation sizing. Disable it when tuning jvm_targetSurvivorRatio. |
jvm_adaptiveSizePolicyWeight | integer | percent |
|
| yes | The weighting given to the current Garbage Collection time versus previous GC times when checking the timing goal. |
jvm_targetSurvivorRatio | integer |
|
|
| yes | The desired percentage of Survivor-space used after young garbage collection. |
jvm_minHeapFreeRatio | integer | percent |
|
| yes | The minimum percentage of heap free after garbage collection to avoid shrinking. |
jvm_maxHeapFreeRatio | integer | percent |
|
| yes | The maximum percentage of heap free after garbage collection to avoid shrinking. |
jvm_maxTenuringThreshold | integer |
|
|
| yes | The maximum value for the tenuring threshold. |
jvm_gcType | categorical |
|
|
| yes | Type of the garbage collection algorithm. |
jvm_useParallelOldGC | categorical |
|
|
| yes | Enables Parallel Mark and Compact Garbage Collection in Old/Tenured generations. |
jvm_concurrentGCThreads | integer | threads | You should select your own default value. | You should select your own domain. | yes | The number of threads concurrent garbage collection will use. |
jvm_parallelGCThreads | integer | threads | You should select your own default value. | You should select your own domain. | yes | The number of threads garbage collection will use for parallel phases. |
jvm_maxGCPauseMillis | integer | milliseconds |
|
| yes | Adaptive size policy maximum GC pause time goal in millisecond. |
jvm_resizePLAB | categorical |
|
|
| yes | Enables the dynamic resizing of promotion LABs. |
jvm_GCTimeRatio | integer |
|
|
| yes | The target fraction of time that can be spent in garbage collection before increasing the heap, computed as 1 / (1 + GCTimeRatio). |
jvm_initiatingHeapOccupancyPercent | integer |
|
|
| yes | Sets the percentage of the heap occupancy at which to start a concurrent GC cycle. |
jvm_youngGenerationSizeIncrement | integer | percent |
|
| yes | The increment size for Young Generation adaptive resizing. |
jvm_tenuredGenerationSizeIncrement | integer | percent |
|
| yes | The increment size for Old/Tenured Generation adaptive resizing. |
jvm_adaptiveSizeDecrementScaleFactor | integer | percent |
|
| yes | Specifies the scale factor for goal-driven generation resizing. |
jvm_CMSTriggerRatio | integer |
|
|
| yes | The percentage of MinHeapFreeRatio allocated before CMS GC starts. |
jvm_CMSInitiatingOccupancyFraction | integer |
|
|
| yes | Configure oldgen occupancy fraction threshold for CMS GC. Negative values default to CMSTriggerRatio. |
jvm_CMSClassUnloadingEnabled | categorical |
|
|
| yes | Enables class unloading when using CMS. |
jvm_useCMSInitiatingOccupancyOnly | categorical |
|
|
| yes | Use the occupancy value as the only criterion for initiating the CMS collector. |
jvm_G1HeapRegionSize | integer | megabytes |
|
| yes | Sets the size of the regions for G1. |
jvm_G1ReservePercent | integer |
|
|
| yes | Sets the percentage of the heap that is reserved as a false ceiling to reduce the possibility of promotion failure for the G1 collector. |
jvm_G1NewSizePercent | integer |
|
|
| yes | Sets the percentage of the heap to use as the minimum for the young generation size. |
jvm_G1MaxNewSizePercent | integer |
|
|
| yes | Sets the percentage of the heap size to use as the maximum for young generation size. |
jvm_G1MixedGCLiveThresholdPercent | integer |
|
|
| yes | Sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle. |
jvm_G1HeapWastePercent | integer |
|
|
| yes | The maximum percentage of the reclaimable heap before starting mixed GC. |
jvm_G1MixedGCCountTarget | integer | collections |
|
| yes | Sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data. |
jvm_G1OldCSetRegionThresholdPercent | integer |
|
|
| yes | The upper limit on the number of old regions to be collected during mixed GC. |
jvm_G1AdaptiveIHOPNumInitialSamples | integer |
|
| yes | The number of completed time periods from initial mark to first mixed GC required to use the input values for prediction of the optimal occupancy to start marking. |
jvm_G1UseAdaptiveIHOP | categorical |
|
| yes | Adaptively adjust the initiating heap occupancy from the initial value of InitiatingHeapOccupancyPercent. |
jvm_reservedCodeCacheSize | integer | megabytes |
|
| yes | The maximum size of the compiled code cache pool. |
jvm_tieredCompilation | categorical |
|
|
| yes | Enables tiered compilation. |
jvm_tieredCompilationStopAtLevel | integer |
|
|
| yes | The highest compilation tier used by tiered compilation. |
jvm_compilationThreads | integer | threads | You should select your own default value. | You should select your own domain. | yes | The number of compilation threads. |
jvm_backgroundCompilation | categorical |
|
|
| yes | Allow async interpreted execution of a method while it is being compiled. |
jvm_inline | categorical |
|
|
| yes | Enable inlining. |
jvm_maxInlineSize | integer | bytes |
|
| yes | The bytecode size limit (in bytes) of the inlined methods. |
jvm_inlineSmallCode | integer | bytes |
|
| yes | The maximum compiled code size limit (in bytes) of the inlined methods. |
jvm_aggressiveOpts | categorical |
|
|
| yes | Turn on point performance compiler optimizations. |
jvm_usePerfData | categorical |
|
|
| yes | Enable monitoring of performance data. |
jvm_useNUMA | categorical |
|
|
| yes | Enable NUMA. |
jvm_useBiasedLocking | categorical |
|
|
| yes | Manage the use of biased locking. |
jvm_activeProcessorCount | integer | CPUs |
|
| yes | Overrides the number of detected CPUs that the VM will use to calculate the size of thread pools. |
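The jvm_GCTimeRatio parameter listed above translates into a GC-time budget via the formula given in its description. As a quick illustrative sketch (the function name and sample values are hypothetical, not part of the pack):

```python
# Sketch: how a GCTimeRatio value maps to the fraction of wall-clock time
# the JVM may spend in garbage collection before growing the heap.

def gc_time_budget(gc_time_ratio: int) -> float:
    """Fraction of total time allowed for GC: 1 / (1 + GCTimeRatio)."""
    return 1.0 / (1.0 + gc_time_ratio)

# With the HotSpot default of 99, the GC budget is 1% of total time.
print(gc_time_budget(99))   # 0.01
# A lower ratio tolerates more GC time, e.g. 19 allows a 5% budget.
print(gc_time_budget(19))   # 0.05
```

Lower ratios therefore favor a smaller heap at the cost of more frequent collections, while higher ratios push the JVM to grow the heap sooner.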
Parameter | Default value | Domain |
jvm_minHeapSize |
| Depends on the instance available memory |
jvm_maxHeapSize |
| Depends on the instance available memory |
jvm_newSize |
| Depends on the configured heap |
jvm_maxNewSize |
| Depends on the configured heap |
jvm_concurrentGCThreads | Depends on the available CPU cores | Depends on the available CPU cores |
jvm_parallelGCThreads | Depends on the available CPU cores | Depends on the available CPU cores |
jvm_compilation_threads | Depends on the available CPU cores | Depends on the available CPU cores |
jvm.jvm_minHeapSize <= jvm.jvm_maxHeapSize |
|
jvm.jvm_minHeapFreeRatio <= jvm.jvm_maxHeapFreeRatio |
|
jvm.jvm_maxNewSize < jvm.jvm_maxHeapSize * 0.8 |
|
jvm.jvm_concurrentGCThreads <= jvm.jvm_parallelGCThreads |
jvm_activeProcessorCount < container.cpu_limits + 1 |
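The constraint formulas above can be evaluated before a candidate configuration is applied. The following sketch checks a hypothetical configuration (parameter names follow the pack; the sample values and the helper function are illustrative assumptions):

```python
# Illustrative check of a candidate JVM configuration against the
# study constraints listed above.

def satisfies_constraints(config: dict, cpu_limits: float) -> bool:
    c = config
    return (
        c["jvm_minHeapSize"] <= c["jvm_maxHeapSize"]
        and c["jvm_minHeapFreeRatio"] <= c["jvm_maxHeapFreeRatio"]
        and c["jvm_maxNewSize"] < c["jvm_maxHeapSize"] * 0.8
        and c["jvm_concurrentGCThreads"] <= c["jvm_parallelGCThreads"]
        and c["jvm_activeProcessorCount"] < cpu_limits + 1
    )

config = {
    "jvm_minHeapSize": 512, "jvm_maxHeapSize": 2048,       # megabytes
    "jvm_minHeapFreeRatio": 20, "jvm_maxHeapFreeRatio": 60,
    "jvm_maxNewSize": 1024,                                # < 2048 * 0.8
    "jvm_concurrentGCThreads": 2, "jvm_parallelGCThreads": 4,
    "jvm_activeProcessorCount": 2,
}
print(satisfies_constraints(config, cpu_limits=2))  # True
```

Violating any single relation, for example setting jvm_maxNewSize to 2000 MB, makes the check fail.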
jvm_heap_size | bytes | The size of the JVM heap memory |
jvm_heap_used | bytes | The amount of heap memory used |
jvm_heap_util | percent | The utilization % of heap memory |
jvm_memory_used | bytes | The total amount of memory used across all the JVM memory pools |
jvm_memory_used_details | bytes | The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space) |
jvm_memory_buffer_pool_used | bytes | The total amount of bytes used by buffers within the JVM buffer memory pool |
jvm_gc_time | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities |
jvm_gc_time_details | percent | The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew) |
jvm_gc_count | collections/s | The total number of stop the world JVM garbage collections that have occurred per second |
jvm_gc_count_details | collections/s | The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_gc_duration | seconds | The average duration of a stop the world JVM garbage collection |
jvm_gc_duration_details | seconds | The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS) |
jvm_threads_current | threads | The total number of active threads within the JVM |
jvm_threads_deadlocked | threads | The total number of deadlocked threads within the JVM |
jvm_compilation_time | milliseconds | The total time spent by the JVM JIT compiler compiling bytecode |
j9vm_minHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Minimum heap size (in megabytes) |
j9vm_maxHeapSize | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Maximum heap size (in megabytes) |
j9vm_minFreeHeap | real | percent |
|
| yes | Specify the minimum % free heap required after global GC |
j9vm_maxFreeHeap | real | percent |
|
| yes | Specify the maximum % free heap required after global GC |
j9vm_gcPolicy | categorical |
|
|
| yes | GC policy to use |
j9vm_gcThreads | integer | threads | You should select your own default value. |
| yes | Number of threads the garbage collector uses for parallel operations |
j9vm_scvTenureAge | integer |
|
|
| yes | Set the initial tenuring threshold for generational concurrent GC policy |
j9vm_scvAdaptiveTenureAge | categorical |
| blank | blank, -Xgc:scvNoAdaptiveTenure | yes | Enable the adaptive tenure age for generational concurrent GC policy |
j9vm_newSpaceFixed | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The fixed size of the new area when using the gencon GC policy. Must not be set alongside min or max |
j9vm_minNewSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The initial size of the new area when using the gencon GC policy |
j9vm_maxNewSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum size of the new area when using the gencon GC policy |
j9vm_oldSpaceFixed | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The fixed size of the old area when using the gencon GC policy. Must not be set alongside min or max |
j9vm_minOldSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The initial size of the old area when using the gencon GC policy |
j9vm_maxOldSpace | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | The maximum size of the old area when using the gencon GC policy |
j9vm_concurrentScavenge | categorical |
|
|
| yes | Support pause-less garbage collection mode with gencon |
j9vm_gcPartialCompact | categorical |
|
|
| yes | Enable partial compaction |
j9vm_concurrentMeter | categorical |
|
|
| yes | Determine which area is monitored by the concurrent mark |
j9vm_concurrentBackground | integer |
|
|
| yes | The number of background threads assisting the mutator threads in concurrent mark |
j9vm_concurrentSlack | integer | megabytes |
| You should select your own domain. | yes | The target size of free heap space for concurrent collectors |
j9vm_concurrentLevel | integer | percent |
|
| yes | The ratio between the amount of heap allocated and the amount of heap marked |
j9vm_gcCompact | categorical |
| blank | blank, -Xcompactgc, -Xnocompactgc | yes | Enables full compaction on all garbage collections (system and global) |
j9vm_minGcTime | real | percent |
|
| yes | The minimum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values |
j9vm_maxGcTime | real | percent |
|
| yes | The maximum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values |
j9vm_loa | categorical |
|
|
| yes | Enable the allocation of the large area object during garbage collection |
j9vm_loa_initial | real |
|
|
| yes | The initial portion of the tenure area allocated to the large area object |
j9vm_loa_minimum | real |
|
|
| yes | The minimum portion of the tenure area allocated to the large area object |
j9vm_loa_maximum | real |
|
|
| yes | The maximum portion of the tenure area allocated to the large area object |
j9vm_jitOptlevel | ordinal |
|
|
| yes | Force the JIT compiler to compile all methods at a specific optimization level |
j9vm_codeCacheTotal | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Maximum size limit in MB for the JIT code cache |
j9vm_jit_count | integer |
|
|
| yes | The number of times a method is called before it is compiled |
j9vm_compressedReferences | categorical |
| blank | blank, -Xcompressedrefs, -Xnocompressedrefs | yes | Enable/disable the use of compressed references |
j9vm_aggressiveOpts | categorical |
| blank | blank, -Xaggressive | yes | Enable the use of aggressive performance optimization features, which are expected to become default in upcoming releases |
j9vm_virtualized | categorical |
| blank | blank, -Xtune:virtualized | yes | Optimize the VM for virtualized environments, reducing CPU usage when idle |
j9vm_shareclasses | categorical |
| blank | blank, -Xshareclasses | yes | Enable class sharing |
j9vm_quickstart | categorical |
| blank | blank, -Xquickstart | yes | Run JIT with only a subset of optimizations, improving the performance of short-running applications |
j9vm_minimizeUserCpu | categorical |
| blank | blank, -Xthr:minimizeUserCPU | yes | Minimizes user-mode CPU usage in thread synchronization where possible |
j9vm_minNewSpace | 25% of j9vm_minHeapSize | must not exceed j9vm_minHeapSize |
j9vm_maxNewSpace | 25% of j9vm_maxHeapSize | must not exceed j9vm_maxHeapSize |
j9vm_minOldSpace | 75% of j9vm_minHeapSize | must not exceed j9vm_minHeapSize |
j9vm_maxOldSpace | same as j9vm_maxHeapSize | must not exceed j9vm_maxHeapSize |
j9vm_gcthreads | number of CPUs - 1, up to a maximum of 64 | capped at the default; no benefit in exceeding that value |
j9vm_compressedReferences | enabled for j9vm_maxHeapSize <= 57 GB |
|
jvm.j9vm_minHeapSize < jvm.j9vm_maxHeapSize |
|
jvm.j9vm_minNewSpace < jvm.j9vm_maxNewSpace && jvm.j9vm_minNewSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxNewSpace < jvm.j9vm_maxHeapSize |
|
jvm.j9vm_minOldSpace < jvm.j9vm_maxOldSpace && jvm.j9vm_minOldSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxOldSpace < jvm.j9vm_maxHeapSize |
|
jvm.j9vm_loa_minimum <= jvm.j9vm_loa_initial && jvm.j9vm_loa_initial <= jvm.j9vm_loa_maximum |
|
jvm.j9vm_minFreeHeap + 0.05 < jvm.j9vm_maxFreeHeap |
|
jvm.j9vm_minGcTime < jvm.j9vm_maxGcTime |
|
Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 6 |
Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 8 |
Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 11 |
This page describes the Optimization Pack for Eclipse OpenJ9 (formerly known as IBM J9) Virtual Machine version 8.
The following parameters require their ranges or default values to be updated according to the described rules:
Notice that the value nocompressedreferences
for j9vm_compressedReferences
can only be specified for JVMs compiled with the proper --with-noncompressedrefs
flag. If this is not the case you cannot actively disable compressed references, meaning:
for Xmx <= 57 GB it is useless to tune this parameter, since compressed references are active by default and cannot be explicitly disabled
for Xmx > 57 GB, compressed references are disabled by default (blank value), so Akamas can try to enable them. This requires removing the nocompressedreferences
value from the domain
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
Notice that
j9vm_newSpaceFixed
is mutually exclusive with j9vm_minNewSpace
and j9vm_maxNewSpace
j9vm_oldSpaceFixed
is mutually exclusive with j9vm_minOldSpace
and j9vm_maxOldSpace
the sum of j9vm_minNewSpace
and j9vm_minOldSpace
must be equal to j9vm_minHeapSize
, so it is pointless to tune all of them together. The corresponding relation for the max values is more complex.
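The relation noted above means that for the gencon policy, fixing any two of the minimum sizes determines the third. A minimal sketch with hypothetical values (in megabytes):

```python
# The minimum new and old areas partition the minimum heap under gencon,
# so only two of the three parameters are independent.

j9vm_minHeapSize = 1024
j9vm_minNewSpace = 256          # e.g. 25% of the minimum heap (the default)
j9vm_minOldSpace = j9vm_minHeapSize - j9vm_minNewSpace

print(j9vm_minOldSpace)                                          # 768
print(j9vm_minNewSpace + j9vm_minOldSpace == j9vm_minHeapSize)   # True
```

This is why a study should tune at most two of j9vm_minNewSpace, j9vm_minOldSpace, and j9vm_minHeapSize.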
This page describes the Optimization Pack for Eclipse OpenJ9 (formerly known as IBM J9) version 11.
The following parameters require their ranges or default values to be updated according to the described rules:
Notice that the value nocompressedreferences
for j9vm_compressedReferences
can only be specified for JVMs compiled with the proper --with-noncompressedrefs
flag. If this is not the case you cannot actively disable compressed references, meaning:
for Xmx <= 57 GB it is useless to tune this parameter, since compressed references are active by default and cannot be explicitly disabled
for Xmx > 57 GB, compressed references are disabled by default (blank value), so Akamas can try to enable them. This requires removing the nocompressedreferences
value from the domain
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
Notice that
j9vm_newSpaceFixed
is mutually exclusive with j9vm_minNewSpace
and j9vm_maxNewSpace
j9vm_oldSpaceFixed
is mutually exclusive with j9vm_minOldSpace
and j9vm_maxOldSpace
the sum of j9vm_minNewSpace
and j9vm_minOldSpace
must be equal to j9vm_minHeapSize
, so it is pointless to tune all of them together. The corresponding relation for the max values is more complex.
The Web Application optimization pack provides a component type apt for monitoring the performances from the end-user perspective of a generic web application, to evaluate the configuration of the technologies in the underlying stack.
The bundled component type provides Akamas with performance metrics representing concepts like throughput, response time, error rate, and user load, split into different levels of detail such as transaction, page, and single request.
Here’s the command to install the Web Application optimization pack using the Akamas CLI:
Component Type | Description |
---|---|
Name | Unit | Description |
---|---|---|
Name | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Default value | Domain |
---|---|---|
Formula | Notes |
---|---|
Metric | Unit | Description |
---|---|---|
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
Unit | Description |
---|---|
The Node.js runtime
jvm_heap_size
bytes
The size of the JVM heap memory
jvm_heap_used
bytes
The amount of heap memory used
jvm_heap_util
percent
The utilization % of heap memory
jvm_memory_used
bytes
The total amount of memory used across all the JVM memory pools
jvm_memory_used_details
bytes
The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space)
jvm_memory_buffer_pool_used
bytes
The total amount of bytes used by buffers within the JVM buffer memory pool
jvm_gc_time
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities
jvm_gc_time_details
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew)
jvm_gc_count
collections/s
The total number of stop the world JVM garbage collections that have occurred per second
jvm_gc_count_details
collections/s
The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS)
jvm_gc_duration
seconds
The average duration of a stop the world JVM garbage collection
jvm_gc_duration_details
seconds
The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS)
jvm_threads_current
threads
The total number of active threads within the JVM
jvm_threads_deadlocked
threads
The total number of deadlocked threads within the JVM
jvm_compilation_time
milliseconds
The total time spent by the JVM JIT compiler compiling bytecode
j9vm_minHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Minimum heap size (in megabytes)
j9vm_maxHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Maximum heap size (in megabytes)
j9vm_minFreeHeap
real
percent
0.3
0.1
→ 0.5
yes
Specify the minimum % free heap required after global GC
j9vm_maxFreeHeap
real
percent
0.6
0.4
→ 0.9
yes
Specify the maximum % free heap required after global GC
j9vm_gcPolicy
categorical
gencon
gencon
, subpool
, optavgpause
, optthruput
, nogc
yes
GC policy to use
j9vm_gcThreads
integer
threads
You should select your own default value.
1
→ 64
yes
Number of threads the garbage collector uses for parallel operations
j9vm_scvTenureAge
integer
10
1
→ 14
yes
Set the initial tenuring threshold for generational concurrent GC policy
j9vm_scvAdaptiveTenureAge
categorical
blank
blank, -Xgc:scvNoAdaptiveTenure
yes
Enable the adaptive tenure age for generational concurrent GC policy
j9vm_newSpaceFixed
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The fixed size of the new area when using the gencon GC policy. Must not be set alongside min or max
j9vm_minNewSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The initial size of the new area when using the gencon GC policy
j9vm_maxNewSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum size of the new area when using the gencon GC policy
j9vm_oldSpaceFixed
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The fixed size of the old area when using the gencon GC policy. Must not be set alongside min or max
j9vm_minOldSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The initial size of the old area when using the gencon GC policy
j9vm_maxOldSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum size of the old area when using the gencon GC policy
j9vm_concurrentScavenge
categorical
concurrentScavenge
concurrentScavenge
, noConcurrentScavenge
yes
Support pause-less garbage collection mode with gencon
j9vm_gcPartialCompact
categorical
nopartialcompactgc
nopartialcompactgc
, partialcompactgc
yes
Enable partial compaction
j9vm_concurrentMeter
categorical
soa
soa
, loa
, dynamic
yes
Determine which area is monitored by the concurrent mark
j9vm_concurrentBackground
integer
0
0
→ 128
yes
The number of background threads assisting the mutator threads in concurrent mark
j9vm_concurrentSlack
integer
megabytes
0
You should select your own domain.
yes
The target size of free heap space for concurrent collectors
j9vm_concurrentLevel
integer
percent
8
0
→ 100
yes
The ratio between the amount of heap allocated and the amount of heap marked
j9vm_gcCompact
categorical
blank
blank, -Xcompactgc
, -Xnocompactgc
yes
Enables full compaction on all garbage collections (system and global)
j9vm_minGcTime
real
percent
0.05
0.0
→ 1.0
yes
The minimum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values
j9vm_maxGcTime
real
percent
0.13
0.0
→ 1.0
yes
The maximum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values
j9vm_loa
categorical
loa
loa
, noloa
yes
Enable the allocation of the large area object during garbage collection
j9vm_loa_initial
real
0.05
0.0
→ 0.95
yes
The initial portion of the tenure area allocated to the large area object
j9vm_loa_minimum
real
0.01
0.0
→ 0.95
yes
The minimum portion of the tenure area allocated to the large area object
j9vm_loa_maximum
real
0.5
0.0
→ 0.95
yes
The maximum portion of the tenure area allocated to the large area object
j9vm_jitOptlevel
ordinal
noOpt
noOpt
, cold
, warm
, hot
, veryHot
, scorching
yes
Force the JIT compiler to compile all methods at a specific optimization level
j9vm_compilationThreads
integer
threads
You should select your own default value.
1
→ 7
yes
Number of JIT threads
j9vm_codeCacheTotal
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Maximum size limit in MB for the JIT code cache
j9vm_jit_count
integer
10000
0
→ 1000000
yes
The number of times a method is called before it is compiled
j9vm_lockReservation
categorical
blank
blank, -XlockReservation
no
Enables an optimization that presumes a monitor is owned by the thread that last acquired it
j9vm_compressedReferences
categorical
blank
blank, -Xcompressedrefs
, -Xnocompressedrefs
yes
Enable/disable the use of compressed references
j9vm_aggressiveOpts
categorical
blank
blank, -Xaggressive
yes
Enable the use of aggressive performance optimization features, which are expected to become default in upcoming releases
j9vm_virtualized
categorical
blank
blank, -Xtune:virtualized
yes
Optimize the VM for virtualized environment, reducing CPU usage when idle
j9vm_shareclasses
categorical
blank
blank, -Xshareclasses
yes
Enable class sharing
j9vm_quickstart
categorical
blank
blank, -Xquickstart
yes
Run JIT with only a subset of optimizations, improving the performance of short-running applications
j9vm_minimizeUserCpu
categorical
blank
blank, -Xthr:minimizeUserCPU
yes
Minimizes user-mode CPU usage in thread synchronization where possible
j9vm_minNewSpace
25% of j9vm_minHeapSize
must not exceed j9vm_minHeapSize
j9vm_maxNewSpace
25% of j9vm_maxHeapSize
must not exceed j9vm_maxHeapSize
j9vm_minOldSpace
75% of j9vm_minHeapSize
must not exceed j9vm_minHeapSize
j9vm_maxOldSpace
same as j9vm_maxHeapSize
must not exceed j9vm_maxHeapSize
j9vm_gcthreads
number of CPUs - 1, up to a maximum of 64
capped to default, no benefit in exceeding that value
j9vm_compressedReferences
enabled for j9vm_maxHeapSize <= 57 GB
jvm.j9vm_minHeapSize < jvm.j9vm_maxHeapSize
jvm.j9vm_minNewSpace < jvm.j9vm_maxNewSpace && jvm.j9vm_minNewSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxNewSpace < jvm.j9vm_maxHeapSize
jvm.j9vm_minOldSpace < jvm.j9vm_maxOldSpace && jvm.j9vm_minOldSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxOldSpace < jvm.j9vm_maxHeapSize
jvm.j9vm_loa_minimum <= jvm.j9vm_loa_initial && jvm.j9vm_loa_initial <= jvm.j9vm_loa_maximum
jvm.j9vm_minFreeHeap + 0.05 < jvm.j9vm_maxFreeHeap
jvm.j9vm_minGcTime < jvm.j9vm_maxGcTime
cpu_used
CPUs
The total number of CPUs used
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
memory_used
bytes
The total amount of memory used
memory_util
percent
The average memory utilization %
nodejs_gc_heap_used
bytes
GC heap used
nodejs_rss
bytes
Process Resident Set Size (RSS)
nodejs_v8_heap_total
bytes
V8 heap total
nodejs_v8_heap_used
bytes
V8 heap used
nodejs_number_active_threads
threads
Number of active threads
v8_allocation_size_pretenuring
categorical
true
true
, false
yes
Pretenure with allocation sites
v8_min_semi_space_size
integer
megabytes
0
0
→ 1048576
yes
Min size of a semi-space; the new space consists of two semi-spaces
v8_max_semi_space_size
integer
megabytes
0
0
→ 1048576
yes
Max size of a semi-space; the new space consists of two semi-spaces
v8_semi_space_growth_factor
integer
2
0
→ 100
yes
Factor by which to grow the new space
v8_max_old_space_size
integer
megabytes
0
0
→ 1048576
yes
Max size of the old space
v8_max_heap_size
integer
megabytes
0
0
→ 1048576
yes
Max size of the heap; both max_semi_space_size and max_old_space_size take precedence over it. All three flags cannot be specified at the same time.
v8_initial_heap_size
integer
megabytes
0
0
→ 1048576
yes
Initial size of the heap
v8_initial_old_space_size
integer
megabytes
0
0
→ 1048576
yes
Initial old space size
v8_parallel_scavenge
categorical
true
true
, false
yes
Parallel scavenge
v8_scavenge_task_trigger
integer
80
1
→ 100
yes
Scavenge task trigger in percent of the current heap limit
v8_scavenge_separate_stack_scanning
categorical
false
true
, false
yes
Use a separate phase for stack scanning in scavenge
v8_concurrent_marking
categorical
true
true
, false
yes
Use concurrent marking
v8_parallel_marking
categorical
true
true
, false
yes
Use parallel marking in atomic pause
v8_concurrent_sweeping
categorical
true
true
, false
yes
Use concurrent sweeping
v8_heap_growing_percent
integer
0
0
→ 99
yes
Specifies heap growing factor as (1 + heap_growing_percent/100)
v8_os_page_size
integer
kilobytes
0
0
→ 1048576
yes
Override OS page size
v8_stack_size
integer
kilobytes
984
16
→ 1048576
yes
Default size of stack region v8 is allowed to use
v8_single_threaded
categorical
false
true
, false
yes
Disable the use of background tasks
v8_single_threaded_gc
categorical
false
true
, false
yes
Disable the use of background gc tasks
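The V8 parameters above correspond to Node.js/V8 command-line flags. As an illustrative sketch (the flag names shown are standard V8 options, but the parameter-to-flag mapping and the `app.js` entry point are assumptions, not part of the optimization pack), a configuration can be rendered into a `node` invocation like this:

```python
# Map optimization-pack parameter values to V8/Node.js command-line flags.
# Flag names are standard V8 options; the mapping itself is an assumption.
config = {
    "v8_max_old_space_size": 2048,   # megabytes
    "v8_max_semi_space_size": 64,    # megabytes
    "v8_stack_size": 984,            # kilobytes
    "v8_concurrent_marking": "true",
}

flag_names = {
    "v8_max_old_space_size": "--max-old-space-size",
    "v8_max_semi_space_size": "--max-semi-space-size",
    "v8_stack_size": "--stack-size",
    "v8_concurrent_marking": "--concurrent-marking",
}

def render_flags(cfg):
    """Render numeric flags as --flag=value and categorical true/false
    values as --flag / --no-flag, following V8 flag conventions."""
    parts = []
    for name, value in cfg.items():
        flag = flag_names[name]
        if value == "true":
            parts.append(flag)
        elif value == "false":
            parts.append(flag.replace("--", "--no-", 1))
        else:
            parts.append(f"{flag}={value}")
    return parts

cmdline = ["node"] + render_flags(config) + ["app.js"]
print(" ".join(cmdline))
# prints: node --max-old-space-size=2048 --max-semi-space-size=64 --stack-size=984 --concurrent-marking app.js
```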
jvm_heap_size
bytes
The size of the JVM heap memory
jvm_heap_used
bytes
The amount of heap memory used
jvm_heap_util
percent
The utilization % of heap memory
jvm_memory_used
bytes
The total amount of memory used across all the JVM memory pools
jvm_memory_used_details
bytes
The total amount of memory used broken down by pool (e.g., code-cache, compressed-class-space)
jvm_memory_buffer_pool_used
bytes
The total amount of bytes used by buffers within the JVM buffer memory pool
jvm_gc_time
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities
jvm_gc_time_details
percent
The % of wall clock time the JVM spent doing stop the world garbage collection activities broken down by type of garbage collection algorithm (e.g., ParNew)
jvm_gc_count
collections/s
The total number of stop the world JVM garbage collections that have occurred per second
jvm_gc_count_details
collections/s
The total number of stop the world JVM garbage collections that have occurred per second, broken down by type of garbage collection algorithm (e.g., G1, CMS)
jvm_gc_duration
seconds
The average duration of a stop the world JVM garbage collection
jvm_gc_duration_details
seconds
The average duration of a stop the world JVM garbage collection broken down by type of garbage collection algorithm (e.g., G1, CMS)
jvm_threads_current
threads
The total number of active threads within the JVM
jvm_threads_deadlocked
threads
The total number of deadlocked threads within the JVM
jvm_compilation_time
milliseconds
The total time spent by the JVM JIT compiler compiling bytecode
j9vm_minHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Minimum heap size (in megabytes)
j9vm_maxHeapSize
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Maximum heap size (in megabytes)
j9vm_minFreeHeap
real
percent
0.3
0.1
→ 0.5
yes
Specify the minimum % free heap required after global GC
j9vm_maxFreeHeap
real
percent
0.6
0.4
→ 0.9
yes
Specify the maximum % free heap required after global GC
j9vm_gcPolicy
categorical
gencon
gencon
, subpool
, optavgpause
, optthruput
, nogc
yes
GC policy to use
j9vm_gcThreads
integer
threads
You should select your own default value.
1
→ 64
yes
Number of threads the garbage collector uses for parallel operations
j9vm_scvTenureAge
integer
10
1
→ 14
yes
Set the initial tenuring threshold for generational concurrent GC policy
j9vm_scvAdaptiveTenureAge
categorical
blank
blank, -Xgc:scvNoAdaptiveTenure
yes
Enable the adaptive tenure age for generational concurrent GC policy
j9vm_newSpaceFixed
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The fixed size of the new area when using the gencon GC policy. Must not be set alongside min or max
j9vm_minNewSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The initial size of the new area when using the gencon GC policy
j9vm_maxNewSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum size of the new area when using the gencon GC policy
j9vm_oldSpaceFixed
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The fixed size of the old area when using the gencon GC policy. Must not be set alongside min or max
j9vm_minOldSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The initial size of the old area when using the gencon GC policy
j9vm_maxOldSpace
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
The maximum size of the old area when using the gencon GC policy
j9vm_concurrentScavenge
categorical
concurrentScavenge
concurrentScavenge
, noConcurrentScavenge
yes
Support pause-less garbage collection mode with gencon
j9vm_gcPartialCompact
categorical
nopartialcompactgc
nopartialcompactgc
, partialcompactgc
yes
Enable partial compaction
j9vm_concurrentMeter
categorical
soa
soa
, loa
, dynamic
yes
Determine which area is monitored by the concurrent mark
j9vm_concurrentBackground
integer
0
0
→ 128
yes
The number of background threads assisting the mutator threads in concurrent mark
j9vm_concurrentSlack
integer
megabytes
0
You should select your own domain.
yes
The target size of free heap space for concurrent collectors
j9vm_concurrentLevel
integer
percent
8
0
→ 100
yes
The ratio between the amount of heap allocated and the amount of heap marked
j9vm_gcCompact
categorical
blank
blank, -Xcompactgc
, -Xnocompactgc
yes
Enables full compaction on all garbage collections (system and global)
j9vm_minGcTime
real
percent
0.05
0.0
→ 1.0
yes
The minimum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values
j9vm_maxGcTime
real
percent
0.13
0.0
→ 1.0
yes
The maximum percentage of time to be spent in garbage collection, triggering the resize of the heap to meet the specified values
j9vm_loa
categorical
loa
loa
, noloa
yes
Enable the allocation of the large object area (LOA) during garbage collection
j9vm_loa_initial
real
0.05
0.0
→ 0.95
yes
The initial portion of the tenure area allocated to the large object area
j9vm_loa_minimum
real
0.01
0.0
→ 0.95
yes
The minimum portion of the tenure area allocated to the large object area
j9vm_loa_maximum
real
0.5
0.0
→ 0.95
yes
The maximum portion of the tenure area allocated to the large object area
j9vm_jitOptlevel
ordinal
noOpt
noOpt
, cold
, warm
, hot
, veryHot
, scorching
yes
Force the JIT compiler to compile all methods at a specific optimization level
j9vm_compilationThreads
integer
threads
You should select your own default value.
1
→ 7
yes
Number of JIT threads
j9vm_codeCacheTotal
integer
megabytes
You should select your own default value.
You should select your own domain.
yes
Maximum size limit in MB for the JIT code cache
j9vm_jit_count
integer
10000
0
→ 1000000
yes
The number of times a method is called before it is compiled
j9vm_lockReservation
categorical
blank
blank, -XlockReservation
no
Enables an optimization that presumes a monitor is owned by the thread that last acquired it
j9vm_compressedReferences
categorical
blank
blank, -Xcompressedrefs
, -Xnocompressedrefs
yes
Enable/disable the use of compressed references
j9vm_aggressiveOpts
categorical
blank
blank, -Xaggressive
yes
Enable the use of aggressive performance optimization features, which are expected to become default in upcoming releases
j9vm_virtualized
categorical
blank
blank, -Xtune:virtualized
yes
Optimize the VM for virtualized environment, reducing CPU usage when idle
j9vm_shareclasses
categorical
blank
blank, -Xshareclasses
yes
Enable class sharing
j9vm_quickstart
categorical
blank
blank, -Xquickstart
yes
Run JIT with only a subset of optimizations, improving the performance of short-running applications
j9vm_minimizeUserCpu
categorical
blank
blank, -Xthr:minimizeUserCPU
yes
Minimizes user-mode CPU usage in thread synchronization where possible
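The heap and GC parameters above map onto OpenJ9 command-line options. A minimal sketch (the `-Xms`/`-Xmx`, `-Xmns`/`-Xmnx`, `-Xminf`/`-Xmaxf`, `-Xgcpolicy:` and `-Xgcthreads` options are standard OpenJ9 flags; the parameter-to-flag mapping and `app.jar` are assumptions for illustration):

```python
# Assemble an OpenJ9 java command line from a subset of the parameters above.
# Option names are standard OpenJ9 flags; the mapping is illustrative only.
params = {
    "j9vm_minHeapSize": 512,     # megabytes -> -Xms
    "j9vm_maxHeapSize": 2048,    # megabytes -> -Xmx
    "j9vm_minNewSpace": 128,     # megabytes -> -Xmns
    "j9vm_maxNewSpace": 512,     # megabytes -> -Xmnx
    "j9vm_minFreeHeap": 0.3,     # fraction  -> -Xminf
    "j9vm_maxFreeHeap": 0.6,     # fraction  -> -Xmaxf
    "j9vm_gcPolicy": "gencon",   # -> -Xgcpolicy:
    "j9vm_gcThreads": 4,         # -> -Xgcthreads
}

def j9_options(p):
    return [
        f"-Xms{p['j9vm_minHeapSize']}m",
        f"-Xmx{p['j9vm_maxHeapSize']}m",
        f"-Xmns{p['j9vm_minNewSpace']}m",
        f"-Xmnx{p['j9vm_maxNewSpace']}m",
        f"-Xminf{p['j9vm_minFreeHeap']}",
        f"-Xmaxf{p['j9vm_maxFreeHeap']}",
        f"-Xgcpolicy:{p['j9vm_gcPolicy']}",
        f"-Xgcthreads{p['j9vm_gcThreads']}",
    ]

print("java " + " ".join(j9_options(params)) + " -jar app.jar")
```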
j9vm_minNewSpace
25% of j9vm_minHeapSize
must not exceed j9vm_minHeapSize
j9vm_maxNewSpace
25% of j9vm_maxHeapSize
must not exceed j9vm_maxHeapSize
j9vm_minOldSpace
75% of j9vm_minHeapSize
must not exceed j9vm_minHeapSize
j9vm_maxOldSpace
same as j9vm_maxHeapSize
must not exceed j9vm_maxHeapSize
j9vm_gcthreads
number of CPUs - 1, up to a maximum of 64
capped to default, no benefit in exceeding that value
j9vm_compressedReferences
enabled for j9vm_maxHeapSize <= 57 GB
jvm.j9vm_minHeapSize < jvm.j9vm_maxHeapSize
jvm.j9vm_minNewSpace < jvm.j9vm_maxNewSpace && jvm.j9vm_minNewSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxNewSpace < jvm.j9vm_maxHeapSize
jvm.j9vm_minOldSpace < jvm.j9vm_maxOldSpace && jvm.j9vm_minOldSpace < jvm.j9vm_minHeapSize && jvm.j9vm_maxOldSpace < jvm.j9vm_maxHeapSize
jvm.j9vm_loa_minimum <= jvm.j9vm_loa_initial && jvm.j9vm_loa_initial <= jvm.j9vm_loa_maximum
jvm.j9vm_minFreeHeap + 0.05 < jvm.j9vm_maxFreeHeap
jvm.j9vm_minGcTime < jvm.j9vm_maxGcTime
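The constraint formulas above can be checked locally before launching an experiment. A minimal sketch in plain Python (no Akamas API involved; the candidate values are examples only):

```python
# Evaluate a candidate JVM configuration against the study constraints listed above.
def satisfies_constraints(jvm):
    return all([
        jvm["j9vm_minHeapSize"] < jvm["j9vm_maxHeapSize"],
        jvm["j9vm_minNewSpace"] < jvm["j9vm_maxNewSpace"],
        jvm["j9vm_minNewSpace"] < jvm["j9vm_minHeapSize"],
        jvm["j9vm_maxNewSpace"] < jvm["j9vm_maxHeapSize"],
        jvm["j9vm_minOldSpace"] < jvm["j9vm_maxOldSpace"],
        jvm["j9vm_minOldSpace"] < jvm["j9vm_minHeapSize"],
        jvm["j9vm_maxOldSpace"] < jvm["j9vm_maxHeapSize"],
        jvm["j9vm_loa_minimum"] <= jvm["j9vm_loa_initial"] <= jvm["j9vm_loa_maximum"],
        jvm["j9vm_minFreeHeap"] + 0.05 < jvm["j9vm_maxFreeHeap"],
        jvm["j9vm_minGcTime"] < jvm["j9vm_maxGcTime"],
    ])

candidate = {
    "j9vm_minHeapSize": 512, "j9vm_maxHeapSize": 2048,
    "j9vm_minNewSpace": 128, "j9vm_maxNewSpace": 512,
    "j9vm_minOldSpace": 384, "j9vm_maxOldSpace": 1536,
    "j9vm_loa_minimum": 0.01, "j9vm_loa_initial": 0.05, "j9vm_loa_maximum": 0.5,
    "j9vm_minFreeHeap": 0.3, "j9vm_maxFreeHeap": 0.6,
    "j9vm_minGcTime": 0.05, "j9vm_maxGcTime": 0.13,
}
print(satisfies_constraints(candidate))  # True
```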
transactions_throughput | transactions/s | The number of transactions executed per second |
transactions_response_time | milliseconds | The average transaction response time |
transactions_response_time_max | milliseconds | The maximum recorded transaction response time |
transactions_response_time_min | milliseconds | The minimum recorded transaction response time |
pages_throughput | pages/s | The number of pages requested per second |
pages_response_time | milliseconds | The average page response time |
pages_response_time_max | milliseconds | The maximum recorded page response time |
pages_response_time_min | milliseconds | The minimum recorded page response time |
requests_throughput | requests/s | The number of requests performed per second |
requests_response_time | milliseconds | The average request response time |
requests_response_time_max | milliseconds | The maximum recorded request response time |
requests_response_time_min | milliseconds | The minimum recorded request response time |
transactions_error_rate | percent | The percentage of transactions flagged as error |
transactions_error_throughput | transactions/s | The number of transactions flagged as error per second |
pages_error_rate | percent | The percentage of pages flagged as error |
pages_error_throughput | pages/s | The number of pages flagged as error per second |
requests_error_rate | percent | The percentage of requests flagged as error |
requests_error_throughput | requests/s | The number of requests flagged as error per second |
users | users | The number of users performing requests on the web application |
cpu_used | CPUs | The total amount of CPUs used |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
go_heap_size | bytes | The largest size reached by the Go heap memory |
go_heap_used | bytes | The amount of heap memory used |
go_heap_util | percent | The utilization % of heap memory |
go_memory_used | bytes | The total amount of memory used by Go |
go_gc_time | percent | The % of wall clock time the Go runtime spent doing stop the world garbage collection activities |
go_gc_duration | seconds | The average duration of a stop the world Go garbage collection |
go_gc_count | collections/s | The total number of stop the world Go garbage collections that have occurred per second |
go_threads_current | threads | The total number of active Go threads |
go_goroutines_current | goroutines | The total number of active Goroutines |
go_gcTargetPercentage | integer |
|
| yes | Sets the GOGC variable which controls the aggressiveness of the garbage collector |
go_maxProcs | integer |
|
| yes | Limits the number of operating system threads that can execute user-level code simultaneously |
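Both parameters correspond to standard Go runtime environment variables, so a tuned configuration can be applied without rebuilding the application. A hedged example (`GOGC` and `GOMAXPROCS` are the real Go variables; `./myapp` is a hypothetical binary):

```shell
# Tune the Go runtime via environment variables.
export GOGC=200       # go_gcTargetPercentage: GC triggers when the heap grows 200% over the live set
export GOMAXPROCS=4   # go_maxProcs: at most 4 OS threads execute user-level Go code simultaneously
# ./myapp             # hypothetical application binary, launched with these settings
```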
The Golang runtime 1 |
Web Application |
The Kubernetes optimization pack allows optimizing containerized applications running on a Kubernetes cluster. Through this optimization pack, Akamas is able to tackle the problem of distributing resources to containerized applications in order to minimize waste and ensure the quality of service.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Memory allocation
CPU allocation
Number of replicas
Similarly, the bundled metrics provide visibility on the following aspects of tuned applications:
Memory utilization
CPU utilization
The component types provided in this optimization pack allow modeling the entities found in a Kubernetes-based application, optimizing their parameters, and monitoring the key performance metrics.
Here’s the command to install the Kubernetes optimization pack using the Akamas CLI:
Metric | Unit | Description |
---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Component Type | Description |
---|---|
Component Type | Description |
---|---|
Name | Unit | Description |
---|
Name | Unit | Type | Default Value | Domain | Restart | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Description |
---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|
Formula | Notes |
---|
container_cpu_util
percent
The percentage of CPUs used with respect to the limit
container_cpu_used
CPUs
CPUs used by the container per second
container_cpu_throttle_time
seconds
The amount of time the container's CPU usage has been throttled
container_cpu_limit
CPUs
The number of CPUs (or fraction of CPUs) allowed for a container
container_mem_util_nocache
percent
Percentage of working set memory used with respect to the limit
container_mem_util
percent
Percentage of memory used with respect to the limit. Memory used includes all types of memory, including file system cache
container_mem_used
bytes
The total amount of memory used by the container. Memory used includes all types of memory, including file system cache
container_mem_limit
bytes
Memory limit for the container
container_mem_working_set
bytes
Current working set in bytes
container_mem_limit_hits
hits/s
Number of times memory usage hits memory limit per second
limits_cpu
integer
CPUs
0.7
0.1
→ 100.0
Limits on the amount of CPU resources usage in CPU units
requests_cpu
integer
CPUs
0.7
0.1
→ 100.0
Amount of CPU resources requests in CPU units
limits_memory
integer
megabytes
128
64
→ 64000
Limits on the amount of memory resources usage
requests_memory
integer
megabytes
128
64
→ 64000
Amount of memory resources requests
Docker container
Kubernetes Container
Kubernetes Pod
Kubernetes Workload
Kubernetes Namespace
Kubernetes Cluster
k8s_workload_desired_pods | pods | Number of desired pods per workload |
k8s_workload_pods | pods | Pods per workload and phase |
k8s_workload_running_pods | pods | The number of running pods per workload |
k8s_workload_cpu_used | millicores | The total amount of CPUs used by the entire workload |
k8s_workload_memory_used | bytes | The total amount of memory used by the entire workload |
k8s_workload_replicas | integer | pods |
|
| yes | Number of desired pods in the deployment |
k8s_pod_cpu_used | millicores | The CPUs used by the pod |
k8s_pod_cpu_throttle_time | percent | The percentage of time the CPU has been throttled |
k8s_pod_cpu_request | millicores | The CPUs requested for the pod |
k8s_pod_cpu_limit | millicores | The CPUs allowed for the pod |
k8s_pod_cpu_util | percent | Percentage of CPUs used with respect to the limit |
k8s_pod_memory_util | percent | The percentage of memory used with respect to the limit. Memory used includes all types of memory, including file system cache |
k8s_pod_memory_util_nocache | percent | The percentage of working set memory used with respect to the limit |
k8s_pod_memory_used | bytes | The total amount of memory used by the pod. Memory used includes all types of memory, including file system cache |
k8s_pod_memory_request | bytes | The memory requested for the pod |
k8s_pod_memory_limit | bytes | The memory limit for the pod |
k8s_pod_memory_working_set | bytes | Current working set in bytes |
k8s_pod_memory_limit_hits | hits/s | The number of times per second the used memory hit the limit |
k8s_pod_container_restarts | events | The number of pod restarts for a container (Labels: container) |
k8s_pod_desired_containers | containers | The number of desired containers |
container_cpu_used | millicores | The CPUs used by the container |
container_cpu_throttle_time | percent | The amount of time the CPU has been throttled |
container_cpu_request | millicores | The CPUs requested for the container |
container_cpu_throttled_millicores | millicores | The CPUs throttling per container in millicores |
container_cpu_limit | millicores | The CPUs allowed for the container |
container_cpu_util | percent | The percentage of CPUs used with respect to the limit |
container_memory_util | percent | The percentage of memory used with respect to the limit. Memory used includes all types of memory, including file system cache |
container_memory_util_nocache | percent | The percentage of working set memory used with respect to the limit |
container_memory_used | bytes | The total amount of memory used by the container. Memory used includes all types of memory, including file system cache |
container_memory_request | bytes | The memory requested for the container |
container_memory_limit | bytes | The memory limit for the container |
container_memory_working_set | bytes | The current working set in bytes |
container_memory_limit_hits | hits/s | The number of times per second the used memory hit the limit |
container_memory_limit_util | percent | Percent memory limit per container relative to total physical memory of the host. |
container_host_memory_total | bytes | Total physical memory on the host. |
cpu_request | integer | millicores | You should select your own default value. | You should select your own domain. | yes | Amount of CPU resources requests in CPU units (millicores) |
cpu_limit | integer | millicores | You should select your own default value. | You should select your own domain. | yes | Limits on the amount of CPU resources usage in CPU units (millicores) |
memory_request | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Amount of memory resources requests in megabytes |
memory_limit | integer | megabytes | You should select your own default value. | You should select your own domain. | yes | Limits on the amount of memory resources usage in megabytes |
component_name.cpu_request <= component_name.cpu_limit |
component_name.memory_request <= component_name.memory_limit |
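The four parameters above land directly in the `resources` section of a container spec. An illustrative Kubernetes manifest fragment (the values are examples only):

```yaml
# Where cpu_request, cpu_limit, memory_request and memory_limit are applied.
resources:
  requests:
    cpu: "250m"      # cpu_request, millicores
    memory: "256Mi"  # memory_request, megabytes
  limits:
    cpu: "500m"      # cpu_limit, millicores
    memory: "512Mi"  # memory_limit, megabytes
```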
This page describes the Optimization Pack for AWS EC2.
Notice: for the following parameters to take effect, the instance needs to be stopped and changes need to be applied before restarting the instance.
The following table shows a sample of constraints that are required in the definition of the study, depending on the tuned parameters.
Notice that AWS does not support all combinations of instance types and sizes, so it is better to specify them beforehand in your constraints to avoid unnecessary experiment failures.
Instance size is an ordinal parameter; this means you constrain it by using a 0-based index, as in this example:
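A hedged sketch of such a constraint, assuming a component named `ec2` with the instance-size domain [4xlarge, 8xlarge, 9xlarge] described below (indices 0, 1, 2; the exact study syntax may differ in your Akamas version):

```
(ec2.aws_ec2_instance_type == 'c5' && ec2.aws_ec2_instance_size != 1) || (ec2.aws_ec2_instance_type == 'm5' && ec2.aws_ec2_instance_size != 2)
```

Here index 1 (8xlarge) is excluded for the c5 family and index 2 (9xlarge) for the m5 family.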
Name | Unit | Description |
---|---|---|
Name | Unit | Description |
---|---|---|
Component Type | Description |
---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
Component Type | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Type | Default Value | Domain | Restart | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Description |
---|
Name | Unit | Type | Default Value | Domain | Restart | Description |
---|
k8s_cluster_cpu
millicores
The CPUs in the cluster
k8s_cluster_cpu_available
millicores
The CPUs available for additional pods in the cluster
k8s_cluster_cpu_util
percent
The percentage of used CPUs in the cluster
k8s_cluster_cpu_request
millicores
The total CPUs requested in the cluster
k8s_cluster_memory
bytes
The overall memory in the cluster
k8s_cluster_memory_available
bytes
The amount of memory available for additional pods in the cluster
k8s_cluster_memory_util
percent
The percentage of used memory in the cluster
k8s_cluster_memory_request
bytes
The total memory requested in the cluster
k8s_cluster_nodes
nodes
The number of nodes in the cluster
k8s_namespace_cpu_limit
millicores
The CPU limit for the namespace
k8s_namespace_cpu_request
millicores
The CPUs requested for the namespace
k8s_namespace_memory_limit
bytes
The memory limit for the namespace
k8s_namespace_memory_request
bytes
Memory requested for the namespace
k8s_namespace_running_pods
pods
The number of running pods in the namespace
IBM WebSphere Application Server 8.5
IBM WebSphere Liberty ND
cm_maxPoolSize
integer
50
0
→ 1000
yes
Maximum number of physical connections for a pool
cm_minPoolSize
integer
0
0
→ 1000
yes
Minimum number of physical connections for a pool
cm_maxConnectionsPerThread
integer
1
0
→ 30
yes
Maximum number of connections per thread
cm_numConnectionsPerThreadLocal
integer
1
0
→ 30
yes
Maximum number of connections per local thread
cm_purgePolicy
categorical
EntirePool
EntirePool
, FailingConnectionOnly
, ValidateAllConnections
yes
Purge Policy
cm_connectionTimeout
categorical
30s
-1
, 0
, 5s
, 10s
, 30s
, 60s
, 90s
, 120s
yes
Connection Timeout
cm_maxIdleTime
categorical
30m
-1
, 1m
, 5m
, 10m
, 15m
, 30m
yes
Max Idle Time
cm_reapTime
categorical
3m
-1
, 30s
, 1m
, 3m
, 5m
yes
Reap Time
exe_coreThreads
integer
-1
-1
, 4
, 6
, 8
, 10
, 12
, 14
, 16
, 18
, 20
yes
Number of core threads
exe_maxThreads
integer
-1
-1
→ 200
yes
Max number of threads
db_minPoolSize
integer
0
0
→ 1000
yes
Minimum pool size
db_maxPoolSize
integer
50
0
→ 1000
yes
Maximum pool size
db_connectionWaitTime
integer
180
0
→ 3600
yes
Connection wait time
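The `cm_*` parameters above correspond to attributes of the `connectionManager` element in a Liberty `server.xml`. An illustrative fragment (the attribute names are standard Liberty configuration; the `jndiName` is a made-up example):

```xml
<!-- Liberty server.xml sketch: connection pool tuning for a data source. -->
<dataSource jndiName="jdbc/exampleDS">
    <connectionManager maxPoolSize="50" minPoolSize="0"
                       maxConnectionsPerThread="1" numConnectionsPerThreadLocal="1"
                       purgePolicy="EntirePool" connectionTimeout="30s"
                       maxIdleTime="30m" reapTime="3m"/>
</dataSource>
```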
was_tcp_maxOpenConnections_tcp2
integer
connections
20000
1
→ 128000
yes
Maximum number of connections that are available for a server to use (TCP_2)
was_tcp_listenBacklog_tcp2
integer
connections
511
1
→ 1024
yes
Maximum number of outstanding connection requests that the operating system can buffer while it waits for the application server to accept the connections (TCP_2)
was_tcp_maxOpenConnections_tcp4
integer
connections
20000
1
→ 128000
yes
Maximum number of connections that are available for a server to use (TCP_4)
was_tcp_listenBacklog_tcp4
integer
connections
511
1
→ 1024
yes
Maximum number of outstanding connection requests that the operating system can buffer while it waits for the application server to accept the connections (TCP_4)
was_http_maximumPersistentRequests_http2
integer
requests
10000
1
→ 20000
yes
Maximum number of persistent requests that are allowed on a single HTTP connection (HTTP_2)
was_http_maximumPersistentRequests_http4
integer
requests
10000
1
→ 20000
yes
Maximum number of persistent requests that are allowed on a single HTTP connection (HTTP_4)
was_threadpools_minimumSize_webcontainer
integer
threads
50
1
→ 100
yes
Minimum number of threads to allow in the pool (Web Container)
was_threadpools_maximumsize_webcontainer
integer
threads
50
1
→ 500
yes
Maximum number of threads to maintain in the thread pool (Web Container)
was_threadpools_minimumSize_default
integer
threads
20
1
→ 100
yes
Minimum number of threads to allow in the pool (default)
was_threadpools_maximumsize_default
integer
threads
20
1
→ 500
yes
Maximum number of threads to maintain in the default thread pool (default)
was_threadpools_minimumSize_threadpoolmanager_orb
integer
threads
10
1
→ 100
yes
Minimum number of threads to allow in the pool (ThreadPoolManager ORB)
was_threadpools_maximumsize_threadpoolmanager_orb
integer
threads
50
1
→ 500
yes
Maximum number of threads to maintain in the thread pool (ThreadPoolManager ORB)
was_threadpools_minimumSize_objectrequestbroker_orb
integer
threads
10
1
→ 100
yes
Minimum number of threads to allow in the pool (ObjectRequestBroker ORB)
was_threadpools_maximumsize_objectrequestbroker_orb
integer
threads
50
1
→ 500
yes
Maximum number of threads to maintain in the thread pool (ObjectRequestBroker ORB)
was_threadpools_minimumSize_custom_TCPChannel_DCS
integer
20
1
→ 100
yes
Minimum number of threads to allow in the pool (TCPChannel.DCS)
was_threadpools_maximumsize_custom_TCPChannel_DCS
integer
threads
100
1
→ 500
yes
Maximum number of threads to maintain in the thread pool (TCPChannel.DCS)
was_auth_cacheTimeout
integer
seconds
600
0
→ 7200
yes
The time period at which the authenticated credential in the cache expires
was_webserverplugin_serverIOtimeout
integer
seconds
900
-1
→ 1800
yes
How long should the plug-in wait for a response from the application
was_Server_provisionComponents
categorical
false
true
, false
yes
Select this property if you want the server components started as they are needed by an application that is running on this server
was_ObjectRequestBroker_noLocalCopies
categorical
false
true
, false
yes
Specifies how the ORB passes parameters. If enabled, the ORB passes parameters by reference instead of by value, to avoid making an object copy. If disabled, a copy of the parameter passes rather than the parameter object itself.
was_PMIService_statisticSet
categorical
basic
none
, basic
, extended
, all
yes
When PMI service is enabled, the monitoring of individual components can be enabled or disabled dynamically. PMI provides four predefined statistic sets that can be used to enable a set of statistics
aws_lambda_duration | seconds | The duration of an AWS Lambda function execution |
aws_lambda_memory_size | megabytes | The memory size allocated for an AWS Lambda function |
aws_lambda_cost | dollars | The elaboration cost of an AWS Lambda function |
aws_lambda_reserved_concurrency | instances | The maximum number of concurrent instances for an AWS Lambda function |
aws_lambda_provisioned_concurrency | instances | The number of prepared environments for an AWS Lambda function |
aws_lambda_memory_size |
| integer | 128 | 128 → 10240 | no | The memory size allocated for an AWS Lambda function |
aws_lambda_reserved_concurrency |
| integer | 100 | 0→ 1000 | no | The maximum number of concurrent instances for an AWS Lambda function |
aws_lambda_provisioned_concurrency | integer | 0 | 0→100 | no | The number of prepared environments for an AWS Lambda function |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
aws_ec2_disk_iops_reads | ops/s | The per second average number of EBS IO disk-read operations summed across all disks |
aws_ec2_disk_iops_writes | ops/s | The per second average number of EBS IO disk-write operations summed across all disks |
aws_ec2_disk_iops | ops/s | The per second average number of EBS IO disk operations summed across all disks |
aws_ec2_credits_cpu_available | credits | The number of earned CPU credits that an instance has accrued since it was launched or started. Credits are accrued in the credit balance after they are earned, and removed from the credit balance when they are spent |
aws_ec2_credits_cpu_used | credits | The number of CPU credits spent by the instance for CPU utilization |
aws_ec2_ebs_credits_io_util | percent | The percentage of I/O credits remaining in the burst bucket |
aws_ec2_ebs_credits_bytes_util | percent | The percentage of throughput credits remaining in the burst bucket |
aws_ec2_price | dollars | AWS EC2 hourly instance price (on-demand) |
aws_ec2_instance_type | Categorical |
|
| yes | Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, optimized to fit different use cases |
aws_ec2_instance_size | Ordinal |
|
| yes |
| Domain of aws_ec2_instance_size here is: [4xlarge, 8xlarge, 9xlarge]. Since "c5" instance type does not support 8xlarge instance size, and "m5" instance family does not support the 9xlarge one, this option is enforced with a constraint that allows only c5.4xlarge, c5.9xlarge, m5.4xlarge, and m5.8xlarge configurations |
Amazon Web Services Elastic Compute Cloud |
Amazon Web Services Lambda |
The PostgreSQL optimization pack allows you to explore and tune the configuration space of PostgreSQL parameters. In this way, an Akamas study can ramp up the transaction number or minimize its resource consumption according to your typical workload, cutting costs. The main tuning areas covered by the parameters provided in this optimization pack are:
Background writer management
VACUUM management
Deadlock and concurrency management
Write-ahead management
The optimization pack includes metrics to monitor:
Query executions
Concurrency and locks
Buffers and disk I/O
These component types model different PostgreSQL releases and provide a curated subset of the available parameters for the best optimization results.
Here’s the command to install the PostgreSQL optimization pack using the Akamas CLI:
Component Type | Description |
---|---|
Component Type | Description |
---|---|
Parameter | Unit | Description |
---|
Parameter | Unit | Description |
---|
Parameter | Unit | Description |
---|
Metric | Unit | Description |
---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|
Parameter | Type | Unit | Default value | Restart | Description |
---|
Metric | Description |
---|
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|
PostgreSQL 11
PostgreSQL 12
Cassandra NoSQL database version 3
pg_connections | connections | The number of connections in the database. |
pg_start_time | seconds | The total amount of time spent by PostgreSQL to boot up. |
pg_commits | commits/s | The number of transactions committed per second. |
pg_rollbacks | rollbacks/s | The number of transactions rolled back per second. |
pg_checkpoint_executed | checkpoints/s | The total number of checkpoint operations executed by PostgreSQL. |
pg_disk_used | bytes | The amount of disk space used by PostgreSQL. |
pg_blocks_read | blocks/s | The number of blocks read per second by PostgreSQL. |
pg_blocks_cache_hit | blocks/s | Number of blocks found in the buffer cache. |
pg_backend_fsync_count | syncs | The total number of times PostgreSQL executed a sync of data to disk. |
pg_effective_io_concurrency | integer | iops |
|
| no | The number of simultaneous requests that can be handled efficiently by the disk subsystem. |
pg_bgwriter_delay | integer | milliseconds |
|
| no | The delay between activity rounds for the background writer. |
pg_bgwriter_lru_maxpages | integer | buffers |
|
| no | The maximum number of LRU pages to flush per round by the background writer. |
pg_checkpoint_completion_target | real |
|
|
| no | The time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval. |
pg_effective_cache_size | integer | kilobytes |
|
| no | The planner's assumption about the effective size of the disk cache available to a single query. A higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used. |
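The `pg_*` parameters above are standard `postgresql.conf` settings. An illustrative fragment (the setting names are real PostgreSQL options; the values are examples only, not recommendations):

```
# postgresql.conf sketch covering the tunables above
effective_io_concurrency = 200          # pg_effective_io_concurrency
bgwriter_delay = 200ms                  # pg_bgwriter_delay
bgwriter_lru_maxpages = 100             # pg_bgwriter_lru_maxpages
checkpoint_completion_target = 0.9      # pg_checkpoint_completion_target
effective_cache_size = 4GB              # pg_effective_cache_size
```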
Metric | Unit | Description |
---|---|---|
read_rate | ops/s | Read queries per second |
read_response_time_p99 | milliseconds | 99th percentile of read queries response time |
read_response_time_avg | milliseconds | Average response time of read queries |
write_rate | ops/s | Write queries per second |
write_response_time_p99 | milliseconds | 99th percentile of write queries response time |
read_response_time_max | milliseconds | Maximum response time of read queries |
total_rate | ops/s | Total queries per second |
write_response_time_avg | milliseconds | Average response time of write queries |
write_response_time_max | milliseconds | Maximum response time of write queries |
read_response_time_p90 | milliseconds | 90th percentile of read queries response time |
write_response_time_p90 | milliseconds | 90th percentile of write queries response time |
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
cassandra_compactionStrategy | categorical | | | | yes | Compaction strategy in use |
cassandra_concurrentReads | integer | | | | yes | Concurrent reads |
cassandra_concurrentWrites | integer | | | | yes | Concurrent writes |
cassandra_fileCacheSizeInMb | integer | megabytes | | | yes | Total memory to use for SSTable-reading buffers |
cassandra_memtableCleanupThreshold | real | | | | yes | Ratio used for automatic memtable flush |
cassandra_concurrentCompactors | integer | | | | yes | The number of concurrent compaction processes allowed to run simultaneously on a node |
cassandra_commitlog_compression | categorical | | | | | The compression of the commit log |
cassandra_commitlog_segment_size_in_mb | integer | megabytes | | | | The segment size of the commit log |
cassandra_compaction_throughput_mb_per_sec | integer | megabytes/s | | | | The throughput cap for compaction |
cassandra_commitlog_sync_period_in_ms | integer | milliseconds | | | | The sync period of the commit log |
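These parameters correspond to settings in `cassandra.yaml` (the names below are the standard Cassandra 3.x keys; the values are illustrative, not recommendations):

```yaml
# cassandra.yaml -- illustrative values only
concurrent_reads: 32
concurrent_writes: 32
file_cache_size_in_mb: 512
memtable_cleanup_threshold: 0.11
concurrent_compactors: 2
commitlog_segment_size_in_mb: 32
commitlog_sync_period_in_ms: 10000
compaction_throughput_mb_per_sec: 64
```

Most of these require a node restart to take effect, as reflected by the restart flags in the table above.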
The MySQL optimization pack allows the user to monitor a MySQL instance and explore the configuration space of its parameters. The optimization pack provides parameters and metrics that can be leveraged to reach, among others, two main goals:
Throughput optimization - increasing the capacity of a MySQL deployment to serve clients
Cost optimization - decreasing the size of a MySQL deployment while guaranteeing the same service level
To reach the aforementioned goals, the optimization pack focuses on three key areas of tuning of InnoDB, the default storage engine for MySQL:
Buffer management
Threading
Paging
The following table describes the component types supported by the MySQL optimization pack.
Here’s the command to install the MySQL optimization pack using the Akamas CLI:
Component Type | Description |
---|---|
MySQL 8.0 | MySQL 8.0 database, deployed on-premises. |
The optimization pack for Oracle Database 12c.
The following parameters require their ranges or default values to be updated according to the described rules:
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters.
Metric | Unit | Description |
---|---|---|
mysql_aborted_connection | connections | The number of failed attempts to connect to MySQL |
mysql_connections_current | connections | The current number of connections opened towards MySQL |
mysql_connections_max | connections | The maximum number of connections that can be opened towards MySQL |
mysql_innodb_buffer_pool_size | bytes | The size of the memory area where InnoDB caches tables and indexes |
mysql_mem_usage | bytes | MySQL instance memory consumption divided by type (innodb_buffer_pool_data, innodb_log_buffer, query_cache, key_buffer_size) |
mysql_query_throughput | queries/s | The number of queries per second processed by MySQL |
mysql_slow_query_rate | queries/s | The rate of queries that are considered slow based on the parameters mysql_long_query_time and mysql_long_query_min_examined_row |
mysql_statements_rate | statements/s | The rate at which each type of statement (select, insert, update, delete) is executed |
mysql_threads_running | threads | The number of threads running in the MySQL instance |
mysql_transactions_rate | transactions/s | The rate at which each type of transaction (handler label) is executed (commit, rollback, prepare, savepoint) |
network_in_bytes_rate | bytes/s | The amount of inbound network traffic in bytes per second |
network_out_bytes_rate | bytes/s | The amount of outbound network traffic in bytes per second |
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
mysql_innodb_buffer_pool_size | integer | bytes | 134217728 | 5242880 → 25769803776 | no | The size of the buffer pool used by InnoDB to cache tables and indexes in memory |
mysql_innodb_buffer_pool_instances | integer | regions | 8 | 1 → 64 | no | The number of regions that the InnoDB buffer pool is divided into |
mysql_innodb_thread_sleep_delay | integer | milliseconds | 10000 | 0 → 1000000 | no | The number of milliseconds each InnoDB thread sleeps before joining the InnoDB queue |
mysql_innodb_flush_method | string | - | fsync | fsync, O_DSYNC, littlesync, nosync, O_DIRECT, O_DIRECT_NO_FSYNC | yes | The method used to flush data to InnoDB's data files and log files |
mysql_innodb_log_file_size | integer | bytes | 50331648 | 4194304 → 5368709120 | yes | The size of each log file in each log group maintained by InnoDB. The total size of log files cannot exceed 4GB. |
mysql_innodb_thread_concurrency | integer | threads | 0 | 0 → 1000 | no | The limit on the number of OS threads used by InnoDB to serve user requests |
mysql_innodb_max_dirty_pages_pct | real | percentage | 10.0 | 0.0 → 99.99 | no | The limit on the percentage of dirty pages in the InnoDB buffer pool |
mysql_innodb_read_ahead_threshold | integer | pages | 56 | 0 → 64 | no | The number of sequentially read pages after which MySQL initiates an async read of the following extent (a group of pages within a tablespace) |
mysql_innodb_adaptive_hash_index | | - | ON | ON, OFF | no | Whether or not to enable the adaptive hash index optimization for InnoDB tables |
mysql_innodb_fill_factor | integer | percentage | 100 | 10 → 100 | no | The percentage of each B-tree page that is filled during a sorted index build |
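These parameters map onto standard MySQL server variables (with the `mysql_` prefix stripped), which can be set in `my.cnf`. An illustrative fragment; the values below are examples, not recommendations:

```ini
# my.cnf -- illustrative values only
[mysqld]
innodb_buffer_pool_size = 2147483648   ; 2 GiB buffer pool
innodb_buffer_pool_instances = 8
innodb_flush_method = O_DIRECT         ; bypass the OS page cache for data files
innodb_log_file_size = 536870912       ; 512 MiB per redo log file (restart required)
innodb_thread_concurrency = 0          ; 0 = no concurrency limit
innodb_max_dirty_pages_pct = 75
```

Variables flagged "no" in the restart column can also be changed at runtime with `SET GLOBAL`, e.g. `SET GLOBAL innodb_max_dirty_pages_pct = 75;`.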
Metric | Unit | Description |
---|---|---|
mongodb_document_deleted | documents/s | The average number of documents deleted per second |
mongodb_documents_inserted | documents/s | The average number of documents inserted per second |
mongodb_documents_updated | documents/s | The average number of documents updated per second |
mongodb_documents_returned | documents/s | The average number of documents returned by queries per second |
mongodb_connections_current | connections | The current number of opened connections |
mongodb_heap_used | bytes | The total size of heap space used (only available on Linux/Unix systems) |
mongodb_mem_used | bytes | The total amount of memory used |
mongodb_page_faults_total | faults/s | The average number of page faults per second (i.e., operations that require MongoDB to access data on disk rather than in memory) |
mongodb_global_lock_current_queue | ops | The current number of operations queued because of a lock |
Parameter | Unit | Type | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
mongodb_cache_size | megabytes | integer | Select your own default when you create a study, since it is highly dependent on your system (how much memory your system has) | Select your own domain when you create a study, since it is highly dependent on your system (how much memory your system has) | no | The maximum size of the internal cache that MongoDB (WiredTiger) will use to operate |
mongodb_eviction_trigger | percentage | integer | 95 | 1 → 99 | no | The percentage threshold of MongoDB cache usage at which cache eviction starts and client threads throttle |
mongodb_eviction_target | percentage | integer | 80 | 1 → 99 | no | The target percentage usage of the MongoDB cache to reach after evictions |
mongodb_eviction_dirty_trigger | percentage | integer | 20 | 1 → 99 | no | The percentage threshold of MongoDB dirty cache usage at which cache eviction starts and client threads throttle |
mongodb_eviction_dirty_target | percentage | integer | 5 | 1 → 99 | no | The target percentage usage of the MongoDB dirty cache to reach after evictions |
mongodb_eviction_threads_min | threads | integer | 4 | 1 → 20 | no | The minimum number of threads used to perform cache eviction |
mongodb_eviction_threads_max | threads | integer | 4 | 1 → 20 | no | The maximum number of threads used to perform cache eviction |
mongodb_sync_delay | seconds | integer | 1min | 1min → 6min | no | The interval between fsync operations in which mongod flushes its working memory to disk |
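The cache size corresponds to the standard mongod setting `storage.wiredTiger.engineConfig.cacheSizeGB`; the eviction parameters are WiredTiger engine options. The mapping below via `wiredTigerEngineRuntimeConfig` is an assumption based on the standard mechanism for passing WiredTiger options, and the values are illustrative:

```yaml
# mongod.conf -- illustrative values only
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2          # upper bound of the WiredTiger cache
setParameter:
  # Assumed mapping: eviction settings passed as WiredTiger engine options
  wiredTigerEngineRuntimeConfig: "eviction_trigger=95,eviction_target=80,eviction=(threads_min=4,threads_max=4)"
```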
Constraint |
---|
mongodb_eviction_threads_min <= mongodb_eviction_threads_max |
mongodb_eviction_dirty_target <= mongodb_eviction_target |
mongodb_eviction_dirty_trigger <= mongodb_eviction_trigger |
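These relationships can be checked on a candidate configuration before submitting a study. A minimal Python sketch (plain Python, not Akamas study syntax; only the parameter names come from the pack):

```python
def mongodb_constraints_ok(cfg: dict) -> bool:
    """Check the MongoDB eviction-parameter constraints on a candidate configuration."""
    return (
        cfg["mongodb_eviction_threads_min"] <= cfg["mongodb_eviction_threads_max"]
        and cfg["mongodb_eviction_dirty_target"] <= cfg["mongodb_eviction_target"]
        and cfg["mongodb_eviction_dirty_trigger"] <= cfg["mongodb_eviction_trigger"]
    )

# The pack's default values satisfy all three constraints:
defaults = {
    "mongodb_eviction_threads_min": 4, "mongodb_eviction_threads_max": 4,
    "mongodb_eviction_dirty_target": 5, "mongodb_eviction_target": 80,
    "mongodb_eviction_dirty_trigger": 20, "mongodb_eviction_trigger": 95,
}
```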
Metric | Unit | Description |
---|---|---|
oracle_sga_total_size | bytes | The current memory size of the SGA. |
oracle_sga_free_size | bytes | The amount of SGA currently available. |
oracle_sga_max_size | bytes | The configured maximum memory size for the SGA. |
oracle_pga_target_size | bytes | The configured target memory size for the PGA. |
oracle_redo_buffers_size | bytes | The memory size of the redo buffers. |
oracle_default_buffer_cache_size | bytes | The memory size for the DEFAULT buffer cache component. |
oracle_default_2k_buffer_cache_size | bytes | The memory size for the DEFAULT 2k buffer cache component. |
oracle_default_4k_buffer_cache_size | bytes | The memory size for the DEFAULT 4k buffer cache component. |
oracle_default_8k_buffer_cache_size | bytes | The memory size for the DEFAULT 8k buffer cache component. |
oracle_default_16k_buffer_cache_size | bytes | The memory size for the DEFAULT 16k buffer cache component. |
oracle_default_32k_buffer_cache_size | bytes | The memory size for the DEFAULT 32k buffer cache component. |
oracle_keep_buffer_cache_size | bytes | The memory size for the KEEP buffer cache component. |
oracle_recycle_buffer_cache_size | bytes | The memory size for the RECYCLE buffer cache component. |
oracle_asm_buffer_cache_size | bytes | The memory size for the ASM buffer cache component. |
oracle_shared_io_pool_size | bytes | The memory size for the IO pool component. |
oracle_java_pool_size | bytes | The memory size for the Java pool component. |
oracle_large_pool_size | bytes | The memory size for the large pool component. |
oracle_shared_pool_size | bytes | The memory size for the shared pool component. |
oracle_streams_pool_size | bytes | The memory size for the streams pool component. |
oracle_buffer_cache_hit_ratio | percent | How often a requested block has been found in the buffer cache without requiring disk access. |
oracle_wait_class_commit | percent | The percentage of time spent waiting on the events of class 'Commit'. |
oracle_wait_class_concurrency | percent | The percentage of time spent waiting on the events of class 'Concurrency'. |
oracle_wait_class_system_io | percent | The percentage of time spent waiting on the events of class 'System I/O'. |
oracle_wait_class_user_io | percent | The percentage of time spent waiting on the events of class 'User I/O'. |
oracle_wait_class_other | percent | The percentage of time spent waiting on the events of class 'Other'. |
oracle_wait_class_scheduler | percent | The percentage of time spent waiting on the events of class 'Scheduler'. |
oracle_wait_class_idle | percent | The percentage of time spent waiting on the events of class 'Idle'. |
oracle_wait_class_application | percent | The percentage of time spent waiting on the events of class 'Application'. |
oracle_wait_class_network | percent | The percentage of time spent waiting on the events of class 'Network'. |
oracle_wait_class_configuration | percent | The percentage of time spent waiting on the events of class 'Configuration'. |
oracle_wait_event_log_file_sync | percent | The percentage of time spent waiting on the 'log file sync' event. |
oracle_wait_event_log_file_parallel_write | percent | The percentage of time spent waiting on the 'log file parallel write' event. |
oracle_wait_event_log_file_sequential_read | percent | The percentage of time spent waiting on the 'log file sequential read' event. |
oracle_wait_event_enq_tx_contention | percent | The percentage of time spent waiting on the 'enq: TX - contention' event. |
oracle_wait_event_enq_tx_row_lock_contention | percent | The percentage of time spent waiting on the 'enq: TX - row lock contention' event. |
oracle_wait_event_latch_row_cache_objects | percent | The percentage of time spent waiting on the 'latch: row cache objects' event. |
oracle_wait_event_latch_shared_pool | percent | The percentage of time spent waiting on the 'latch: shared pool' event. |
oracle_wait_event_resmgr_cpu_quantum | percent | The percentage of time spent waiting on the 'resmgr:cpu quantum' event. |
oracle_wait_event_sql_net_message_from_client | percent | The percentage of time spent waiting on the 'SQL*Net message from client' event. |
oracle_wait_event_rdbms_ipc_message | percent | The percentage of time spent waiting on the 'rdbms ipc message' event. |
oracle_wait_event_db_file_sequential_read | percent | The percentage of time spent waiting on the 'db file sequential read' event. |
oracle_wait_event_log_file_switch_checkpoint_incomplete | percent | The percentage of time spent waiting on the 'log file switch (checkpoint incomplete)' event. |
oracle_wait_event_row_cache_lock | percent | The percentage of time spent waiting on the 'row cache lock' event. |
oracle_wait_event_buffer_busy_waits | percent | The percentage of time spent waiting on the 'buffer busy waits' event. |
oracle_wait_event_db_file_async_io_submit | percent | The percentage of time spent waiting on the 'db file async I/O submit' event. |
oracle_sessions_active_user | sessions | The number of active user sessions. |
oracle_sessions_inactive_user | sessions | The number of inactive user sessions. |
oracle_sessions_active_background | sessions | The number of active background sessions. |
oracle_sessions_inactive_background | sessions | The number of inactive background sessions. |
oracle_calls_execute_count | calls | Total number of calls (user and recursive) that executed SQL statements. |
oracle_tuned_undoretention | seconds | The amount of time for which undo will not be recycled from the time it was committed. |
oracle_max_query_length | seconds | The length of the longest query executed. |
oracle_transaction_count | transactions | The total number of transactions executed within the period. |
oracle_sso_errors | errors/s | The number of ORA-01555 (snapshot too old) errors raised per second. |
oracle_redo_log_space_requests | requests | The number of times a user process waits for space in the redo log file, usually caused by checkpointing or log switching. |
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
bitmap_merge_area_size | kilobytes | | | yes | The amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index. |
create_bitmap_area_size | megabytes | | | yes | The size of the create bitmap buffer for bitmap indexes. Relevant only for systems containing bitmap indexes. |
db_block_size | bytes | | | yes | The size of Oracle database blocks. The value of this parameter can be changed only when the database is first created. |
db_cache_size | megabytes | | | no | The size of the DEFAULT buffer pool for standard block size buffers. The value must be at least 4M * number of CPUs. |
db_2k_cache_size | megabytes | | | no | The size of the cache for 2K buffers. |
db_4k_cache_size | megabytes | | | no | The size of the cache for 4K buffers. |
db_8k_cache_size | megabytes | | | no | The size of the cache for 8K buffers. |
db_16k_cache_size | megabytes | | | no | The size of the cache for 16K buffers. |
db_32k_cache_size | megabytes | | | no | The size of the cache for 32K buffers. |
hash_area_size | kilobytes | | | yes | The maximum amount of memory for the in-memory hash work area. |
java_pool_size | megabytes | | | no | The size of the Java pool. If SGA_TARGET is set, this value represents the minimum value for the memory pool. |
large_pool_size | megabytes | | | no | The size of the large pool allocation heap. |
lock_sga | | | | yes | Lock the entire SGA in physical memory. |
memory_max_target | megabytes | | | yes | The maximum value to which a DBA can set the MEMORY_TARGET initialization parameter. |
memory_target | megabytes | | | no | Oracle system-wide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed. |
pga_aggregate_limit | megabytes | | | no | The limit on the aggregate PGA memory consumed by the instance. |
pga_aggregate_target | megabytes | | | no | The target aggregate PGA memory available to all server processes attached to the instance. |
result_cache_max_result | percent | | | no | The maximum result size as a percentage of the cache size. |
result_cache_max_size | megabytes | | | no | The maximum amount of SGA memory that can be used by the Result Cache. |
result_cache_remote_expiration | minutes | | | no | The expiration in minutes of remote objects. High values may cause stale answers. |
sga_max_size | megabytes | | | yes | The maximum size of the SGA for the lifetime of the instance. |
sga_min_size | megabytes | | | no | The guaranteed SGA size for a pluggable database (PDB). When SGA_MIN_SIZE is set for a PDB, it guarantees the specified SGA size for the PDB. |
sga_target | megabytes | | | no | The total size of all SGA components; acts as the minimum value for the size of the SGA. |
shared_pool_reserved_size | megabytes | | | yes | The shared pool space reserved for large contiguous requests for shared pool memory. |
shared_pool_size | megabytes | | | no | The size of the shared pool. |
sort_area_retained_size | kilobytes | | | no | The maximum amount of User Global Area memory retained after a sort run completes. |
sort_area_size | kilobytes | | | no | The maximum amount of memory Oracle will use for a sort. If more space is required, temporary segments on disk are used. |
streams_pool_size | megabytes | | | no | The size of the streams pool. |
use_large_pages | | | | yes | Enable the use of large pages for SGA memory. |
commit_logging | | | | no | Controls how redo is batched by the Log Writer. |
log_archive_max_processes | processes | | | no | The maximum number of active ARCH processes. |
log_buffer | megabytes | | | yes | The amount of memory that Oracle uses when buffering redo entries to a redo log file. |
log_checkpoint_interval | blocks | | | no | The maximum number of log file blocks between incremental checkpoints. |
log_checkpoint_timeout | seconds | | | no | The maximum time interval between checkpoints. Guarantees that no buffer remains dirty for more than the specified time. |
undo_retention | seconds | | | no | The low threshold value of undo retention. |
undo_management | | | | yes | The instance runs in SMU mode if TRUE, otherwise in RBU mode. |
temp_undo_enabled | | | | no | Split the undo log into temporary (temporary objects) and permanent (persistent objects) undo logs. |
optimizer_adaptive_plans | | | | no | Controls adaptive plans: execution plans built with alternative choices based on collected statistics. |
optimizer_adaptive_statistics | | | | no | Enable the optimizer to use adaptive statistics for complex queries. |
optimizer_capture_sql_plan_baselines | | | | no | Automatic capture of SQL plan baselines for repeatable statements. |
optimizer_dynamic_sampling | | | | no | Controls both when the database gathers dynamic statistics and the size of the sample that the optimizer uses to gather them. |
optimizer_features_enable | | | | no | Enable a series of optimizer features based on an Oracle release number. |
optimizer_index_caching | | | | no | Adjust the behavior of cost-based optimization to favor nested loops joins and IN-list iterators. |
optimizer_index_cost_adj | | | | no | Tune optimizer behavior for access path selection to be more or less index friendly. |
optimizer_inmemory_aware | | | | no | Enables all of the optimizer cost model enhancements for in-memory. |
optimizer_mode | | | | no | The default behavior for choosing an optimization approach for the instance. |
optimizer_use_invisible_indexes | | | | no | Enables or disables the use of invisible indexes. |
optimizer_use_pending_statistics | | | | no | Controls whether the optimizer uses pending statistics when compiling SQL statements. |
optimizer_use_sql_plan_baselines | | | | no | Enables the use of SQL plan baselines stored in the SQL Management Base. |
approx_for_aggregation | | | | no | Replace exact query processing for aggregation queries with approximate query processing. |
approx_for_count_distinct | | | | no | Automatically replace COUNT(DISTINCT expr) queries with APPROX_COUNT_DISTINCT queries. |
approx_for_percentile | | | | no | Converts exact percentile functions to their approximate percentile function counterparts. |
parallel_max_servers | processes | | | no | The maximum number of parallel execution processes and parallel recovery processes for an instance. |
parallel_min_servers | processes | | | no | The minimum number of execution processes kept alive to service parallel statements. |
parallel_threads_per_cpu | | | | no | The number of parallel execution threads per CPU. |
cpu_count | cpus | | | no | The number of CPUs available for the Oracle instance to use. |
db_files | files | | | yes | The maximum number of database files that can be opened for this database. This may be subject to OS constraints. |
open_cursors | cursors | | | no | The maximum number of open cursors (handles to private SQL areas) a session can have at once. |
open_links | connections | | | yes | The maximum number of concurrent open connections to remote databases in one session. |
open_links_per_instance | connections | | | yes | The maximum number of migratable open connections globally for each database instance. |
processes | processes | | | yes | The maximum number of OS user processes that can simultaneously connect to Oracle. |
read_only_open_delayed | | | | yes | Delay the opening of read-only files until first access. |
sessions | sessions | | | no | The maximum number of sessions that can be created in the system; effectively the maximum number of concurrent users in the system. |
transactions | transactions | | | yes | The maximum number of concurrent transactions. |
audit_sys_operations | | | | yes | Enable SYS auditing. |
audit_trail | | | | yes | Configure system auditing. |
gcs_server_processes | processes | | | yes | The number of background GCS server processes serving the inter-instance traffic among Oracle RAC instances. |
java_jit_enabled | | | | no | Enables the Just-in-Time (JIT) compiler for the Oracle Java Virtual Machine. |
fast_start_mttr_target | seconds | | | no | The number of seconds the database should take to perform crash recovery of a single instance. This parameter impacts the time between checkpoints. |
recyclebin | | | | no | Allow the recovery of dropped tables. |
statistics_level | | | | no | The level of collection for database and operating system statistics. |
transactions_per_rollback_segment | | | | yes | The expected number of active transactions per rollback segment. |
filesystemio_options | | | | yes | Specifies I/O operations for file system files. |
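Oracle applies initialization parameters like these through `ALTER SYSTEM`. A couple of illustrative statements (real Oracle syntax; the values are examples only):

```sql
-- Static parameters are written to the server parameter file and take
-- effect at the next instance restart:
ALTER SYSTEM SET sga_target = 4G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE=SPFILE;
-- Dynamic parameters (restart = "no" in the table above) can be changed
-- immediately and persisted at the same time:
ALTER SYSTEM SET open_cursors = 500 SCOPE=BOTH;
```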
Parameter | Default value | Domain |
---|---|---|
| | upper bound can’t exceed half the size of |
| | at least |

Parameter | Default value | Domain |
---|---|---|
| should match the available CPUs; 0 lets the Oracle engine automatically determine the value | must not exceed the available CPUs |
| | must be at least equal to the default value |

Formula | Notes |
---|---|
| Add when tuning automatic memory management |
| Add when tuning SGA and PGA |
| Add when tuning SGA and PGA |
| Add when tuning SGA |
| Add when tuning SGA areas |
| Add when tuning PGA |
The MongoDB optimization pack helps you optimize instances of MongoDB to reach the desired performance goal. The optimization pack provides parameters and metrics specific to MongoDB that can be leveraged to reach, among others, two main goals:
Throughput optimization - increasing the capacity of a MongoDB deployment to serve clients
Cost optimization - decreasing the size of a MongoDB deployment while guaranteeing the same service level
To reach these goals the pack focuses mostly on the parameters managing the cache, one of the elements with the greatest impact on performance; in particular, the optimization pack provides parameters to control the lifecycle and the size of MongoDB’s cache.
Even though it is possible to evaluate performance improvements of MongoDB by looking at the business application that uses it as its database — for example at end-to-end throughput or response time, or by running a performance test such as YCSB — the optimization pack also provides internal MongoDB metrics that shed light on how MongoDB is performing, in particular in terms of throughput, for example:
The number of documents inserted in the database per second
The number of active connections
The optimization pack supports the following versions of MongoDB.
Here’s the command to install the MongoDB optimization pack using the Akamas CLI:
This page describes the Optimization Pack for Spark Application 2.2.0.
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
The overall resources allocated to the application should be constrained by a maximum and, sometimes, a minimum value:
the maximum value could be the sum of the resources physically available in the cluster, or a lower value to allow the concurrent execution of other applications
an optional minimum value could be useful to avoid configurations that allocate executors that are both small and few in number
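Such a bound can be expressed as a predicate over the executor settings. A minimal Python sketch (the parameter names and cluster capacities here are illustrative, not the pack's identifiers):

```python
def spark_resources_within_bounds(num_executors: int, executor_cores: int,
                                  executor_memory_gb: int,
                                  cluster_cores: int = 96,
                                  cluster_memory_gb: int = 384) -> bool:
    """True when the total resources requested by all executors fit the cluster."""
    total_cores = num_executors * executor_cores
    total_memory = num_executors * executor_memory_gb
    return total_cores <= cluster_cores and total_memory <= cluster_memory_gb
```

A minimum bound would be a symmetric check with `>=`, ruling out configurations whose executors are too small or too few to exploit the cluster.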
The Java-OpenJDK optimization pack enables Akamas to optimize Java applications based on the OpenJDK and Oracle HotSpot JVM. Through this optimization pack, Akamas is able to tackle the problem of performance of JVM-based applications from both the point of view of cost savings and quality of service.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Garbage collection
Heap
JIT
Similarly, the bundled metrics provide visibility on the following aspects of tuned applications:
Heap and memory utilization
Garbage collection
Execution threads
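The tuning areas above correspond to standard HotSpot command-line options. An illustrative launch command (the flags are real HotSpot options; the values and the application name are examples):

```shell
java -Xms512m -Xmx2g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -XX:ReservedCodeCacheSize=256m \
     -jar application.jar
```

Here `-Xms`/`-Xmx` bound the heap, the `-XX:+UseG1GC` and pause-time flags shape garbage collection, and the code cache size affects the JIT compiler.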
The optimization pack supports the most used versions of OpenJDK and Oracle HotSpot JVM.
Here’s the command to install the Java OpenJDK optimization pack using the Akamas CLI:
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
MongoDB version 4.x
MongoDB version 5.x
Metric | Unit | Description |
---|---|---|
spark_application_duration | milliseconds | The duration of the Spark application |
spark_job_duration | milliseconds | The duration of the job |
spark_stage_duration | milliseconds | The duration of the stage |
spark_task_duration | milliseconds | The duration of the task |
spark_driver_rdd_blocks | blocks | The total number of persisted RDD blocks for the driver |
spark_driver_mem_used | bytes | The total amount of memory used by the driver |
spark_driver_disk_used | bytes | The total amount of disk used for RDDs by the driver |
spark_driver_cores | cores | The total number of concurrent tasks that can be run by the driver |
spark_driver_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by the driver |
spark_driver_total_tasks | tasks | The total number of tasks run by the driver |
spark_driver_total_duration | milliseconds | The total amount of time spent by the driver running tasks |
spark_driver_max_mem_used | bytes | The maximum amount of memory used by the driver |
spark_driver_total_jvm_gc_duration | milliseconds | The total amount of time spent by the driver's JVM doing garbage collection across all tasks |
spark_driver_total_shuffle_read | bytes | The total number of bytes read during a shuffle by the driver |
spark_driver_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by the driver |
spark_driver_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by the driver |
spark_driver_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by the driver |
spark_driver_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for the driver |
spark_driver_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for the driver |
spark_executor_max_count | executors | The maximum number of executors used for the application |
spark_executor_rdd_blocks | blocks | The total number of persisted RDD blocks for each executor |
spark_executor_mem_used | bytes | The total amount of memory used by each executor |
spark_executor_disk_used | bytes | The total amount of disk used for RDDs by each executor |
spark_executor_cores | cores | The number of cores used by each executor |
spark_executor_total_input_bytes | bytes | The total number of bytes read from RDDs or persisted data by each executor |
spark_executor_total_tasks | tasks | The total number of tasks run by each executor |
spark_executor_total_duration | milliseconds | The total amount of time spent by each executor running tasks |
spark_executor_max_mem_used | bytes | The maximum amount of memory used by each executor |
spark_executor_total_jvm_gc_duration | milliseconds | The total amount of time spent by each executor's JVM doing garbage collection across all tasks |
spark_executor_total_shuffle_read | bytes | The total number of bytes read during a shuffle by each executor |
spark_executor_total_shuffle_write | bytes | The total number of bytes written in shuffle operations by each executor |
spark_executor_used_on_heap_storage_memory | bytes | The amount of on-heap memory used by each executor |
spark_executor_used_off_heap_storage_memory | bytes | The amount of off-heap memory used by each executor |
spark_executor_total_on_heap_storage_memory | bytes | The total amount of available on-heap memory for each executor |
spark_executor_total_off_heap_storage_memory | bytes | The total amount of available off-heap memory for each executor |
spark_stage_shuffle_read_bytes | bytes | The total number of bytes read in shuffle operations by each stage |
spark_task_jvm_gc_duration | milliseconds | The total duration of JVM garbage collections for each task |
spark_task_peak_execution_memory | bytes | The sum of the peak sizes across internal data structures created for each task |
spark_task_result_size | bytes | The size of the result of the computation of each task |
spark_task_result_serialization_time | milliseconds | The time spent by each task serializing the computation result |
spark_task_shuffle_read_fetch_wait_time | milliseconds | The time spent by each task waiting for remote shuffle blocks |
spark_task_shuffle_read_local_blocks_fetched | blocks | The total number of local blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_local_bytes | bytes | The total number of bytes read in shuffle operations from local disk by each task |
spark_task_shuffle_read_remote_blocks_fetched | blocks | The total number of remote blocks fetched in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes | bytes | The total number of remote bytes read in shuffle operations by each task |
spark_task_shuffle_read_remote_bytes_to_disk | bytes | The total number of remote bytes read to disk in shuffle operations by each task |
spark_task_shuffle_write_time | nanoseconds | The time spent by each task writing data to disk or to buffer caches during shuffle operations |
spark_task_executor_deserialize_time | nanoseconds | The time spent by the executor deserializing the task |
spark_task_executor_deserialize_cpu_time | nanoseconds | The CPU time spent by the executor deserializing the task |
spark_task_stage_shuffle_write_records | records | The total number of records written in shuffle operations, broken down by task and stage |
spark_task_stage_shuffle_write_bytes | bytes | The total number of bytes written in shuffle operations, broken down by task and stage |
spark_task_stage_shuffle_read_records | records | The total number of records read in shuffle operations, broken down by task and stage |
spark_task_stage_disk_bytes_spilled | bytes | The total number of bytes spilled to disk, broken down by task and stage |
spark_task_stage_memory_bytes_spilled | bytes | The total number of bytes spilled to memory, broken down by task and stage |
spark_task_stage_input_bytes_read | bytes | The total number of bytes read, broken down by task and stage |
spark_task_stage_input_records_read
records
The total number of records read, broken down by task and stage
spark_task_stage_output_bytes_written
bytes
The total number of bytes written, broken down by task and stage
spark_task_stage_output_records_written
records
The total number of records written, broken down by task and stage
spark_task_stage_executor_run_time
nanoseconds
The time spent by each executor actually running tasks (including fetching shuffle data) broken down by task, stage and executor
spark_task_stage_executor_cpu_time
nanoseconds
The CPU time spent by each executor actually running each task (including fetching shuffle data) broken down by task and stage
Parameter | Type | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|---|
driverCores | integer | cores | You should select your own default | You should select your own domain | yes | The number of CPU cores assigned to the driver in cluster deploy mode. |
numExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Number of executors to use. YARN only. |
totalExecutorCores | integer | cores | You should select your own default | You should select your own domain | yes | Total number of cores for the application. Spark standalone and Mesos only. |
executorCores | integer | cores | You should select your own default | You should select your own domain | yes | Number of CPU cores for an executor. Spark standalone and YARN only. |
defaultParallelism | integer | partitions | You should select your own default | You should select your own domain | yes | Default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when not set by the user. |
broadcastBlockSize | integer | kilobytes | 4096 | 256 → 131072 | yes | Size of each piece of a block for TorrentBroadcastFactory. |
schedulerMode | categorical | | FIFO | FIFO, FAIR | yes | Defines the scheduling strategy across jobs. |
driverMemory | integer | megabytes | You should select your own default | You should select your own domain | yes | Amount of memory to use for the driver process. |
yarnDriverMemoryOverhead | integer | megabytes | 384 | 384 → 65536 | yes | Off-heap memory to be allocated per driver in cluster mode. Currently supported in YARN and Kubernetes. |
executorMemory | integer | megabytes | You should select your own default | You should select your own domain | yes | Amount of memory to use per executor. |
yarnExecutorMemoryOverhead | integer | megabytes | 384 | 384 → 65536 | yes | Off-heap memory to be allocated per executor. Currently supported in YARN and Kubernetes. |
memoryOffHeapEnabled | categorical | | false | true, false | yes | If true, Spark will attempt to use off-heap memory for certain operations. |
memoryOffHeapSize | integer | megabytes | 0 | 0 → 16384 | yes | The absolute amount of memory which can be used for off-heap allocation. |
reducerMaxSizeInFlight | integer | megabytes | 48 | 1 → 1024 | yes | Maximum size of map outputs to fetch simultaneously from each reduce task. |
shuffleFileBuffer | integer | kilobytes | 32 | 1 → 2048 | yes | Size of the in-memory buffer for each shuffle file output stream. |
shuffleCompress | categorical | | true | true, false | yes | Whether to compress map output files. |
shuffleServiceEnabled | categorical | | true | true, false | yes | Enables the external shuffle service. This service preserves the shuffle files written by executors so the executors can be safely removed. |
dynamicAllocationEnabled | categorical | | true | true, false | yes | Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. Requires spark.shuffle.service.enabled to be set. |
dynamicAllocationExecutorIdleTimeout | integer | seconds | 60 | 1 → 3600 | yes | If dynamic allocation is enabled and an executor has been idle for more than this duration, the executor will be removed. |
dynamicAllocationInitialExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Initial number of executors to run if dynamic allocation is enabled. |
dynamicAllocationMinExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Lower bound for the number of executors if dynamic allocation is enabled. |
dynamicAllocationMaxExecutors | integer | executors | You should select your own default | You should select your own domain | yes | Upper bound for the number of executors if dynamic allocation is enabled. |
sqlInMemoryColumnarStorageCompressed | categorical | | true | true, false | yes | When set to true, Spark SQL will automatically select a compression codec for each column based on statistics of the data. |
sqlInMemoryColumnarStorageBatchSize | integer | records | 1000 | 1 → 100000 | yes | Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization and compression, but risk OOMs when caching data. |
sqlFilesMaxPartitionBytes | integer | bytes | 134217728 | 1024 → 1073741824 | yes | The maximum number of bytes to pack into a single partition when reading files. |
sqlFilesOpenCostInBytes | integer | bytes | 4194304 | 262144 → 67108864 | yes | The estimated cost to open a file, measured by the number of bytes that could be scanned in the same time. This is used when putting multiple files into a partition. |
compressionLz4BlockSize | integer | kilobytes | 32 | 8 → 1024 | yes | Block size used in LZ4 compression. |
serializer | categorical | | org.apache.spark.serializer.KryoSerializer | org.apache.spark.serializer.JavaSerializer, org.apache.spark.serializer.KryoSerializer | yes | Class to use for serializing objects that will be sent over the network or need to be cached in serialized form. |
kryoserializerBuffer | integer | kilobytes | 64 | 8 → 1024 | yes | Initial size of Kryo's serialization buffer. Note that there will be one buffer per core on each worker. |
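Many of these parameters correspond to standard `spark-submit` options or `spark.*` configuration keys. The sketch below shows how a candidate configuration could be applied; the application jar and the specific values are examples, not recommended settings:

```shell
# Apply a candidate configuration at submission time.
# --driver-memory / --executor-memory / --executor-cores / --num-executors
# map to driverMemory, executorMemory, executorCores, and numExecutors;
# other parameters are passed as spark.* configuration keys.
spark-submit \
  --driver-memory 2g \
  --executor-memory 4g \
  --executor-cores 2 \
  --num-executors 4 \
  --conf spark.reducer.maxSizeInFlight=48m \
  --conf spark.shuffle.file.buffer=32k \
  my-app.jar
```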
Formula | Notes |
---|---|
driverMemory + executorMemory * numExecutors < MEMORY_CAP | The overall allocated memory should not exceed the specified limit |
driverCores + executorCores * numExecutors < CPU_CAP | The overall allocated CPUs should not exceed the specified limit |
driverMemory + executorMemory * numExecutors > MIN_MEMORY | The overall allocated memory should be at least the specified minimum |
driverCores + executorCores * numExecutors > MIN_CPUS | The overall allocated CPUs should be at least the specified minimum |
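The constraints above can be checked mechanically for any candidate configuration. A minimal sketch in Python, where the cap and minimum values are hypothetical examples rather than Akamas defaults:

```python
# Check a candidate Spark configuration against the study constraints above.
# The four bounds are hypothetical example values, not Akamas defaults.
MEMORY_CAP = 64 * 1024   # megabytes
CPU_CAP = 32             # cores
MIN_MEMORY = 4 * 1024    # megabytes
MIN_CPUS = 4             # cores

def satisfies_constraints(driver_memory, executor_memory, num_executors,
                          driver_cores, executor_cores):
    # Total memory and CPUs follow the constraint formulas in the table.
    total_memory = driver_memory + executor_memory * num_executors
    total_cpus = driver_cores + executor_cores * num_executors
    return (MIN_MEMORY < total_memory < MEMORY_CAP
            and MIN_CPUS < total_cpus < CPU_CAP)

# A 4-executor configuration with 2 GB executors fits within the caps.
print(satisfies_constraints(driver_memory=1024, executor_memory=2048,
                            num_executors=4, driver_cores=1,
                            executor_cores=2))  # → True
```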
ElasticSearch NoSQL database version 6
Metric | Unit | Description |
---|---|---|
es_cluster_search_query_time | milliseconds | The average search query time of elasticsearch |
es_cluster_search_query_throughput | queries/s | The throughput of elasticsearch in terms of search queries per second |
es_cluster_active_shards | shards | The total number of active shards (including replica shards) across all indices within the elasticsearch cluster |
es_cluster_status | flag | The status of the elasticsearch cluster: red = 0, yellow = 1, green = 2 |
es_cluster_out_packets | packets/s | The number of packets per second transmitted outside of the elasticsearch cluster |
es_cluster_out_bytes | bytes/s | The number of bytes per second transmitted outside of the elasticsearch cluster |
es_cluster_in_packets | packets/s | The number of packets per second received by the elasticsearch cluster |
es_cluster_in_bytes | bytes/s | The number of bytes per second received by the elasticsearch cluster |
es_node_process_open_files | files | The total number of file descriptors opened by the elasticsearch process within the elasticsearch node |
es_node_process_cpu_util | percentage | The CPU utilization % of the elasticsearch process within the elasticsearch node |
es_node_process_jvm_gc_duration | milliseconds | The average duration of JVM garbage collection for the elasticsearch process |
es_node_process_jvm_gc_count | gcs | The total number of JVM garbage collections that have occurred for the elasticsearch process in the node |
Parameter | Type | Unit | Default value | Domain | Description |
---|---|---|---|---|---|
index_merge_scheduler_max_thread_count | integer | threads | 3 | 1 → 10 | ElasticSearch max number of threads for merge operations. |
indices_store_throttle_max_bytes_per_sec | integer | megabytes/s | 20 | 10 → 500 | ElasticSearch max bandwidth for store operations. |
index_translog_flush_threshold_size | integer | megabytes | 512 | 256 → 1024 | ElasticSearch flush threshold size. |
index_refresh_interval | integer | seconds | 1 | 10 → 30 | ElasticSearch refresh interval. |
index_number_of_shards | integer | shards | 5 | 1 → 10 | ElasticSearch number of shards. |
index_number_of_replicas | integer | replicas | 1 | 0 → 1 | ElasticSearch number of replicas. |
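Several of these parameters map onto Elasticsearch index settings; for example, index_refresh_interval corresponds to the dynamic setting `index.refresh_interval`, which can be changed at runtime through the REST API. A minimal sketch, where the index name and endpoint are placeholders:

```shell
# Update the refresh interval of an existing index at runtime.
# "my-index" and "localhost:9200" are placeholders for your cluster.
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"refresh_interval": "30s"}}'
```

By contrast, static settings such as `index.number_of_shards` can only be set at index creation time.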
Java OpenJDK 8 JVM
Java OpenJDK 11 JVM
The Spark optimization pack enables the tuning of applications running on the Apache Spark framework. Through this optimization pack, Akamas explores the space of Spark parameters to find the configurations that best optimize the allocated resources or the execution time.
To achieve these goals the optimization pack provides parameters that focus on the following areas:
Driver and executors' resources allocation
Parallelism
Shuffling
Spark SQL
Similarly, the bundled metrics provide visibility into the following statistics from the Spark History Server:
Execution time
Executors' resource usage
Garbage collection time
Here’s the command to install the Spark optimization pack using the Akamas CLI:
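Assuming the usual `akamas install optimization-pack` pattern, the command likely takes the form below; the pack-name argument is an assumption, so check the Install Optimization Packs page for the exact syntax in your release:

```shell
# Install the Spark optimization pack with the Akamas CLI.
# The pack name "Spark" is an assumption; verify it against your release.
akamas install optimization-pack Spark
```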
The Oracle Database optimization pack allows monitoring of an Oracle instance and exploring the configuration space of its initialization parameters. In this way, an Akamas study can achieve goals such as maximizing the throughput of an Oracle-backed application or minimizing its resource consumption, thus reducing costs.
The main tuning areas covered by the parameters provided in this optimization pack are:
SGA memory management
PGA memory management
SQL plan optimization
Approximate query execution
The optimization pack also includes metrics to monitor:
Memory allocation and utilization
Sessions
Query executions
Wait events
These component types model different Oracle Database releases, available both as on-premises and as cloud solutions. They provide the initialization parameters that the workflow can apply through the OracleConfigurator operator, and a set of metrics to monitor the instance's performance.
Note that for Oracle Database hosted on Amazon RDS, only a subset of the initialization parameters can be applied, since the workflow interacts with the RDS API.
Here’s the command to install the Oracle Database optimization pack using the Akamas CLI:
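Assuming the same `akamas install optimization-pack` pattern used for the other packs, the command likely looks as follows; the pack-name argument is an assumption, so check the Install Optimization Packs page for the exact syntax:

```shell
# Install the Oracle Database optimization pack with the Akamas CLI.
# The pack name "Oracle-Database" is an assumption; verify it against your release.
akamas install optimization-pack Oracle-Database
```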
The optimization pack for Oracle Database 11g on Amazon RDS.
The following parameters require their ranges or default values to be updated according to the described rules.
The following tables show a list of constraints that may be required in the definition of the study, depending on the tuned parameters:
Spark Application 2.2.0
Spark Application 2.3.0
Spark Application 2.4.0
Oracle Database 12c
Oracle Database 18c
Oracle Database 19c
Oracle Database 11g on Amazon RDS
Oracle Database 12c on Amazon RDS
Metric | Unit | Description |
---|---|---|
oracle_sga_total_size | bytes | The current memory size of the SGA. |
oracle_sga_free_size | bytes | The amount of SGA currently available. |
oracle_sga_max_size | bytes | The configured maximum memory size for the SGA. |
oracle_pga_target_size | bytes | The configured target memory size for the PGA. |
oracle_redo_buffers_size | bytes | The memory size of the redo buffers. |
oracle_default_buffer_cache_size | bytes | The memory size for the DEFAULT buffer cache component. |
oracle_default_2k_buffer_cache_size | bytes | The memory size for the DEFAULT 2k buffer cache component. |
oracle_default_4k_buffer_cache_size | bytes | The memory size for the DEFAULT 4k buffer cache component. |
oracle_default_8k_buffer_cache_size | bytes | The memory size for the DEFAULT 8k buffer cache component. |
oracle_default_16k_buffer_cache_size | bytes | The memory size for the DEFAULT 16k buffer cache component. |
oracle_default_32k_buffer_cache_size | bytes | The memory size for the DEFAULT 32k buffer cache component. |
oracle_keep_buffer_cache_size | bytes | The memory size for the KEEP buffer cache component. |
oracle_recycle_buffer_cache_size | bytes | The memory size for the RECYCLE buffer cache component. |
oracle_asm_buffer_cache_size | bytes | The memory size for the ASM buffer cache component. |
oracle_shared_io_pool_size | bytes | The memory size for the IO pool component. |
oracle_java_pool_size | bytes | The memory size for the Java pool component. |
oracle_large_pool_size | bytes | The memory size for the large pool component. |
oracle_shared_pool_size | bytes | The memory size for the shared pool component. |
oracle_streams_pool_size | bytes | The memory size for the streams pool component. |
oracle_buffer_cache_hit_ratio | percent | How often a requested block has been found in the buffer cache without requiring disk access. |
oracle_wait_class_commit | percent | The percentage of time spent waiting on the events of class 'Commit'. |
oracle_wait_class_concurrency | percent | The percentage of time spent waiting on the events of class 'Concurrency'. |
oracle_wait_class_system_io | percent | The percentage of time spent waiting on the events of class 'System I/O'. |
oracle_wait_class_user_io | percent | The percentage of time spent waiting on the events of class 'User I/O'. |
oracle_wait_class_other | percent | The percentage of time spent waiting on the events of class 'Other'. |
oracle_wait_class_scheduler | percent | The percentage of time spent waiting on the events of class 'Scheduler'. |
oracle_wait_class_idle | percent | The percentage of time spent waiting on the events of class 'Idle'. |
oracle_wait_class_application | percent | The percentage of time spent waiting on the events of class 'Application'. |
oracle_wait_class_network | percent | The percentage of time spent waiting on the events of class 'Network'. |
oracle_wait_class_configuration | percent | The percentage of time spent waiting on the events of class 'Configuration'. |
oracle_wait_event_log_file_sync | percent | The percentage of time spent waiting on the 'log file sync' event. |
oracle_wait_event_log_file_parallel_write | percent | The percentage of time spent waiting on the 'log file parallel write' event. |
oracle_wait_event_log_file_sequential_read | percent | The percentage of time spent waiting on the 'log file sequential read' event. |
oracle_wait_event_enq_tx_contention | percent | The percentage of time spent waiting on the 'enq: TX - contention' event. |
oracle_wait_event_enq_tx_row_lock_contention | percent | The percentage of time spent waiting on the 'enq: TX - row lock contention' event. |
oracle_wait_event_latch_row_cache_objects | percent | The percentage of time spent waiting on the 'latch: row cache objects' event. |
oracle_wait_event_latch_shared_pool | percent | The percentage of time spent waiting on the 'latch: shared pool' event. |
oracle_wait_event_resmgr_cpu_quantum | percent | The percentage of time spent waiting on the 'resmgr:cpu quantum' event. |
oracle_wait_event_sql_net_message_from_client | percent | The percentage of time spent waiting on the 'SQL*Net message from client' event. |
oracle_wait_event_rdbms_ipc_message | percent | The percentage of time spent waiting on the 'rdbms ipc message' event. |
oracle_wait_event_db_file_sequential_read | percent | The percentage of time spent waiting on the 'db file sequential read' event. |
oracle_wait_event_log_file_switch_checkpoint_incomplete | percent | The percentage of time spent waiting on the 'log file switch (checkpoint incomplete)' event. |
oracle_wait_event_row_cache_lock | percent | The percentage of time spent waiting on the 'row cache lock' event. |
oracle_wait_event_buffer_busy_waits | percent | The percentage of time spent waiting on the 'buffer busy waits' event. |
oracle_wait_event_db_file_async_io_submit | percent | The percentage of time spent waiting on the 'db file async I/O submit' event. |
oracle_sessions_active_user | sessions | The number of active user sessions. |
oracle_sessions_inactive_user | sessions | The number of inactive user sessions. |
oracle_sessions_active_background | sessions | The number of active background sessions. |
oracle_sessions_inactive_background | sessions | The number of inactive background sessions. |
oracle_calls_execute_count | calls | Total number of calls (user and recursive) that executed SQL statements. |
oracle_tuned_undoretention | seconds | The amount of time for which undo will not be recycled from the time it was committed. |
oracle_max_query_length | seconds | The length of the longest query executed. |
oracle_transaction_count | transactions | The total number of transactions executed within the period. |
oracle_sso_errors | errors/s | The number of ORA-01555 (snapshot too old) errors raised per second. |
oracle_redo_log_space_requests | requests | The number of times a user process waits for space in the redo log file, usually caused by checkpointing or log switching. |
Parameter | Unit | Default value | Domain | Restart | Description |
---|---|---|---|---|---|
bitmap_merge_area_size | kilobytes | | | yes | The amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index. |
create_bitmap_area_size | megabytes | | | yes | Size of the create bitmap buffer for bitmap indexes. Relevant only for systems containing bitmap indexes. |
db_cache_size | megabytes | | | no | The size of the DEFAULT buffer pool for standard block size buffers. The value must be at least 4M * cpu number. |
hash_area_size | kilobytes | | | yes | The maximum amount of memory for the in-memory hash work area. |
java_pool_size | megabytes | | | no | The size of the Java pool. If SGA_TARGET is set, this value represents the minimum value for the memory pool. |
large_pool_size | megabytes | | | no | The size of the large pool allocation heap. |
memory_max_target | megabytes | | | yes | The maximum value to which a DBA can set the MEMORY_TARGET initialization parameter. |
memory_target | megabytes | | | no | Oracle systemwide usable memory. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed. |
olap_page_pool_size | bytes | | | no | Size of the OLAP page pool. |
pga_aggregate_limit | megabytes | | | no | The limit on the aggregate PGA memory consumed by the instance. |
pga_aggregate_target | megabytes | | | no | The target aggregate PGA memory available to all server processes attached to the instance. |
pre_page_sga | | | | yes | Read the entire SGA into memory at instance startup. |
result_cache_max_result | percent | | | no | Maximum result size as a percent of the cache size. |
result_cache_max_size | megabytes | | | no | The maximum amount of SGA memory that can be used by the Result Cache. |
result_cache_mode | | | | no | Specifies when a ResultCache operator is spliced into a query's execution plan. |
result_cache_remote_expiration | minutes | | | no | The expiration in minutes of remote objects. High values may cause stale answers. |
sga_max_size | megabytes | | | yes | The maximum size of the SGA for the lifetime of the instance. |
sga_min_size | megabytes | | | no | The guaranteed SGA size for a pluggable database (PDB). When SGA_MIN_SIZE is set for a PDB, it guarantees the specified SGA size for the PDB. |
sga_target | megabytes | | | no | The total size of all SGA components; acts as the minimum value for the size of the SGA. |
shared_pool_reserved_size | megabytes | | | yes | The shared pool space reserved for large contiguous requests for shared pool memory. |
shared_pool_size | megabytes | | | no | The size of the shared pool. |
sort_area_retained_size | kilobytes | | | no | The maximum amount of the User Global Area memory retained after a sort run completes. |
sort_area_size | kilobytes | | | no | The maximum amount of memory Oracle will use for a sort. If more space is required, temporary segments on disk are used. |
streams_pool_size | megabytes | | | no | Size of the streams pool. |
use_large_pages | | | | yes | Enable the use of large pages for SGA memory. |
workarea_size_policy | | | | no | Policy used to size SQL working areas (MANUAL/AUTO). |
commit_logging | | | | no | Controls how redo is batched by the Log Writer. |
log_archive_max_processes | processes | | | no | Maximum number of active ARCH processes. |
log_buffer | megabytes | | | yes | The amount of memory that Oracle uses when buffering redo entries to a redo log file. |
log_checkpoint_interval | blocks | | | no | The maximum number of log file blocks between incremental checkpoints. |
log_checkpoint_timeout | seconds | | | no | Maximum time interval between checkpoints. Guarantees that no buffer remains dirty for more than the specified time. |
db_flashback_retention_target | minutes | | | no | Maximum Flashback Database log retention time. |
undo_retention | seconds | | | no | Low threshold value of undo retention. |
optimizer_capture_sql_plan_baselines | | | | no | Automatic capture of SQL plan baselines for repeatable statements. |
optimizer_dynamic_sampling | | | | no | Controls both when the database gathers dynamic statistics, and the size of the sample that the optimizer uses to gather the statistics. |
optimizer_features_enable | | | | no | Enables a series of optimizer features based on an Oracle release number. |
optimizer_index_caching | | | | no | Adjusts the behavior of cost-based optimization to favor nested loops joins and IN-list iterators. |
optimizer_index_cost_adj | | | | no | Tunes optimizer behavior for access path selection to be more or less index friendly. |
optimizer_mode | | | | no | The default behavior for choosing an optimization approach for the instance. |
optimizer_secure_view_merging | | | | no | Enables security checks when the optimizer uses view merging. |
optimizer_use_invisible_indexes | | | | no | Enables or disables the use of invisible indexes. |
optimizer_use_pending_statistics | | | | no | Controls whether the optimizer uses pending statistics when compiling SQL statements. |
optimizer_use_sql_plan_baselines | | | | no | Enables the use of SQL plan baselines stored in SQL Management Base. |
parallel_degree_policy | | | | no | Policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO). |
parallel_execution_message_size | | | | yes | Message buffer size for parallel execution. |
parallel_force_local | | | | no | Forces single-instance execution. |
parallel_max_servers | processes | | | no | The maximum number of parallel execution processes and parallel recovery processes for an instance. |
parallel_min_servers | processes | | | no | The minimum number of execution processes kept alive to service parallel statements. |
parallel_min_percent | percent | | | yes | The minimum percentage of parallel execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution. |
circuits | circuits | | | no | The total number of virtual circuits that are available for inbound and outbound network sessions. |
cpu_count | cpus | | | no | Number of CPUs available for the Oracle instance to use. |
cursor_bind_capture_destination | | | | no | Allowed destination for captured bind variables. |
cursor_sharing | | | | no | Cursor sharing mode. |
cursor_space_for_time | | | | yes | Uses more memory in order to get faster execution. |
db_files | files | | | yes | The maximum number of database files that can be opened for this database. This may be subject to OS constraints. |
open_cursors | cursors | | | no | The maximum number of open cursors (handles to private SQL areas) a session can have at once. |
open_links | connections | | | yes | The maximum number of concurrent open connections to remote databases in one session. |
open_links_per_instance | connections | | | yes | Maximum number of migratable open connections globally for each database instance. |
processes | processes | | | yes | The maximum number of OS user processes that can simultaneously connect to Oracle. |
serial_reuse | | | | yes | Types of cursors that make use of the serial-reusable memory feature. |
session_cached_cursors | | | | no | Number of session cursors to cache. |
session_max_open_files | | | | yes | Maximum number of open files allowed per session. |
sessions | sessions | | | no | The maximum number of sessions that can be created in the system, effectively the maximum number of concurrent users in the system. |
transactions | transactions | | | yes | The maximum number of concurrent transactions. |
aq_tm_processes | | | | no | Number of AQ Time Managers to start. |
audit_sys_operations | | | | yes | Enables SYS auditing. |
audit_trail | | | | yes | Configures system auditing. |
client_result_cache_lag | milliseconds | | | yes | Maximum time before checking the database for changes related to the queries cached on the client. |
client_result_cache_size | kilobytes | | | yes | The maximum size of the client per-process result set cache. |
db_block_checking | | | | no | Header checking and data and index block checking. |
db_block_checksum | | | | no | Stores checksums in database blocks and checks them during reads. |
db_file_multiblock_read_count | | | | no | Number of database blocks to read in each I/O operation. |
db_keep_cache_size | megabytes | | | no | Size of the KEEP buffer pool for standard block size buffers. |
db_lost_write_protect | | | | no | Enables lost write detection. |
db_recovery_file_dest_size | megabytes | | | no | Database recovery files size limit. |
db_recycle_cache_size | megabytes | | | no | Size of the RECYCLE buffer pool for standard block size buffers. |
db_writer_processes | | | | yes | Number of background database writer processes to start. |
ddl_lock_timeout | | | | no | Timeout restricting how long DDL statements wait for a DML lock. |
deferred_segment_creation | | | | no | Defers segment creation to the first insert. |
distributed_lock_timeout | seconds | | | yes | Number of seconds a distributed transaction waits for a lock. |
dml_locks | | | | yes | The maximum number of DML locks, one for each table modified in a transaction. |
enable_goldengate_replication | | | | no | Enables GoldenGate replication. |
fast_start_parallel_rollback | | | | no | Maximum number of parallel recovery slaves that may be used. |
hs_autoregister | | | | no | Enables automatic server DD updates in HS agent self-registration. |
java_jit_enabled | | | | no | Enables the Just-in-Time (JIT) compiler for the Oracle Java Virtual Machine. |
java_max_sessionspace_size | bytes | | | yes | Maximum allowed size in bytes of a Java session space. |
java_soft_sessionspace_limit | bytes | | | yes | Warning limit on the size in bytes of a Java session space. |
job_queue_processes | | | | no | Maximum number of job queue slave processes. |
object_cache_max_size_percent | percent | | | no | Percentage of maximum size over optimal of the user session's object cache. |
object_cache_optimal_size | kilobytes | | | no | Optimal size of the user session's object cache. |
plscope_settings | | | | no | Controls the compile-time collection, cross-reference, and storage of PL/SQL source code identifier data. |
plsql_code_type | | | | no | PL/SQL code type. |
plsql_optimize_level | | | | no | PL/SQL optimize level. |
query_rewrite_enabled | | | | no | Allows rewrite of queries using materialized views if enabled. |
query_rewrite_integrity | | | | no | Performs rewrite using materialized views with the desired integrity. |
remote_dependencies_mode | | | | no | Remote-procedure-call dependencies mode parameter. |
replication_dependency_tracking | | | | yes | Tracking dependency for Replication parallel propagation. |
resource_limit | | | | no | Enforces resource limits in database profiles. |
resourcemanager_cpu_allocation | | | | no | ResourceManager CPU allocation. |
resumable_timeout | seconds | | | no | Enables resumable statements and specifies resumable timeout at the system level. |
sql_trace | | | | no | Enables SQL trace. |
star_transformation_enabled | | | | no | Enables the use of star transformation. |
timed_os_statistics | | | | no | The interval at which Oracle collects operating system statistics. |
timed_statistics | | | | no | Maintains internal timing statistics. |
trace_enabled | | | | no | Enables in-memory tracing. |
transactions_per_rollback_segment | | | | yes | Expected number of active transactions per rollback segment. |
(The tables listing parameter-specific default values and domains, and the recommended study constraints, survive only as fragments. Recoverable notes: one parameter's upper bound can't exceed half the size of another; cpu_count should match the available CPUs, with 0 letting the Oracle engine automatically determine the value, and must not exceed the available CPUs; one value must be at least equal to the default value. Constraints should be added when tuning automatic memory management, when tuning SGA and PGA, when tuning SGA areas, and when tuning the PGA.)