The Linux optimization pack helps you optimize Linux-based systems. It provides component types for various Linux distributions, enabling performance improvements across a wide range of configurations.
Through this optimization pack, Akamas tackles the performance of Linux-based systems both from the point of view of cost savings and from that of quality and level of service: the included component types bring in parameters that act on the memory footprint of systems, on their ability to sustain higher levels of traffic, on their capacity to leverage all the available resources, and on their potential for lower-latency transactions.
Each component type provides parameters that cover four main areas of tuning:
CPU task scheduling (for example, whether to auto-group and schedule similar tasks together)
Memory (for example, the memory-usage threshold at which the system starts swapping pages to disk)
Network (for example, the size of the buffers used to write/read network packets)
Storage (for example, the type of storage scheduler)
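As a concrete illustration, most of these tunables surface as kernel parameters under /proc/sys and /sys, so you can inspect the values the pack acts on directly; the paths below are standard Linux locations, though availability can vary by kernel:

```shell
# Read one representative tunable per area (read-only; no root needed).
cat /proc/sys/vm/swappiness                    # Memory: swapping aggressiveness
cat /proc/sys/net/core/somaxconn               # Network: max queued connections
cat /proc/sys/kernel/sched_autogroup_enabled \
  2>/dev/null || true                          # CPU scheduling (absent on some kernels)
cat /sys/block/*/queue/scheduler 2>/dev/null \
  || true                                      # Storage: active IO scheduler per disk
```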
Here’s the command to install the Linux optimization pack using the Akamas CLI:
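The invocation typically looks like the following; the exact pack name argument is an assumption here, so check the Install Optimization Packs page for the authoritative syntax:

```
# Hypothetical invocation: install the Linux optimization pack via the Akamas CLI
akamas install optimization-pack linux
```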
For more information on the process of installing or upgrading an optimization pack refer to Install Optimization Packs.
The optimization pack supports the following Linux distributions:

Amazon Linux AMI
Amazon Linux 2 AMI
Amazon Linux 2022 AMI
CentOS Linux distribution version 7.x
CentOS Linux distribution version 8.x
Red Hat Enterprise Linux distribution version 7.x
Red Hat Enterprise Linux distribution version 8.x
Ubuntu Linux distribution by Canonical version 16.04 (LTS)
Ubuntu Linux distribution by Canonical version 18.04 (LTS)
Ubuntu Linux distribution by Canonical version 20.04 (LTS)
This page describes the Optimization Pack for the component type CentOS 8.
Notice: you can use a device custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder here: Prometheus provider, and here: Prometheus provider metrics mapping.
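For example, a per-disk query could pin the device label next to the placeholder; the metric and label names below assume the Prometheus node exporter and are purely illustrative:

```
rate(node_disk_written_bytes_total{device="nvme0n1",%FILTERS%}[1m])
```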
There are no general constraints among the parameters of this component type.
Metric | Unit | Description |
---|---|---|
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of a disk (i.e., how much time the disk is busy doing work), broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 on device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
Parameter | Default Value | Domain | Description |
---|---|---|---|
os_cpuSchedMinGranularity | 2250000 ns | 300000 → 30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000 → 40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000 → 5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0 → 1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000 → 240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0 → 1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3 → 320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0 → 100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10 → 100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240 → 1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1 → 99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1 → 99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | never | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | always | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300 → 30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50 → 5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12 → 1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100 → 10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 512 packets | 52 → 15120 packets | Network IPV4 Max Syn Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30 → 3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299 → 2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299 → 2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0 → 1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 | 6 → 600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0 → 131072 | If enabled, increases the data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0 → 1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 32 | 12 → 1280 | Storage Number of Requests |
os_StorageRqAffinity | 1 | 1 → 2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, kyber, mq-deadline, bfq | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0 → 2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32 → 128 KB | The largest IO size that the OS can issue to a block device |
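The os_* names are Akamas-side labels for standard kernel tunables; the mapping sketched below is inferred from the parameter names (an assumption, not stated on this page):

```shell
# Inferred mapping (illustrative, not authoritative):
#   os_MemorySwappiness            -> vm.swappiness
#   os_NetworkNetCoreSomaxconn     -> net.core.somaxconn
#   os_NetworkNetIpv4TcpFinTimeout -> net.ipv4.tcp_fin_timeout
# Kernel tunables live under /proc/sys; read a current value (no root needed):
cat /proc/sys/net/ipv4/tcp_fin_timeout
# Applying a new value requires root, e.g.:
# echo 60 > /proc/sys/net/ipv4/tcp_fin_timeout
```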
Metric | Unit | Description |
---|---|---|
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_used | CPUs | The average number of CPUs used in the system (physical and logical) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
mem_fault | faults/s | The number of memory faults (minor + major) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
mem_total | bytes | The total amount of installed memory |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk (e.g., disk /dev/nvme01) |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_util_details | percent | The utilization % of a disk (i.e., how much time the disk is busy doing work), broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 on device /dev/nvme01) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
network_in_bytes_details | bytes/s | The inbound network traffic in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The outbound network traffic in bytes per second broken down by network device (e.g., eth01) |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
os_context_switch | switches/s | The number of context switches per second |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
os_cpuSchedMinGranularity | integer | nanoseconds | 1500000 | 300000 → 30000000 | no | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | integer | nanoseconds | 2000000 | 400000 → 40000000 | no | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | integer | nanoseconds | 500000 | 100000 → 5000000 | no | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | integer | | 0 | 0, 1 | no | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | integer | nanoseconds | 12000000 | 2400000 → 240000000 | no | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | integer | | 0 | 0, 1 | no | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | integer | | 32 | 3 → 320 | no | Scheduler NR Migrate |
os_MemorySwappiness | integer | percent | 60 | 0 → 100 | no | Controls how aggressively the kernel swaps memory pages to disk (higher values increase swapping) |
os_MemoryVmVfsCachePressure | integer | | 100 | 10 → 100 | no | VFS Cache Pressure |
os_MemoryVmCompactionProactiveness | integer | | 20 | 0 → 100 | no | Determines how aggressively compaction is done in the background |
os_MemoryVmPageLockUnfairness | integer | | 5 | 0 → 1000 | no | Sets the level of unfairness in the page lock queue |
os_MemoryVmWatermarkScaleFactor | integer | | 10 | 0 → 1000 | no | The amount of memory, expressed as fractions of 10,000, left in a node/system before kswapd is woken up, and how much memory needs to be free before kswapd goes back to sleep |
os_MemoryVmWatermarkBoostFactor | integer | | 15000 | 0 → 30000 | no | The level of reclaim when memory is being fragmented, expressed as fractions of 10,000 of a zone's high watermark |
os_MemoryVmMinFree | integer | kilobytes | 67584 | 10240 → 1024000 | no | Minimum Free Memory (in kbytes) |
os_MemoryTransparentHugepageEnabled | categorical | | madvise | always, never, madvise | no | Transparent Hugepage Enablement Flag |
os_MemoryTransparentHugepageDefrag | categorical | | madvise | always, never, defer, defer+madvise, madvise | no | Transparent Hugepage Defrag |
os_MemorySwap | categorical | | swapon | swapon, swapoff | no | Memory Swap |
os_MemoryVmDirtyRatio | integer | percent | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | integer | percent | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | Memory Dirty Writeback (in centisecs) |
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | integer | packets | 256 | 52 → 5120 | no | Network IPV4 Max Syn Backlog |
os_NetworkNetCoreNetdevBudget | integer | packets | 300 | 30 → 30000 | no | Network Budget |
os_NetworkNetCoreRmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | | 1 | 0, 1 | no | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | integer | seconds | 60 | 6 → 600 | no | Network TCP FIN timeout |
os_NetworkRfs | integer | | 0 | 0 → 131072 | no | If enabled, increases the data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | integer | | 32 | 12 → 1280 | no | Storage Number of Requests |
os_StorageRqAffinity | integer | | 1 | 1, 2 | no | Storage Requests Affinity |
os_StorageQueueScheduler | categorical | | none | none, kyber, mq-deadline, bfq | no | Storage Queue Scheduler Type |
os_StorageNomerges | integer | | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | integer | kilobytes | 256 | 32 → 256 | no | The largest IO size that the OS can issue to a block device |
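The storage parameters act on per-device queue attributes; assuming the conventional /sys/block layout, the current values can be listed like this:

```shell
# List the active IO scheduler and queue depth for each block device (read-only).
for q in /sys/block/*/queue; do
  [ -d "$q" ] || continue                      # skip if no block devices are visible
  dev=$(basename "$(dirname "$q")")
  sched=$(cat "$q/scheduler" 2>/dev/null || true)
  nr=$(cat "$q/nr_requests" 2>/dev/null || true)
  echo "$dev: scheduler=[$sched] nr_requests=[$nr]"
done
```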
cpu_load_avg
tasks
The system load average (i.e., the number of active tasks in the system)
cpu_num
CPUs
The number of CPUs available in the system (physical and logical)
cpu_util
percent
The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work)
cpu_used
CPUs
The average number of CPUs used in the system (physical and logical)
cpu_util_details
percent
The average CPU utilization % broken down by usage type and CPU number (e.g., cp1 user, cp2 system, cp3 soft-irq)
mem_fault
faults/s
The number of memory faults (minor+major)
mem_fault_major
faults/s
The number of major memory faults (i.e., faults that cause disk access) per second
mem_fault_minor
faults/s
The number of minor memory faults (i.e., faults that do not cause disk access) per second
mem_swapins
pages/s
The number of memory pages swapped in per second
mem_swapouts
pages/s
The number of memory pages swapped out per second
mem_total
bytes
The total amount of installed memory
mem_used
bytes
The total amount of memory used
mem_used_nocache
bytes
The total amount of memory used without considering memory reserved for caching purposes
mem_util
percent
The memory utilization % (i.e, the % of memory used)
mem_util_details
percent
The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory)
mem_util_nocache
percent
The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes
disk_io_inflight_details
ops
The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01)
disk_iops
ops/s
The average number of IO disk operations per second across all disks
disk_iops_details
ops/s
The number of IO disk-write operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_iops_reads
ops/s
The average number of IO disk-read operations per second across all disks
disk_iops_writes
ops/s
The average number of IO disk-write operations per second across all disks
disk_read_bytes
bytes/s
The number of bytes per second read across all disks
disk_read_bytes_details
bytes/s
The average response time of IO disk operations broken down by disk (e.g., disk C://)
disk_read_write_bytes
bytes/s
The number of bytes per second written across all disks
disk_response_time_details
seconds
The average response time of IO disk operations broken down by disk (e.g., disk C://)
disk_response_time_read
seconds
The average response time of read disk operations
disk_response_time_worst
seconds
The average response time of IO disk operations of the slowest disk
disk_response_time_write
seconds
The average response time of write on disk operations
disk_swap_used
bytes
The total amount of space used by swap disks
disk_swap_util
percent
The average space utilization % of swap disks
disk_util_details
percent
The utilization % of disk, i.e how much time a disk is busy doing work broken down by disk (e.g., disk D://)
disk_write_bytes
bytes/s
The number of bytes per second written across all disks
disk_write_bytes_details
bytes/s
The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
filesystem_size
bytes
The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01)
filesystem_used
bytes
The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01)
filesystem_util
percent
The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1)
network_in_bytes_details
bytes/s
The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0)
network_out_bytes_details
bytes/s
The number of outbound network packets in bytes per second broken down by network device (e.g., eth01)
network_tcp_retrans
retrans/s
The number of network TCP retransmissions per second
os_context_switch
switches/s
The number of context switches per second
proc_blocked
processes
The number of processes blocked (e.g, for IO or swapping reasons)
os_cpuSchedMinGranularity
integer
nanoseconds
1500000
300000 → 30000000
no
Minimal preemption granularity (in nanoseconds) for CPU bound tasks
os_cpuSchedWakeupGranularity
integer
nanoseconds
2000000
400000 → 40000000
no
Scheduler Wakeup Granularity (in nanoseconds)
os_CPUSchedMigrationCost
integer
nanoseconds
500000
100000 → 5000000
no
Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_CPUSchedChildRunsFirst
integer
0
0
, 1
no
A freshly forked child runs before the parent continues execution
os_CPUSchedLatency
integer
nanoseconds
12000000
2400000 → 240000000
no
Targeted preemption latency (in nanoseconds) for CPU bound tasks
os_CPUSchedAutogroupEnabled
integer
0
0
, 1
no
Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_CPUSchedNrMigrate
integer
32
3 → 320
no
Scheduler NR Migrate
os_MemorySwappiness
integer
percent
60
0 → 100
no
The percentage of RAM free space for which the kernel will start swapping pages to disk
os_MemoryVmVfsCachePressure
integer
100
10 → 100
no
VFS Cache Pressure
os_MemoryVmCompactionProactiveness
integer
20
10 → 100
no
Determines how aggressively compaction is done in the background
os_MemoryVmPageLockUnfairness
integer
5
0 → 1000
no
Set the level of unfairness in the page lock queue.
os_MemoryVmWatermarkScaleFactor
integer
| | | 10 | 0 → 1000 | no | The amount of memory, expressed as fractions of 10,000, left in a node/system before kswapd is woken up, and how much memory needs to be free before kswapd goes back to sleep |

Parameter | Type | Unit | Default | Domain | Restart | Description |
---|---|---|---|---|---|---|
os_MemoryVmWatermarkBoostFactor | integer | | 15000 | 0 → 30000 | no | The level of reclaim when memory is being fragmented, expressed as fractions of 10,000 of a zone's high watermark |
os_MemoryVmMinFree | integer | kilobytes | 67584 | 10240 → 1024000 | no | Minimum Free Memory |
os_MemoryTransparentHugepageEnabled | categorical | | madvise | always, never, madvise | no | Transparent Hugepage Enablement Flag |
os_MemoryTransparentHugepageDefrag | categorical | | madvise | always, never, defer+madvise, madvise, defer | no | Transparent Hugepage Enablement Defrag |
os_MemorySwap | categorical | | swapon | swapon, swapoff | no | Memory Swap |
os_MemoryVmDirtyRatio | integer | percent | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | integer | percent | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | The age after which dirty memory pages become eligible to be written out by the kernel flusher threads |
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | The interval between periodic wake-ups of the kernel flusher threads that write old dirty pages to disk |
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | integer | packets | 256 | 52 → 5120 | no | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | integer | packets | 300 | 30 → 30000 | no | Network Budget |
os_NetworkNetCoreRmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | | 1 | 0, 1 | no | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | integer | seconds | 60 | 6 → 600 | no | Network TCP FIN timeout |
os_NetworkRfs | integer | | 0 | 0 → 131072 | no | If enabled, increases the data-cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | integer | | 32 | 12 → 1280 | no | Storage Number of Requests |
os_StorageRqAffinity | integer | | 1 | 1, 2 | no | Storage Requests Affinity |
os_StorageQueueScheduler | categorical | | none | none, kyber, mq-deadline, bfq | no | Storage Queue Scheduler Type |
os_StorageNomerges | integer | | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | integer | kilobytes | 256 | 32 → 256 | no | The largest IO size that the OS can issue to a block device |
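Most of these component-type parameters correspond to standard kernel tunables exposed under `/proc/sys`. As a rough sketch (the name mapping below is an assumption made for this example, not part of the optimization pack definition), the file behind a given parameter can be resolved like this:

```python
# Illustrative mapping from a few component-type parameters to the standard
# kernel sysctl keys they tune. This mapping is an assumption for the sake
# of the example, not something defined by the optimization pack itself.
SYSCTL_KEYS = {
    "os_MemoryVmDirtyRatio": "vm.dirty_ratio",
    "os_MemoryVmDirtyBackgroundRatio": "vm.dirty_background_ratio",
    "os_MemoryVmDirtyExpire": "vm.dirty_expire_centisecs",
    "os_NetworkNetCoreSomaxconn": "net.core.somaxconn",
    "os_NetworkNetCoreRmemMax": "net.core.rmem_max",
}

def proc_path(param: str) -> str:
    """Return the /proc/sys path where the kernel exposes the tunable."""
    # sysctl keys use dots; the /proc/sys hierarchy uses slashes.
    return "/proc/sys/" + SYSCTL_KEYS[param].replace(".", "/")
```

For instance, `proc_path("os_MemoryVmDirtyRatio")` resolves to `/proc/sys/vm/dirty_ratio`, which can be read to inspect the currently applied value.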
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e, the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/sda) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk-write operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g, for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
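Several of the memory metrics above can be derived from `/proc/meminfo`. A minimal sketch of one common way to approximate mem_util_nocache (the telemetry provider feeding these metrics may compute it differently):

```python
def mem_util_nocache(meminfo_text: str) -> float:
    """Memory utilization % excluding buffers and page cache,
    computed from the text of /proc/meminfo (values are in kB)."""
    kb = {}
    for line in meminfo_text.splitlines():
        name, _, rest = line.partition(":")
        if rest.strip():
            kb[name.strip()] = int(rest.split()[0])
    # Memory genuinely in use: total minus free, buffers, and cache.
    used = kb["MemTotal"] - kb["MemFree"] - kb["Buffers"] - kb["Cached"]
    return 100.0 * used / kb["MemTotal"]
```

On a live system the input would be the contents of `/proc/meminfo`; mem_util is the same computation without subtracting `Buffers` and `Cached`.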
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | | always, never, defer+madvise, madvise, defer | Transparent Hugepage Enablement Defrag |
os_MemorySwap | | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled increases datacache hitrate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 32 | 12→1280 | Storage Number of Requests |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | | none, kyber, mq-deadline, bfq | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size that the OS can issue to a block device |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_used | CPUs | The average number of CPUs used in the system (physical and logical) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
mem_fault | faults/s | The number of memory faults (minor+major) |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
mem_total | bytes | The total amount of installed memory |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_util | percent | The memory utilization % (i.e, the % of memory used) |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_iops_details | ops/s | The number of IO disk-write operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk C://) |
disk_response_time_read | seconds | The average response time of read disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of write on disk operations |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_util_details | percent | The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/sda) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
os_context_switch | switches/s | The number of context switches per second |
proc_blocked | processes | The number of processes blocked (e.g, for IO or swapping reasons) |
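The `*_details` metrics are per-device breakdowns of the corresponding aggregate metrics. For example, disk_response_time_worst can be viewed as the maximum over the per-disk values reported by disk_response_time_details; a minimal sketch (the disk names are hypothetical):

```python
def disk_response_time_worst(details: dict) -> float:
    """Collapse a disk_response_time_details breakdown (disk -> avg seconds)
    into disk_response_time_worst: the average response time of the slowest disk."""
    return max(details.values())
```

So a breakdown like `{"/dev/nvme0n1": 0.002, "/dev/sda": 0.015}` yields a worst-case response time of 0.015 seconds.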
os_cpuSchedMinGranularity | integer | nanoseconds | 1500000 | 300000 → 30000000 | no | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | integer | nanoseconds | 2000000 | 400000 → 40000000 | no | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | integer | nanoseconds | 500000 | 100000 → 5000000 | no | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | integer | 0 | 0 → 1 | no | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | integer | nanoseconds | 12000000 | 2400000 → 240000000 | no | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | integer | 0 | 0 → 1 | no | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | integer | 32 | 3 → 320 | no | Scheduler NR Migrate |
os_MemorySwappiness | integer | percent | 60 | 0 → 100 | no | The percentage of RAM free space for which the kernel will start swapping pages to disk |
os_MemoryVmVfsCachePressure | integer | 100 | 10 → 100 | no | VFS Cache Pressure |
os_MemoryVmCompactionProactiveness | integer | | | | Determines how aggressively compaction is done in the background |
os_MemoryVmMinFree | integer | 67584 | 10240 → 1024000 | no | Minimum Free Memory (in kbytes) |
os_MemoryTransparentHugepageEnabled | categorical | | always, never, madvise | no | Transparent Hugepage Enablement Flag |
os_MemoryTransparentHugepageDefrag | categorical | | always, never, defer+madvise, madvise, defer | no | Transparent Hugepage Enablement Defrag |
os_MemorySwap | categorical | | swapon, swapoff | no | Memory Swap |
os_MemoryVmDirtyRatio | integer | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | integer | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | The age after which dirty memory pages become eligible to be written out by the kernel flusher threads |
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | Memory Dirty Writeback (in centisecs) |
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | integer | packets | 256 | 52 → 5120 | no | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | integer | 300 | 30 → 30000 | no | Network Budget |
os_NetworkNetCoreRmemMax | integer | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | integer | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | 1 | 0 → 1 | no | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | integer | seconds | 60 | 6 → 600 | no | Network TCP FIN timeout |
os_NetworkRfs | integer | 0 | 0 → 131072 | no | If enabled increases datacache hitrate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | integer | 32 | 12 → 1280 | no | Storage Number of Requests |
os_StorageRqAffinity | integer | 1 | 1 → 2 | no | Storage Requests Affinity |
os_StorageNomerges | integer | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | integer | kilobytes | 256 | 32 → 256 | no | The largest IO size that the OS can issue to a block device |
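Akamas applies these parameters automatically during an optimization study. Outside that workflow, kernel settings such as vm.dirty_ratio are typically persisted in `/etc/sysctl.conf`. A hypothetical helper (not part of Akamas) that renders a set of tuned values in sysctl.conf syntax:

```python
def render_sysctl_conf(settings: dict) -> str:
    """Render key/value kernel settings in /etc/sysctl.conf syntax,
    one 'key = value' line per setting, sorted for stable output."""
    return "\n".join(f"{key} = {value}" for key, value in sorted(settings.items()))
```

For example, `render_sysctl_conf({"vm.dirty_ratio": 20, "net.core.somaxconn": 128})` produces two lines that could be appended to a sysctl configuration file and applied with `sysctl -p`.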
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e, the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/sda) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk-write operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g, for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | | always, never, defer+madvise, madvise, defer | Transparent Hugepage Enablement Defrag |
os_MemorySwap | | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled increases datacache hitrate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 32 | 12→1280 | Storage Number of Requests |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | | none, kyber, mq-deadline, bfq | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size that the OS can issue to a block device |
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e, the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/sda) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk-write operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g, for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 30 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 30 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | madvise | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | madvise | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 512 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | The maximum number of IO requests that can be queued at the block device |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, mq-deadline | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 256 KB | 32→256 KB | The largest IO size (in KB) that the OS can issue to a block device |
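Many of the memory and network parameters above correspond, by name, to standard Linux kernel tunables. The following `/etc/sysctl.conf` fragment is a sketch of that assumed mapping, using the default values from the table (the key names are an assumption based on common kernel names, not something stated by the optimization pack):

```ini
# Assumed sysctl mapping for some parameters above (illustrative, not pack output).
vm.swappiness = 30                      # os_MemorySwappiness
vm.vfs_cache_pressure = 100             # os_MemoryVmVfsCachePressure
vm.min_free_kbytes = 67584              # os_MemoryVmMinFree
vm.dirty_ratio = 30                     # os_MemoryVmDirtyRatio
vm.dirty_background_ratio = 10          # os_MemoryVmDirtyBackgroundRatio
vm.dirty_expire_centisecs = 3000        # os_MemoryVmDirtyExpire
vm.dirty_writeback_centisecs = 500      # os_MemoryVmDirtyWriteback
net.core.somaxconn = 128                # os_NetworkNetCoreSomaxconn
net.core.netdev_max_backlog = 1000      # os_NetworkNetCoreNetdevMaxBacklog
net.ipv4.tcp_max_syn_backlog = 512      # os_NetworkNetIpv4TcpMaxSynBacklog
net.core.rmem_max = 212992              # os_NetworkNetCoreRmemMax
net.core.wmem_max = 212992              # os_NetworkNetCoreWmemMax
net.ipv4.tcp_slow_start_after_idle = 1  # os_NetworkNetIpv4TcpSlowStartAfterIdle
net.ipv4.tcp_fin_timeout = 60           # os_NetworkNetIpv4TcpFinTimeout
```

After editing the file, the values can be applied at runtime with `sysctl -p`.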
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of each disk, i.e., how much time a disk is busy doing work, broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | madvise | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | madvise | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | The maximum number of IO requests that can be queued at the block device |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, mq-deadline | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size (in KB) that the OS can issue to a block device |
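The storage parameters in this table appear to correspond to per-device queue attributes exposed under `/sys/block/<device>/queue/` (an assumption based on the standard kernel attribute names `scheduler`, `read_ahead_kb`, `nr_requests`, `rq_affinity`, `nomerges`, and `max_sectors_kb`). The `scheduler` file lists all available schedulers and brackets the active one; a minimal sketch of extracting the active value from that format:

```shell
# /sys/block/<dev>/queue/scheduler prints all schedulers and brackets the active
# one, e.g. "[none] mq-deadline". The sample string below is illustrative; on a
# real host it would come from: cat /sys/block/<dev>/queue/scheduler
line="[none] mq-deadline"
active=$(printf '%s\n' "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$active"   # none
```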
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of each disk, i.e., how much time a disk is busy doing work, broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | madvise | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | madvise | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | The maximum number of IO requests that can be queued at the block device |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, mq-deadline | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size (in KB) that the OS can issue to a block device |
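The two dirty-memory ratios above work together: once dirty pages exceed `os_MemoryVmDirtyBackgroundRatio` the kernel starts asynchronous writeback, and once they exceed `os_MemoryVmDirtyRatio` writing processes are forced to flush synchronously. A minimal sketch of the absolute thresholds these percentages imply (the 16 GiB total is just an example figure, not something from the pack):

```shell
# Illustrative only: absolute thresholds implied by os_MemoryVmDirtyRatio=20 and
# os_MemoryVmDirtyBackgroundRatio=10 on a hypothetical host with 16 GiB of RAM.
total_kb=$((16 * 1024 * 1024))          # total memory in KB
background_kb=$((total_kb * 10 / 100))  # kernel starts async writeback here
foreground_kb=$((total_kb * 20 / 100))  # writers are forced to flush here
echo "async writeback above ${background_kb} KB, forced writeback above ${foreground_kb} KB"
```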
This page describes the Optimization Pack for the component type Ubuntu 20.04.
Notice: you can use a device custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder here: Prometheus provider and here: Prometheus provider metrics mapping.

Metric | Unit | Description |
---|---|---|
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and cpu number (e.g., cp1 user, cp2 system, cp3 soft-irq) |
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of each disk, i.e., how much time a disk is busy doing work, broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
os_context_switch | switches/s | The number of context switches per second |
Parameter | Default value | Domain | Description |
---|---|---|---|
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | 0 | 0→1 | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | 32 | 3→320 | Scheduler NR Migrate |
os_MemorySwappiness | 1 | 0→100 | Memory Swappiness |
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | VFS Cache Pressure |
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | Minimum Free Memory |
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | madvise | always, never, madvise | Transparent Hugepage Enablement |
os_MemoryTransparentHugepageDefrag | madvise | always, never, madvise, defer, defer+madvise | Transparent Hugepage Defrag |
os_MemorySwap | swapon | swapon, swapoff | Memory Swap |
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | Memory Dirty Expiration Time |
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | Memory Dirty Writeback |
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | Network IPv4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | Network Budget |
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | Network TCP FIN timeout |
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | 1000 requests | 100→10000 requests | The maximum number of IO requests that can be queued at the block device |
os_StorageRqAffinity | 1 | 1→2 | Storage Requests Affinity |
os_StorageQueueScheduler | none | none, mq-deadline | Storage Queue Scheduler Type |
os_StorageNomerges | 0 | 0→2 | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size (in KB) that the OS can issue to a block device |
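The two granularity parameters interact: `os_CPUSchedLatency` is the targeted scheduling period, but with many runnable tasks the period stretches so that each task still receives at least `os_cpuSchedMinGranularity`. A hedged sketch of that rule (modeled on the CFS period calculation; the function below is illustrative, not part of the pack):

```shell
# Sketch of the CFS period rule implied by os_CPUSchedLatency and
# os_cpuSchedMinGranularity: beyond latency/min_granularity runnable tasks,
# the period grows so each task still gets min_granularity.
sched_latency=18000000      # ns, default os_CPUSchedLatency
min_granularity=2250000     # ns, default os_cpuSchedMinGranularity
sched_period() {
  nr_running=$1
  nr_latency=$((sched_latency / min_granularity))   # 8 with these defaults
  if [ "$nr_running" -gt "$nr_latency" ]; then
    echo $((nr_running * min_granularity))
  else
    echo "$sched_latency"
  fi
}
sched_period 4    # 18000000: the latency target holds
sched_period 16   # 36000000: stretched to 16 * min_granularity
```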