Ubuntu 20.04
This page describes the Optimization Pack for the component type Ubuntu 20.04.
Metrics
CPU
Metric | Unit | Description |
---|---|---|
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq)
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
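For reference, metrics like these are commonly derived from the /proc filesystem on Linux. The following is a minimal sketch (assuming a Linux host and an arbitrary 1-second sampling interval) of how cpu_num, cpu_util, and cpu_load_avg could be computed; it is an illustration, not the pack's actual collection mechanism.

```python
# Minimal sketch: deriving cpu_num, cpu_util, and cpu_load_avg from /proc.
# The 1-second sampling interval is an arbitrary choice for illustration.
import os
import time

def read_cpu_times():
    """Return aggregate (busy, total) jiffies from the first line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait
    total = sum(fields)
    return total - idle, total

busy1, total1 = read_cpu_times()
time.sleep(1)
busy2, total2 = read_cpu_times()

cpu_num = os.cpu_count()
cpu_util = 100.0 * (busy2 - busy1) / (total2 - total1)  # percent
with open("/proc/loadavg") as f:
    cpu_load_avg = float(f.read().split()[0])  # 1-minute load average

print(f"cpu_num={cpu_num} cpu_util={cpu_util:.1f}% cpu_load_avg={cpu_load_avg}")
```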
Memory
Metric | Unit | Description |
---|---|---|
mem_util | percent | The memory utilization % (i.e., the % of memory used)
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_total | bytes | The total amount of installed memory |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault | faults/s | The number of memory faults (major + minor) per second
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
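To illustrate the difference between mem_util and mem_util_nocache, the following sketch computes both from /proc/meminfo, subtracting the Buffers and Cached fields for the nocache variant as per the definitions above; it is not the pack's actual collection code.

```python
# Minimal sketch: mem_util vs. mem_util_nocache from /proc/meminfo.
# The "nocache" variant excludes memory reserved for caching (Buffers, Cached).
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":")
        meminfo[key] = int(value.split()[0])  # values are in kB

mem_total = meminfo["MemTotal"]
mem_used = mem_total - meminfo["MemFree"]
mem_used_nocache = mem_used - meminfo["Buffers"] - meminfo["Cached"]

print(f"mem_util={100.0 * mem_used / mem_total:.1f}%")
print(f"mem_util_nocache={100.0 * mem_used_nocache / mem_total:.1f}%")
```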
Network
Metric | Unit | Description |
---|---|---|
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
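As a sketch of the per-device breakdown used by network_in_bytes_details and network_out_bytes_details, the snippet below samples /proc/net/dev over a 1-second interval; the interval and output format are illustrative only.

```python
# Minimal sketch: per-device inbound/outbound byte rates from /proc/net/dev.
import time

def read_dev_bytes():
    rates = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            name, data = line.split(":")
            fields = data.split()
            rates[name.strip()] = (int(fields[0]), int(fields[8]))  # rx, tx bytes
    return rates

before = read_dev_bytes()
time.sleep(1)
after = read_dev_bytes()

for dev, (rx, tx) in after.items():
    rx0, tx0 = before.get(dev, (rx, tx))
    print(f"{dev}: in={rx - rx0} bytes/s out={tx - tx0} bytes/s")
```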
Disk
Notice: you can use a device custom filter to monitor a specific disk with Prometheus. You can find more information on Prometheus queries and the %FILTERS% placeholder on the Prometheus provider and Prometheus provider metrics mapping pages.
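As a purely hypothetical illustration of filtering on a single device, the sketch below queries a Prometheus HTTP API for the IO utilization of one disk. The endpoint URL, the node_exporter metric name (node_disk_io_time_seconds_total), and the device label value are assumptions for illustration; refer to the Prometheus provider pages for the queries and %FILTERS% usage actually supported.

```python
# Hypothetical sketch: querying Prometheus for the utilization of one disk only.
# URL, metric name, and device label are assumed values, not pack defaults.
import json
import urllib.parse
import urllib.request

PROMETHEUS = "http://localhost:9090"  # assumed endpoint
query = 'rate(node_disk_io_time_seconds_total{device="nvme0n1"}[5m]) * 100'

url = f"{PROMETHEUS}/api/v1/query?{urllib.parse.urlencode({'query': query})}"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

for sample in result["data"]["result"]:
    print(sample["metric"].get("device"), sample["value"][1], "% busy")
```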
Metric | Unit | Description |
---|---|---|
disk_swap_util | percent | The average space utilization % of swap disks |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_util_details | percent | The utilization % of each disk (i.e., how much time a disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01)
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_response_time_read | seconds | The average response time of IO read-disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of IO write-disk operations |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01)
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01)
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written to the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE)
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
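As a sketch of how per-disk byte rates like disk_read_bytes_details and disk_write_bytes_details can be derived, the snippet below samples /proc/diskstats, converting sector counts with the 512-byte sector unit that file conventionally uses; it is illustrative only.

```python
# Minimal sketch: per-disk read/write byte rates from /proc/diskstats.
import time

def read_diskstats():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # field 5 = sectors read, field 9 = sectors written (512-byte units)
            stats[fields[2]] = (int(fields[5]) * 512, int(fields[9]) * 512)
    return stats

before = read_diskstats()
time.sleep(1)
after = read_diskstats()

for disk, (rd, wr) in after.items():
    rd0, wr0 = before.get(disk, (rd, wr))
    if rd != rd0 or wr != wr0:
        print(f"{disk}: READ={rd - rd0} bytes/s WRITE={wr - wr0} bytes/s")
```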
Filesystem
Metric | Unit | Description |
---|---|---|
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
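For illustration, metrics of this kind can be computed per mount point with a statvfs call, as in the sketch below; the mount point is an arbitrary example.

```python
# Minimal sketch: filesystem_used, filesystem_size, and filesystem_util for
# a single mount point via statvfs.
import os

def filesystem_metrics(mount_point: str):
    st = os.statvfs(mount_point)
    size = st.f_blocks * st.f_frsize
    used = (st.f_blocks - st.f_bfree) * st.f_frsize
    return used, size, 100.0 * used / size

used, size, util = filesystem_metrics("/")
print(f"/: filesystem_used={used} filesystem_size={size} filesystem_util={util:.1f}%")
```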
Other metrics
Metric | Unit | Description |
---|---|---|
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons)
os_context_switch | switches/s | The number of context switches per second |
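Both of these values are exposed by /proc/stat: procs_blocked is an instantaneous gauge, while ctxt is a cumulative counter that must be sampled twice to obtain a per-second rate. A minimal sketch:

```python
# Minimal sketch: proc_blocked and os_context_switch from /proc/stat.
import time

def read_stat():
    values = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            values[fields[0]] = int(fields[1]) if len(fields) > 1 else 0
    return values

s1 = read_stat()
time.sleep(1)
s2 = read_stat()

print(f"proc_blocked={s2['procs_blocked']} processes")
print(f"os_context_switch={s2['ctxt'] - s1['ctxt']} switches/s")
```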
Parameters
CPU
Parameter | Default Value | Domain | Description |
---|---|---|---|
os_cpuSchedMinGranularity | 2250000 ns | 300000→30000000 ns | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | 3000000 ns | 400000→40000000 ns | Scheduler Wakeup Granularity (in nanoseconds) |
os_cpuSchedMigrationCost | 500000 ns | 100000→5000000 ns | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations
os_cpuSchedChildRunsFirst | 0 | 0→1 | Whether a freshly forked child runs before the parent continues execution
os_cpuSchedLatency | 18000000 ns | 2400000→240000000 ns | Targeted preemption latency (in nanoseconds) for CPU bound tasks
os_cpuSchedAutogroupEnabled | 1 | 0→1 | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads
os_cpuSchedNrMigrate | 32 | 3→320 | The number of tasks that the scheduler can migrate across CPUs in a single load-balancing pass
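These parameters correspond to kernel scheduler sysctls. The sketch below shows how such values could be applied by writing the conventional /proc/sys/kernel entries available on the Ubuntu 20.04 kernel (root privileges required); the values are the table defaults, used purely for illustration, and the path mapping is an assumption rather than the pack's own apply mechanism.

```python
# Minimal sketch: applying the CPU scheduler parameters via /proc/sys
# (requires root). Values shown are the table defaults.
SCHED_SYSCTLS = {
    "kernel/sched_min_granularity_ns": 2250000,    # os_cpuSchedMinGranularity
    "kernel/sched_wakeup_granularity_ns": 3000000, # os_cpuSchedWakeupGranularity
    "kernel/sched_migration_cost_ns": 500000,      # os_cpuSchedMigrationCost
    "kernel/sched_child_runs_first": 0,            # os_cpuSchedChildRunsFirst
    "kernel/sched_latency_ns": 18000000,           # os_cpuSchedLatency
    "kernel/sched_autogroup_enabled": 1,           # os_cpuSchedAutogroupEnabled
    "kernel/sched_nr_migrate": 32,                 # os_cpuSchedNrMigrate
}

for path, value in SCHED_SYSCTLS.items():
    with open(f"/proc/sys/{path}", "w") as f:
        f.write(str(value))
```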
Memory
Parameter | Default Value | Domain | Description |
---|---|---|---|
os_MemorySwappiness | 1 | 0→100 | How aggressively the kernel swaps memory pages out to disk: higher values increase swapping, lower values favor keeping pages in memory
os_MemoryVmVfsCachePressure | 100 % | 10→100 % | The tendency of the kernel to reclaim the memory used for caching directory and inode objects
os_MemoryVmMinFree | 67584 KB | 10240→1024000 KB | The minimum amount of memory (in KB) that the kernel keeps free
os_MemoryVmDirtyRatio | 20 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | 10 % | 1→99 % | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryTransparentHugepageEnabled | | | Transparent Hugepage Enablement
os_MemoryTransparentHugepageDefrag | | | Transparent Hugepage Defrag
os_MemorySwap | | | Memory Swap
os_MemoryVmDirtyExpire | 3000 centisecs | 300→30000 centisecs | The age (in centiseconds) after which dirty memory pages become eligible for writeback by the kernel flusher threads
os_MemoryVmDirtyWriteback | 500 centisecs | 50→5000 centisecs | The interval (in centiseconds) between periodic wakeups of the kernel writeback threads
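These parameters map to sysctls under /proc/sys/vm and to the transparent hugepage sysfs interface. The sketch below shows one way such values could be applied (root required); the values are the table defaults, and the "madvise" hugepage mode is only an example of the kernel's valid modes (always, madvise, never), not a pack default.

```python
# Minimal sketch: applying the memory parameters via /proc/sys/vm and the
# transparent hugepage sysfs files (requires root). Values are table defaults.
VM_SYSCTLS = {
    "swappiness": 1,                   # os_MemorySwappiness
    "vfs_cache_pressure": 100,         # os_MemoryVmVfsCachePressure
    "min_free_kbytes": 67584,          # os_MemoryVmMinFree
    "dirty_ratio": 20,                 # os_MemoryVmDirtyRatio
    "dirty_background_ratio": 10,      # os_MemoryVmDirtyBackgroundRatio
    "dirty_expire_centisecs": 3000,    # os_MemoryVmDirtyExpire
    "dirty_writeback_centisecs": 500,  # os_MemoryVmDirtyWriteback
}

for name, value in VM_SYSCTLS.items():
    with open(f"/proc/sys/vm/{name}", "w") as f:
        f.write(str(value))

# os_MemoryTransparentHugepageEnabled: valid kernel modes are
# always / madvise / never; "madvise" below is an example value only.
with open("/sys/kernel/mm/transparent_hugepage/enabled", "w") as f:
    f.write("madvise")
```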
Network
Parameter | Default value | Domain | Description |
---|---|---|---|
os_NetworkNetCoreSomaxconn | 128 connections | 12→1200 connections | The maximum number of connections that can be queued for acceptance on a listening socket
os_NetworkNetCoreNetdevMaxBacklog | 1000 packets | 100→10000 packets | The maximum number of packets queued on the input side when an interface receives packets faster than the kernel can process them
os_NetworkNetIpv4TcpMaxSynBacklog | 1024 packets | 52→15120 packets | The maximum number of half-open connections (SYN received, handshake not yet completed) that can be queued
os_NetworkNetCoreNetdevBudget | 300 packets | 30→3000 packets | The maximum number of packets processed in a single polling cycle across all interfaces
os_NetworkNetCoreRmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | 212992 bytes | 21299→2129920 bytes | Maximum network transmit buffer size that applications can request
os_NetworkNetIpv4TcpSlowStartAfterIdle | 1 | 0→1 | Whether TCP resets the congestion window of a connection after an idle period
os_NetworkNetIpv4TcpFinTimeout | 60 seconds | 6→600 seconds | The time (in seconds) a TCP connection stays in the FIN-WAIT-2 state before the kernel aborts it
os_NetworkRfs | 0 | 0→131072 | If enabled, increases the data cache hit rate by steering kernel packet processing to the CPU where the application thread that consumes the packet is running
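These parameters map to sysctls under /proc/sys/net. The sketch below applies the table defaults (root required); the mapping of os_NetworkRfs to the conventional RFS knob rps_sock_flow_entries is an assumption for illustration.

```python
# Minimal sketch: applying the network parameters via /proc/sys/net
# (requires root). Values are the table defaults, for illustration only.
NET_SYSCTLS = {
    "core/somaxconn": 128,                # os_NetworkNetCoreSomaxconn
    "core/netdev_max_backlog": 1000,      # os_NetworkNetCoreNetdevMaxBacklog
    "ipv4/tcp_max_syn_backlog": 1024,     # os_NetworkNetIpv4TcpMaxSynBacklog
    "core/netdev_budget": 300,            # os_NetworkNetCoreNetdevBudget
    "core/rmem_max": 212992,              # os_NetworkNetCoreRmemMax
    "core/wmem_max": 212992,              # os_NetworkNetCoreWmemMax
    "ipv4/tcp_slow_start_after_idle": 1,  # os_NetworkNetIpv4TcpSlowStartAfterIdle
    "ipv4/tcp_fin_timeout": 60,           # os_NetworkNetIpv4TcpFinTimeout
    "core/rps_sock_flow_entries": 0,      # os_NetworkRfs (assumed RFS mapping)
}

for name, value in NET_SYSCTLS.items():
    with open(f"/proc/sys/net/{name}", "w") as f:
        f.write(str(value))
```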
Storage
Parameter | Default value | Domain | Description |
---|---|---|---|
os_StorageReadAhead | 128 KB | 0→1024 KB | Read-ahead speeds up file access by pre-fetching data into the page cache, so that subsequent reads are served from memory instead of from disk
os_StorageNrRequests | 1000 requests | 100→10000 requests | The number of IO requests that can be queued in the block layer for the device
os_StorageRqAffinity | 1 | 1→2 | Controls where IO completions are processed: 1 completes requests on a CPU in the same group as the one that issued the IO, 2 forces completion on the exact issuing CPU
os_StorageQueueScheduler | | | Storage Queue Scheduler Type
os_StorageNomerges | 0 | 0→2 | Controls the lookup logic involved with IO request merging in the block layer: 0 (default) enables all merges, 1 tries only simple one-hit merges, 2 disables all merge algorithms
os_StorageMaxSectorsKb | 128 KB | 32→128 KB | The largest IO size (in KB) that the OS can issue to the block device
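These parameters map to the block layer's per-device queue attributes under sysfs. The sketch below applies the table defaults to one device (root required); the device name "nvme0n1" and the "mq-deadline" scheduler are example values only, not pack defaults.

```python
# Minimal sketch: applying the storage parameters through the block layer
# sysfs queue attributes (requires root). Device name is an arbitrary example.
DEVICE = "nvme0n1"
QUEUE_ATTRS = {
    "read_ahead_kb": 128,   # os_StorageReadAhead
    "nr_requests": 1000,    # os_StorageNrRequests
    "rq_affinity": 1,       # os_StorageRqAffinity
    "nomerges": 0,          # os_StorageNomerges
    "max_sectors_kb": 128,  # os_StorageMaxSectorsKb
}

for attr, value in QUEUE_ATTRS.items():
    with open(f"/sys/block/{DEVICE}/queue/{attr}", "w") as f:
        f.write(str(value))

# os_StorageQueueScheduler: on Ubuntu 20.04 multi-queue kernels the available
# schedulers typically include none, mq-deadline, kyber, and bfq.
with open(f"/sys/block/{DEVICE}/queue/scheduler", "w") as f:
    f.write("mq-deadline")  # example value only
```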