Amazon Linux 2022
This page describes the Optimization Pack for the component type Amazon Linux 2022.
Metrics
CPU
Metric | Unit | Description
---|---|---|
cpu_load_avg | tasks | The system load average (i.e., the number of active tasks in the system) |
cpu_num | CPUs | The number of CPUs available in the system (physical and logical) |
cpu_util | percent | The average CPU utilization % across all the CPUs (i.e., how much time on average the CPUs are busy doing work) |
cpu_used | CPUs | The average number of CPUs used in the system (physical and logical) |
cpu_util_details | percent | The average CPU utilization % broken down by usage type and CPU number (e.g., cpu1 user, cpu2 system, cpu3 soft-irq) |
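As a point of reference, the sketch below shows how CPU metrics of this kind can be derived from the standard /proc interfaces on a Linux host. It is illustrative only: the telemetry provider used with this optimization pack may collect them differently, and the one-second sampling interval is an assumption.

```python
# Minimal sketch: approximate cpu_load_avg, cpu_num, cpu_util and cpu_used from /proc.
# Illustrative only; not the pack's actual collection mechanism.
import os
import time

def read_cpu_times():
    # First line of /proc/stat holds the aggregate "cpu" counters
    # (user, nice, system, idle, iowait, irq, softirq, ...)
    with open("/proc/stat") as f:
        values = list(map(int, f.readline().split()[1:]))
    idle = values[3] + values[4]          # idle + iowait
    return idle, sum(values)

def sample_cpu_metrics(interval=1.0):
    with open("/proc/loadavg") as f:
        cpu_load_avg = float(f.read().split()[0])   # 1-minute load average (active tasks)
    cpu_num = os.cpu_count()                         # physical + logical CPUs visible to the OS

    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    cpu_util = 100.0 * busy / (total2 - total1)      # average busy % across all CPUs
    cpu_used = cpu_util / 100.0 * cpu_num            # average number of CPUs in use

    return {"cpu_load_avg": cpu_load_avg, "cpu_num": cpu_num,
            "cpu_util": cpu_util, "cpu_used": cpu_used}

if __name__ == "__main__":
    print(sample_cpu_metrics())
```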
Memory
Metric | Unit | Description
---|---|---|
mem_fault | faults/s | The number of memory faults (minor+major) |
mem_fault_major | faults/s | The number of major memory faults (i.e., faults that cause disk access) per second |
mem_fault_minor | faults/s | The number of minor memory faults (i.e., faults that do not cause disk access) per second |
mem_swapins | pages/s | The number of memory pages swapped in per second |
mem_swapouts | pages/s | The number of memory pages swapped out per second |
mem_total | bytes | The total amount of installed memory |
mem_used | bytes | The total amount of memory used |
mem_used_nocache | bytes | The total amount of memory used without considering memory reserved for caching purposes |
mem_util | percent | The memory utilization % (i.e., the % of memory used) |
mem_util_details | percent | The memory utilization % (i.e., the % of memory used) broken down by usage type (e.g., active memory) |
mem_util_nocache | percent | The memory utilization % (i.e., the % of memory used) without considering memory reserved for caching purposes |
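For illustration, the following sketch derives comparable memory metrics from /proc/meminfo and /proc/vmstat. The counter names are standard kernel ones, but the calculation details (e.g., what is subtracted as cache) are assumptions, not necessarily how the pack's telemetry provider computes them.

```python
# Minimal sketch: memory usage from /proc/meminfo, fault and swap rates from /proc/vmstat.
# Illustrative only.
import time

def read_meminfo(keys=("MemTotal", "MemFree", "Buffers", "Cached")):
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            name = line.split(":")[0]
            if name in keys:
                info[name] = int(line.split()[1]) * 1024   # these entries are reported in kB
    return info

def read_vmstat(keys=("pgfault", "pgmajfault", "pswpin", "pswpout")):
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            name, value = line.split()
            if name in keys:
                counters[name] = int(value)
    return counters

def memory_metrics(interval=1.0):
    mem = read_meminfo()
    v1 = read_vmstat()
    time.sleep(interval)
    v2 = read_vmstat()

    mem_total = mem["MemTotal"]
    mem_used = mem_total - mem["MemFree"]
    mem_used_nocache = mem_used - mem["Buffers"] - mem["Cached"]   # assumption: cache = Buffers + Cached
    rate = lambda k: (v2[k] - v1[k]) / interval

    return {
        "mem_total": mem_total,
        "mem_used": mem_used,
        "mem_used_nocache": mem_used_nocache,
        "mem_util": 100.0 * mem_used / mem_total,
        "mem_util_nocache": 100.0 * mem_used_nocache / mem_total,
        "mem_fault": rate("pgfault"),
        "mem_fault_major": rate("pgmajfault"),
        "mem_swapins": rate("pswpin"),
        "mem_swapouts": rate("pswpout"),
    }

if __name__ == "__main__":
    print(memory_metrics())
```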
Disk & Filesystem
Metric | Unit | Description
---|---|---|
disk_io_inflight_details | ops | The number of IO disk operations in progress (outstanding) broken down by disk (e.g., disk /dev/nvme01) |
disk_iops | ops/s | The average number of IO disk operations per second across all disks |
disk_iops_details | ops/s | The number of IO disk operations per second broken down by disk (e.g., disk /dev/nvme01) |
disk_iops_reads | ops/s | The average number of IO disk-read operations per second across all disks |
disk_iops_writes | ops/s | The average number of IO disk-write operations per second across all disks |
disk_read_bytes | bytes/s | The number of bytes per second read across all disks |
disk_read_bytes_details | bytes/s | The number of bytes per second read from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation READ) |
disk_read_write_bytes | bytes/s | The number of bytes per second read and written across all disks |
disk_response_time_details | seconds | The average response time of IO disk operations broken down by disk (e.g., disk /dev/nvme01) |
disk_response_time_read | seconds | The average response time of read disk operations |
disk_response_time_worst | seconds | The average response time of IO disk operations of the slowest disk |
disk_response_time_write | seconds | The average response time of write disk operations |
disk_swap_used | bytes | The total amount of space used by swap disks |
disk_swap_util | percent | The average space utilization % of swap disks |
disk_util_details | percent | The utilization % (i.e., how much time the disk is busy doing work) broken down by disk (e.g., disk /dev/nvme01) |
disk_write_bytes | bytes/s | The number of bytes per second written across all disks |
disk_write_bytes_details | bytes/s | The number of bytes per second written from the disks broken down by disk and type of operation (e.g., disk /dev/nvme01 and operation WRITE) |
filesystem_size | bytes | The size of filesystems broken down by type and device (e.g., filesystem of type ext4 for device /dev/nvme01) |
filesystem_used | bytes | The amount of space used on the filesystems broken down by type and device (e.g., filesystem of type zfs on device /dev/nvme01) |
filesystem_util | percent | The space utilization % of filesystems broken down by type and device (e.g., filesystem of type overlayfs on device /dev/loop1) |
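The sketch below shows how per-disk IO rates and filesystem usage of this kind can be read from /proc/diskstats and statvfs. The device name nvme0n1 and the root mount point are illustrative assumptions, and the field layout follows the kernel's iostats documentation; this is not the pack's actual collection mechanism.

```python
# Minimal sketch: per-disk IOPS and throughput from /proc/diskstats, filesystem usage via statvfs.
import os
import time

def read_diskstats():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            dev = fields[2]
            reads, read_sectors = int(fields[3]), int(fields[5])
            writes, write_sectors = int(fields[7]), int(fields[9])
            stats[dev] = (reads, writes, read_sectors, write_sectors)
    return stats

def disk_metrics(device="nvme0n1", interval=1.0):
    s1 = read_diskstats()[device]
    time.sleep(interval)
    s2 = read_diskstats()[device]
    delta = [(b - a) / interval for a, b in zip(s1, s2)]
    return {
        "disk_iops_reads": delta[0],
        "disk_iops_writes": delta[1],
        "disk_read_bytes": delta[2] * 512,    # diskstats sector counts are in 512-byte units
        "disk_write_bytes": delta[3] * 512,
    }

def filesystem_metrics(mount_point="/"):
    st = os.statvfs(mount_point)
    size = st.f_blocks * st.f_frsize
    used = size - st.f_bfree * st.f_frsize
    return {
        "filesystem_size": size,
        "filesystem_used": used,
        "filesystem_util": 100.0 * used / size,
    }

if __name__ == "__main__":
    print(disk_metrics())
    print(filesystem_metrics())
```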
Network
Metric | Unit | Description
---|---|---|
network_in_bytes_details | bytes/s | The number of inbound network packets in bytes per second broken down by network device (e.g., wlp4s0) |
network_out_bytes_details | bytes/s | The number of outbound network packets in bytes per second broken down by network device (e.g., eth01) |
network_tcp_retrans | retrans/s | The number of network TCP retransmissions per second |
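As an illustration, the sketch below derives per-interface byte rates and TCP retransmissions per second from /proc/net/dev and /proc/net/snmp. The interface name eth0 is an assumption; the pack's telemetry provider may source these counters differently.

```python
# Minimal sketch: inbound/outbound bytes per second for one interface and TCP retransmissions/s.
import time

def read_net_dev(iface):
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":")[1].split()
                return int(fields[0]), int(fields[8])   # rx_bytes, tx_bytes
    raise ValueError(f"interface {iface} not found")

def read_tcp_retrans():
    with open("/proc/net/snmp") as f:
        lines = [l.split() for l in f if l.startswith("Tcp:")]
    header, values = lines[0], lines[1]
    return int(values[header.index("RetransSegs")])

def network_metrics(iface="eth0", interval=1.0):
    rx1, tx1 = read_net_dev(iface)
    r1 = read_tcp_retrans()
    time.sleep(interval)
    rx2, tx2 = read_net_dev(iface)
    r2 = read_tcp_retrans()
    return {
        "network_in_bytes_details": (rx2 - rx1) / interval,
        "network_out_bytes_details": (tx2 - tx1) / interval,
        "network_tcp_retrans": (r2 - r1) / interval,
    }

if __name__ == "__main__":
    print(network_metrics())
```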
Others
Metric | Unit | Description
---|---|---|
os_context_switch | switches/s | The number of context switches per second |
proc_blocked | processes | The number of processes blocked (e.g., for IO or swapping reasons) |
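Both of these metrics can be read from /proc/stat, as in the short sketch below (illustrative only).

```python
# Minimal sketch: context switches per second and currently blocked processes from /proc/stat.
import time

def read_proc_stat():
    ctxt, blocked = 0, 0
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt"):
                ctxt = int(line.split()[1])
            elif line.startswith("procs_blocked"):
                blocked = int(line.split()[1])
    return ctxt, blocked

def other_metrics(interval=1.0):
    c1, _ = read_proc_stat()
    time.sleep(interval)
    c2, blocked = read_proc_stat()
    return {"os_context_switch": (c2 - c1) / interval, "proc_blocked": blocked}

if __name__ == "__main__":
    print(other_metrics())
```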
Parameters
CPU
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
os_cpuSchedMinGranularity | integer | nanoseconds | 1500000 | 300000 → 30000000 | no | Minimal preemption granularity (in nanoseconds) for CPU bound tasks |
os_cpuSchedWakeupGranularity | integer | nanoseconds | 2000000 | 400000 → 40000000 | no | Scheduler Wakeup Granularity (in nanoseconds) |
os_CPUSchedMigrationCost | integer | nanoseconds | 500000 | 100000 → 5000000 | no | Amount of time (in nanoseconds) after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated to another CPU, so increasing this variable reduces task migrations |
os_CPUSchedChildRunsFirst | integer | | 0 | 0 → 1 | no | A freshly forked child runs before the parent continues execution |
os_CPUSchedLatency | integer | nanoseconds | 12000000 | 2400000 → 240000000 | no | Targeted preemption latency (in nanoseconds) for CPU bound tasks |
os_CPUSchedAutogroupEnabled | integer | | 0 | 0 → 1 | no | Enables the Linux task auto-grouping feature, where the kernel assigns related tasks to groups and schedules them together on CPUs to achieve higher performance for some workloads |
os_CPUSchedNrMigrate | integer | | 32 | 3 → 320 | no | Scheduler NR Migrate |
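These parameters tune the Linux CFS scheduler. The sketch below shows one plausible way to apply them via sysctl, assuming the kernel.sched_* mapping listed in the code; on recent kernels several of these knobs have moved under /sys/kernel/debug/sched/ instead. This is an illustration, not necessarily the pack's actual apply mechanism.

```python
# Minimal sketch: apply CPU scheduler parameters via sysctl (requires root).
# The parameter-to-sysctl mapping below is an assumption for illustration.
import subprocess

SCHED_SYSCTLS = {
    "os_cpuSchedMinGranularity": "kernel.sched_min_granularity_ns",
    "os_cpuSchedWakeupGranularity": "kernel.sched_wakeup_granularity_ns",
    "os_CPUSchedMigrationCost": "kernel.sched_migration_cost_ns",
    "os_CPUSchedChildRunsFirst": "kernel.sched_child_runs_first",
    "os_CPUSchedLatency": "kernel.sched_latency_ns",
    "os_CPUSchedAutogroupEnabled": "kernel.sched_autogroup_enabled",
    "os_CPUSchedNrMigrate": "kernel.sched_nr_migrate",
}

def apply_cpu_parameters(config: dict):
    # config maps pack parameter names to the values chosen for the experiment
    for name, value in config.items():
        subprocess.run(["sysctl", "-w", f"{SCHED_SYSCTLS[name]}={value}"], check=True)

if __name__ == "__main__":
    apply_cpu_parameters({"os_cpuSchedMinGranularity": 1500000,
                          "os_CPUSchedLatency": 12000000})
```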
Memory
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
os_MemorySwappiness | integer | percent | 60 | 0 → 100 | no | Defines how aggressively the kernel swaps memory pages to disk rather than reclaiming page cache (0 avoids swapping, 100 swaps aggressively) |
os_MemoryVmVfsCachePressure | integer | | 100 | 10 → 100 | no | VFS Cache Pressure |
os_MemoryVmCompactionProactiveness | integer | | 20 | 10 → 100 | no | Determines how aggressively compaction is done in the background |
os_MemoryVmPageLockUnfairness | integer | | 5 | 0 → 1000 | no | Sets the level of unfairness in the page lock queue |
os_MemoryVmWatermarkScaleFactor | integer | | 10 | 0 → 1000 | no | The amount of memory, expressed as fractions of 10,000, left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep |
os_MemoryVmWatermarkBoostFactor | integer | | 15000 | 0 → 30000 | no | The level of reclaim when memory is being fragmented, expressed as fractions of 10,000 of a zone's high watermark |
os_MemoryVmMinFree | integer | kilobytes | 67584 | 10240 → 1024000 | no | Minimum Free Memory (in kbytes) |
os_MemoryTransparentHugepageEnabled | categorical | | | | no | Transparent Hugepage Enablement Flag |
os_MemoryTransparentHugepageDefrag | categorical | | | | no | Transparent Hugepage Defrag Flag |
os_MemorySwap | categorical | | | | no | Memory Swap |
os_MemoryVmDirtyRatio | integer | percent | 20 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, processes are forced to write dirty buffers during their time slice instead of continuing to write |
os_MemoryVmDirtyBackgroundRatio | integer | percent | 10 | 1 → 99 | no | When the dirty memory pages exceed this percentage of the total memory, the kernel begins to write them asynchronously in the background |
os_MemoryVmDirtyExpire | integer | centiseconds | 3000 | 300 → 30000 | no | The age (in centiseconds) after which dirty memory pages are old enough to be written out by the kernel flusher threads |
os_MemoryVmDirtyWriteback | integer | centiseconds | 500 | 50 → 5000 | no | Memory Dirty Writeback (in centisecs) |
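Most of these parameters map naturally onto the vm.* sysctls and the transparent-hugepage files under /sys/kernel/mm/. The sketch below applies them under that assumed mapping (os_MemorySwap is omitted because its target is not shown on this page); it is an illustration, not the pack's actual apply logic.

```python
# Minimal sketch: apply memory parameters via sysctl and sysfs (requires root).
# The parameter-to-sysctl mapping and file paths below are assumptions for illustration.
import subprocess

VM_SYSCTLS = {
    "os_MemorySwappiness": "vm.swappiness",
    "os_MemoryVmVfsCachePressure": "vm.vfs_cache_pressure",
    "os_MemoryVmCompactionProactiveness": "vm.compaction_proactiveness",
    "os_MemoryVmPageLockUnfairness": "vm.page_lock_unfairness",
    "os_MemoryVmWatermarkScaleFactor": "vm.watermark_scale_factor",
    "os_MemoryVmWatermarkBoostFactor": "vm.watermark_boost_factor",
    "os_MemoryVmMinFree": "vm.min_free_kbytes",
    "os_MemoryVmDirtyRatio": "vm.dirty_ratio",
    "os_MemoryVmDirtyBackgroundRatio": "vm.dirty_background_ratio",
    "os_MemoryVmDirtyExpire": "vm.dirty_expire_centisecs",
    "os_MemoryVmDirtyWriteback": "vm.dirty_writeback_centisecs",
}

THP_FILES = {
    "os_MemoryTransparentHugepageEnabled": "/sys/kernel/mm/transparent_hugepage/enabled",
    "os_MemoryTransparentHugepageDefrag": "/sys/kernel/mm/transparent_hugepage/defrag",
}

def apply_memory_parameters(config: dict):
    for name, value in config.items():
        if name in VM_SYSCTLS:
            subprocess.run(["sysctl", "-w", f"{VM_SYSCTLS[name]}={value}"], check=True)
        elif name in THP_FILES:
            with open(THP_FILES[name], "w") as f:
                f.write(str(value))

if __name__ == "__main__":
    apply_memory_parameters({"os_MemorySwappiness": 60,
                             "os_MemoryTransparentHugepageEnabled": "madvise"})
```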
Network
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
os_NetworkNetCoreSomaxconn | integer | connections | 128 | 12 → 8192 | no | Network Max Connections |
os_NetworkNetCoreNetdevMaxBacklog | integer | packets | 1000 | 100 → 10000 | no | Network Max Backlog |
os_NetworkNetIpv4TcpMaxSynBacklog | integer | connections | 256 | 52 → 5120 | no | Network IPV4 Max SYN Backlog |
os_NetworkNetCoreNetdevBudget | integer | packets | 300 | 30 → 30000 | no | Network Budget |
os_NetworkNetCoreRmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network receive buffer size that applications can request |
os_NetworkNetCoreWmemMax | integer | bytes | 212992 | 21299 → 2129920 | no | Maximum network transmit buffer size that applications can request |
os_NetworkNetIpv4TcpSlowStartAfterIdle | integer | | 1 | 0 → 1 | no | Network Slow Start After Idle Flag |
os_NetworkNetIpv4TcpFinTimeout | integer | seconds | 60 | 6 → 600 | no | Network TCP FIN timeout (in seconds) |
os_NetworkRfs | integer | | 0 | 0 → 131072 | no | If enabled, increases the data cache hit rate by steering kernel processing of packets to the CPU where the application thread consuming the packet is running |
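The sketch below applies the network parameters assuming the net.core.* / net.ipv4.* sysctl mapping listed in the code; mapping os_NetworkRfs to net.core.rps_sock_flow_entries is likewise an assumption. Illustrative only, not the pack's actual apply logic.

```python
# Minimal sketch: apply network parameters via sysctl (requires root).
# The parameter-to-sysctl mapping below is an assumption for illustration.
import subprocess

NET_SYSCTLS = {
    "os_NetworkNetCoreSomaxconn": "net.core.somaxconn",
    "os_NetworkNetCoreNetdevMaxBacklog": "net.core.netdev_max_backlog",
    "os_NetworkNetIpv4TcpMaxSynBacklog": "net.ipv4.tcp_max_syn_backlog",
    "os_NetworkNetCoreNetdevBudget": "net.core.netdev_budget",
    "os_NetworkNetCoreRmemMax": "net.core.rmem_max",
    "os_NetworkNetCoreWmemMax": "net.core.wmem_max",
    "os_NetworkNetIpv4TcpSlowStartAfterIdle": "net.ipv4.tcp_slow_start_after_idle",
    "os_NetworkNetIpv4TcpFinTimeout": "net.ipv4.tcp_fin_timeout",
    "os_NetworkRfs": "net.core.rps_sock_flow_entries",
}

def apply_network_parameters(config: dict):
    for name, value in config.items():
        # Note: a complete RFS setup also distributes the flow-entry budget across the
        # per-queue rps_flow_cnt files under /sys/class/net/<iface>/queues/.
        subprocess.run(["sysctl", "-w", f"{NET_SYSCTLS[name]}={value}"], check=True)

if __name__ == "__main__":
    apply_network_parameters({"os_NetworkNetCoreSomaxconn": 1024,
                              "os_NetworkNetIpv4TcpFinTimeout": 30})
```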
Storage
Parameter | Type | Unit | Default Value | Domain | Restart | Description |
---|---|---|---|---|---|---|
os_StorageReadAhead | integer | kilobytes | 128 | 0 → 4096 | no | Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk |
os_StorageNrRequests | integer | | 32 | 12 → 1280 | no | Storage Number of Requests |
os_StorageRqAffinity | integer | | 1 | | no | Storage Requests Affinity |
os_StorageQueueScheduler | integer | | | | no | Storage Queue Scheduler Type |
os_StorageNomerges | integer | | 0 | 0 → 2 | no | Enables the user to disable the lookup logic involved with IO merging requests in the block layer. By default (0) all merges are enabled. With 1 only simple one-hit merges will be tried. With 2 no merge algorithms will be tried |
os_StorageMaxSectorsKb | integer | kilobytes | 256 | 32 → 256 | no | The largest IO size that the OS can issue to a block device |
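On Linux these settings are exposed per block device under /sys/block/&lt;device&gt;/queue/. The sketch below applies them by writing those files; the device name and the parameter-to-file mapping are assumptions for illustration, not the pack's actual apply logic.

```python
# Minimal sketch: apply storage parameters via the block-device queue sysfs files (requires root).
QUEUE_FILES = {
    "os_StorageReadAhead": "read_ahead_kb",
    "os_StorageNrRequests": "nr_requests",
    "os_StorageRqAffinity": "rq_affinity",
    "os_StorageQueueScheduler": "scheduler",
    "os_StorageNomerges": "nomerges",
    "os_StorageMaxSectorsKb": "max_sectors_kb",
}

def apply_storage_parameters(config: dict, device: str = "nvme0n1"):
    for name, value in config.items():
        path = f"/sys/block/{device}/queue/{QUEUE_FILES[name]}"
        with open(path, "w") as f:
            f.write(str(value))

if __name__ == "__main__":
    apply_storage_parameters({"os_StorageReadAhead": 256, "os_StorageNomerges": 1})
```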