Optimizing a sample Linux system

In this study, Akamas is tasked with optimizing Linux1, a Linux-based system running Ubuntu. The goal of the study is to maximize the throughput of the Sysbench CPU benchmark.

Sysbench is a suite of benchmarks covering CPU, file system, memory, threads, and more, and is typically used for testing the performance of databases.

Linux1 comes with a Node Exporter that collects system metrics, which Akamas consumes through the Prometheus provider. Sysbench metrics are made available to Akamas through the CSV provider.

The study uses Sysbench to execute a performance test against Linux1.

Telemetry

  1. Set up Prometheus and a Node Exporter to monitor the system

  2. Install the Prometheus provider

  3. Create a provider instance:

provider: "Prometheus"
config:
  address: "linux1" # address of the Prometheus of system1
  port: 9090 # port of the Prometheus of system1
  component: "linux1-linux"

  4. Install the CSV provider.

  5. Create a provider instance (see the example CLI commands after this list):

provider: "CSV"
config:
 address: "linux1"
 authType: "password"
 username: "ubuntu"
 auth: "[INSERT PASSWORD HERE]"
 protocol: scp
 remoteFilePattern: "/home/ubuntu/benchmark_log.csv" # the remote path of the CSV with the metrics of the benchmark
 componentColumn: "component" # which column of the CSV should contain the name of the component
 csvFormat: "horizontal"
metrics:
- metric: "throughput"
  datasourceMetric: "events_per_second"
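
The telemetry instances can then be registered with the Akamas CLI. Below is a minimal sketch, assuming the two definitions above are saved as prom-instance.yaml and csv-instance.yaml (hypothetical file names) and that the system described below, named "system for linux1", has already been created:

# Hypothetical file names; the system name is the one used in the study definition
akamas create telemetry-instance prom-instance.yaml 'system for linux1'
akamas create telemetry-instance csv-instance.yaml 'system for linux1'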

Workflow

The study uses a two-task workflow to test Linux1 with a new configuration:

  • Task Configure OS, which leverages the LinuxConfigurator operator to apply a new set of Linux configuration parameters

  • Task Start benchmark, which leverages the Executor operator to launch the benchmark

The following YAML file represents the complete workflow definition:

name: "workflow for linux 1"
tasks:
- name: "Configure OS"
  operator: "LinuxConfigurator"
  arguments:
    component: linux1-linux

- name: "Start benchmark"
  operator: "Executor"
  arguments:
    command: "bash /home/ubuntu/benchmark.sh"
    host:
      hostname: "linux1"
      username: "ubuntu"
      password: "[INSERT_HERE_PASSWORD]"

System

Within Akamas, Linux1 is modeled by a system of two components:

  • linux1-linux, which represents the actual Linux system, with its metrics and parameters, and is of type Ubuntu 16.04

  • linux1-benchmark, which represents the Sysbench benchmark, with its metrics, and is of type Sysbench

The following YAML file represents the definition of the Sysbench component type:

name: "Sysbench"
description: "A component-type for Sysbench"
metrics:
- "throughput" # only one metric

Study

  • Goal: maximize the throughput of the benchmark

  • Windowing: take the default (compute the score for the entire duration of a trial)

  • Parameters selection: select only CPU scheduling parameters

  • Metrics selection: select only the throughput of the benchmark

  • Trials: 3

  • Steps: one baseline and one optimize

The following YAML file represents the definition of the study:

system: "system for linux1"
workflow: "workflow ofr linux1"
name: "linux optimization with sysbench"
description: "Optimizing an Ubuntu instance with a CPU intensive benchmark: sysbench"
goal:
  objective: maximize
  function:
    formula: "linux1-benchmark.throughput"
metricsSelection:
  - "linux1-benchmark.throughput"
parametersSelection:
  - name: "linux1-linux.os_cpuSchedMinGranularity"
  - name: "linux1-linux.os_cpuSchedWakeupGranularity"
  - name: "linux1-linux.os_CPUSchedMigrationCost"
  - name: "linux1-linux.os_CPUSchedChildRunsFirst"
  - name: "linux1-linux.os_CPUSchedLatency"
  - name: "linux1-linux.os_CPUSchedAutogroupEnabled"
  - name: "linux1-linux.os_CPUSchedNrMigrate"
numberOfTrials: 3
steps:
  - name: "baseline"
    type: "baseline"
    values:
      linux1-linux.os_CPUSchedMinGranularity: 2250000
      linux1-linux.os_CPUSchedWakeupGranularity: 3000000
      linux1-linux.os_CPUSchedMigrationCost: 500000
      linux1-linux.os_CPUSchedChildRunsFirst: 0
      linux1-linux.os_CPUSchedLatency: 18000000
      linux1-linux.os_CPUSchedAutogroupEnabled: 1
      linux1-linux.os_CPUSchedNrMigrate: 32
  - name: "optimization"
    type: "optimize"
    numberOfExperiments: 99
    maxFailedExperiments: 25
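
As a final sketch (the file name is hypothetical; the study name comes from the definition above), the study could be created and started with the Akamas CLI:

akamas create study study.yaml
akamas start study 'linux optimization with sysbench'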
