Akamas Docs

Insights for Kubernetes


What is Insights

Insights is a new Akamas capability that helps SREs, platform engineers, developers, and FinOps teams uncover hidden cost inefficiencies and reliability risks in their Kubernetes clusters and applications.

Insights provides actionable recommendations to optimize your Kubernetes environment quickly and easily, without requiring any setup effort or specialized skills.

Why Insights

Achieving reliable and cost-efficient Kubernetes clusters and applications is easier said than done. The untold reality is that most Kubernetes clusters are massively over-provisioned, and at the same time, applications suffer reliability issues.

Insights analyzes your entire Kubernetes environment and provides:

  • Clear visibility into optimization opportunities across all clusters.

  • Estimated impact of the optimization, e.g. achievable savings.

  • Prioritized, safe recommendations for both infrastructure and application configurations.

All of this comes easy: no skills or setup effort are required, as there are no agents to install. For more information, read our launch blog.

Why Insights is different

  • No agents required: no setup time and no security reviews needed.

  • Full-stack optimization approach: while most current Kubernetes optimization tools only consider pod CPU and memory resources, Insights goes deeper inside the pod and also optimizes the application runtime, such as the JVM for Java applications or V8 for Node.js applications. This is unique in the industry.

  • No effort required: Insights identifies optimization opportunities and provides recommendations without requiring effort or deep Kubernetes and application-runtime skills.

  • Designed with safety in mind: recommendations are full-stack and consider the application running within the pod. This avoids reliability risks such as out-of-memory errors or CPU throttling, so recommendations are trusted by development teams.

  • Best practices built in: provides not only recommendations but also best practices your teams can use to avoid reliability incidents and run highly efficient Kubernetes environments.

How Insights works

  1. Connect Insights to your Kubernetes observability solution. Insights collects metrics from your existing observability tools; see the FAQ for the list of supported tools.

  2. Insights gathers the metrics history of your Kubernetes clusters. See below for more details about which data is collected.

  3. Insights analyzes the collected data using its full-stack, application-aware recommendation engines and knowledge base. It identifies opportunities to improve efficiency and reliability, generating recommendations that consider clusters, workloads, and application runtimes such as the JVM.

  4. Insights shows the identified cost-saving opportunities and reliability issues, together with recommendations to improve Kubernetes efficiency and reliability.
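As an illustration of the agentless, read-only collection model in step 1, the sketch below queries an existing Prometheus server for a standard kube-state-metrics series via the Prometheus HTTP API. The server URL is a placeholder and the snippet is not Akamas's actual collector, just an assumption-laden sketch of this style of integration.

```python
import json
import urllib.parse
import urllib.request

# Standard kube-state-metrics series exposing per-container CPU requests;
# agentless tools read such existing series instead of deploying agents.
QUERY = 'kube_pod_container_resource_requests{resource="cpu"}'

def instant_query_url(base_url: str, promql: str) -> str:
    """Build a Prometheus HTTP API instant-query URL (read-only GET)."""
    return f"{base_url}/api/v1/query?" + urllib.parse.urlencode({"query": promql})

def fetch(base_url: str, promql: str) -> dict:
    """Execute the query against an existing Prometheus server."""
    with urllib.request.urlopen(instant_query_url(base_url, promql)) as resp:
        return json.loads(resp.read())

print(instant_query_url("http://prometheus.example:9090", QUERY))
```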

Example screenshot

Insights summary dashboard showing optimization opportunities across all clusters, and a recommendation to optimize the pod resources and JVM memory for a Java application.

Integration requirements

Insights collects data leveraging the observability tool you are already using to monitor your Kubernetes environment. No agent needs to be installed on your clusters.

Account credentials

Insights simply needs a read-only account to connect and extract data from your observability tool.

Type of collected data

Insights collects technical metrics and configuration information only (see below for details). No personally identifiable information (PII) is collected.

Metrics collected

Insights analyzes and provides recommendations to optimize the full Kubernetes stack.

To do so, it requires access to the following metrics:

Level: Kubernetes cluster
Description: metrics and configuration information related to the cluster, nodes, and cluster autoscalers.
Examples:

  • Cluster CPU/memory requests, limits, and used

  • Node CPU/memory requests, limits, and used

Level: Kubernetes workloads
Description: metrics and configuration information related to workloads, pods & containers, HPA, namespaces, and resource quotas.
Examples:

  • Pod CPU/memory requests, limits, and used

  • HPA replica count

  • Namespace CPU/memory requests, limits, and used

Level: Application runtime
Description: metrics and configuration information related to the runtime powering the application, i.e. the Java virtual machine (JVM) and Node.js V8 (planned).
Examples:

  • JVM heap size and usage

  • JVM garbage collection

  • JVM configuration

Not all metrics are mandatory!

We recommend feeding Insights with metrics from all the layers mentioned above for best results; however, not all layers are mandatory. In particular, application runtime metrics are used by Insights to optimize your applications for maximum reliability and efficiency. If application runtime metrics are not available in your observability tool, Insights will still provide technology-agnostic recommendations.
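To make the notion of an efficiency opportunity concrete, here is a minimal sketch of a request-sizing estimate. It assumes usage samples in millicores and a 30% headroom factor kept for bursts; it is an illustration of the general idea, not the Insights recommendation engine.

```python
def cpu_slack(samples_millicores: list[float], request_millicores: float,
              headroom: float = 1.3) -> float:
    """Estimate reclaimable CPU: the request minus peak observed usage
    scaled by a headroom factor. Returns millicores that could be trimmed
    from the request (0 if the request is already tight)."""
    if not samples_millicores:
        return 0.0  # no data, no recommendation
    needed = max(samples_millicores) * headroom
    return max(0.0, request_millicores - needed)

# A container requesting 1000m but peaking at 220m could give back roughly
# 714m even with 30% headroom retained.
usage = [180.0, 205.0, 220.0, 150.0]
print(round(cpu_slack(usage, 1000.0), 1))  # 714.0
```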

Getting started

Insights is currently in beta and will reach general availability (GA) soon. Try it out and give us your feedback!

Frequently Asked Questions

Do I need to install anything in my cluster?
No, Akamas Insights is agentless: it leverages metrics already collected by your Kubernetes observability tool.

Which observability tools are supported?
The observability tools currently supported are:

  • Dynatrace SaaS

  • Datadog

  • Grafana Cloud (planned)

We're adding support for more solutions; please reach out to us if your solution is not listed here.

What is the deployment model?
Insights is a SaaS-based solution.

Will this modify my workloads?
No, Insights is read-only and does not modify your workloads. You can inspect the recommendations and apply them manually; support for automation is planned.

Can I use Insights with multiple clusters?
Yes, Insights supports multi-cluster views and analysis.

Request your access here.
