Audit logs

Akamas audit logs

Akamas stores all its logs in an internal Elasticsearch instance: some of these logs are shown to the user in the GUI to ease the monitoring of workflow executions, while others are accessible only via the CLI and are mostly used to provide additional context and information for support requests.

Audit logs related to UI or API access can be extracted using the CLI. For instance, to extract audit logs from the last hour, use the following commands:

  • UI Logs

akamas logs --no-pagination -S kong -f -1h
  • API Logs

akamas logs --no-pagination -S kong -f -1h

Notice: to visualize system logs unrelated to the execution of workflows bound to workspaces, you need an account with administrative privileges.
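As a hedged example, the extraction command above can be combined with standard shell filtering to narrow the audit trail to a single client; the IP address 10.0.0.5 below is a placeholder, not a value from this document:

```shell
# Sketch: extract the last hour of API access logs (command as documented
# above) and keep only entries mentioning a given client IP.
# 10.0.0.5 is a placeholder value.
akamas logs --no-pagination -S kong -f -1h | grep '10.0.0.5'
```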

Storing audit logs into files

Akamas can be configured to store access logs in files to ease integration with external logging systems. Enabling this feature ensures that, when a user interacts with the UI or the API, Akamas reports detailed access logs both in the internal database and in a file in a dedicated log folder. To ease log rolling and management, every day Akamas creates a new file named according to the pattern access-%{+YYYY-MM-dd}.log.
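The daily file name can be previewed with the standard date utility; this is only a sketch of how the access-%{+YYYY-MM-dd}.log pattern resolves for the current day, not an Akamas command:

```shell
# Sketch: today's expected access log file name, following the
# access-%{+YYYY-MM-dd}.log pattern described above.
fname="access-$(date +%Y-%m-%d).log"
echo "$fname"
```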

Docker version

To enable this feature you should:

  1. Create a logs folder next to the Akamas docker-compose.yml file

  2. Edit the docker-compose.yml file by modifying the line FILE_LOG: "false" to FILE_LOG: "true"

  3. If Akamas is already running, issue the following command:

docker compose up -d logstash

otherwise, start Akamas first.
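The three steps above can be sketched as a single script, assuming it runs from the folder containing docker-compose.yml and that Akamas is already running:

```shell
# Sketch of the steps above, run from the folder containing docker-compose.yml.
mkdir -p logs                                                       # step 1: create the logs folder
sed -i 's/FILE_LOG: "false"/FILE_LOG: "true"/' docker-compose.yml   # step 2: enable file logging
docker compose up -d logstash                                       # step 3: restart logstash
```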

Kubernetes version

To enable this feature, go to your Akamas chart folder, edit your values file (typically values-files/my-values.yaml), and add the following section (if a logstash: section is already present, add the new values to it):

logstash:
  enabled: true

then perform the installation or update as usual with:

make install

In this specific case, the logs are stored in a dedicated volume attached to the logstash pod, under the folder /akamas/logs/.

To list them you can use the command:

kubectl exec deploy/logstash -- ls /akamas/logs/

To read a logfile you can use the command (replace LOGFILENAME.log with the actual name):

kubectl exec deploy/logstash -- cat /akamas/logs/LOGFILENAME.log

To copy them to your local machine you can use:

# for this specific command, you cannot use the deployment name 
# but you need the actual pod name
kubectl cp logstash-NNNNNNN-NNNN:/akamas/logs/ .
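As a hedged convenience, the pod-name lookup and the copy can be combined so that only the newest log file is fetched; the label selector app=logstash is an assumption and may differ in your deployment:

```shell
# Sketch: resolve the logstash pod name, find the newest access log,
# and copy just that file locally. The app=logstash label is an assumption.
pod=$(kubectl get pods -l app=logstash -o jsonpath='{.items[0].metadata.name}')
latest=$(kubectl exec "$pod" -- sh -c 'ls -t /akamas/logs/ | head -n 1')
kubectl cp "$pod:/akamas/logs/$latest" "./$latest"
```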
