Hashicorp Vault Integration for Elastic

Version: 1.30.0
Subscription level: Basic
Developed by: Elastic
Ingestion method(s): File, Network Protocol, Prometheus
Minimum Kibana version(s): 9.0.0, 8.12.0
Note

This AI-assisted guide was validated by our engineers. You may need to adjust the steps to match your environment.

The Hashicorp Vault integration allows you to collect audit logs, operational logs, and performance metrics from your Vault environment into the Elastic Stack. This gives you comprehensive visibility into the security posture and operational health of your secrets management infrastructure.

This integration has been tested with Hashicorp Vault version 1.11.

This integration is compatible with Elastic Stack version 8.12.0 or later.

This integration collects logs and metrics from your Hashicorp Vault deployment. You install an Elastic Agent on a host that can access your Vault service's logs, metrics endpoint, or audit device socket. The agent forwards this data to your Elastic deployment for monitoring and analysis.

The Elastic Agent collects data in the following ways:

  • Vault metrics: It collects Prometheus metrics from the /sys/metrics API endpoint, which includes performance telemetry like request counts, seal status, and memory usage.
  • Audit logs (file): It ingests audit logs directly from a local JSON file generated by Vault's file audit device.
  • Audit logs (socket): It streams audit logs from Vault's socket audit device over a TCP connection to a listening Elastic Agent.
  • Operational logs: It collects the standard operational logs (stdout) of the Vault service from a log file.

The Hashicorp Vault integration collects several types of data to provide comprehensive visibility into your Vault environment's security, performance, and operational health.

This integration collects the following data types:

  • Audit logs: Detailed JSON records of every authenticated request and response. These logs are crucial for security monitoring and can be collected from a local file (file audit device) or a TCP socket stream (socket audit device).
  • Operational logs: Standard output logs from the Vault service. These logs contain internal system events and error messages, which are essential for troubleshooting the Vault service itself.
  • Metrics: Performance telemetry from Vault's /sys/metrics API endpoint. This includes metrics on request counts, seal status, memory usage, and other key performance indicators.
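
For reference, the file audit device writes one JSON object per line. The following is an abridged, illustrative record of a secret read (all values are examples; Vault replaces sensitive fields with hmac-sha256:… digests, which is why raw secret values never appear):

```json
{
  "time": "2024-05-01T12:00:00Z",
  "type": "response",
  "auth": {
    "display_name": "token",
    "policies": ["default"]
  },
  "request": {
    "operation": "read",
    "path": "secret/data/test",
    "remote_address": "10.0.0.5"
  },
  "response": {
    "data": {
      "data": {
        "password": "hmac-sha256:f1c0..."
      }
    }
  }
}
```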

Integrating Hashicorp Vault data with Elastic enables several key use cases for security and operations teams:

  • Security monitoring and threat detection: Ingest audit logs into Elastic SIEM to detect suspicious activity, such as unauthorized access attempts, privilege escalations, or unusual secret access patterns.
  • Performance and health monitoring: Use Kibana dashboards to visualize metrics, monitor the health of your Vault cluster, track performance telemetry, and receive alerts on critical issues like a sealed vault or high memory usage.
  • Compliance and auditing: Maintain a detailed, searchable audit trail of all interactions with Vault to meet regulatory compliance requirements and simplify security audits.
  • Operational troubleshooting: Correlate operational logs with metrics and audit events to quickly diagnose and resolve issues within the Vault service, reducing downtime and improving reliability.

Before you begin, ensure you have the following prerequisites:

  • An Elastic Agent installed and enrolled in a policy.
  • Administrative access to your Hashicorp Vault server. You'll need a Vault token with root privileges or sufficient permissions to enable audit devices and read metrics from the /sys/metrics endpoint.
  • The vault CLI installed and authenticated on a machine where you can run configuration commands.
  • Network connectivity between the Vault server and the Elastic Agent host. Firewalls must be configured to allow traffic on the required ports:
    • For metrics collection, the Elastic Agent must be able to connect to the Vault API port (default 8200).
    • For audit log collection using the TCP socket device, the Vault server must be able to connect to the listening port on the Elastic Agent host.
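
To confirm the firewall rules before you install anything, you can probe both ports with a small shell helper. This is a sketch; the hostnames vault.example.internal and agent.example.internal are placeholders for your own hosts:

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's built-in /dev/tcp (no extra tools required).
# Succeeds (exit 0) if HOST:PORT accepts a connection within 3 seconds.
check_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder hosts -- substitute your Vault server and Elastic Agent hosts.
check_port vault.example.internal 8200 \
  && echo "Vault API port reachable" || echo "Vault API port not reachable"
check_port agent.example.internal 9007 \
  && echo "Agent TCP listener reachable" || echo "Agent TCP listener not reachable"
```

Run the first check from the Elastic Agent host and the second from the Vault server, since the connection direction differs for each data path.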

You must install an Elastic Agent on a host that can receive log data or has access to the log files from Hashicorp Vault. For more details, refer to the Elastic Agent installation instructions. You only need to install one Elastic Agent per host.

The Elastic Agent streams data from the log file or TCP socket and sends it to your Elastic deployment. Once there, the events are processed by the integration's ingest pipelines.

Follow the steps below to configure Hashicorp Vault to send data to the Elastic Agent. This integration supports audit logs, operational logs, and metrics.

To collect audit logs from a file, you'll need to enable the file audit device in Vault.

  1. SSH into your Vault server and create a dedicated directory for logs:
    sudo mkdir -p /var/log/vault
  2. Ensure the vault user has ownership of the directory:
    sudo chown vault:vault /var/log/vault
  3. Enable the audit device using the Vault CLI, specifying the JSON file path:
    vault audit enable file file_path=/var/log/vault/audit.json
  4. Configure logrotate or a similar utility to manage file growth.
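
For example, a minimal logrotate policy might look like the following (a sketch; the rotation schedule and the vault.service unit name are assumptions to adapt). Vault reopens its audit log files when it receives SIGHUP, so the postrotate step releases the rotated file cleanly:

```conf
# /etc/logrotate.d/vault -- rotate the file audit device's output
/var/log/vault/audit.json {
  daily
  rotate 7
  compress
  missingok
  notifempty
  postrotate
    systemctl kill -s HUP vault.service
  endscript
}
```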

You can also configure Vault to send audit logs to the Elastic Agent over a TCP socket.

Warning

Risk of unresponsive Vault with TCP socket audit devices

Vault's audit devices are blocking: if no enabled audit device can record a request, Vault stops servicing requests rather than allow un-audited activity. If the Elastic Agent listener for the socket device becomes unreachable, Vault can therefore hang. Keep the file audit device enabled as a fallback, and make sure the listener is running before you enable the socket device.

  1. Ensure the Elastic Agent is installed and configured to listen on the target TCP port (e.g., 9007).
  2. Enable the socket device via the Vault CLI, providing the Elastic Agent's IP address and port:
    # Replace <elastic_agent_ip> with the actual IP address
    vault audit enable socket address="<elastic_agent_ip>:9007" socket_type=tcp
  3. Verify connectivity. If the Agent is not reachable, Vault might block API requests.

To collect operational logs, you need to set the log format to json in your Vault configuration.

  1. Open your Vault configuration file (commonly /etc/vault.d/vault.hcl).
  2. Set the log_format to json within the main configuration body:
    log_format = "json"
  3. Reload the daemon and restart Vault for the changes to take effect:
    sudo systemctl daemon-reload && sudo systemctl restart vault

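With log_format set to json, each operational log line is a single JSON object in hclog's format. An illustrative line (values are examples):

```json
{
  "@level": "info",
  "@message": "post-unseal setup complete",
  "@module": "core",
  "@timestamp": "2024-05-01T12:00:05.123456Z"
}
```
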
This integration collects metrics in Prometheus format from Vault's /v1/sys/metrics endpoint. To enable it, you'll need to configure the telemetry stanza in Vault.

  1. Open the Vault configuration file and add or update the telemetry stanza. The prometheus_retention_time parameter must be set to a non-zero value to enable the Prometheus metrics endpoint:
    telemetry {
      prometheus_retention_time = "30s"
      disable_hostname = true
      enable_hostname_label = true
    }
  2. Restart the Vault service to enable the /v1/sys/metrics endpoint.
  3. Generate a token with a policy that allows read permissions for the sys/metrics path.
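
A minimal policy for step 3 might look like the following (the policy and file names, metrics and metrics.hcl, are placeholders):

```hcl
# metrics.hcl -- read-only access to Vault's telemetry endpoint
path "sys/metrics" {
  capabilities = ["read"]
}
```

Register it with vault policy write metrics metrics.hcl, then create a token such as vault token create -policy=metrics -period=24h. A periodic token can be renewed indefinitely, which avoids the token-expiry failure described in the troubleshooting tips.
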
To add the integration in Kibana:

  1. In Kibana, navigate to Management > Integrations.
  2. Search for "Hashicorp Vault" and select the integration.
  3. Click Add Hashicorp Vault.
  4. Configure the integration by selecting an input type and providing the necessary settings.

This input collects audit logs written to a file by Vault's file audit device.

  • Paths: The file paths to the audit logs. The default is ['/var/log/vault/audit*.json*'].
  • Preserve original event: If enabled, a raw copy of the original log is stored in the event.original field. The default is false.

This input collects operational logs from a file.

  • Paths: The file paths to the operational logs. The default is ['/var/log/vault/log*.json*'].
  • Preserve original event: If enabled, a raw copy of the original log is stored in the event.original field. The default is false.

This input collects audit logs sent from Vault's socket audit device over a TCP connection.

  • Listen Address: The bind address for the TCP listener. Use 0.0.0.0 to listen on all available interfaces. The default is localhost.
  • Listen Port: The TCP port number to listen on. The default is 9007.
  • Preserve original event: If enabled, a raw copy of the original log is stored in the event.original field. The default is false.

This input collects metrics from Vault's Prometheus endpoint.

  • Hosts: The Vault addresses to monitor. The integration automatically appends /v1/sys/metrics?format=prometheus. The default is ['http://localhost:8200'].
  • Vault Token: A Vault token with read access to the /sys/metrics API endpoint.
  • Period: Optional. How often the Agent should poll the metrics API. The default is 30s.

After configuring the input, assign the integration to an agent policy and click Save and continue.

After you've configured the integration, you can validate that data is flowing into your Elastic deployment.

To test the integration, generate some activity in Vault:

  • Audit events: Log in to the Vault UI or CLI and perform an action, such as reading a secret: vault kv get secret/test.
  • Configuration events: Enable a secrets engine to trigger administrative audit logs, for example: vault secrets enable -path=demo kv. (The cubbyhole engine is a built-in singleton and cannot be enabled again.)
  • Operational logs: Restart the Vault service to generate initialization logs: sudo systemctl restart vault.
  • Metrics: Perform several CLI requests to increment request counters and latency metrics.
Then verify the data in Kibana:

  1. In Kibana, navigate to Discover.
  2. In the search bar, filter for your Vault data using a KQL query. For example, to see audit logs, use data_stream.dataset: "hashicorp_vault.audit". For operational logs, use data_stream.dataset: "hashicorp_vault.log".
  3. Verify that events are appearing with recent timestamps.
  4. Expand a document to confirm that fields like event.dataset, event.action, and message are populated correctly.
  5. Navigate to Analytics > Dashboards and search for "Hashicorp Vault" to see if the visualizations are populated with data.

For help with common issues, refer to the following sections.

  • File permission denied: If the Elastic Agent cannot read the logs, ensure the Agent user (usually root or elastic-agent) has read permissions for /var/log/vault and the JSON files within it.
  • Socket connection refused: Vault will fail to start the audit device if the Elastic Agent TCP listener is not already active. Ensure the Agent is successfully deployed with the listening port open before you run the vault audit enable command.
  • Operational logs not in JSON format: If operational logs are not being parsed correctly, verify that log_format = "json" is present in your vault.hcl file and that you've restarted the service.
  • Telemetry prefixing: If metrics look unusual, ensure disable_hostname = true is set in the telemetry configuration. Otherwise, metric names will be prefixed with the hostname, which can break the integration's standard mappings.
  • JSON parsing failures: If the error.message field in Kibana indicates parsing issues, verify that log_format = "json" is correctly set in the Vault configuration for operational logs.
  • Hashed secrets in logs: By default, Vault hashes secret values in audit logs using HMAC-SHA256. If you cannot see raw secrets, this is expected behavior.
  • 403 Forbidden on metrics endpoint: Ensure the token provided in the integration configuration has a policy that allows read access to the sys/metrics path.
  • Expired token: Vault tokens used for metrics collection should be "periodic" or have a long time-to-live (TTL) to prevent the integration from failing when the token expires.

To ensure optimal performance and scalability in high-volume environments, consider the following recommendations:

  • Choose the right collection method for audit logs: The logfile input is the recommended collection method for production. It provides the strongest delivery guarantees because logs are saved to disk before the Elastic Agent reads them. The tcp socket input offers real-time streaming, but it requires a stable network. Be aware that Vault's audit devices are blocking by default. If the Elastic Agent becomes unreachable and the socket buffer fills, Vault may stop responding to prevent un-audited actions.
  • Manage data volume: In high-traffic environments, audit logs can generate a large amount of data. You can use Vault's built-in audit filtering to exclude high-volume, low-value paths and reduce the load. For metrics, you can adjust the Period setting in the integration to balance the need for visibility with the performance impact of polling the metrics endpoint.
  • Scale your Elastic Agents: For high-throughput environments, it's best to deploy a dedicated Elastic Agent on each Vault node instead of using a centralized collector. This approach distributes the JSON parsing load across your Vault cluster. If you are using socket-based collection for environments with very high event volumes, ensure the agent has enough CPU resources to handle the concurrent TCP connections.

For more information on architectures that you can use for scaling this integration, check the Ingest Architectures documentation.

The audit data stream collects audit logs from Hashicorp Vault. These logs contain detailed information about all requests and responses to Vault, providing a comprehensive trail of activity. This is useful for security monitoring, compliance, and troubleshooting.

The log data stream collects the operational logs from Hashicorp Vault. These logs provide insights into the internal operations of the Vault server, including startup, shutdown, errors, and warnings. They are essential for monitoring the health and performance of your Vault instance.

The metrics data stream collects telemetry data from Hashicorp Vault. These metrics provide real-time visibility into Vault's performance, including memory usage, request latency, and backend operations. The metrics are collected in Prometheus format.

This integration can use the file, TCP, and Prometheus inputs described above.

To collect metrics, this integration uses the Vault /v1/sys/metrics API.

This integration includes one or more Kibana dashboards that visualize the data collected by the integration.