Kafka OpenTelemetry Input Package
| Version | 0.1.0 |
|---|---|
| Subscription level | Basic |
| Developed by | Elastic |
| Minimum Kibana version(s) | 9.4.0 |
To use pre-release integrations, go to the Integrations page in Kibana, scroll down, and toggle on the Display beta integrations option.
The Kafka OpenTelemetry Input Package enables consumption of telemetry data (logs, metrics, and traces) from Apache Kafka using the Kafka receiver from the OpenTelemetry Collector Contrib project.
This receiver is designed to consume OTLP-formatted telemetry data that has been published to Kafka topics, making it useful for pipeline architectures where Kafka serves as an intermediary buffer for observability data.
This package configures the Kafka receiver in the EDOT (Elastic Distribution of OpenTelemetry) collector, which:
- Connects to one or more Kafka brokers
- Subscribes to configured topics for each signal type (logs, metrics, traces)
- Consumes and decodes messages using the specified encoding format
- Forwards the telemetry data to Elastic Agent for processing and indexing in Elasticsearch
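The flow above can be sketched as a collector configuration. This is an illustrative fragment, not the package's generated output: the `kafka` receiver keys follow the upstream kafkareceiver component and may vary by collector version, and the `debug` exporter stands in for the Elastic Agent output that the package wires up.

```yaml
# Illustrative sketch of the collector pipeline this package configures.
# Keys follow the upstream OpenTelemetry Collector Contrib kafkareceiver;
# verify against your EDOT collector version.
receivers:
  kafka:
    brokers: ["localhost:9092"]   # one or more Kafka broker addresses
    group_id: otel-collector      # consumer group for message consumption

exporters:
  debug: {}  # placeholder; the package forwards data to Elastic Agent instead

service:
  pipelines:
    logs:
      receivers: [kafka]
      exporters: [debug]
    metrics:
      receivers: [kafka]
      exporters: [debug]
    traces:
      receivers: [kafka]
      exporters: [debug]
```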
Broker connection settings:
| Setting | Description | Default |
|---|---|---|
| Brokers | List of Kafka broker addresses | localhost:9092 |
| Consumer Group ID | Consumer group for message consumption | otel-collector |
Each signal type (logs, metrics, traces) can be configured independently:
| Setting | Description | Default |
|---|---|---|
| Topics | Kafka topics to consume from | Signal-specific defaults |
| Encoding | Message encoding format | otlp_proto |
| Exclude Topics | Regex patterns to exclude topics | None |
Default Topics:
- Logs: otlp_logs
- Metrics: otlp_metrics
- Traces: otlp_spans
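Per-signal topics and encodings can be expressed in the receiver configuration. A minimal sketch, assuming the signal-specific `logs`/`metrics`/`traces` blocks supported by recent kafkareceiver releases; exact keys may differ on older collector versions:

```yaml
# Sketch: one Kafka receiver consuming all three signal types,
# using the package's default topics and the otlp_proto encoding.
receivers:
  kafka:
    brokers: ["broker-1:9092", "broker-2:9092"]  # placeholder addresses
    group_id: otel-collector
    logs:
      topic: otlp_logs
      encoding: otlp_proto
    metrics:
      topic: otlp_metrics
      encoding: otlp_proto
    traces:
      topic: otlp_spans
      encoding: otlp_proto
```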
Supported encodings:
- All signals: otlp_proto, otlp_json
- Traces only: jaeger_proto, jaeger_json, zipkin_proto, zipkin_json, zipkin_thrift
- Logs only: raw, text, json, azure_resource_logs
The receiver supports multiple authentication methods:
- SASL: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, AWS MSK IAM
- Kerberos: Username/password or keytab-based authentication
- TLS: Client certificate authentication with optional CA verification
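As an example, SASL/SCRAM authentication with TLS can be configured roughly as follows. All values are placeholders, and the `auth` block layout follows the upstream kafkareceiver; check the upstream documentation for the schema your collector version expects:

```yaml
# Sketch: SCRAM-SHA-512 authentication over TLS with client certificates.
# Usernames, passwords, and certificate paths are hypothetical.
receivers:
  kafka:
    brokers: ["broker:9093"]
    auth:
      sasl:
        mechanism: SCRAM-SHA-512
        username: otel
        password: ${env:KAFKA_PASSWORD}  # avoid hard-coding secrets
      tls:
        ca_file: /etc/ssl/kafka-ca.pem       # CA used to verify the broker
        cert_file: /etc/ssl/client-cert.pem  # client certificate
        key_file: /etc/ssl/client-key.pem    # client private key
```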
For the full list of available settings, refer to the upstream Kafka receiver documentation.
Troubleshooting

Connection issues:
- Verify that the broker addresses are correct and reachable
- Check that the Kafka cluster is running and accepting connections
- Ensure network connectivity between the collector and the Kafka brokers

Authentication failures:
- Verify that SASL credentials are correct
- For Kerberos, ensure the keytab or credentials are valid
- For TLS, verify certificate paths and that certificates are not expired

Message decoding errors:
- Ensure the encoding setting matches the actual message format
- Verify that producers are sending correctly formatted messages
- Check the Kafka topic configuration for compatibility

Consumer group issues:
- Ensure the consumer group ID is unique if running multiple collectors
- Check for consumer group rebalancing issues in the Kafka logs
- Verify that the initial offset setting matches your requirements
Changelog
| Version | Details | Minimum Kibana version |
|---|---|---|
| 0.1.0 | Enhancement: Initial version of the package | 9.4.0 |