Elasticsearch OTLP/HTTP endpoint
In addition to ingesting data through the Bulk API, Elasticsearch accepts data through the OpenTelemetry Protocol (OTLP). The Elasticsearch OTLP/HTTP endpoint exposes three signal-specific paths:
| Signal | Path |
|---|---|
| Metrics | /_otlp/v1/metrics |
| Logs | /_otlp/v1/logs |
| Traces | /_otlp/v1/traces |
For most users, one of the following higher-level ingestion paths is recommended:
| Deployment | Recommended ingestion path |
|---|---|
| Elastic Cloud Hosted and Serverless | Elastic Cloud Managed OTLP Endpoint |
| Elastic Cloud Enterprise, Elastic Cloud on Kubernetes, and self-managed | OpenTelemetry Collector in Gateway mode, using the Elasticsearch exporter |
Use the Elastic Cloud Managed OTLP Endpoint if it's available in your deployment, even when an application could target the Elasticsearch OTLP endpoint directly.
For an overview of the recommended OpenTelemetry-based ingestion architecture, refer to the EDOT reference architecture.
Use the Elasticsearch OTLP endpoint directly when one of the following applies:
- You have an application that exports OTLP natively and you want it to send data to Elasticsearch without running an OpenTelemetry Collector. For example, a lightweight development setup (SDK to Elasticsearch).
- You operate a self-managed gateway Collector and prefer the OTLP/HTTP exporter over the Elasticsearch exporter.
Don't send telemetry from many individual applications directly to the Elasticsearch OTLP endpoint at the same time. Send to an OpenTelemetry Collector first so it can absorb connection churn and batch records to improve ingestion performance.
Compared to the Bulk API, ingesting through OTLP offers:
- Improved ingestion performance, especially for payloads with many resource attributes.
- Simplified mapping: data streams, index templates, dimensions, and metrics are derived dynamically from OTLP metadata. There's no need to set them up manually.
Authenticate to the Elasticsearch OTLP endpoint with an API key. Refer to the API key documentation for your deployment type for instructions on how to create one:
- Elasticsearch API keys (self-managed, Elastic Cloud Enterprise, Elastic Cloud on Kubernetes)
- Elastic Cloud Hosted API keys
- Elastic Cloud Enterprise API keys
- Serverless project API keys
The API key needs the create_doc and auto_configure privileges on the data stream patterns it writes to:
- create_doc allows writing documents without overwriting existing ones.
- auto_configure allows the endpoint to create the target data streams on first write.
The minimum index patterns depend on which signals you ingest:
| Signals ingested | Required index patterns |
|---|---|
| Metrics | metrics-* |
| Logs | logs-* |
| Traces | traces-*, logs-* |
| All three | metrics-*, logs-*, traces-* |
Traces ingestion also writes span events to logs-* data streams, so it requires both patterns.
For example, an API key role descriptor that allows ingesting all three signals:
```json
{
  "indices": [
    {
      "names": ["logs-*", "metrics-*", "traces-*"],
      "privileges": ["create_doc", "auto_configure"]
    }
  ]
}
```
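To create such an API key, you can pass this role descriptor to the Elasticsearch create API key endpoint. The key name otlp-ingest and role name otlp_writer below are illustrative:

```console
POST /_security/api_key
{
  "name": "otlp-ingest",
  "role_descriptors": {
    "otlp_writer": {
      "indices": [
        {
          "names": ["logs-*", "metrics-*", "traces-*"],
          "privileges": ["create_doc", "auto_configure"]
        }
      ]
    }
  }
}
```

The response includes an encoded field whose value can be used directly in an Authorization: ApiKey header.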
To send data from an OpenTelemetry Collector to an Elasticsearch OTLP endpoint, configure the OTLP/HTTP exporter:
```yaml
exporters:
  otlphttp/elasticsearch:
    endpoint: <es_endpoint>/_otlp
    headers:
      Authorization: "ApiKey <api_key>"
    sending_queue:
      enabled: true
      sizer: bytes
      queue_size: 50_000_000
      block_on_overflow: true
      batch:
        flush_timeout: 1s
        min_size: 1_000_000
        max_size: 4_000_000
service:
  pipelines:
    logs:
      exporters: [otlphttp/elasticsearch]
      receivers: ...
    traces:
      exporters: [otlphttp/elasticsearch]
      receivers: ...
    metrics:
      exporters: [otlphttp/elasticsearch]
      receivers: ...
```
- sizer: bytes sizes the queue and batches by uncompressed bytes.
- queue_size limits the queue to 50 MB of uncompressed data. Increasing this value can absorb longer Elasticsearch outages or traffic bursts, but also increases Collector memory usage.
- min_size and max_size control the uncompressed batch size sent to Elasticsearch. In this example, batches are sent at 1 MB and capped at 4 MB. Larger batches reduce request overhead, but increase peak memory usage and the amount of data retried after a failed request.
The exporter appends the signal-specific path (/v1/logs, /v1/traces, /v1/metrics) to the configured endpoint.
These values are starting points for a gateway Collector. Tune them for your workload and Collector resources. They are local to each Collector instance and don't increase Elasticsearch ingest capacity. If many applications need to send telemetry, scale out the gateway Collector instead of sending directly from each application.
Supported compression values are gzip (the OTLP/HTTP exporter default) and none.
To send data from a custom application, use the OpenTelemetry language SDK of your choice and point its OTLP/HTTP exporter at the corresponding Elasticsearch OTLP endpoint path.
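As a sketch, for SDKs that honor the standard OTLP exporter environment variables, the endpoint and API key can be configured without code changes; <es_endpoint> and <api_key> are placeholders. The SDK appends the signal-specific path (/v1/logs, /v1/traces, /v1/metrics) to the configured endpoint. Note that depending on the SDK, the space in the header value may need to be percent-encoded:

```shell
# Point the SDK's OTLP/HTTP exporter at the Elasticsearch OTLP endpoint.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<es_endpoint>/_otlp"
# Only protobuf encoding is supported by the endpoint.
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
# Authenticate with the API key created earlier.
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey <api_key>"
```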
Only encoding: proto is supported, which the OTLP/HTTP exporter uses by default.
By default, records are written to the following data streams:
| Signal | Default data stream |
|---|---|
| Logs | logs-generic.otel-default |
| Traces | traces-generic.otel-default |
| Metrics | metrics-generic.otel-default |
For more about how OTLP metrics are stored as time series data streams, refer to Ingest metrics into a TSDS using the OTLP/HTTP endpoint.
The target data stream name follows the pattern <type>-<dataset>.otel-<namespace>.
You can influence dataset and namespace by setting attributes on your data:
- Set data_stream.dataset and/or data_stream.namespace as attributes. Precedence: data point or log record attribute, then scope attribute, then resource attribute.
- Otherwise, if the scope name contains /receiver/<somereceiver>, data_stream.dataset is set to the receiver name.
- Otherwise, data_stream.dataset falls back to generic and data_stream.namespace falls back to default.
Examples:
| Signal | Attributes or scope name | Target data stream |
|---|---|---|
| Logs | data_stream.dataset: nginx.access, data_stream.namespace: prod | logs-nginx.access.otel-prod |
| Traces | data_stream.dataset: checkout, data_stream.namespace: staging | traces-checkout.otel-staging |
| Metrics | Scope name contains /receiver/hostmetrics, no data_stream.* attributes | metrics-hostmetrics.otel-default |
| Metrics | No matching attributes or receiver scope name | metrics-generic.otel-default |
By default, OTLP log records are mapped into Elasticsearch's standard OTel document structure, preserving resource, scope, and record metadata.
If an upstream component has already shaped the log body to match the desired document structure, you can opt into the body map mode. In this mode, the log record's body map is used as the complete document, without copying the surrounding OTLP metadata.
Enable body map mode in either of two ways:
- Per request, by setting the X-Elastic-Mapping-Mode HTTP header to bodymap.
- Per instrumentation scope, by setting the elastic.mapping.mode scope attribute to bodymap.

The scope attribute takes precedence over the header.
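For example, a Collector that exports pre-shaped log documents could send the header on every request. This sketch reuses the OTLP/HTTP exporter configuration shown earlier, with the same <es_endpoint> and <api_key> placeholders:

```yaml
exporters:
  otlphttp/elasticsearch:
    endpoint: <es_endpoint>/_otlp
    headers:
      Authorization: "ApiKey <api_key>"
      # Treat each log record's body map as the complete document.
      X-Elastic-Mapping-Mode: bodymap
```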
You can configure how OTLP histogram metrics are mapped using the xpack.otel_data.histogram_field_type cluster setting.
Valid values are:
- histogram: Map histograms as T-digests using the histogram field type.
- exponential_histogram: Map histograms as exponential histograms using the exponential_histogram field type.
The setting is dynamic and can be updated at runtime:
```console
PUT /_cluster/settings
{
  "persistent": {
    "xpack.otel_data.histogram_field_type": "exponential_histogram"
  }
}
```
Because both histogram and exponential_histogram support coerce, changing this setting dynamically does not risk mapping conflicts or ingestion failures.
This setting only applies to metrics ingested through the Elasticsearch OTLP endpoint. Documents ingested using the Bulk API (for example through the Elasticsearch exporter for the OpenTelemetry Collector) are not affected.
- Delivery guarantees: Elasticsearch can only acknowledge an OTLP request as a whole, not on a per-record basis. If part of a request fails, the client retries the entire batch, which can produce duplicate logs or trace spans. Metrics are not affected because metric points written to time series data streams are deduplicated based on their dimensions and timestamp.
- Profiles: Profiles are not supported. To ingest profiles, use a distribution of the OpenTelemetry Collector that includes the Elasticsearch exporter, such as the Elastic Distribution of OpenTelemetry (EDOT) Collector.
- Histogram temporality: Histograms are only supported in delta temporality. Set the temporality preference to delta in your SDKs, or use the cumulativetodelta processor so cumulative histograms aren't dropped.
- Exemplars: Exemplars are not supported yet.
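The cumulativetodelta conversion can be wired into the metrics pipeline of a gateway Collector, for example alongside the OTLP/HTTP exporter configured earlier; receivers are elided as before:

```yaml
processors:
  # Convert cumulative histograms (and sums) to delta temporality
  # before they reach the Elasticsearch OTLP endpoint.
  cumulativetodelta:

service:
  pipelines:
    metrics:
      receivers: ...
      processors: [cumulativetodelta]
      exporters: [otlphttp/elasticsearch]
```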