Prometheus has a well-defined protocol for shipping metrics to external storage: Remote Write.
Elasticsearch now implements this protocol natively, so you can add it as a remote_write destination with a single config block.
This lets you bring your Prometheus metrics into the same cluster that also stores your logs, traces, and other data. One storage backend, one set of access controls, one place to query.
Why store Prometheus metrics in Elasticsearch?
Prometheus local storage is designed for short retention, typically 15 to 30 days. For anything beyond that, you need a remote storage backend.
Elasticsearch's time series data streams (TSDS) are built for highly efficient long-term metrics storage: automatic rollover, time-based partitioning, compression via index sorting, and downsampling to reduce storage costs as data ages. Your Prometheus scrape configs stay the same.
Recent Elasticsearch releases have significantly reduced the storage footprint for metrics. A dedicated post with the numbers is coming soon.
On the query side, ES|QL embraces PromQL: a built-in PROMQL function lets your existing queries run unchanged, while the rest of ES|QL is available when you want joins, aggregations, or transformations that span multiple datasets.
And because metrics land in the same store as your logs, traces, and profiling data, correlating signals across types becomes a single query rather than a cross-system investigation.
How it works
Prometheus sends metrics to Elasticsearch via the standard Remote Write protocol (v1).
The endpoint accepts protobuf-encoded, snappy-compressed WriteRequest payloads.
Each sample becomes an Elasticsearch document in a pre-defined time series data stream.
Prometheus labels become TSDS dimensions.
The metric value is stored in a typed field under metrics.<metric_name>.
Elasticsearch infers the metric type (counter vs gauge) from naming conventions.
Names ending in _total, _sum, _count, or _bucket are treated as counters.
Everything else is treated as a gauge.
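The suffix rules above fit in a few lines. This is an illustrative sketch, not Elasticsearch's actual implementation, and the function name is ours:

```python
# Naming-convention inference as described above: counter-style suffixes
# win, everything else falls back to gauge.
COUNTER_SUFFIXES = ("_total", "_sum", "_count", "_bucket")

def infer_metric_type(name: str) -> str:
    """Classify a Prometheus metric name as 'counter' or 'gauge'."""
    if name.endswith(COUNTER_SUFFIXES):
        return "counter"
    return "gauge"
```

So `prometheus_http_requests_total` is mapped as a counter, while `node_memory_usage_bytes` is mapped as a gauge.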
Setting it up
Step 1: Get an Elasticsearch endpoint
You need an Elasticsearch cluster with the Prometheus endpoints enabled. The simplest option is Elastic Cloud Serverless, where this works out of the box.
For serverless: sign in to cloud.elastic.co, create an Observability project, and copy the Elasticsearch endpoint from the project settings page.
The endpoint looks like https://<project-id>.es.<region>.<provider>.elastic.cloud.
Step 2: Create an API key
Create an API key scoped to writing metrics data streams only. In your Elastic Cloud Serverless project, go to Admin and settings (the gear icon at the bottom left of the side nav), then API keys.
Use the following role descriptor in the Control security privileges section:
{
  "ingest": {
    "indices": [
      {
        "names": ["metrics-*"],
        "privileges": ["auto_configure", "create_doc"]
      }
    ]
  }
}
Copy the key value before closing the dialog. You will not be able to retrieve it again.
Step 3: Configure Prometheus
Add the following remote_write block to your prometheus.yml:
remote_write:
  - url: "https://YOUR_ES_ENDPOINT/_prometheus/api/v1/write"
    authorization:
      type: ApiKey
      credentials: YOUR_API_KEY
That's it. Prometheus will start shipping metrics to Elasticsearch on the next scrape interval.
If you use Grafana Alloy instead of Prometheus, the equivalent configuration is:
prometheus.remote_write "elasticsearch" {
  endpoint {
    url     = "https://YOUR_ES_ENDPOINT/_prometheus/api/v1/write"
    headers = {"Authorization" = "ApiKey YOUR_API_KEY"}
  }
}
Routing metrics to separate data streams
By default, all metrics land in metrics-generic.prometheus-default.
You can route metrics from different environments or teams into separate data streams using the dataset and namespace path segments in the URL.
The three URL patterns are:
/_prometheus/api/v1/write routes to metrics-generic.prometheus-default
/_prometheus/metrics/{dataset}/api/v1/write routes to metrics-{dataset}.prometheus-default
/_prometheus/metrics/{dataset}/{namespace}/api/v1/write routes to metrics-{dataset}.prometheus-{namespace}
For example, using /_prometheus/metrics/infrastructure/production/api/v1/write routes data to metrics-infrastructure.prometheus-production.
This is useful for separating production from staging metrics, or giving different teams their own data streams with independent lifecycle policies.
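The routing scheme can be expressed as a small helper. Both function names here are ours, purely for illustration:

```python
from typing import Optional

def write_url(dataset: Optional[str] = None, namespace: Optional[str] = None) -> str:
    """Build the Remote Write path for an optional dataset/namespace."""
    if dataset is None:
        return "/_prometheus/api/v1/write"
    if namespace is None:
        return f"/_prometheus/metrics/{dataset}/api/v1/write"
    return f"/_prometheus/metrics/{dataset}/{namespace}/api/v1/write"

def target_data_stream(dataset: str = "generic", namespace: str = "default") -> str:
    """Name of the TSDS data stream samples sent to that path land in."""
    return f"metrics-{dataset}.prometheus-{namespace}"
```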
What gets stored
Here is what a sample document looks like in Elasticsearch:
{
  "@timestamp": "2026-04-02T10:30:00.000Z",
  "data_stream": {
    "type": "metrics",
    "dataset": "generic.prometheus",
    "namespace": "default"
  },
  "labels": {
    "__name__": "prometheus_http_requests_total",
    "handler": "/api/v1/query",
    "code": "200",
    "instance": "localhost:9090",
    "job": "prometheus"
  },
  "metrics": {
    "prometheus_http_requests_total": 42
  }
}
Labels map to keyword fields that serve as TSDS dimensions.
The metric value is stored under metrics.<metric_name> with the inferred time_series_metric type (counter or gauge).
Elasticsearch installs a built-in index template matching metrics-*.prometheus-* that configures TSDS mode, passthrough dimension container objects, and a 10,000 field limit.
The field limit is configurable via a custom component template (see the custom metric type inference section below for how to use one).
You do not need to create any templates or mappings yourself.
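The sample-to-document mapping described above can be sketched like this. It is an illustrative approximation, not the server's actual code:

```python
from datetime import datetime, timezone

def sample_to_doc(labels: dict, value: float, timestamp_ms: int,
                  dataset: str = "generic.prometheus",
                  namespace: str = "default") -> dict:
    """Map one Remote Write sample to the document shape shown above."""
    metric_name = labels["__name__"]
    ts = datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc)
    return {
        "@timestamp": ts.isoformat(timespec="milliseconds").replace("+00:00", "Z"),
        "data_stream": {"type": "metrics", "dataset": dataset, "namespace": namespace},
        "labels": labels,                 # keyword fields, used as TSDS dimensions
        "metrics": {metric_name: value},  # typed counter/gauge field
    }
```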
Custom metric type inference
Metric type inference is based on naming conventions.
Metrics that don't follow Prometheus naming best practices may be classified incorrectly.
You can override the defaults by creating a metrics-prometheus@custom component template with your own dynamic templates.
For example, to mark all *_counter metrics as counters:
{
  "template": {
    "mappings": {
      "dynamic_templates": [
        {
          "counter": {
            "path_match": "metrics.*_counter",
            "mapping": {
              "type": "double",
              "time_series_metric": "counter"
            }
          }
        }
      ]
    }
  }
}
Custom rules are merged with the built-in patterns, so the defaults still apply for metrics you don't override.
Current limitations
Only Remote Write v1 is supported. Support for v2, which brings native histograms and exemplars, is planned.
Staleness markers (special NaN values Prometheus uses to signal a series has disappeared) are not yet stored or respected in queries.
Non-finite values (NaN, Infinity) are silently dropped.
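If you want to know ahead of time which samples will be dropped, the check is equivalent to a finiteness test. This is a client-side sketch, not the server implementation:

```python
import math

def is_storable(value: float) -> bool:
    """Mirror the endpoint's behavior: NaN and +/-Infinity are dropped."""
    return math.isfinite(value)
```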
Get started
The Prometheus Remote Write endpoint is available now on Elasticsearch Serverless with no configuration needed. To get started with a local cluster, start-local gets you a single-node cluster in minutes.
Once metrics are flowing, you can query them with ES|QL using the built-in PROMQL function for PromQL compatibility, or write native ES|QL queries to join metrics with logs and traces in the same store.