If you run Prometheus today and use Grafana to visualize your metrics, you can now point Grafana's Prometheus data source directly at Elasticsearch. No sidecars, no adapters, no pipeline changes required.
Elasticsearch Observability now implements a native Prometheus-compatible API layer, covering ingestion via Remote Write and querying via PromQL. Grafana treats Elasticsearch like any other Prometheus-compatible backend: autocomplete, dashboards, and alerting all work out of the box.
Why use Elasticsearch as a Prometheus backend?
Many teams have invested heavily in Prometheus-based tooling: dashboards, runbooks that reference PromQL queries, on-call workflows built around Grafana panels. Migrating metrics storage has historically meant rewriting all of that. With Elasticsearch's new Prometheus-compatible endpoints, you can migrate your storage without touching your dashboards.
This is particularly relevant if you already use Elasticsearch for logs or traces and want to consolidate your observability data into a single platform, while keeping your Grafana-based workflows intact.
What's included
Elasticsearch now exposes three groups of Prometheus-compatible endpoints.
Query APIs
The core query endpoints allow Grafana to evaluate PromQL expressions against data stored in Elasticsearch:
- GET /_prometheus/api/v1/query_range evaluates a PromQL expression over a time window and returns matrix results. This is what powers most Grafana dashboard panels.
- GET /_prometheus/api/v1/query evaluates a PromQL expression at a single point in time and returns vector results.
Both endpoints implement the standard Prometheus response envelope, including result types (vector, matrix, scalar, string), status codes, and error handling.
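As a sketch of what Grafana sends under the hood, the following builds a GET request for the range-query endpoint using only the Python standard library. The endpoint, API key, and metric name (prometheus_http_requests_total) are placeholders, not values from your project:

```python
import time
import urllib.parse
import urllib.request

def prom_query_range(endpoint, api_key, query, start, end, step="60"):
    """Build a GET request for the Prometheus-compatible query_range
    endpoint (Elasticsearch currently supports GET only)."""
    params = urllib.parse.urlencode(
        {"query": query, "start": start, "end": end, "step": step}
    )
    url = f"{endpoint}/_prometheus/api/v1/query_range?{params}"
    return urllib.request.Request(
        url, headers={"Authorization": f"ApiKey {api_key}"}
    )

# Example: per-second rate of an example counter over the last hour.
now = int(time.time())
req = prom_query_range(
    "https://my-project.es.example.elastic.cloud",  # hypothetical endpoint
    "<query_api_key>",
    "rate(prometheus_http_requests_total[5m])",
    now - 3600,
    now,
)
# With real credentials, send it with urllib.request.urlopen(req); the
# response envelope has status "success" and resultType "matrix".
```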
Metadata APIs
Grafana's metric explorer, autocomplete, and variable dropdowns rely on metadata endpoints to discover what's available. Elasticsearch supports:
- GET /_prometheus/api/v1/series returns time series matching label selectors.
- GET /_prometheus/api/v1/labels returns all available label names.
- GET /_prometheus/api/v1/label/{name}/values returns all values for a given label.
Without these endpoints, autocomplete and the metric browser in Grafana would not work.
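You can hit the same metadata endpoints yourself, for example to see the values of the standard Prometheus job label. A minimal sketch with a hypothetical endpoint and key:

```python
import urllib.request

def prom_label_values(endpoint, api_key, label):
    """Build a GET request for the label-values metadata endpoint,
    the same call Grafana's autocomplete makes behind the scenes."""
    url = f"{endpoint}/_prometheus/api/v1/label/{label}/values"
    return urllib.request.Request(
        url, headers={"Authorization": f"ApiKey {api_key}"}
    )

# 'job' is a standard Prometheus label; endpoint and key are placeholders.
req = prom_label_values(
    "https://my-project.es.example.elastic.cloud", "<query_api_key>", "job"
)
# A successful response looks like:
# {"status": "success", "data": ["prometheus", ...]}
```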
Index pre-filtering
All query and metadata endpoints accept an optional {index} path segment immediately after /_prometheus/, for example:
GET /_prometheus/metrics-prod-*/api/v1/query_range
This pre-filters the Elasticsearch indices that the PromQL query runs against before any expression evaluation happens. Scoping queries to the relevant data avoids scanning unrelated indices, which can noticeably speed up your dashboards when you have large volumes of metrics data stored in a few data streams.
You can configure a separate Grafana data source per index pattern to give teams scoped access to their own metrics.
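The index segment simply slots in between /_prometheus/ and /api/v1/. A small helper makes the scoped and unscoped URL shapes explicit (the index pattern metrics-prod-* is illustrative):

```python
import urllib.parse

def prom_query_url(endpoint, query, index=None):
    """Build an instant-query URL, optionally scoped to an index
    pattern placed right after /_prometheus/ (index pre-filtering)."""
    scope = f"/{index}" if index else ""
    params = urllib.parse.urlencode({"query": query})
    return f"{endpoint}/_prometheus{scope}/api/v1/query?{params}"

endpoint = "https://my-project.es.example.elastic.cloud"  # placeholder
unscoped = prom_query_url(endpoint, "up")
scoped = prom_query_url(endpoint, "up", index="metrics-prod-*")
# scoped queries only touch indices matching metrics-prod-*
```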
Remote Write ingestion
Elasticsearch also implements the Prometheus Remote Write protocol, which lets you ship metrics from Prometheus to Elasticsearch using the standard remote_write configuration.
Adding Elasticsearch as a remote write destination requires a single block in your existing Prometheus config:
remote_write:
  - url: "https://<es_endpoint>/_prometheus/api/v1/write"
    authorization:
      type: ApiKey
      credentials: <api_key>
Metrics are stored in the metrics-generic.prometheus-default data stream by default.
You can route metrics from different Prometheus instances or environments into separate data streams using the dataset and namespace path segments:
- POST /_prometheus/metrics/{dataset}/api/v1/write stores metrics in metrics-{dataset}.prometheus-default
- POST /_prometheus/metrics/{dataset}/{namespace}/api/v1/write stores metrics in metrics-{dataset}.prometheus-{namespace}
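The routing rules above can be summarized in a small helper that maps a dataset/namespace pair to the write path and the backing data stream. This is a sketch of the naming scheme, not an Elasticsearch API:

```python
def remote_write_route(dataset=None, namespace=None):
    """Return (write path, backing data stream) for a given
    dataset/namespace combination, per the routing rules above."""
    if dataset is None:
        # No routing segments: the default data stream is used.
        return "/_prometheus/api/v1/write", "metrics-generic.prometheus-default"
    suffix = f"/{namespace}" if namespace else ""
    path = f"/_prometheus/metrics/{dataset}{suffix}/api/v1/write"
    return path, f"metrics-{dataset}.prometheus-{namespace or 'default'}"

# e.g. metrics from a hypothetical "app" dataset in a "prod" namespace
# land in metrics-app.prometheus-prod.
```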
Try it yourself
Step 1: Create a serverless project
Sign in to cloud.elastic.co and create a new Observability serverless project. Once the project is ready, you will land directly in Kibana. To find the Elasticsearch endpoint, go back to the Elastic Cloud console, open Manage > Application endpoints, cluster and component IDs, and click the copy icon next to Elasticsearch. The endpoint looks like:
https://<project-id>.es.<region>.<provider>.elastic.cloud
Step 2: Create API keys
Create two API keys with scoped privileges: one for ingestion, one for querying. Using separate keys means a leaked Grafana key cannot be used to write data, and a leaked ingest key cannot be used to read it.
In your project, open Admin and settings (the ⚙️ icon at the bottom left of the side nav), go to API keys, and create the first key.
Ingest key (prometheus-remote-write): restricts access to writing metrics data streams only.
In the Control security privileges section, paste the following role descriptor:
{
  "ingest": {
    "indices": [
      {
        "names": ["metrics-*"],
        "privileges": ["auto_configure", "create_doc"]
      }
    ]
  }
}
Create a second key for Grafana in the same section.
Query key (prometheus-grafana): restricts access to reading metrics data streams only.
{
  "query": {
    "indices": [
      {
        "names": ["metrics-*"],
        "privileges": ["read", "view_index_metadata"]
      }
    ]
  }
}
Copy both key values before closing. You will not be able to retrieve them again.
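If you prefer automation over the UI, the same scoped keys can be created through Elasticsearch's create-API-key endpoint (POST /_security/api_key). A sketch, assuming you already hold a key that is allowed to manage API keys:

```python
import json
import urllib.request

def create_api_key_request(endpoint, admin_key, name, role_descriptors):
    """Build a POST to Elasticsearch's create-API-key endpoint.
    Requires credentials permitted to manage API keys."""
    body = json.dumps(
        {"name": name, "role_descriptors": role_descriptors}
    ).encode()
    return urllib.request.Request(
        f"{endpoint}/_security/api_key",
        data=body,
        headers={
            "Authorization": f"ApiKey {admin_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The ingest key from Step 2, expressed as a role descriptor.
ingest_roles = {
    "ingest": {
        "indices": [
            {"names": ["metrics-*"],
             "privileges": ["auto_configure", "create_doc"]}
        ]
    }
}
req = create_api_key_request(
    "https://my-project.es.example.elastic.cloud",  # placeholder
    "<admin_api_key>",
    "prometheus-remote-write",
    ingest_roles,
)
# With real credentials, urllib.request.urlopen(req) returns JSON whose
# "encoded" field is the value to use after "ApiKey " in headers.
```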
Step 3: Run Prometheus and Grafana
Create a prometheus.yml that scrapes Prometheus itself and forwards those metrics to Elasticsearch.
Replace <es_endpoint> with the endpoint from Step 1 and <ingest_api_key> with the ingest key from Step 2:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

remote_write:
  - url: "https://<es_endpoint>/_prometheus/api/v1/write"
    authorization:
      type: ApiKey
      credentials: <ingest_api_key>
Then create a docker-compose.yml to start both Prometheus and Grafana:
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
Start both with:
docker compose up -d
Prometheus will start scraping its own metrics and shipping them to Elasticsearch every 15 seconds.
Grafana will be available at http://localhost:3000 (default credentials: admin / admin).
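Before wiring up Grafana, you can check that samples are actually arriving with an instant query for the standard up metric. A sketch with placeholder endpoint and key:

```python
import urllib.parse
import urllib.request

ES_ENDPOINT = "https://my-project.es.example.elastic.cloud"  # placeholder
params = urllib.parse.urlencode({"query": 'up{job="prometheus"}'})
req = urllib.request.Request(
    f"{ES_ENDPOINT}/_prometheus/api/v1/query?{params}",
    headers={"Authorization": "ApiKey <query_api_key>"},
)
# With real credentials: json.load(urllib.request.urlopen(req)) should
# return status "success" with a non-empty vector once the first
# remote-write batches have landed.
```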
Step 4: Add a Prometheus data source in Grafana
Open Grafana and go to Connections > Data sources > Add data source, then choose Prometheus.
Set the following:
- Name: Elasticsearch
- URL: https://<es_endpoint>/_prometheus
- HTTP method: GET (Elasticsearch does not support POST for query endpoints yet)
- Authentication: HTTP headers > Add header
  - Header: Authorization
  - Value: ApiKey <query_api_key>
Click Save & test. Grafana confirms the connection and autocomplete starts working in the query editor.
Step 5: Install the Prometheus dashboard
Go to Dashboards > New > Import and enter dashboard ID 3662 (or paste the URL https://grafana.com/grafana/dashboards/3662).
When prompted to select a data source, choose the Elasticsearch data source you just created.
Click Import and the Prometheus 2.0 Overview dashboard opens, showing your Prometheus self-monitoring metrics pulled from Elasticsearch.
Current limitations and what's next
This is an early implementation and some gaps remain. All of the following are actively being worked on:
PromQL coverage is not yet complete
Queries using group modifiers (e.g., on(instance, job)), set operators (or, and, unless), and certain functions like histogram_quantile are not yet supported.
Recording rules and alerts are also not supported yet.
Only GET is supported for query endpoints
Grafana defaults to POST for Prometheus queries, which is not yet supported. You need to explicitly set the HTTP method to GET in the Grafana data source settings.
Only Remote Write v1 is supported
Remote Write v2 support is planned.
Step alignment differs from Prometheus
Currently, time buckets snap to the nearest minute or hour boundary, whereas Prometheus aligns steps with the query start time and uses lookback semantics. Work is in progress to match Prometheus step semantics exactly (#139187).
Instant queries are a prototype
The instant query endpoint currently runs a short range query under the hood and returns the last sample. It will be replaced with a proper point-in-time evaluation once the step alignment work lands.
Coming next: broader PromQL function and operator coverage, Remote Write v2, Prometheus-aligned step semantics, and metric metadata and exemplar endpoints.
Stay tuned for follow-up posts covering how we implemented PromQL inside Elasticsearch, how to query your Prometheus metrics directly in Kibana, and how Remote Write ingestion works under the hood.
Availability
The Prometheus-compatible API is available now on Elasticsearch Serverless with no additional configuration. If you want to try it with a self-managed cluster, check out start-local to get up and running quickly.
If you run into issues or have feedback, open an issue on the Elasticsearch repository.