Bringing Fire to Elasticsearch: Adding Native Prometheus API Support

Elasticsearch is adding native Prometheus query, discovery, and metadata APIs on top of earlier PromQL and Remote Write work, with evolving metadata support.


Prometheus-compatible clients depend on a standard HTTP API for queries, metric discovery, and ingestion. Elasticsearch is adding Prometheus-compatible APIs on top of its existing time series data streams, so those clients can talk to Elasticsearch directly.

That includes metrics ingested through Prometheus Remote Write and metrics already stored in Elasticsearch through OpenTelemetry, the Bulk API, or other ingestion paths. This post explains how the query, discovery, and metadata endpoints build on the earlier ingest and query work to form that API surface. Several companion posts go deeper on individual pieces.

This is still a work in progress. The sections below call out what is supported today and which parts are still evolving.

The API surface

Today, the Prometheus-compatible API surface falls into three groups.

Query endpoints

The query endpoints let Prometheus-compatible clients evaluate PromQL expressions:

  • GET /_prometheus/api/v1/query_range evaluates a PromQL expression over a time window (matrix results).
  • GET /_prometheus/api/v1/query evaluates at a single point in time (vector results). Currently implemented as a short range query that returns the last sample.
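As a sketch (the PromQL expression would need URL-encoding in a real request, and the timestamps here are illustrative), a range query looks like:

```
GET /_prometheus/api/v1/query_range?query=up&start=1700000000&end=1700003600&step=60s
```

Responses use the standard Prometheus envelope: a top-level `status` field and a `data` object whose `resultType` is `matrix` for range queries and `vector` for instant queries.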

Only GET is supported for query endpoints today. Some clients default to POST, so you may need to configure them to use GET. The Prometheus POST convention uses application/x-www-form-urlencoded bodies, which Elasticsearch's HTTP layer rejects as a CSRF safeguard before the request ever reaches the handler.

For the full PromQL coverage status, see the companion post on PromQL in ES|QL.

Metadata endpoints

The metadata endpoints serve the discovery information that clients need for autocomplete, variable dropdowns, and metric browsing.

The series, labels, and label values endpoints all accept match[] selectors and a time range (start/end). The match[] parameter takes a Prometheus series selector like http_requests_total{job="api"} and restricts the response to time series that match. This keeps responses fast and relevant on clusters with large numbers of metrics. For example:
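The three kinds of request below are a sketch of this pattern (selector values would be URL-encoded in practice, and the `start`/`end` placeholders stand in for real timestamps):

```
GET /_prometheus/api/v1/series?match[]=http_requests_total{job="api"}&start=...&end=...
GET /_prometheus/api/v1/labels?match[]=http_requests_total
GET /_prometheus/api/v1/label/instance/values?match[]=http_requests_total
```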

The series request returns all series for http_requests_total where job="api", with their full label sets. The labels request returns only the label names that exist on http_requests_total series. The label values request returns only the instance values that appear on matching series.

GET /_prometheus/api/v1/metadata is different: it returns type and unit for each metric, optionally filtered by name via a metric parameter.

It does not accept match[] selectors or a time range. In Prometheus, metadata is collected from active scrape targets (the HELP, TYPE, and UNIT lines they expose), so the response does not involve a data scan. Elasticsearch does not have a dedicated metadata store like that, so the current implementation discovers metric metadata by visiting time series data from the last 24 hours. This keeps the query fast without requiring a full index scan. That 24-hour lookback is fixed today: the Prometheus metadata API does not expose start or end parameters that Elasticsearch could use to make it user-adjustable.

How the metadata endpoints work under the hood, including the TS_INFO and METRICS_INFO commands that power them, is covered below.

Index pre-filtering

All query and metadata endpoints accept an optional {index} path segment after /_prometheus/.
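For example, with a hypothetical index pattern, a range query scoped to one set of data streams might look like:

```
GET /_prometheus/metrics-prod.app-*/api/v1/query_range?query=up&start=...&end=...&step=60s
```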

This restricts which Elasticsearch indices the query runs against before any expression evaluation begins. On clusters with many data streams across teams or environments, this avoids scanning unrelated indices and can significantly reduce query latency. You can configure separate data sources per index pattern to give teams scoped access to their own metrics.

A note about Remote Write

For ingestion, Elasticsearch also exposes the standard Prometheus Remote Write endpoint:

  • POST /_prometheus/api/v1/write ingests time series via the Prometheus Remote Write v1 protocol. v2 is not yet supported.

Remote Write writes into Elasticsearch's existing time series data streams (TSDS), not a separate Prometheus-specific storage layer. Prometheus labels become TSDS dimensions, and metric names become fields in the index mapping. The remote write architecture post covers the full mapping in detail, including how metric types are inferred and how labels are stored with a labels. prefix.
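As an illustrative sketch of that mapping (the document shape is simplified; field names follow the labels. and metrics. prefixes described in this post), a sample for http_requests_total{job="api", instance="host-1"} would be stored roughly as:

```json
{
  "@timestamp": "2025-01-01T00:00:00Z",
  "labels": { "job": "api", "instance": "host-1" },
  "metrics": { "http_requests_total": 1027 }
}
```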

How it works

Under the hood, all endpoints work the same way: parse the incoming HTTP parameters, build an ES|QL query plan, execute it against time series data streams, and convert the columnar result back into the JSON format Prometheus clients expect.
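The last step of that pipeline, turning a columnar result into the Prometheus JSON shape, can be sketched in a few lines. This is a hypothetical illustration with invented column names, not Elasticsearch's actual implementation:

```python
# Hypothetical sketch: convert a columnar result (column names + row tuples)
# into the Prometheus "matrix" response shape. Column layout is invented for
# illustration; the real conversion happens inside Elasticsearch.
def columns_to_matrix(columns, rows):
    """Group rows by their label columns into one matrix series per label set."""
    ts_i = columns.index("@timestamp")
    val_i = columns.index("value")
    label_is = [i for i, c in enumerate(columns) if c.startswith("labels.")]

    series = {}
    for row in rows:
        # The label set identifies the time series; strip the storage prefix.
        key = tuple((columns[i].removeprefix("labels."), row[i]) for i in label_is)
        # Prometheus encodes sample values as strings.
        series.setdefault(key, []).append([row[ts_i], str(row[val_i])])

    return {
        "status": "success",
        "data": {
            "resultType": "matrix",
            "result": [
                {"metric": dict(k), "values": sorted(v)} for k, v in series.items()
            ],
        },
    }
```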

TS_INFO and METRICS_INFO

The metadata endpoints need to answer questions like "what labels exist?" or "what metric types are defined?" across potentially millions of time series, without scanning every data point.

Internally, the Prometheus metadata endpoints answer those questions by building ES|QL plans around two new processing commands: METRICS_INFO and TS_INFO. You do not need to use these commands directly to use the Prometheus API, but they are the core execution primitives behind the metadata responses. Both work by visiting only one document per time series to extract its metadata, rather than scanning all samples. This means their cost scales with the number of distinct time series, not the number of data points.

METRICS_INFO returns one row per distinct metric with its name, type, unit, and associated dimension fields. TS_INFO is more granular: one row per (metric, time series) combination, including the actual dimension values as a JSON object.
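The exact syntax is still evolving, and the pipelines below are an assumption based on how ES|QL processing commands generally compose, but conceptually the two commands slot into a query like any other command:

```esql
FROM metrics-* | METRICS_INFO
FROM metrics-* | TS_INFO
```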

A dedicated blog post on TS_INFO and METRICS_INFO is coming soon, covering the two-phase execution model, how they scale, and how to use them directly in ES|QL queries beyond the Prometheus API.

How the metadata endpoints use them

Each metadata endpoint constructs an ES|QL plan with one of these commands at its core.

/api/v1/labels and /api/v1/series use TS_INFO, since they need per-time-series detail (which labels exist, which dimension values identify each series). /api/v1/metadata and /api/v1/label/__name__/values use METRICS_INFO, since they only need per-metric information (metric names, types, units).

/api/v1/label/{name}/values for regular labels (anything other than __name__) does not use either command. Regular labels like job or instance are actual dimension fields in the index, so the endpoint can query them directly with a group-by aggregation. When match[] selectors are provided, they are translated into a WHERE clause that filters the time series before the aggregation runs.
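A rough ES|QL equivalent of that plan (a sketch, not the literal generated query) for /api/v1/label/job/values with a match[] selector on http_requests_total might be:

```esql
FROM metrics-*
| WHERE metrics.http_requests_total IS NOT NULL
| STATS BY labels.job
```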

The __name__ label needs a different strategy because it is not always present as a dimension field. Prometheus Remote Write does store labels.__name__, but metrics ingested through other paths (OpenTelemetry, the bulk API) do not have it. The metric name is encoded in the field name itself (e.g., metrics.http_requests_total). You could look at the index mappings to enumerate field names, but mappings alone do not tell you which metric has which dimensions, and they cannot be filtered by label values from a match[] selector. METRICS_INFO can do both: it enumerates metric names across indices while respecting upstream WHERE filters.

In all cases, the API layer handles the translation back to Prometheus conventions: stripping the labels. and metrics. storage prefixes and synthesizing __name__ for non-Prometheus metrics that lack it.

In conclusion

Elasticsearch is steadily becoming Prometheus-compatible at the API layer, building on the earlier work for Remote Write ingestion and PromQL query execution. The goal is to let Prometheus-aware tools query and explore Elasticsearch metrics through APIs they already understand.

This work is still in progress and is currently available as a tech preview in Elasticsearch Serverless and in version 9.4 for self-managed clusters and Elastic Cloud Hosted deployments, with the exception of GET /_prometheus/api/v1/metadata. To experiment locally, use start-local.
