Query Prometheus Metrics in Elasticsearch with Native PromQL Support

Elasticsearch now supports PromQL natively as a first-class source command in ES|QL. Run familiar Prometheus queries on your time series data directly in Kibana.


Many teams already rely on PromQL in their day-to-day work. We're making PromQL a first-class experience in Elasticsearch.

The new PROMQL command in ES|QL lets you query time series data in Elasticsearch with PromQL, whether it came from Prometheus Remote Write, OpenTelemetry, or another source.

Metrics, logs, and traces, all in one place, ready to explore in Kibana.

The PROMQL source command

PROMQL is a source command in ES|QL, similar to FROM or TS. It takes standard PromQL parameters and a PromQL expression, executes the query, and returns the results as regular ES|QL columns that you can continue to process with other commands.

Here is the general syntax:

PROMQL [index=<pattern>] [step=<duration>] [start=<timestamp>] [end=<timestamp>]
  [<value_column_name>=](<PromQL expression>)

The parameters mirror the Prometheus HTTP API query parameters (step, start, end), so they should feel familiar if you have used the Prometheus query API before.
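Outside Kibana, a PROMQL query is submitted like any other ES|QL query through the `_query` endpoint. A minimal Python sketch of building the request body, assuming a locally reachable cluster (the endpoint and body shape follow the standard ES|QL query API; the index name is illustrative):

```python
import json

# The PROMQL parameters (index, step, start, end) live inside the
# query string itself, not as separate JSON fields in the request body.
query = """
PROMQL index=metrics-*
  step=1m
  start="2026-04-01T00:00:00Z"
  end="2026-04-01T01:00:00Z"
  sum by (instance) (rate(http_requests_total[5m]))
"""
body = json.dumps({"query": query})

# POST this body to http://localhost:9200/_query with
# Content-Type: application/json to execute the query.
print(body)
```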

A basic range query

This query calculates the per-second rate of HTTP requests over a sliding 5-minute window, grouped by instance:

PROMQL index=metrics-*
  step=1m
  start="2026-04-01T00:00:00Z"
  end="2026-04-01T01:00:00Z"
  sum by (instance) (rate(http_requests_total[5m]))

The result contains three columns:

Column                                            | Type    | Description
--------------------------------------------------|---------|----------------------------------------
sum by (instance) (rate(http_requests_total[5m])) | double  | The computed metric value
step                                              | date    | The timestamp for each evaluation step
instance                                          | keyword | The grouping label from by (instance)

When the PromQL expression includes a cross-series aggregation like sum by (instance), each grouping label becomes its own output column. When there is no cross-series aggregation, all labels are returned in a single _timeseries column as a JSON string.
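When working with the single `_timeseries` column downstream, a consumer may need to unpack the label set itself. A small sketch, assuming the column cell holds the labels as a flat JSON object (the label names and serialization details here are illustrative):

```python
import json

# Example value of a _timeseries column cell: all series labels
# serialized as one JSON string (illustrative labels).
timeseries_cell = '{"instance": "web-1", "job": "api", "env": "prod"}'

# Parse the JSON string back into a label dictionary.
labels = json.loads(timeseries_cell)
print(labels["instance"])  # -> web-1
```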

Naming the value column

By default, the value column name is the PromQL expression itself. You can assign a custom name to make it easier to reference in downstream commands:

PROMQL index=metrics-*
  step=1m
  start="2026-04-01T00:00:00Z"
  end="2026-04-01T01:00:00Z"
  http_rate=(sum by (instance) (rate(http_requests_total[5m])))
| SORT http_rate DESC

This works the same way as naming aggregations in STATS, for example STATS avg_cpu = avg(system.cpu.usage).

Index patterns

The index parameter accepts the same patterns as FROM and TS, including wildcards and comma-separated lists. If omitted, it defaults to *, which queries all indices configured with index.mode: time_series. In production, specifying an explicit index pattern avoids scanning unrelated data.

How it works under the hood

The PROMQL command does not run a separate query engine. Instead, it executes inside the ES|QL compute engine, using the same logic as time series aggregations in the TS source command.

Consider this PromQL query:

PROMQL index=metrics-*
  step=1m
  start="2026-04-01T00:00:00Z"
  end="2026-04-01T01:00:00Z"
  sum by (host.name) (rate(http_requests_total[5m]))

Internally, the PROMQL command translates this into an equivalent ES|QL query using the TS source:

TS metrics-*
| WHERE TRANGE("2026-04-01T00:00:00Z", "2026-04-01T01:00:00Z")
| STATS SUM(RATE(http_requests_total, 5m)) BY TBUCKET(1m), host.name

Both queries produce the same result. The PROMQL command parses the PromQL syntax, resolves functions to their ES|QL equivalents (rate to RATE, sum to SUM, avg_over_time to AVG_OVER_TIME, and so on), and constructs a logical plan that the ES|QL engine executes.
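As a toy illustration of the function-resolution step (not the real planner, which builds a full logical plan), a tiny rewrite of a PromQL call into its ES|QL equivalent might look like this:

```python
import re

# Illustrative subset of the PromQL -> ES|QL function mapping
# described above.
FUNCTION_MAP = {
    "rate": "RATE",
    "sum": "SUM",
    "avg_over_time": "AVG_OVER_TIME",
}

def translate_call(expr: str) -> str:
    """Rewrite rate(metric[5m]) as RATE(metric, 5m); leave other text alone."""
    def repl(m: re.Match) -> str:
        func, metric, window = m.group(1), m.group(2), m.group(3)
        return f"{FUNCTION_MAP.get(func, func.upper())}({metric}, {window})"
    return re.sub(r"(\w+)\((\w+)\[(\w+)\]\)", repl, expr)

print(translate_call("rate(http_requests_total[5m])"))
# -> RATE(http_requests_total, 5m)
```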

This translation approach has a practical benefit: PromQL queries automatically benefit from all the optimizations in the ES|QL engine, including segment-level parallelism and time series-aware data access patterns.

There are currently 19 time series functions available, covering rates, deltas, derivatives, and various *_over_time aggregations.

Smart defaults that simplify queries

In Prometheus, a PromQL range query requires explicit start, end, and step parameters. In Kibana, those are usually determined by the date picker and panel size. The PROMQL command has three features that make queries adapt automatically.

Auto-step

If you omit the step parameter, the command derives it automatically based on the time range and a target bucket count (default: 100). You can also set the target explicitly with buckets=<n>.

PROMQL index=metrics-*
  start="2026-04-01T00:00:00Z"
  end="2026-04-01T01:00:00Z"
  sum by (instance) (rate(http_requests_total[5m]))

With a 1-hour range and the default target of 100 buckets, the step would be 1m, resulting in 60 buckets. This uses the same date-rounding logic as the ES|QL BUCKET function.
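The derivation can be approximated in a few lines. A sketch, assuming a simple rounding ladder of "nice" intervals (the real command reuses the ES|QL BUCKET date-rounding logic, which is more elaborate):

```python
# Approximate auto-step: divide the time range by the target bucket
# count, then round up to a calendar-friendly interval.
# This ladder of intervals is illustrative, not the actual BUCKET logic.
NICE_STEPS_SECONDS = [1, 5, 10, 30, 60, 300, 600, 1800, 3600, 86400]

def auto_step(range_seconds: int, target_buckets: int = 100) -> int:
    raw = range_seconds / target_buckets
    for step in NICE_STEPS_SECONDS:
        if step >= raw:
            return step
    return NICE_STEPS_SECONDS[-1]

# A 1-hour range with the default target of 100 buckets:
# 3600 / 100 = 36s, rounded up to the next nice interval, 1m,
# which yields 60 actual buckets.
print(auto_step(3600))  # -> 60
```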

Inferred start and end

Kibana adds a time range filter to every ES|QL request via a Query DSL range filter on @timestamp. When start and end are not specified in the query, the PROMQL command extracts those bounds and uses them, so it picks up the date picker range without any additional configuration.
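To make the mechanism concrete, here is the shape of such a Query DSL range filter and a conceptual sketch of extracting its bounds (the timestamp values are illustrative; the actual extraction happens inside the engine):

```python
# Shape of the time range filter Kibana attaches to ES|QL requests:
# a standard Query DSL range query on @timestamp (illustrative values).
kibana_filter = {
    "range": {
        "@timestamp": {
            "gte": "2026-04-01T00:00:00Z",
            "lte": "2026-04-01T01:00:00Z",
        }
    }
}

# Conceptually, the PROMQL command reads these bounds and uses them
# as start/end when the query omits them.
bounds = kibana_filter["range"]["@timestamp"]
start, end = bounds["gte"], bounds["lte"]
print(start, end)
```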

Implicit range selectors

In standard PromQL, functions like rate require a range selector: rate(http_requests_total[5m]). The PROMQL command allows omitting the range selector entirely:

PROMQL sum by (instance) (rate(http_requests_total))

When the range selector is absent, the window is determined automatically as max(step, scrape_interval). The scrape_interval defaults to 1m and can be overridden with the scrape_interval parameter if your data has a different collection interval, for example: PROMQL scrape_interval=15s sum(rate(http_requests_total)).
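The rule itself is a one-liner. A sketch, expressing durations in seconds:

```python
# Implicit-range-selector rule: when rate(metric) has no [window],
# the window is max(step, scrape_interval).
def implicit_window(step_seconds: int, scrape_interval_seconds: int = 60) -> int:
    return max(step_seconds, scrape_interval_seconds)

print(implicit_window(15))       # 15s step, default 1m scrape_interval -> 60
print(implicit_window(300, 15))  # 5m step, scrape_interval=15s -> 300
```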

The result

Combining all three defaults, a fully adaptive query in Kibana looks like this:

PROMQL sum(rate(http_requests_total))

This query responds to the date picker, adjusts the step size to the selected time range, and sizes the range selector window accordingly. No manual tuning needed.

Post-processing with ES|QL

Because PROMQL is an ES|QL source command, its output flows into the rest of the ES|QL pipeline. You can filter, sort, enrich, and transform PromQL results using any ES|QL command.

Filter results

PROMQL index=metrics-*
  http_rate=(sum by (instance) (rate(http_requests_total[5m])))
| WHERE http_rate > 100

Sort and limit

PROMQL index=metrics-*
  http_rate=(sum by (instance) (rate(http_requests_total[5m])))
| SORT http_rate DESC
| LIMIT 10

Enrich with a lookup

PROMQL index=metrics-*
  http_rate=(sum by (instance) (rate(http_requests_total[5m])))
| LOOKUP JOIN instance_metadata ON instance

This is something you cannot do in Prometheus. PromQL results are self-contained; there is no way to join them with external data or apply arbitrary post-processing. In Elasticsearch, the PromQL output is just the first stage of a query that can continue with any ES|QL operation.

Current coverage and what's next

In 9.4, the PROMQL command is available as a tech preview, with over 80% query coverage benchmarked against popular open source Grafana dashboards.

The most notable gaps in the current tech preview:

  • Group modifiers like on(chip) group_left(chip_name) are not yet supported.
  • Binary set operators (or, and, unless) are not yet available.
  • Some functions are still missing, including histogram_quantile, predict_linear, and label_join.

These are all planned for upcoming releases. The roadmap includes broader PromQL function and operator coverage, Prometheus-aligned step semantics, and support for native histograms.

Try it

PromQL support is available as a tech preview on Elasticsearch Serverless with no additional configuration. For self-managed clusters, it is available starting with version 9.4.

To try it in Kibana:

  1. Go to Dashboards, create a new panel, and select ES|QL as the query type.
  2. Enter a PROMQL query, for example: PROMQL index=metrics-* sum by (host.name) (rate(http_requests_total)).
  3. The command automatically infers the time range from the Kibana date picker, so no additional parameters are needed.

You can also run PromQL queries in the ES|QL mode of Discover, which shows results in a table and an XY chart. Stay tuned for a full walkthrough of using PromQL in Kibana Dashboards, Discover, and Alerting in a dedicated Kibana blog post.

If you want to try it with a self-managed cluster, check out start-local to get up and running quickly.

If you run into issues or have feedback, open an issue on the Elasticsearch repository.
