Miguel Sánchez

Exploring metrics from a new time series data stream in Discover

Discover helps you see and understand the metrics in a time series stream, with no manual work required. Once you see that your metrics data is flowing, you're ready to build dashboards, alerts, SLOs, and more.

Getting data into Elastic is the first step toward observability. Once you start ingesting it, the next question is: what metrics are we actually collecting, and do they look right?

Whether you've added a new integration, set up an OpenTelemetry pipeline, or configured a custom agent for your infrastructure, you need to see what's landing in the cluster before you build dashboards, alerts, or SLOs on top of it. Discover gives you that view: the metrics in a time series stream, each rendered as a time series chart for your desired time range. No dashboard to build, no exploratory queries to write. Just the raw picture of what you have.

Discover your data streams

In the left navigation under Observability, open Streams. That page lists every data stream in your cluster, wherever it comes from: integrations, OpenTelemetry pipelines, custom agents, and similar sources. Each source you monitor (Docker, Kubernetes, Nginx, and so on) produces one or more data streams. Here you can see exactly what streams exist and what you can build on.

Open a stream to see its detail page.

On the top left, a "Time series" badge means the stream is a time series stream (a stream type optimized for metrics storage and query efficiency); if the badge isn't there, the stream is a regular stream. Click View in Discover in the top right to open Discover with the right query for that stream. The query depends on the stream type:

  • TS (time series): TS is an ES|QL source command that selects a time series data stream and enables time series aggregation functions (such as RATE or AVG_OVER_TIME). When Discover recognizes a time series metrics data stream (for example, one whose name matches metrics-*), it shows each metric as a chart. See the ES|QL TS command documentation for the full reference.
  • FROM (regular streams): the standard source command for document-based queries. Discover shows documents in a table rather than the per-metric chart grid you get with time series streams.

Because our example is a time series stream, Discover opens with:

TS metrics-docker.cpu-default
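
You can also extend this query with the time series aggregation functions mentioned earlier. A minimal sketch, assuming docker.cpu.total.pct is a gauge field in this stream (the avg_cpu and time_bucket names are just illustrative):

TS metrics-docker.cpu-default
| STATS avg_cpu = AVG(AVG_OVER_TIME(docker.cpu.total.pct)) BY time_bucket = BUCKET(@timestamp, 1 minute)
| SORT time_bucket

Here AVG_OVER_TIME averages each individual series within a bucket, and the outer AVG aggregates across series; for a counter field you would use RATE instead.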

See all your metrics, automatically visualized

This is where it gets useful. Instead of a table of documents, Discover shows you the metrics in that stream, each rendered as a time series chart for the selected time range. No configuration needed. This capability, metrics in Discover, is currently in technical preview.

Each metric (docker.cpu.total.pct, docker.cpu.system.pct, docker.cpu.user.pct, and others) appears with a chart that shows its behavior over time. Discover recognizes different metric types and renders them accordingly: gauges as averages, counters as rates, and histograms as P95 distributions. You get an instant, at-a-glance view of what's being collected and whether the values look reasonable.

When you're onboarding a new source, that removes the guesswork: which metrics are active, which have data, what the values look like. You can confirm coverage and sanity-check the pipeline before you rely on that data for dashboards or alerting.

Iterate quickly

From here, you can adjust to get the view you need:

Change the time range. The default 15-minute window might catch a quiet period and make healthy data look flat. Expanding to 1 hour or more reveals patterns you care about: periodic spikes from batch jobs, daily traffic curves, or the ramp-up after a new deployment. Picking the right window matters when you're validating that a new pipeline or integration is behaving as expected.

Switch data streams. You don't need to go back to the Streams page to explore another data source. Update the query to a different data stream, or use a pattern like metrics-docker.* to see metrics across all your Docker data streams at once: CPU, memory, network, disk I/O, all in one view.
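For example, assuming your Docker streams follow the default metrics-<dataset>-<namespace> naming shown above, the wildcard form is simply:

TS metrics-docker.*

Discover then renders the metrics from every matching stream in the same chart grid.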

Search for specific metrics. With many metrics in a stream, the search box at the top right of the grid lets you filter by name. Need to confirm that memory limits or request rates are present? Type the metric name and you either find it or confirm it's missing, so you can fix the pipeline or agent before you depend on that metric elsewhere.

Validate at a glance

The automatic visualizations also serve as a health check for data ingestion:

  • Data is flowing: charts show recent, continuous values, not gaps or stale data.
  • Values are reasonable: CPU in expected ranges, memory tracking activity, network I/O reflecting traffic.
  • Coverage is what you expect: if you enabled Docker monitoring but don't see network I/O metrics, the agent policy or module likely needs a change.

This kind of quick validation replaces manual document checks, mapping inspection, and one-off exploratory queries. You get a clear picture of what's in the stream, and once you've confirmed the data looks healthy, you're ready to wire it into dashboards, alerting, and SLOs.