If you've been in the observability space for the last couple of years, you've seen OpenTelemetry go from "promising standard" to the default choice for collecting metrics, logs, and traces. Elastic has been part of that journey from early on — which is why we built the Elastic Distributions of OpenTelemetry (EDOT): a hardened, production-ready suite of OTel components including the EDOT Collector and language SDKs, tuned for infrastructure and application monitoring without the typical setup overhead.
EDOT is now generally available. The collector, the SDKs, the whole stack — production-ready, enterprise-supported, no asterisks.
But here's the thing: getting your data into Elastic is only half the job. The harder half, in practice, is what happens after. Someone still has to build the dashboards, write the alert rules, and figure out which SLOs are worth tracking — before any of it is useful.
That gap is what OpenTelemetry Content Packages are designed to close.
What Are OpenTelemetry Content Packages?
Elastic's traditional Beats-based integrations always bundled data collection and visualizations together — you got curated dashboards and alerts the moment you turned something on. As Elastic moves to an OpenTelemetry-first world, that same philosophy carries over, but the model is cleaner.
OpenTelemetry Content Packages are purely about the observability assets for a given service. No data collection config is bundled in, because in an OTel world, the collector handles that. Each package contains:
- Dashboards — curated, pre-built Kibana visualizations tailored to the service being monitored
- Alert rules — pre-configured alerting rules that fire on meaningful thresholds, helping teams minimize Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR)
- SLO templates — ready-made Service Level Objective definitions you can apply immediately to track reliability targets, error budgets, and burn rates
More asset types are planned for future packages as the content pack model continues to evolve.
How Does It Work?
The core idea is simple: as soon as data arrives in Elastic, the right dashboards, alert rules, and SLO templates are ready to use. The content package activates based on the incoming data, regardless of how that data was collected.
One of the most powerful aspects of this system is automatic installation. When Elastic detects that data for a particular service has started arriving in Elasticsearch, the corresponding content pack is installed automatically — no manual steps, no hunting through the integrations catalog. By the time you open Kibana, your dashboards are already there waiting for you, your alert rules are ready to be enabled, and your SLO templates are pre-loaded.
To get the data flowing in the first place, we need to configure the collector. Its configuration is a YAML file that defines the building blocks of your telemetry pipeline:
- Receivers — define what data to collect and from where. Each service has its own receiver (for example, the MySQL receiver scrapes metrics directly from the database).
- Exporters — define where the collected data is sent. In our case, we use the Elasticsearch exporter, which ships the telemetry data directly into Elasticsearch in OpenTelemetry native format.
- Pipelines — wire the receivers and exporters together, defining the flow of data through the collector.
Once this configuration is in place and the collector is running, data starts flowing into Elasticsearch — and the content pack takes it from there.
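As a minimal sketch, those three building blocks fit together like this — here using the community `hostmetrics` receiver as a stand-in for any service-specific receiver, with placeholder endpoint and credentials:

```yaml
receivers:
  hostmetrics:                # what to collect: CPU and memory from the host
    collection_interval: 30s
    scrapers:
      cpu:
      memory:

exporters:
  elasticsearch/otel:         # where to send it: Elasticsearch, OTel-native mapping
    endpoint: <ES_ENDPOINT>
    api_key: <ES_API_KEY>
    mapping:
      mode: otel

service:
  pipelines:
    metrics:                  # wire the receiver and exporter together
      receivers: [hostmetrics]
      exporters: [elasticsearch/otel]
```

Swap the receiver for the service you actually want to monitor and the rest of the structure stays the same.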
Data Sources
OpenTelemetry data can reach Elastic through any of the following:
- EDOT Collector — the Elastic Distribution of the OpenTelemetry Collector, embedded in or used alongside the Elastic Agent
- Upstream OTel Collector — the standard community OpenTelemetry Collector (Contrib or custom builds)
- EDOT Cloud Forwarder (ECF) — a serverless OTel Collector that collects telemetry from AWS, GCP, and Azure (VPC Flow Logs, CloudTrail, CloudWatch, and more) and forwards it directly to Elastic Observability, with no infrastructure to manage
The content pack doesn't care how the data arrived — only that it's there.
Seeing It in Practice: MySQL Monitoring
Take a team running MySQL who wants to track query throughput, connection counts, buffer pool utilization, and slow query rates — and get alerted before small problems turn into 2am incidents. Historically, that means hours of dashboard building, custom alert queries, and a lot of guesswork about which metrics actually matter.
With the MySQL OpenTelemetry Assets Package, that work is already done. Here's how the whole thing comes together.
Step 1: Get the Data In
The data pipeline is driven by a collector configuration that defines receivers (where to scrape data from), processors (how to enrich or transform it), and exporters (where to send it — in this case, Elasticsearch).
Regardless of whether you use the EDOT Collector or the Upstream OTel Collector, the fundamental configuration structure is the same. The configuration below uses separate receivers for the primary and replica instances, because replication metrics are only available on replicas. Replace the placeholders with your actual endpoints, credentials, and Elasticsearch details.
```yaml
receivers:
  mysql/primary:
    endpoint: <MYSQL_PRIMARY_ENDPOINT>
    username: <MYSQL_USER>
    password: <MYSQL_PASSWORD>
    collection_interval: 10s
    statement_events:
      digest_text_limit: 120
      limit: 250
    query_sample_collection:
      max_rows_per_query: 100
    events:
      db.server.query_sample:
        enabled: true
      db.server.top_query:
        enabled: true
    metrics:
      mysql.client.network.io:
        enabled: true
      mysql.connection.errors:
        enabled: true
      mysql.max_used_connections:
        enabled: true
      mysql.query.client.count:
        enabled: true
      mysql.query.count:
        enabled: true
      mysql.query.slow.count:
        enabled: true
      mysql.table.rows:
        enabled: true
      mysql.table.size:
        enabled: true
  # Replica receiver, referenced in the pipeline below; replication
  # metrics are only emitted by replica instances.
  mysql/replica:
    endpoint: <MYSQL_REPLICA_ENDPOINT>
    username: <MYSQL_USER>
    password: <MYSQL_PASSWORD>
    collection_interval: 10s

processors:
  resourcedetection:
    detectors: [system, env]

exporters:
  elasticsearch/otel:
    endpoint: <ES_ENDPOINT>
    api_key: <ES_API_KEY>
    mapping:
      mode: otel

service:
  pipelines:
    metrics:
      receivers: [mysql/primary, mysql/replica]
      processors: [resourcedetection]
      exporters: [elasticsearch/otel]
    # The query sample and top-query events are emitted as log records,
    # so they need a logs pipeline of their own.
    logs:
      receivers: [mysql/primary]
      processors: [resourcedetection]
      exporters: [elasticsearch/otel]
```
The MySQL receiver scrapes the database at the configured interval, emitting the metrics as OpenTelemetry metrics and the query sample and top-query events as OpenTelemetry log records. These flow through the collector pipelines and land in Elasticsearch, ready to be visualized.
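Before starting the collector, it's worth checking that the configuration parses and that every component referenced in the pipelines is actually defined. Assuming the config is saved as `otel-config.yaml` and the contrib collector binary is on your PATH (the filename is just an example):

```shell
# Check the config parses and all referenced components exist
otelcol-contrib validate --config=otel-config.yaml

# Run the collector in the foreground; data starts flowing on startup
otelcol-contrib --config=otel-config.yaml
```

If you're using the EDOT Collector instead, substitute its binary name; the flags are the same.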
Step 2: Open Kibana — Everything's Already There
Dashboards
As soon as the MySQL metrics and events arrive in Elasticsearch, the MySQL OpenTelemetry Assets Package is automatically installed in the background. By the time you navigate to Kibana, the dashboards are already populated and waiting.
Users immediately get visibility into:
- Active and max connections
- Query throughput — statements executed per second
- InnoDB buffer pool hit rate and memory usage
- Slow query count and trends
- Table lock waits and contention
- Bytes sent and received over time
- Replication lag (for replicated setups)
No manual field mapping. No dashboard building from scratch. Just data in, insights out.
Below are some screenshots of the MySQL OpenTelemetry dashboard in Kibana, showing the out-of-the-box visualizations that are automatically available as soon as your data starts flowing in.
Overview Dashboard
Queries Dashboard
Availability Dashboard
Alert Rules, Ready to Enable
The package includes six pre-built alert rules — covering high connection error rates, slow query spikes, thread saturation, replication lag, buffer pool dirty page ratio, and row lock contention — each with recommended thresholds and severity levels. These are available immediately on install and can be enabled, tuned, and extended directly in Kibana without any custom query authoring. Below is an example of one of the alerts.
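The actual rule definitions ship with the package and live in Kibana, but the shape of the condition such a rule encodes is easy to sketch. Below is a hypothetical slow-query-rate check in Python — the function names and the 0.5 queries/sec threshold are illustrative, not the package's recommended values:

```python
def slow_query_rate(prev_count: float, curr_count: float, interval_s: float) -> float:
    """Slow queries per second, derived from two samples of the cumulative
    mysql.query.slow.count counter taken interval_s seconds apart."""
    return max(curr_count - prev_count, 0.0) / interval_s

# Hypothetical threshold: alert if the rate stays above 0.5 slow queries/sec
THRESHOLD = 0.5

def should_alert(samples: list[tuple[float, float]]) -> bool:
    """samples: (timestamp_s, cumulative slow-query count) pairs, oldest first.
    Fires only if every consecutive interval exceeds the threshold,
    so a single spike doesn't page anyone."""
    rates = [
        slow_query_rate(c0, c1, t1 - t0)
        for (t0, c0), (t1, c1) in zip(samples, samples[1:])
    ]
    return bool(rates) and all(r > THRESHOLD for r in rates)
```

A real Kibana rule evaluates the same kind of windowed condition server-side against the ingested metrics, with severity levels attached to different thresholds.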
SLO Templates, Pre-Loaded
Four SLO templates are included out of the box, tracking replication lag, connection exhaustion errors, slow query rate, and connected thread count — each with a pre-configured target and 30-day rolling window. Teams can adopt them as-is or tune the thresholds to match their own reliability requirements.
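To make the 30-day rolling window concrete, the error-budget arithmetic behind such a template can be sketched in a few lines. A 99% target is used here purely for illustration; the templates' actual pre-configured targets may differ:

```python
def error_budget_minutes(target: float, window_days: int = 30) -> float:
    """Total minutes of violation allowed in the window for a given SLO target."""
    return window_days * 24 * 60 * (1 - target)

def burn_rate(bad_events: int, total_events: int, target: float) -> float:
    """Ratio of observed error rate to allowed error rate.
    1.0 means the budget runs out exactly at the end of the window;
    2.0 means it will be exhausted in half the window."""
    if total_events == 0:
        return 0.0
    return (bad_events / total_events) / (1 - target)

# A 99% SLO over 30 days allows ~432 minutes (~7.2 hours) of violation
budget = error_budget_minutes(0.99)

# 20 slow queries out of 1,000 against a 99% target burns budget at ~2x
rate = burn_rate(20, 1000, 0.99)
```

Tightening a template's target from 99% to 99.9% cuts the budget tenfold, which is why tuning these thresholds to your own reliability requirements matters.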
What's Available Today
The MySQL OpenTelemetry Assets Package is just one example from a growing library of OpenTelemetry Content Packages that Elastic has already built out. Content packs are available for a range of services — and we have also started extending this to the cloud, with initial support for Cloud Service Provider integrations that use the EDOT Cloud Forwarder (ECF) to bring AWS, GCP, and Azure telemetry into Elastic with ready-made dashboards.
The same pattern holds across all of them — data in, and a complete observability package (dashboards, alert rules, SLO templates) instantly ready — whether you're monitoring a self-managed database or cloud-native services from your preferred cloud service provider.
Where This Is Going
The next step worth watching is OTel Integration Packages, which will let you push collector configurations directly from the Kibana UI — making the entire setup experience point-and-click, from data collection through to visualization, with no YAML editing required.
Get Started
Ready to try it? Start with the EDOT Collector documentation and explore the growing library of OpenTelemetry content packages in Kibana's Integrations page.