The Observability Migration Platform is a CLI-driven workflow that translates supported Grafana and Datadog assets into Kibana-native outputs and produces the evidence needed to review the result. It changes migration from a manual rebuild into a translation-and-verification workflow that gets teams into Elastic Observability faster.
Migrations covered by the Observability Migration Platform
The current scope covers Datadog and Grafana. The platform can work from exported assets or live APIs, and on both paths it focuses on dashboards and alerting content.
Support is not identical across the two sources. The Datadog path has end-to-end extraction, validation, compilation, upload, smoke-test, and verification workflows, but it currently covers a narrower slice of widgets and monitors; Grafana coverage is broader. The platform provides a practical translation pipeline for the supported paths.
The screenshots below show examples of dashboards after migration.
How the Observability Migration Platform works
At a high level, the workflow has two halves: source-aware translation on the way in and target-aware validation and delivery on the way out. That split matters because Grafana and Datadog differ not only in JSON shape, but also in query languages, panel types, controls, and alerting models.
A run starts with exported assets or live source APIs. From there, the workflow normalizes source-specific objects, chooses a translation path for each supported dashboard, panel, and alerting artifact, and emits Kibana-native output. This is where most of the source-specific logic lives: translating queries or Datadog formulas, mapping panel semantics, carrying forward controls and links where possible, and deciding when an exact translation is not the right answer.
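The panel-translation step can be pictured as a lookup with a conservative fallback. The sketch below is purely illustrative: the panel type names and the translate_panel helper are hypothetical, not the platform's actual internal API.

```python
# Illustrative sketch only: panel type names and structures are hypothetical,
# not the platform's actual internal API.

# A few plausible Grafana-to-Kibana panel-type mappings.
PANEL_MAP = {
    "timeseries": "lens_xy",
    "stat": "lens_metric",
    "gauge": "lens_gauge",
    "table": "lens_table",
}

def translate_panel(panel: dict) -> dict:
    """Map a source panel to a Kibana-native type, or flag it for review."""
    target_type = PANEL_MAP.get(panel.get("type"))
    if target_type is None:
        # No exact translation: defer to human judgment rather than guess.
        return {"status": "manual_review", "source_type": panel.get("type")}
    return {"status": "translated", "target_type": target_type,
            "title": panel.get("title", "")}
```

The important property is the fallback branch: an unmapped panel type is surfaced for review instead of being forced into a lossy translation.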
The second half is target-aware. The emitted output can be validated against an Elastic target, compiled, and uploaded to Kibana through the shared runtime. In the happy path, that yields a working translated dashboard. In rougher cases, validation may show that a panel cannot run safely as emitted. When that happens, the workflow is designed to fail conservatively: it can mark the panel for manual review or replace it with an upload-safe placeholder instead of shipping a broken runtime panel.
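The "fail conservatively" behavior described above can be sketched as a small decision function. Everything here is an assumption for illustration: the placeholder shape and the finalize_panel name are hypothetical, not the platform's real logic.

```python
# Illustrative sketch of the "fail conservatively" idea; names are hypothetical.

PLACEHOLDER = {
    "type": "markdown",
    "content": "This panel could not be translated safely and needs manual review.",
}

def finalize_panel(panel: dict, validation_ok: bool,
                   allow_placeholder: bool = True) -> dict:
    """Ship the panel only if validation passed; otherwise degrade safely."""
    if validation_ok:
        return panel
    if allow_placeholder:
        # Upload-safe placeholder instead of a broken runtime panel.
        return dict(PLACEHOLDER, replaces=panel.get("title", "unknown panel"))
    # Or keep the panel out of the upload entirely and route it to a reviewer.
    return {"status": "manual_review", "panel": panel}
```

Either outcome leaves the uploaded dashboard in a state Kibana can render, which is the point of the conservative design.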
Just as important, the outcome is not simply "a dashboard showed up in Kibana." The workflow also produces reviewer-facing evidence such as a migration report, manifest, verification packets, and rollout plan so you can see what translated cleanly, what was downgraded or manualized, and what still needs human judgment. Those artifacts are what make the process operationally credible: they give teams something concrete to inspect, compare, and act on.
Running the migration
The platform is CLI-driven and a good fit for migration work that needs to be repeatable, reviewable, and easy to automate. Users can start with a representative slice of dashboards and alerting content from Grafana or Datadog, point the workflow at an Elastic target, and use that first run to understand translation quality, validation results, and how much follow-up review is required.
To run the full path against Elastic, create an Elastic Observability Serverless project, generate a Serverless project API key, and point the CLI at your Elasticsearch and Kibana endpoints:
obs-migrate migrate \
--source grafana \
--input-mode files \
--input-dir ./grafana_exports \
--output-dir ./migration_output \
--assets all \
--native-promql \
--data-view "metrics-*" \
--validate \
--es-url "$ELASTICSEARCH_ENDPOINT" \
--es-api-key "$KEY" \
--kibana-url "$KIBANA_ENDPOINT" \
--kibana-api-key "$KEY" \
--upload
The run validates the emitted queries against Elastic, compiles the generated dashboards, uploads them to Kibana, and produces the standard migration artifacts for review.
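After a run completes, it can be useful to confirm that the review artifacts were actually produced before handing the output directory to reviewers. The artifact names below come from this document; the helper itself is a hypothetical post-run check, not part of the CLI.

```python
from pathlib import Path

# Artifact names as documented for the platform; this checker is hypothetical.
EXPECTED_ARTIFACTS = [
    "migration_report.json",
    "verification_packets.json",
    "run_summary.json",
]

def missing_artifacts(output_dir: str) -> list[str]:
    """Return the expected review artifacts that a run did not produce."""
    out = Path(output_dir)
    return [name for name in EXPECTED_ARTIFACTS if not (out / name).is_file()]
```

A wrapper script could call this against ./migration_output and fail the automation step if the list is non-empty.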
A typical run looks like this:
- Start with exported assets or live source APIs from Grafana or Datadog.
- Choose the asset scope with --assets dashboards, --assets alerts, or --assets all.
- Translate the supported dashboards, queries, controls, and alerting artifacts into Kibana-native output.
- Validate the emitted content against an Elastic target (if configured), then compile and upload the translated dashboards for dashboard-capable runs.
- Review the migration evidence, including migration_report.json, verification_packets.json, run_summary.json, etc., to understand what translated cleanly, where semantic gaps remain, and which dashboards, panels, or alert rules still require human review.
- If alert rule creation is enabled, review the migrated rules (which are disabled by default) in Kibana before deciding which ones to enable or redesign.
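The review step above can be partially automated by triaging the report into outcome buckets. This is a sketch under an assumed schema: the "panels", "status", and "title" keys are hypothetical, so adjust them to whatever the actual migration_report.json contains.

```python
import json
from pathlib import Path

# Illustrative triage sketch: the report schema (keys "panels", "status",
# "title") is assumed, not the platform's documented format.

def triage(report_path: str) -> dict:
    """Bucket migrated panels by outcome so reviewers know where to start."""
    report = json.loads(Path(report_path).read_text())
    buckets = {"translated": [], "manual_review": []}
    for panel in report.get("panels", []):
        # Anything without an explicit status is treated as needing review.
        status = panel.get("status", "manual_review")
        buckets.setdefault(status, []).append(panel.get("title", "untitled"))
    return buckets
```

Starting the human review from the manual_review bucket keeps attention on the panels the workflow already flagged as uncertain.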
What's next
The platform is still evolving and will continue to gain depth and self-service capabilities. The biggest open areas are stronger, measured source-to-target semantic verification; broader Datadog coverage; deeper support for harder query families and non-dashboard surfaces; and cleaner shared runtime contracts across the workflow.
It is also built to grow over time. The source and target boundaries are explicit by design, which gives the platform room to expand coverage and support additional source paths in the future.
In conclusion
If you are planning a move into Elastic, a good starting point is to create an Elastic Observability Serverless project. That gives you the target environment where translated dashboards and alerting content can be validated and reviewed.
To learn more about the migration workflow, talk to your Elastic representative about current access, supported coverage, and how it can help with your migration needs.