<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Elastic Observability Labs - Articles by Nima Rezainia</title>
        <link>https://www.elastic.co/observability-labs</link>
        <description>Trusted observability news &amp; research from the team at Elastic.</description>
        <lastBuildDate>Wed, 22 Apr 2026 15:41:03 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Elastic Observability Labs - Articles by Nima Rezainia</title>
            <url>https://www.elastic.co/observability-labs/assets/observability-labs-thumbnail.png</url>
            <link>https://www.elastic.co/observability-labs</link>
        </image>
        <copyright>© 2026. Elasticsearch B.V. All Rights Reserved</copyright>
        <item>
            <title><![CDATA[Centrally Managing OTel Collectors with Elastic Agent and Fleet]]></title>
            <link>https://www.elastic.co/observability-labs/blog/centrally-managed-otel-collectors-with-elastic-fleet</link>
            <guid isPermaLink="false">centrally-managed-otel-collectors-with-elastic-fleet</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[How Elastic Agent 9.3 unifies Beats and OpenTelemetry (OTel) data collection and delivers central management with Elastic Fleet.]]></description>
            <content:encoded><![CDATA[<p>&quot;The dream of OpenTelemetry is vendor-neutral, standardised observability.
The challenge nobody mentions is how you operate hundreds, or thousands, of those collectors in production.&quot;</p>
<p>OpenTelemetry has won the hearts of the industry.
Adoption is accelerating: the CNCF's 2024 Observability survey found OTel to be the fastest-growing project in the foundation's history, with the OTel Collector registering hundreds of millions of downloads.
The proposition is compelling: write instrumentation once, ship it anywhere, avoid lock-in.</p>
<p>But here is what every platform team discovers once they cross into production: the collector sprawl problem.
Hundreds of collector instances deployed across regions, Kubernetes namespaces, and bare-metal hosts. Configuration drift creeping in.
An upgrade that has to be co-ordinated across a fleet of independent processes. A security patch that someone has to manually roll out to each one.
And zero visibility into which collectors are running, healthy, or stuck.</p>
<p>This is the gap between &quot;deploying OpenTelemetry&quot; and &quot;operating OpenTelemetry at scale.&quot;
With Elastic 9.3, Elastic Agent closes that gap entirely.
The Elastic Agent is now built on Elastic's Distribution of the OpenTelemetry Collector (EDOT) and, when managed by Fleet, gives platform teams a single control plane for configuring, updating, and monitoring every OTel collector in their estate — all while remaining compatible with the Beats-based integrations they already rely on.</p>
<h2>The Collector Sprawl Problem and Why It Matters</h2>
<p>OpenTelemetry's success has created a quiet operational debt for many organisations.
Individual teams adopt the collector for their services: logs here, metrics there, a custom pipeline for the new microservice.
Without a centralised management layer, each of these collectors becomes an independent snowflake: its own config file, its own upgrade cycle, its own failure domain.</p>
<p>The consequences are predictable.
Configuration drift means collectors running different versions of the same pipeline, producing subtly incompatible data.
Compliance teams ask &quot;show me all the places data is collected and where it goes&quot;, and the honest answer is a spreadsheet that's already out of date.</p>
<p>This isn't a niche problem.
A Gartner analysis of enterprise observability programmes consistently identifies operational overhead as the top barrier to expanding OTel adoption beyond initial pilots.
The technology works. The tooling to manage it at scale is what's been missing.</p>
<h2>How Elastic Agent Became an OTel Collector</h2>
<p>To understand the significance of this, it helps to understand what Elastic Agent used to be, and what it is now.</p>
<p>Before version 9.3, Elastic Agent acted as a supervisor process: it managed a collection of separate Beats sub-processes (Filebeat, Metricbeat, Winlogbeat, and so on), each running its own input/output lifecycle and each consuming its own memory footprint.
The agent coordinated them, but the fundamental model was a collection of discrete daemons running under a parent.</p>
<p>With 9.3, that model has been replaced.
Elastic Agent is now itself an instance of the EDOT Collector: Elastic's hardened, production-supported distribution of the upstream OTel Collector.
The architectural shift has three important consequences.</p>
<p><strong>First</strong>, the process model simplifies dramatically.
Instead of a supervisor managing multiple sub-process lifecycles, there is a single EDOT Collector process.
This means a smaller memory footprint, fewer things that can fail independently, and fewer processes to observe for health and performance.</p>
<p><strong>Second</strong>, Beats functionality is preserved, not discarded.
Rather than forcing a breaking migration, Elastic has introduced <em>Beats Receivers</em>: Beats inputs and processors re-packaged as native OTel receiver components.
A Filestream input, for example, now runs inside a <code>filebeatreceiver</code>.
The same Filebeat configuration YAML you write today is automatically translated into the corresponding EDOT receiver configuration at runtime.
Existing integrations, dashboards, and ingest pipelines continue to work without modification.</p>
<p><strong>Third</strong>, the agent is now a first-class participant in the OTel ecosystem.
It speaks OTLP natively, it runs standard OTel receivers, and it can be configured to sit alongside any other OTel-compatible tool in a modern observability pipeline.</p>
<h2>Central Management with Fleet: Configuration, Lifecycle, and Visibility</h2>
<p>The architectural shift above would be valuable on its own. But it becomes transformative when combined with Elastic Fleet, the centralised management plane for Elastic Agents.</p>
<p>Fleet gives platform and SRE teams a single console from which to manage every Elastic Agent (and by extension, every EDOT Collector instance) in their estate.
The capabilities break into three categories: configuration management, lifecycle management, and fleet-wide observability.</p>
<h3>Configuration management at scale</h3>
<p>With Fleet, you define an <em>Agent Policy</em> — a declarative description of what a collector should do.
What data should it collect?
Via which receivers?
Where should it export?
The policy is authored once in Fleet's UI (or via its API), and pushed automatically to every agent enrolled in that policy.
Change the policy, and every affected collector receives the update.
No SSH.
No Ansible playbook to maintain.
No configuration drift.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/centrally-managed-otel-collectors-with-elastic-fleet/policy-health.jpg" alt="Fleet Policy Health" /></p>
<p>Fleet pushes policies to enrolled agents across any environment. Agents send heartbeat and health data back, giving a live inventory of every collector in the estate.</p>
<h3>Lifecycle management: upgrades, enrolment, and remediation</h3>
<p>Perhaps the most operationally significant benefit of Fleet management is lifecycle control.
With Fleet, upgrading a collector is a policy action: select the target version, select the scope (all agents, a specific policy group, a canary subset), and click.
Fleet orchestrates the rolling upgrade, tracking status per agent and surfacing failures immediately.</p>
<p>This changes the security calculus fundamentally.
When a vulnerability is disclosed in the OTel Collector binary, patching is a Fleet operation measured in minutes, not a change-management ceremony measured in days across SSH sessions to individual hosts.</p>
<p>Fleet also handles enrolment and de-enrolment.
New hosts added to your infrastructure can be auto-enrolled into the appropriate policy based on tags or deployment tooling.
Agents on decommissioned hosts can be removed from Fleet's inventory, ensuring your observability map reflects your actual infrastructure.</p>
<h3>Fleet-wide observability of your collectors</h3>
<p>Every Fleet-managed Elastic Agent ships monitoring telemetry about itself: CPU and memory consumption, event throughput, error rates, pipeline latency.
This data flows into Elastic and is surfaced in the Fleet UI, giving you a live dashboard of every collector in your estate, not just the ones you happen to be watching.</p>
<p>For the first time, &quot;how healthy is my observability pipeline?&quot; becomes a question with a real-time, fleet-wide answer.
You can identify agents that have stopped sending data, agents consuming unexpectedly high resources, and agents that have fallen behind on queue processing — before those problems surface as gaps in your monitoring data.</p>
<p>In the near future, this capability will extend to standalone (non-Fleet-managed) agents and to third-party OTel collectors from other vendors.
Those collectors can be configured by other means yet still be monitored in Fleet, covering both resource consumption and component pipeline health.</p>
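<p>In the meantime, standalone agents can already emit self-monitoring telemetry by enabling it locally. The fragment below is a minimal sketch of the relevant <code>elastic-agent.yml</code> section; the output name is a placeholder and field availability may vary by version.</p>
<pre><code class="language-yaml"># elastic-agent.yml fragment (standalone mode): ship the agent's own health data
agent.monitoring:
  enabled: true        # turn on self-monitoring
  logs: true           # collect the agent's own logs
  metrics: true        # collect CPU, memory, and throughput metrics about the agent
  use_output: default  # placeholder: send monitoring data to the named output
</code></pre>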
<h2>The Hybrid Agent: Beats Data and OTel Data, Simultaneously</h2>
<p>One of the most practically significant capabilities introduced in 9.3 is what Elastic calls the <em>Hybrid Agent</em>: an Elastic Agent that can run both Beats-based receivers and native OTel receivers in the same pipeline, at the same time.
This does not change anything for existing installations.</p>
<p>This matters enormously for real-world adoption. Most organisations arriving at OTel in 2025 and 2026 are not starting from a blank slate.
They have years of investment in Beats-based integrations: Filebeat-powered log collection, Metricbeat-powered host metrics, bespoke ingest pipelines in Elasticsearch that normalise and enrich that data into ECS (Elastic Common Schema) format.
The business value locked in those integrations (the dashboards, the alerts, the correlation logic) is not something they can afford to throw away in order to &quot;go OTel.&quot;</p>
<p>The Hybrid Agent solves this by making the two worlds coexist.
For example, in a single agent policy you can simultaneously configure:</p>
<ul>
<li>A <code>filebeatreceiver</code> collecting application logs in ECS format, routed through your existing ingest pipeline to its existing data stream</li>
<li>A native OTel <code>filelog</code> receiver collecting OTel-native telemetry from your new services instrumented with the OTel SDK, stored in OTel-native data streams without touching ingest pipelines</li>
<li>An OTel <code>hostmetrics</code> receiver collecting system metrics in semantic convention format alongside your existing Metricbeat-derived system metrics</li>
</ul>
<p><img src="https://www.elastic.co/observability-labs/assets/images/centrally-managed-otel-collectors-with-elastic-fleet/hybrid-agent.jpg" alt="Hybrid Agent" /></p>
<p>The two lanes are independent.
Beats-receiver data travels through ingest pipelines and lands in ECS-formatted data streams, exactly as it always has.
Native OTel data follows OTel semantic conventions and is stored directly in OTel-native data streams, bypassing ingest pipelines.
Your existing dashboards and alerts continue to work. Your new OTel-native workloads get the full OTel experience.
The same agent, the same Fleet policy, the same management console.</p>
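<p>To make the two lanes concrete, the sketch below shows roughly what such a hybrid collector configuration could look like on the agent. It is illustrative only: in practice Fleet generates this from the Agent Policy, the paths and endpoint are placeholders, and the exact receiver and exporter fields may differ from what EDOT emits.</p>
<pre><code class="language-yaml"># Hypothetical hybrid EDOT Collector configuration (sketch, not actual Fleet output)
receivers:
  filebeatreceiver:          # Beats lane: existing Filebeat config, ECS-formatted output
    filebeat:
      inputs:
        - type: filestream
          id: legacy-app-logs
          paths:
            - /var/log/legacy-app/*.log
  filelog:                   # OTel lane: native log collection for new services
    include:
      - /var/log/new-service/*.log
  hostmetrics:               # OTel lane: system metrics in semantic-convention format
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
  otlp:                      # OTel lane: telemetry from services instrumented with OTel SDKs
    protocols:
      grpc:
      http:

exporters:
  elasticsearch:
    endpoints: [&quot;https://my-deployment.es.example.com:443&quot;]  # placeholder endpoint
    api_key: ${env:ELASTIC_API_KEY}

service:
  pipelines:
    logs/beats:
      receivers: [filebeatreceiver]
      exporters: [elasticsearch]
    logs/otel:
      receivers: [filelog, otlp]
      exporters: [elasticsearch]
    metrics/otel:
      receivers: [hostmetrics, otlp]
      exporters: [elasticsearch]
</code></pre>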
<p>This co-existence is the practical answer to the question every platform team eventually faces: &quot;We want to adopt OTel properly but we can't break what we already have.&quot;
The Hybrid Agent lets you migrate incrementally, service by service, on your timeline.</p>
<h2>The Integration Catalogue: Turning Configuration into a One-Click Operation</h2>
<p>Configuration management at scale is only as good as the configurations themselves.
Elastic's integration catalogue — over 500 packages covering everything from NGINX and PostgreSQL to AWS CloudTrail and Kubernetes — extends naturally to the Hybrid Agent model.</p>
<p>From 9.3 onwards, the catalogue includes <em>OTel integration packages</em> alongside the existing Beats-based ones. Each OTel package contains two components:</p>
<ul>
<li>An <em>Input package</em>: the configuration for the corresponding OTel receiver (receivers, processors, pipeline wiring), ready to be applied to a Hybrid Agent policy</li>
<li>A <em>Content package</em>: the assets associated with the application (pre-built dashboards, alerts, index templates, and saved queries), all calibrated for OTel semantic convention data</li>
</ul>
<p>When an operator adds an OTel integration to an Agent Policy in Fleet, the receiver configuration is pushed to all enrolled agents.
When those agents start ingesting data and it arrives in Elasticsearch, the content package assets are installed automatically, based on metadata in the incoming documents.
The dashboard is ready before you've had time to wonder where it is.</p>
<p>The same policy can hold both OTel integrations and legacy Beats integrations.
A real-world agent policy might simultaneously collect system metrics via the OTel <code>hostmetrics</code> receiver, application logs via <code>filebeat</code> receiver, and APM data via OTLP — all from one policy, all managed from Fleet, all visible in a unified Kibana experience.</p>
<p>A technical walkthrough of how this is done for NGINX data collection can be found <a href="https://www.elastic.co/observability-labs/blog/hybrid-elastic-agent-opentelemetry-integration">here</a> for reference.
Management of Elastic Agents currently uses the existing Fleet protocol; in the near future this will move to OpAMP, allowing Fleet to manage third-party OTel collectors as well.</p>
<p>For organisations on platforms not yet in Elastic's OS support matrix, third-party OTel Collectors (such as Red Hat's OpenShift-native collector) can send data to Elastic using the OTLP exporter and be observed alongside all other collectors in the fleet.</p>
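<p>For those collectors, the only Elastic-specific piece is an OTLP exporter pointing at your deployment's OTLP intake. Below is a hedged sketch; the endpoint and API key are placeholders, and your distribution's receiver set will differ.</p>
<pre><code class="language-yaml"># Sketch: a third-party OTel Collector exporting to Elastic over OTLP
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp/elastic:
    endpoint: &quot;https://my-deployment.ingest.example.elastic.cloud:443&quot;  # placeholder OTLP endpoint
    headers:
      Authorization: &quot;ApiKey ${env:ELASTIC_API_KEY}&quot;

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlp/elastic]
    metrics:
      receivers: [otlp]
      exporters: [otlp/elastic]
    traces:
      receivers: [otlp]
      exporters: [otlp/elastic]
</code></pre>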
<h2>What This Means in Practice: A Migration Story</h2>
<p>Consider a mid-sized platform team operating 200 Linux hosts across three regions, currently running Elastic Agent 8.x with a mix of Filebeat and Metricbeat integrations.
Their new services are being instrumented with the OTel SDK and they want to standardise on OTel going forward without disrupting the monitoring coverage they already have.</p>
<p>With a Fleet-managed upgrade to 9.3, their existing agents become Hybrid Agents automatically.
Their Filebeat and Metricbeat configurations are internally translated to Beats receiver configurations and continue to run unmodified.
Their existing dashboards still populate. Their ingest pipelines still fire. Nothing breaks.</p>
<p>They then add OTel integration packages to their Fleet policies for each new service. The OTel-instrumented microservices start sending OTLP data, received by native OTel receivers in the same agents.
OTel-native dashboards appear automatically in Kibana. They now have both data universes in one place, managed from one console, visible in one interface.</p>
<p>Over the following quarters, as Beats-based integrations for their remaining services are superseded by OTel equivalents in the catalogue, they migrate them one by one, updating the Agent Policy in Fleet and watching the transition happen across all 200 hosts simultaneously, without touching a single one directly.</p>
<h2>Looking Forward</h2>
<p>Elastic has made a clear architectural bet: OpenTelemetry is the future of observability data collection, and the right response to that future is not to build a parallel OTel tool alongside the existing stack — it is to evolve the existing stack into OTel.
The Hybrid Agent and EDOT Collector are the result of that bet.</p>
<p>Fleet central management is the operational layer that makes that bet practical at scale.
OpenTelemetry gives you standardised, vendor-neutral instrumentation.
Fleet gives you the operational control plane to manage those collectors like the production infrastructure they are, not like artisanal YAML files scattered across your estate.</p>
<p>The collector sprawl problem is solvable.
The answer is a managed, policy-driven, centrally observable fleet of EDOT Collectors, and in Elastic 9.3, that answer is production-ready today.</p>
]]></content:encoded>
            <category>observability-labs</category>
            <enclosure url="https://www.elastic.co/observability-labs/assets/images/centrally-managed-otel-collectors-with-elastic-fleet/header.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Deploying Elastic Agent with Confluent Cloud's Elasticsearch Connector]]></title>
            <link>https://www.elastic.co/observability-labs/blog/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector</link>
            <guid isPermaLink="false">deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector</guid>
            <pubDate>Wed, 22 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Confluent Cloud users can now use the updated Elasticsearch Sink Connector with Elastic Agent and Elastic Integrations for a fully-managed and highly scalable data ingest architecture.]]></description>
            <content:encoded><![CDATA[<p>Elastic and Confluent are key technology partners and we're pleased to announce new investments in that partnership. Built by the original creators of Apache Kafka®, Confluent's data streaming platform is a key component of many Enterprise ingest architectures, and it ensures that customers can guarantee delivery of critical Observability and Security data into their Elasticsearch clusters. Together, we've been working on key improvements to how our products fit together. With <a href="https://www.elastic.co/blog/elastic-agent-output-kafka-data-collection-streaming">Elastic Agent's new Kafka output</a> and Confluent's newly improved <a href="https://www.confluent.io/hub/confluentinc/kafka-connect-elasticsearch/">Elasticsearch Sink Connectors</a> it's never been easier to seamlessly collect data from the edge, stream it through Kafka, and into an Elasticsearch cluster.</p>
<p>In this blog, we examine a simple way to integrate Elastic Agent with Confluent Cloud's Kafka offering to reduce the operational burden of ingesting business-critical data.</p>
<h2>Benefits of Elastic Agent and Confluent Cloud</h2>
<p>When combined, Elastic Agent and Confluent Cloud's updated Elasticsearch Sink connector provide a myriad of advantages for organizations of all sizes. This combined solution offers flexibility in handling any type of data ingest workload in an efficient and resilient manner.</p>
<h3>Fully Managed</h3>
<p>When combined, Elastic Cloud Serverless and Confluent Cloud provide users with a fully managed service. This makes it effortless to deploy and ingest nearly unlimited data volumes without having to worry about nodes, clusters, or scaling.</p>
<h3>Full Elastic Integrations Support</h3>
<p>Sending data through Kafka is fully supported with any of the 300+ Elastic Integrations. In this blog post, we outline how to set up the connection between the two platforms. This ensures you can benefit from our investments in built-in alerts, SLOs, AI Assistants, and more.</p>
<h3>Decoupled Architecture</h3>
<p>Kafka acts as a resilient buffer between data sources (such as Elastic Agent and Logstash) and Elasticsearch, decoupling data producers from consumers. This can significantly reduce total cost of ownership by enabling you to size your Elasticsearch cluster based on typical data ingest volume, not maximum ingest volume. It also ensures system resilience during spikes in data volume.</p>
<h3>Ultimate control over your data</h3>
<p>With our new Output per Integration capability, customers can now send different data to different destinations using the same agent. Customers can easily send security logs directly to Confluent Cloud/Kafka, which can provide delivery guarantees, while sending less critical application logs and system metrics directly to Elasticsearch.</p>
<h2>Deploying the reference architecture</h2>
<p>In the following sections, we will walk you through one of the ways Confluent Kafka can be integrated with Elastic Agent and Elasticsearch using Confluent Cloud's Elasticsearch Sink Connector. As with any streaming and data collection technology, there are many ways a pipeline can be configured depending on the particular use case. This blog post will focus on a simple architecture that can be used as a starting point for more complex deployments.</p>
<p>Some of the highlights of this architecture are:</p>
<ul>
<li>Dynamic Kafka topic selection at Elastic Agents</li>
<li>Elasticsearch Sink Connectors for fully managed transfer from Confluent Kafka to Elasticsearch</li>
<li>Processing data leveraging Elastic's 300+ Integrations</li>
</ul>
<h3>Prerequisites</h3>
<p>Before getting started ensure you have a Kafka cluster deployed in Confluent Cloud, an Elasticsearch cluster or project deployed in Elastic Cloud, and an installed and enrolled Elastic Agent.</p>
<h3>Configure Confluent Cloud Kafka Cluster for Elastic Agent</h3>
<p>Navigate to the Kafka cluster in Confluent Cloud and select <code>Cluster Settings</code>. Locate and note the <code>Bootstrap Server</code> address; we will need this value later when we create the Kafka output in Fleet.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/confluent-cluster-settings.png" alt="Confluent Cluster Settings" /></p>
<p>Navigate to <code>Topics</code> in the left-hand navigation menu and create two topics:</p>
<ol>
<li>A topic named <code>logs</code></li>
<li>A topic named <code>metrics</code></li>
</ol>
<p>Next, navigate to <code>API Keys</code> in the left-hand navigation menu:</p>
<ol>
<li>Click <code>+ Add API Key</code></li>
<li>Select the <code>Service Account</code> API key type</li>
<li>Provide a meaningful name for this API Key</li>
<li>Grant the key write permission to the <code>metrics</code> and <code>logs</code> topics</li>
<li>Create the key</li>
</ol>
<p>Note the provided Key and Secret; we will need them later when we configure the Kafka output in Fleet.</p>
<h3>Configure Elasticsearch and Elastic Agent</h3>
<p>In this section, we will configure the Elastic Agent to send data to Confluent Cloud's Kafka cluster and we will configure Elasticsearch so it can receive data from the Confluent Cloud Elasticsearch Sink Connector.</p>
<h4>Configure Elastic Agent to send data to Confluent Cloud</h4>
<p>Elastic Fleet simplifies sending data to Kafka and Confluent Cloud. With Elastic Agent, a Kafka &quot;output&quot; can be easily attached to all data coming from an agent or it can be applied only to data coming from a specific data source.</p>
<p>Find <code>Fleet</code> in the left-hand navigation and click the <code>Settings</code> tab. There, find the <code>Outputs</code> section and click <code>Add Output</code>.</p>
<p>Perform the following steps to configure the new Kafka output:</p>
<ol>
<li>Provide a <code>Name</code> for the output</li>
<li>Set the <code>Type</code> to <code>Kafka</code></li>
<li>Populate the <code>Hosts</code> field with the <code>Bootstrap Server</code> address we noted earlier.</li>
<li>Under <code>Authentication</code>, populate the <code>Username</code> with the <code>API Key</code> and the <code>Password</code> with the <code>Secret</code> we noted earlier <img src="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/fleet-output-configuration.png" alt="Elastic Fleet Output" /></li>
<li>Under <code>Topics</code>, select <code>Dynamic Topic</code> and set <code>Topic from field</code> to <code>data_stream.type</code> <img src="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/fleet-output-configuration-dynamic-topic.png" alt="Kafka Output Dynamic Topic Configuration" /></li>
<li>Click <code>Save and apply settings</code></li>
</ol>
<p>Next, we will navigate to the <code>Agent Policies</code> tab in Fleet and click to edit the Agent Policy that we want to attach the Kafka output to. With the Agent Policy open, click the <code>Settings</code> tab and change <code>Output for integrations</code> and <code>Output for agent monitoring</code> to the Kafka output we just created.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/fleet-agent-policy-kafka.png" alt="Agent Policy Output Configuration" /></p>
<p><strong>Selecting an Output per Elastic Integration</strong>: To set the Kafka output to be used for specific data sources, see the <a href="https://www.elastic.co/guide/en/fleet/master/integration-level-outputs.html">integration-level outputs documentation</a>.</p>
<p><strong>A note about Topic Selection</strong>: The <code>data_stream.type</code> field is a reserved field which Elastic Agent automatically sets to <code>logs</code> if the data being sent is a log and to <code>metrics</code> if it is a metric. Enabling dynamic topic selection using <code>data_stream.type</code> will therefore cause Elastic Agent to route metrics to the <code>metrics</code> topic and logs to the <code>logs</code> topic automatically. For more information on topic selection, see the Kafka output's <a href="https://www.elastic.co/guide/en/fleet/master/kafka-output-settings.html#_topics_settings">Topics settings</a> documentation.</p>
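<p>For reference, the Fleet settings above correspond roughly to the following output definition in a standalone <code>elastic-agent.yml</code>. This is a hedged sketch: the bootstrap address and credentials are placeholders, and exact field names can vary between versions.</p>
<pre><code class="language-yaml"># Illustrative standalone Elastic Agent Kafka output (placeholders throughout)
outputs:
  confluent-kafka:
    type: kafka
    hosts:
      - &quot;pkc-xxxxx.us-east-1.aws.confluent.cloud:9092&quot;  # Bootstrap Server from Cluster Settings
    username: &quot;YOUR_CONFLUENT_API_KEY&quot;
    password: &quot;YOUR_CONFLUENT_API_SECRET&quot;
    sasl:
      mechanism: PLAIN
    ssl:
      enabled: true
    topic: &quot;%{[data_stream.type]}&quot;  # resolves to the logs or metrics topic per event
</code></pre>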
<h4>Configuring a publishing endpoint in Elasticsearch</h4>
<p>Next, we will set up two publishing endpoints (data streams) for the Confluent Cloud Sink Connector to use when publishing documents to Elasticsearch:</p>
<ol>
<li>We will create a data stream <code>logs-kafka.reroute-default</code> for handling <strong>logs</strong></li>
<li>We will create a data stream <code>metrics-kafka.reroute-default</code> for handling <strong>metrics</strong></li>
</ol>
<p><img src="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/sink-connector-overview.png" alt="Sink Connector Overview" /></p>
<p>If we were to leave the data in those data streams as-is, the data would be available but we would find the data is unparsed and lacking vital enrichment. So we will also create two index templates and two ingest pipelines to make sure the data is processed by our Elastic Integrations.</p>
<h4>Creating the Elasticsearch Index Templates and Ingest Pipelines</h4>
<p>The following steps use <a href="https://www.elastic.co/guide/en/kibana/current/devtools-kibana.html">Dev Tools in Kibana</a>, but all of these steps can be completed via the REST API or using the relevant user interfaces in Stack Management.</p>
<p>First, we will create the Index Template and Ingest Pipeline for handling <strong>logs</strong>:</p>
<pre><code class="language-json">PUT _index_template/logs-kafka.reroute
{
  &quot;template&quot;: {
    &quot;settings&quot;: {
      &quot;index.default_pipeline&quot;: &quot;logs-kafka.reroute&quot;
    }
  },
  &quot;index_patterns&quot;: [
    &quot;logs-kafka.reroute-default&quot;
  ],
  &quot;data_stream&quot;: {}
}
</code></pre>
<pre><code class="language-json">PUT _ingest/pipeline/logs-kafka.reroute
{
  &quot;processors&quot;: [
    {
      &quot;reroute&quot;: {
        &quot;dataset&quot;: [
          &quot;{{data_stream.dataset}}&quot;
        ],
        &quot;namespace&quot;: [
          &quot;{{data_stream.namespace}}&quot;
        ]
      }
    }
  ]
}
</code></pre>
<p>Next, we will create the Index Template and Ingest Pipeline for handling <strong>metrics</strong>:</p>
<pre><code class="language-json">PUT _index_template/metrics-kafka.reroute
{
  &quot;template&quot;: {
    &quot;settings&quot;: {
      &quot;index.default_pipeline&quot;: &quot;metrics-kafka.reroute&quot;
    }
  },
  &quot;index_patterns&quot;: [
    &quot;metrics-kafka.reroute-default&quot;
  ],
  &quot;data_stream&quot;: {}
}
</code></pre>
<pre><code class="language-json">PUT _ingest/pipeline/metrics-kafka.reroute
{
  &quot;processors&quot;: [
    {
      &quot;reroute&quot;: {
        &quot;dataset&quot;: [
          &quot;{{data_stream.dataset}}&quot;
        ],
        &quot;namespace&quot;: [
          &quot;{{data_stream.namespace}}&quot;
        ]
      }
    }
  ]
}
</code></pre>
<p><strong>A note about rerouting</strong>: As a practical example of how this works, a document containing a Linux network metric would first land in <code>metrics-kafka.reroute-default</code>. The ingest pipeline would inspect the document, find <code>data_stream.dataset</code> set to <code>system.network</code> and <code>data_stream.namespace</code> set to <code>default</code>, and use these values to reroute the document from <code>metrics-kafka.reroute-default</code> to <code>metrics-system.network-default</code>, where it would be processed by the <code>system</code> integration.</p>
<h3>Configure the Confluent Cloud Elasticsearch Sink Connector</h3>
<p>Now it's time to configure the Confluent Cloud Elasticsearch Sink Connector. We will perform the following steps twice and create two separate connectors, one connector for <strong>logs</strong> and one connector for <strong>metrics</strong>. Where the required settings differ, we will highlight the correct values.</p>
<p>Navigate to your Kafka cluster in Confluent Cloud and select Connectors from the left-hand navigation menu. On the Connectors page, select <code>Elasticsearch Service Sink</code> from the catalog of available connectors.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/sink-connector-install.png" alt="Sink Connector Setup" /></p>
<p>Confluent Cloud presents a simplified workflow for the user to configure a connector. Here we will walk through each step of the process:</p>
<h4>Step 1: Topic Selection</h4>
<p>First, we will select the topic that the connector will consume data from based on which connector we are deploying:</p>
<ul>
<li>When deploying the Elasticsearch Sink Connector for <strong>logs</strong>, select the <code>logs</code> topic.</li>
<li>When deploying the Elasticsearch Sink Connector for <strong>metrics</strong>, select the <code>metrics</code> topic.</li>
</ul>
<h4>Step 2: Kafka Credentials</h4>
<p>Choose <code>KAFKA_API_KEY</code> as the cluster authentication mode. Provide the <code>API Key</code> and <code>Secret</code> we noted earlier when gathering the required Confluent Cloud cluster information. <img src="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/sink-connector-credentials.png" alt="Sink Connector Credentials" /></p>
<h4>Step 3: Authentication</h4>
<p>Provide the Elasticsearch Endpoint address of our Elasticsearch cluster as the <code>Connection URI</code>. The <code>Connection user</code> and <code>Connection password</code> are the authentication information for the account in Elasticsearch that will be used by the Elasticsearch Sink Connector to write data to Elasticsearch.</p>
<h4>Step 4: Configuration</h4>
<p>In this step we will keep the <code>Input Kafka record value format</code> set to <code>JSON</code>. Next, expand <code>Advanced Configuration</code>.</p>
<ol>
<li>We will set <code>Data Stream Dataset</code> to <code>kafka.reroute</code></li>
<li>We will set <code>Data Stream Type</code> based on the connector we are deploying:
<ul>
<li>When deploying the Elasticsearch Sink Connector for logs, we will set <code>Data Stream Type</code> to <code>logs</code></li>
<li>When deploying the Elasticsearch Sink Connector for metrics, we will set <code>Data Stream Type</code> to <code>metrics</code></li>
</ul>
</li>
<li>The correct values for other settings will depend on the specific environment.</li>
</ol>
<h4>Step 5: Sizing</h4>
<p>In this step, notice that Confluent Cloud provides a recommended minimum number of tasks for our deployment. Following the recommendation here is a good starting place for most deployments.</p>
<h4>Step 6: Review and Launch</h4>
<p>Review the <code>Connector configuration</code> and <code>Connector pricing</code> sections and if everything looks good, it's time to click <code>continue</code> and launch the connector! The connector may report as provisioning but will soon start consuming data from the Kafka topic and writing it to the Elasticsearch cluster.</p>
<p>You can now navigate to Discover in Kibana and find your logs flowing into Elasticsearch! Also check out the real-time metrics that Confluent Cloud provides for your new Elasticsearch Sink Connector deployments.</p>
<p>If you have only deployed the first <code>logs</code> sink connector, you can now repeat the steps above to deploy the second <code>metrics</code> sink connector.</p>
<h2>Enjoy your fully managed data ingest architecture</h2>
<p>If you followed the steps above, congratulations. You have successfully:</p>
<ol>
<li>Configured Elastic Agent to send logs and metrics to dedicated topics in Kafka</li>
<li>Created publishing endpoints (data streams) in Elasticsearch dedicated to handling data from the Elasticsearch Sink Connector</li>
<li>Configured managed Elasticsearch Sink connectors to consume data from multiple topics and publish that data to Elasticsearch</li>
</ol>
<p>Next you should enable additional integrations, deploy more Elastic Agents, explore your data in Kibana, and enjoy the benefits of a fully managed data ingest architecture with Elastic Serverless and Confluent Cloud!</p>
]]></content:encoded>
            <category>observability-labs</category>
            <enclosure url="https://www.elastic.co/observability-labs/assets/images/deploying-elastic-agent-with-confluent-clouds-elasticsearch-connector/title.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Pivoting Elastic's Data Ingestion to OpenTelemetry]]></title>
            <link>https://www.elastic.co/observability-labs/blog/elastic-agent-pivot-opentelemetry</link>
            <guid isPermaLink="false">elastic-agent-pivot-opentelemetry</guid>
            <pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic has fully embraced OpenTelemetry as the backbone of its data ingestion strategy, aligning with the open-source community and contributing to make it the best data collection platform for a broad user base. This move benefits users by providing enhanced flexibility, efficiency, and control over telemetry data.]]></description>
            <content:encoded><![CDATA[<h1>Introduction</h1>
<p>Elastic has fully embraced OpenTelemetry as the backbone of its data ingestion strategy, aligning with the open-source community and contributing to make it the best data collection platform for a broad user base. This move benefits users by providing enhanced flexibility, efficiency, and control over telemetry data.</p>
<h1>Why OpenTelemetry?</h1>
<p>OpenTelemetry provides a powerful set of capabilities that make it a compelling choice for open-source-focused users. Elastic is re-architecting its data ingest tools around OpenTelemetry to offer users vendor-agnostic flexibility, performance optimization through OTel's efficient data model for correlating telemetry, and enhanced flexibility and control over data pipelines. This move brings the benefits of open-source telemetry to Elastic users.</p>
<p>Elastic engineers are active contributors in several areas of the OTel project. Demonstrating its commitment to open source, Elastic continues to make significant <a href="https://opentelemetry.devstats.cncf.io/d/5/companies-table?orgId=1%5C&amp;var-period_name=Last%20year&amp;var-metric=contributions">contributions to OpenTelemetry</a>.</p>
<h1>OpenTelemetry as the Core of Elastic's Data Ingestion</h1>
<p>Elastic is transforming its data ingestion strategy by basing all ingestion mechanisms on OpenTelemetry components. Elastic currently supports the following OTel-based ingest architecture, which supports OTel SDKs and Collectors from upstream OTel or from Elastic's Distribution of OpenTelemetry (EDOT).</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/elastic-agent-pivot-opentelemetry/edot-components.png" alt="EDOT components" /></p>
<p>This marks a fundamental shift, ensuring a more standardized and scalable telemetry pipeline. All the existing Elastic ingest components will become OTel-based.</p>
<table>
<thead>
<tr>
<th>Component</th>
<th>OTel-based evolution</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Beats</strong></td>
<td>Beats architecture will be based on OTel.</td>
</tr>
<tr>
<td><strong>Elastic Agent</strong></td>
<td>Agent architecture will be based on OTel to support both Beats-based inputs and OTel receivers.</td>
</tr>
<tr>
<td><strong>Integrations</strong></td>
<td>Integrations catalogue will additionally include OTel-based modules for ease of configuration.</td>
</tr>
<tr>
<td><strong>Fleet central management</strong></td>
<td>Fleet will support monitoring of Elastic OTel collectors.</td>
</tr>
</tbody>
</table>
<p>Let's discuss how each component of Elastic's data ingestion platform will be based on an OpenTelemetry collector whilst still providing the same functionality to the user.</p>
<h2>Beats</h2>
<p>Elastic's traditional data shippers will be re-architected as OpenTelemetry Collectors, aligning with OTel's extensibility model. The current Beats architecture is essentially a pipeline of a few stages, as shown in the diagram below: an Input, Processors for enrichment, Queuing of events, and an Output that batches and writes the data to a specific destination.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/elastic-agent-pivot-opentelemetry/filebeat.png" alt="filebeat" /></p>
<h2>The Beats Receiver Concept</h2>
<p>To ensure a smooth transition without major disruption, a <em>Beats receiver</em> concept is being implemented. These receivers (like <code>filebeatreceiver</code> or <code>metricbeatreceiver</code>) package Beats inputs and processors as native receivers integrated into the OpenTelemetry Collector. They support all existing inputs and processors, guaranteeing that the final architecture accepts the user's current configuration and delivers the same functionality as today's Beats, all without introducing any breaking changes.</p>
<p>An OTel-based Beats architecture will see the Input phase embedded as an OTel receiver (e.g. <code>filebeatreceiver</code> representing the functionality of <code>filebeat</code>). This receiver will only be available as part of Elastic's distribution of OTel, in support of our current user base, and not as functionality available upstream.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/elastic-agent-pivot-opentelemetry/filebeatreceiver.png" alt="filebeat" /></p>
<p>All the remaining components of the pipeline will be based on OTel. The new Beat will accept the same filebeat configuration (as an example) and transform it into an OTel-based configuration to avoid any deployment disruption. Note that in this architecture the Beats will continue to support only ECS-formatted data; to keep the Beat functionality in line with what exists today, the Elasticsearch exporter (as an example) will output ECS-formatted data only.</p>
<p>The following diagram illustrates the <code>beatreceiver</code> concept by showing how a basic <code>filebeat</code> configuration is automatically translated into an OpenTelemetry-based configuration. This new configuration retains the original inputs and processors but leverages the native OpenTelemetry pipeline and exporter to achieve the same overall <code>filebeat</code> functionality. Existing <code>filebeat</code> configurations will be automatically converted, eliminating the need for manual adjustments or introducing breaking changes.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/elastic-agent-pivot-opentelemetry/elastic-agent-otel-config.png" alt="Filebeat OTel config" /></p>
<h2>Elastic Agent</h2>
<p>Elastic Agent is a unified agent for data collection, security, and observability. It can also be deployed in an OpenTelemetry-only mode, enabling native OTel workflows. Today, Elastic Agent is a supervisor that manages multiple Beats as sub-processes to provide a more comprehensive data collection tool, translating the Agent Policy received from Fleet into configuration understood by the various sub-processes.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/elastic-agent-pivot-opentelemetry/elastic-agent-architecture.png" alt="Elastic Agent Architecture" /></p>
<p>Expanding on the Beats receiver concept described above, the Elastic Agent, which can already be deployed as an OTel collector (see <a href="https://www.elastic.co/observability-labs/blog/elastic-distributions-opentelemetry-ga">blog</a>), will also be modified to a much simpler OTel-based architecture built on these receivers. As shown below, this architecture will streamline the components within the Elastic Agent and remove duplicated functionality such as queuing and output. Whilst supporting the current functionality, these changes will reduce the agent footprint and reduce the number of connections the agent opens to downstream pipeline elements (such as Elasticsearch clusters, Logstash, or Kafka brokers).</p>
<p>By moving to an OTel-based architecture, Elastic Agent can operate as a truly hybrid agent: one that not only provides the Beats functionality but also allows our users to create OTel-native pipelines and take advantage of the wealth of functionality available in the open-source project.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/elastic-agent-pivot-opentelemetry/elastic-agent-otel-architecture.png" alt="Elastic Agent OTel Architecture" /></p>
<p>Elastic's commitment to OpenTelemetry will deepen through increased contributions, resulting in OpenTelemetry receivers gradually superseding Beats receiver features. This evolution will eventually reduce the need for a distinct Beats receiver within the Elastic Agent architecture. The envisioned architecture will empower the Elastic Agent to transmit data in OTLP format as well, granting users the flexibility to select any OTLP-compatible backend, thereby upholding the principle of vendor neutrality.</p>
<h2>Fleet &amp; Integrations: Managing OpenTelemetry at Scale</h2>
<p>Elastic's centralized management system will support OpenTelemetry-based configurations, making large-scale deployments easier to manage. Managing thousands of telemetry agents at scale presents a significant challenge. Elastic's <strong>Fleet &amp; Integrations</strong> simplify this process by providing robust lifecycle management for these new OpenTelemetry-based Elastic agents.</p>
<p><strong>Key Capabilities Offered:</strong></p>
<ul>
<li>
<p><strong>Scalability:</strong> Manage up to 100K+ agents across distributed environments.</p>
</li>
<li>
<p><strong>Automated Upgrades:</strong> Staged rollouts and automatic upgrades ensure minimal downtime.</p>
</li>
<li>
<p><strong>Monitoring &amp; Diagnostics:</strong> Real-time status updates, failure detection, and diagnostic downloads improve system reliability.</p>
</li>
<li>
<p><strong>Policy-Based Configuration Management:</strong> Enables centralized control over agent configurations, improving consistency across deployments.</p>
</li>
<li>
<p><strong>Pre-Built Integrations:</strong> Elastic offers a catalog of <strong>470+ pre-built integrations</strong>, allowing users to ingest data seamlessly from various sources. These will also include OTel based packages making configuration much more efficient across a large deployment.</p>
</li>
</ul>
<p>The goal is for Fleet to also provide monitoring capabilities for native OTel collectors in a vendor-agnostic fashion.</p>
<h1>Conclusion</h1>
<p>Elastic's adoption of OpenTelemetry marks a significant milestone in the evolution of open-source observability. By standardizing on OpenTelemetry, Elastic is ensuring that its data ingestion strategy remains <strong>open, scalable, and future-proof</strong>.</p>
<p>For open-source users, this shift means:</p>
<ul>
<li>
<p>Greater interoperability across observability tools.</p>
</li>
<li>
<p>Enhanced flexibility in choosing telemetry backends.</p>
</li>
<li>
<p>A stronger commitment to <strong>community-driven</strong> observability standards.</p>
</li>
<li>
<p>Existing Beats and Elastic Agent users can <strong>seamlessly adopt OpenTelemetry</strong> without rearchitecting their pipelines.</p>
</li>
<li>
<p>OpenTelemetry users can <strong>integrate with Elastic's observability stack</strong> without additional complexity.</p>
</li>
</ul>
<p>Stay tuned for more updates as Elastic continues to expand its OpenTelemetry-based data collection capabilities! In the meantime, here are some other references:</p>
<ul>
<li>
<p><a href="https://www.elastic.co/observability-labs/blog/elastic-distributions-opentelemetry-ga">Elastic Distributions of OpenTelemetry (EDOT) Now GA</a></p>
</li>
<li>
<p><a href="https://www.elastic.co/observability-labs/blog/k8s-discovery-with-EDOT-collector">Dynamic workload discovery on Kubernetes now supported with EDOT Collector</a></p>
</li>
<li>
<p><a href="https://www.elastic.co/observability-labs/blog/introducing-the-ottl-playground-for-opentelemetry">Introducing the OTTL Playground for OpenTelemetry</a></p>
</li>
</ul>
]]></content:encoded>
            <category>observability-labs</category>
            <enclosure url="https://www.elastic.co/observability-labs/assets/images/elastic-agent-pivot-opentelemetry/self-service-blog-image-templates.jpg" length="0" type="image/jpg"/>
        </item>
    </channel>
</rss>