<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Elastic Observability Labs - Articles by Jesse Miller</title>
        <link>https://www.elastic.co/observability-labs</link>
        <description>Trusted observability news &amp; research from the team at Elastic.</description>
        <lastBuildDate>Tue, 21 Apr 2026 18:57:28 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Elastic Observability Labs - Articles by Jesse Miller</title>
            <url>https://www.elastic.co/observability-labs/assets/observability-labs-thumbnail.png</url>
            <link>https://www.elastic.co/observability-labs</link>
        </image>
        <copyright>© 2026. Elasticsearch B.V. All Rights Reserved</copyright>
        <item>
            <title><![CDATA[Kubernetes Observability from alert to root cause: Dashboards, Alerts, and Anomaly Detection with Elastic]]></title>
            <link>https://www.elastic.co/observability-labs/blog/kubernetes-dashboards-alerts-anomaly-detection</link>
            <guid isPermaLink="false">kubernetes-dashboards-alerts-anomaly-detection</guid>
            <pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Kubernetes observability with Elastic includes dashboards, alert rules, and ML anomaly detection for alerts with root-cause context.]]></description>
            <content:encoded><![CDATA[<h1>Kubernetes observability with Elastic: Dashboards, Alerts, and Anomaly Detection</h1>
<p>Kubernetes observability with Elastic is built for the operator who gets paged at 3 AM. That operator is often in a terminal, a chat tool, or an IDE. They need an answer that is grounded in what is happening in the cluster right now.</p>
<p>The new <a href="https://www.elastic.co/docs/reference/integrations/kubernetes">Elastic Kubernetes integration</a> is built for that operator. It includes dashboards with drilldowns, alert rule templates, and ML anomaly detection jobs. Elastic also offers Agentic Investigations, which run investigations automatically.</p>
<p>This blog covers the foundational observability components (dashboards, drilldowns, alert templates, and more), while a part 2 on the agentic investigations will cover workflows, agent skills, and MCP tools and views.</p>
<p>The new Kubernetes integration content in this post is generally available across Elastic Cloud Hosted, Serverless, and self-managed deployments.</p>
<hr />
<h2>Dashboards designed for drill-down, not just display</h2>
<p>The new Kubernetes dashboards are organized around a three-tier design: a cluster Overview that surfaces what needs attention at a glance, object summary pages for clusters, nodes, namespaces, workloads, and pods, and object detail pages that give you the full picture for any single entity.</p>
<p>Every layer connects to the next: click any entity in a summary table and choose to apply it as a filter on the current view or to open its dedicated detail page.</p>
<p>Here's what that looks like when something's actually wrong:</p>
<p><strong>Following a restart cascade from overview to container</strong></p>
<p><strong>Overview:</strong> The Overview surfaces what needs attention across your cluster.
You can see top pods by CPU, top namespaces by container restarts, and top nodes by memory utilization in one screen.
When the &quot;container restarts&quot; panel starts climbing, you know where to look.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/overview-dashboard.jpg" alt="Kubernetes observability with Elastic, cluster overview dashboard showing top pods by CPU and container restarts by namespace" /></p>
<p><strong>Namespaces Overview:</strong> Click into the flagged namespace with 1232 restarts and CPU limit utilization at 116%.
The detail view plots CPU and memory against requests and limits over time.
This shows both the size and duration of the overage.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/namespace-overview.jpg" alt="Kubernetes observability with Elastic, namespace overview showing multiple namespaces" /></p>
<p><strong>Namespace Details:</strong> The namespace detail view breaks down the pods running in this namespace.
Click the pod driving the restarts.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/namespace-details.jpg" alt="Kubernetes observability with Elastic, namespace detail view showing CPU limit utilization at 116% and container restart count" /></p>
<p><strong>Pod Details:</strong> The pod detail dashboard is organized into capacity, metrics, and containers sections.
Container restarts are flagged in red at the top of the page.
Most panels are metric-driven, and the dashboard also links to correlated pod logs in Discover.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/pod-details.jpg" alt="Kubernetes observability with Elastic, pod detail dashboard with container restart alerts, capacity metrics, and log drilldown links" /></p>
<p>It takes four clicks to move from the Cluster Overview to container logs that explain the failure.
These dashboards are starting points for your team.
You can copy and customize them with ES|QL visualizations.</p>
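<p>As an illustration, a &quot;top namespaces by container restarts&quot; panel could be rebuilt with a query along the lines of the sketch below. This is not the shipped panel definition, and the field names (<code>kubernetes.container.status.restarts</code>, <code>kubernetes.namespace</code>) are assumptions that may differ in your data streams:</p>
<pre><code class="language-esql">// Sketch: top namespaces by container restart count (field names are assumptions)
FROM metrics-*
| WHERE kubernetes.container.status.restarts IS NOT NULL
| STATS restarts = MAX(kubernetes.container.status.restarts) BY kubernetes.namespace
| SORT restarts DESC
| LIMIT 10
</code></pre>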
<hr />
<h2>Alert rules that fire on day one</h2>
<p>The integration ships with pre-built alerting rule templates for states that are wrong by definition.
No historical baseline or warmup period is required.
Enable them during setup and they work immediately.</p>
<p>These rules do not ask, &quot;Is this abnormal for this service?&quot;
They ask, &quot;Is this a known bad state in Kubernetes?&quot;
A pod in CrashLoopBackOff is always a problem.
A container killed by the kernel for exceeding its memory limit is always a problem.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/alert-list.png" alt="Kubernetes observability with Elastic, list of alerts with the CrashLoopBackOff alert rule selected" /></p>
<p>Like the Kubernetes dashboards, these alerts are built on ES|QL queries.
You can see that in the CrashLoopBackOff definition below.
If you are new to ES|QL, you can start with the <a href="https://www.elastic.co/docs/explore-analyze/query-filter/languages/esql">ES|QL docs</a>.</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/alert-detail.png" alt="Kubernetes observability with Elastic, ES|QL query that defines the CrashLoopBackOff alert rule" /></p>
<p>The alert templates cover:</p>
<ul>
<li><strong>CrashLoopBackOff detection</strong> - Fires when a pod's restart count exceeds a configurable threshold within a rolling window.
The default catches a real restart cycle without triggering on routine restarts during a rolling deployment.</li>
<li><strong>Container OOMKilled</strong> - Surfaces kernel-level container terminations due to memory limits.
These events are easy to miss in dashboards and often precede wider failures.
This rule fires on any occurrence.</li>
<li><strong>Deployment below desired replicas</strong> - Fires when a deployment runs fewer replicas than declared for longer than a grace period.
This catches scaling failures and partially failed rollouts.</li>
<li><strong>Pod stuck in Pending</strong> - Fires when a pod cannot be scheduled past a configurable time threshold.
This surfaces node capacity problems, missing resources, and affinity failures before availability drops.</li>
<li><strong>Node disk pressure</strong> - Fires immediately when the Kubernetes DiskPressure node condition is <code>True</code>.
A node condition is a direct state signal, not a statistical threshold.</li>
<li><strong>Persistent volume near capacity</strong> - Alerts when storage utilization crosses a configurable threshold before writes start failing.</li>
</ul>
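<p>To make the parameterization concrete, a restart-threshold rule in the spirit of the CrashLoopBackOff template might look like the following. The window, threshold, and field names here are illustrative assumptions, not the shipped rule definition:</p>
<pre><code class="language-esql">// Sketch: pods whose restart count grew past a threshold in a rolling window
// (window, threshold, and field names are assumptions)
FROM metrics-*
| WHERE @timestamp &gt; NOW() - 15 minutes
| STATS restart_delta = MAX(kubernetes.container.status.restarts) - MIN(kubernetes.container.status.restarts)
    BY kubernetes.pod.name, kubernetes.namespace
| WHERE restart_delta &gt;= 3
</code></pre>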
<p>Each template is parameterized.
Adjust thresholds in the ES|QL query to match your environment.
Connect notifications to PagerDuty, Slack, or another destination in your runbook.</p>
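<p>If you script your setup, notification destinations can be created through the Kibana Connectors API. The sketch below assumes a webhook-based Slack connector and uses placeholder values throughout:</p>
<pre><code class="language-bash"># Sketch: create a Slack connector for alert notifications (placeholder values)
curl -X POST &quot;https://&lt;your-kibana&gt;/api/actions/connector&quot; \
  -H &quot;kbn-xsrf: true&quot; \
  -H &quot;Content-Type: application/json&quot; \
  -H &quot;Authorization: ApiKey &lt;your-api-key&gt;&quot; \
  -d '{
    &quot;name&quot;: &quot;k8s-alerts-slack&quot;,
    &quot;connector_type_id&quot;: &quot;.slack&quot;,
    &quot;secrets&quot;: { &quot;webhookUrl&quot;: &quot;&lt;your-slack-webhook-url&gt;&quot; }
  }'
</code></pre>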
<hr />
<h2>Anomaly detection jobs with ML baselines</h2>
<p>Alert rules catch what is definitively wrong.
ML anomaly detection catches patterns that often precede failures.
If you are new to this area, see the <a href="https://www.elastic.co/guide/en/machine-learning/current/ml-ad-overview.html">Elastic anomaly detection overview</a>.</p>
<p>A pod that always runs at 85% memory utilization might be healthy.
A pod that grew from 40% to 85% over twelve hours is usually not healthy.
A static threshold often catches this only after an OOM kill.
The ML module should catch the trajectory earlier.</p>
<p>The integration ships with ML module configurations that learn workload baselines and flag meaningful deviations.
These jobs need 24 to 48 hours of data before results become useful.
Results become more reliable as jobs continue to run.</p>
<h3>The included modules</h3>
<p><strong>1. Pod memory growth anomalies</strong></p>
<ul>
<li><strong>What it learns:</strong> per-pod memory consumption pattern over time</li>
<li><strong>What it flags:</strong> Growth trajectories that are inconsistent with baseline behavior, such as a slow leak that never crosses the hard limit.</li>
<li><strong>Why ML (not alert rule):</strong> The alert rule catches the OOMKill after the fact.
The ML job catches the trajectory that leads there.</li>
</ul>
<p><strong>2. Network I/O anomalies</strong></p>
<ul>
<li><strong>What it learns:</strong> per-pod network transmit/receive byte rate patterns</li>
<li><strong>What it flags:</strong> Unusual spikes or drops relative to the pod baseline.
A spike can indicate a runaway process or unexpected load.
A drop can indicate a network partition that causes the pod to go idle.</li>
<li><strong>Why ML (not alert rule):</strong> Normal network traffic varies by time of day and workload type.
A batch job pod at high throughput during its normal window is expected.
The same throughput outside that window can be anomalous.</li>
</ul>
<p><strong>3. Pod restart frequency</strong></p>
<ul>
<li><strong>What it learns:</strong> Per-workload restart rate patterns during deployments, scaling events, and routine operations.</li>
<li><strong>What it flags:</strong> Restart patterns that are anomalous relative to each workload's own history.
This is distinct from the CrashLoopBackOff alert rule, which fires on a fixed threshold regardless of context.</li>
<li><strong>Why ML (not alert rule):</strong> A deployment that restarts twice during every rollout can be healthy.
The same deployment restarting twice on a Tuesday afternoon may be unhealthy.
The alert rule cannot distinguish these cases without workload history.</li>
</ul>
<p>Here's our Single Metric Viewer showing anomalies triggered against a specific pod, for the memory growth job:</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/single-metric-viewer.png" alt="Kubernetes observability with Elastic, ML Single Metric Viewer showing pod memory growth anomaly detection for one pod" /></p>
<p>And here's the multi-series Anomaly Explorer view of the same job, showing detections firing across a variety of pods:</p>
<p><img src="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/anomaly-explorer.png" alt="Kubernetes observability with Elastic, Anomaly Explorer showing pod memory anomaly detections across multiple pods" /></p>
<hr />
<h2>Try it yourself: the OTel Astronomy Shop</h2>
<p>If you do not have a Kubernetes cluster ready, you can use the OpenTelemetry Astronomy Shop demo environment.
It uses the same commands as Getting Started Step 2, Path A, but points to demo services.
Create the namespace and secret, then run the Helm install.
Telemetry from all 16 services, Kafka, and PostgreSQL starts flowing into Elastic without instrumentation changes.</p>
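<p>If you want the exact commands, the sketch below reuses the namespace and secret created in Step 2, Path A, and installs the upstream demo chart. The release name and namespace choice are illustrative assumptions:</p>
<pre><code class="language-bash"># Sketch: install the OpenTelemetry Astronomy Shop demo (illustrative values)
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts

helm upgrade --install otel-demo open-telemetry/opentelemetry-demo \
  --namespace opentelemetry-operator-system
</code></pre>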
<p>The demo ships with a built-in feature flag service, <code>flagd</code>, that lets you activate failure scenarios.
Enable <code>cartServiceFailure</code> and watch the checkout-service restart cascade unfold in real time.
The CrashLoopBackOff alert rule fires.
The ML modules begin establishing baselines.
If you have the investigation workflow enabled, it runs automatically when the alert fires.</p>
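<p>One way to flip the flag, assuming the demo's default service names, is to port-forward the frontend proxy and use the bundled feature flag UI. The service name and port below are assumptions based on the upstream demo:</p>
<pre><code class="language-bash"># Sketch: reach the flagd feature flag UI (service name and port are assumptions)
kubectl port-forward svc/frontend-proxy 8080:8080 \
  --namespace opentelemetry-operator-system

# Then open http://localhost:8080/feature and turn cartServiceFailure on.
</code></pre>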
<hr />
<h2>Getting started</h2>
<p><strong>Step 1 - Install the Kubernetes integration.</strong>
Dashboards are available immediately.
No additional configuration is required.</p>
<p><strong>Step 2 - Deploy data collection.</strong>
There are two supported paths, both based on Helm.
Choose the one that fits your deployment model.</p>
<p><strong>Path A - OpenTelemetry (EDOT collector):</strong>
This path uses the <code>opentelemetry-kube-stack</code> Helm chart with the Elastic Distribution of OpenTelemetry (EDOT) collector.
Create a namespace and a secret with your endpoint and API key, then install:</p>
<pre><code class="language-bash">kubectl create namespace opentelemetry-operator-system

kubectl create secret generic elastic-secret-otel \
  --namespace opentelemetry-operator-system \
  --from-literal=elastic_otlp_endpoint='https://&lt;your-endpoint&gt;.elastic.cloud:443' \
  --from-literal=elastic_api_key='&lt;your-api-key&gt;'

helm upgrade --install opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
  --namespace opentelemetry-operator-system \
  --values 'https://raw.githubusercontent.com/elastic/elastic-agent/refs/tags/v9.3.2/deploy/helm/edot-collector/kube-stack/managed_otlp/values.yaml' \
  --version '0.12.4'
</code></pre>
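<p>Once the chart is installed, a quick sanity check is to confirm the operator and collector pods are running before moving on:</p>
<pre><code class="language-bash"># Verify the EDOT collector rollout
kubectl get pods --namespace opentelemetry-operator-system
</code></pre>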
<p><strong>Path B - Elastic Agent (standalone):</strong>
This path uses the <code>elastic/elastic-agent</code> Helm chart.
The default manifest includes resource limits that may not be appropriate for production.
Review the <a href="https://www.elastic.co/docs/reference/fleet/scaling-on-kubernetes">Scaling Elastic Agent on Kubernetes guide</a> before deploying.</p>
<pre><code class="language-bash">helm repo add elastic https://helm.elastic.co/ &amp;&amp; \
helm install elastic-agent elastic/elastic-agent \
  --version 9.3.2 \
  -n kube-system \
  --set outputs.default.url=https://&lt;your-endpoint&gt;.es.elastic.cloud:443 \
  --set outputs.default.type=ESPlainAuthAPI \
  --set outputs.default.api_key=$(echo &quot;&lt;your-base64-api-key&gt;&quot; | base64 -d) \
  --set kubernetes.enabled=true
</code></pre>
<p><strong>Step 3 - Enable the alert rule templates.</strong>
Go to Observability &gt; Alerts in Kibana.
The Kubernetes templates are in the rule library.
Enable the templates relevant to your environment, set thresholds, and connect your notification channel.</p>
<p><strong>Step 4 - Let the ML modules warm up.</strong>
After 24 to 48 hours, anomaly detection modules establish baselines and begin surfacing pattern-based deviations.
Longer running jobs usually produce better baselines.
Find results in the ML Anomaly Explorer, linked from the Kubernetes dashboards.</p>
<p><strong>Steps 5, 6, and 7 - Agentic content</strong> will be covered in Part 2 (forthcoming), Kubernetes observability with Elastic: Agentic Investigations.</p>
<hr />
<h2>What's next</h2>
<p>The next step is the layer that runs investigation workflows when an alert fires.
That includes skills that encode investigation logic, tools that expose facts like ML state and topology, and MCP apps that render outputs in places like Claude Desktop or VS Code.
These technical preview capabilities are available today and will be covered in Part 2 (forthcoming), Kubernetes observability with Elastic: Agentic Investigations.</p>
<p>If you are running Kubernetes on Elastic today, tell us which investigation steps you repeat manually on every incident.
Tell us which remediations you would trust a workflow to propose.
You can <a href="https://discuss.elastic.co/c/observability">join the Elastic Community Discussion here</a>.</p>
<hr />
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion.</em>
<em>Any features or functionality not currently available may not be delivered on time or at all.</em></p>
]]></content:encoded>
            <category>observability-labs</category>
            <enclosure url="https://www.elastic.co/observability-labs/assets/images/kubernetes-dashboards-alerts-anomaly-detection/header.jpg" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>