Hooked on Linux: Rootkit Detection Engineering

In this second part of a two-part series, we explore Linux rootkit detection engineering, focusing on the limitations of relying on static detection and the importance of behavioral rootkit detection.

Introduction

In part one, we examined how Linux rootkits work: their evolution, taxonomy, and techniques for manipulating user space and kernel space. In this second part, we turn to detection engineering. We begin by showing why static detection is often unreliable against Linux rootkits, even when binaries are only trivially modified, and then move on to behavioral and runtime signals that defenders can use instead. From shared object abuse and LKM loading to eBPF, io_uring, persistence, and defense evasion, this article focuses on practical ways to detect and investigate rootkit activity in real environments.

Static detection via VirusTotal

Before focusing on behavioral detection techniques, it is useful to examine how well traditional static detection mechanisms identify Linux rootkits. To do so, we conducted a small experiment using VirusTotal as a proxy for traditional signature-based antivirus detection. A dataset of ten Linux rootkits was assembled from publicly available research papers and open-source repositories. Each sample was either uploaded to VirusTotal or retrieved from existing submissions.

For every rootkit, we recorded the number of antivirus engines that flagged the original binary. We then performed two additional tests:

  1. Stripped binaries, created using strip --strip-all, removing symbol tables and other non-essential metadata.
  2. Trivially modified binaries, created by appending a single null byte to the original file: an intentionally unsophisticated change.

The goal was not to evade detection through advanced obfuscation, but to assess how fragile static signatures are when faced with even the simplest binary modifications.

Table 1: Technical overview of the analyzed rootkit dataset

Rootkit        Basic detections   Stripped   Null byte added
Azazel         36/66              19/66      21/66
Bedevil*       32/66              32/66      21/66
BrokePKG        7/66               3/66       3/66
Diamorphine    33/66               8/64      22/66
Kovid          27/66               1/66      15/66
Mobkit         29/66               6/66      17/66
Reptile        32/66               3/66      20/66
Snapekit       30/66               3/66      19/66
Symbiote       42/66               8/66      22/66
TripleCross    31/66              17/66      19/66

* Bedevil is stripped by default, so its basic and stripped detection counts are identical

Observations

As expected, stripping binaries generally resulted in a sharp drop in detection rates. In several cases, detections fell to near-zero, suggesting that some antivirus engines rely heavily on symbol information or other easily removable metadata. Even more telling is the impact of adding a single null byte: a modification that does not alter program logic, execution flow, or behavior, yet still significantly degrades detection for many samples.

This highlights a fundamental weakness of static, signature-based detection. If a one-byte change can meaningfully affect detection outcomes, attackers do not need sophisticated obfuscation to evade static scanners.

Obfuscation techniques in rootkits

Interestingly, most of the rootkits in this dataset employ little to no advanced static obfuscation. Where obfuscation is present, it is typically limited to simple XOR encoding of strings or configuration data, or lightweight packing techniques that slightly alter the binary layout. These methods are inexpensive to implement and sufficient to defeat many static signatures.
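To make this concrete, the single-byte XOR encoding described above can be sketched in a few lines of C (the key and buffer below are illustrative, not taken from any specific sample):

```c
#include <stddef.h>

// Single-byte XOR decode of the kind described above: trivial to implement,
// yet enough to break string-based static signatures.
static void xor_decode(unsigned char *buf, size_t len, unsigned char key) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;  // XOR is its own inverse: encoding and decoding are identical
}
```

A rootkit ships its strings XOR-encoded and calls the same routine at runtime to recover them, so the plaintext indicators never appear in the binary on disk.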

The absence of more advanced obfuscation in these samples is notable. Many are open-source proof-of-concept rootkits designed to demonstrate techniques rather than to aggressively evade detection. Yet even with minimal or no obfuscation, static detection proves unreliable.

Why static detection is not enough

This experiment reinforces a key point: static detection alone is fundamentally insufficient for reliable rootkit detection. The fragility of static signatures (especially in the face of trivial modifications) means defenders cannot rely on file-based indicators or hash-based detection to uncover stealthy threats.

When binaries can be altered without affecting behavior, the only remaining consistent signal is the rootkit's behavior at runtime. For that reason, the remainder of this blog shifts its focus from static artifacts to dynamic analysis and behavioral detection, examining how rootkits interact with the operating system, manipulate execution flow, and leave observable traces during execution.

That is where detection engineering becomes both more challenging and far more effective.

Dynamic detection engineering

Userland rootkit loading detection techniques

Userland rootkits often hijack the dynamic linking process, injecting malicious shared objects into target processes without needing kernel-level access. An infection begins with the creation of a shared object file. Newly created shared object files can be flagged with a detection rule similar to the one displayed below:

file where event.action == "creation" and
(file.extension like~ "so" or file.name like~ "*.so.*")

These files are often written to writable or ephemeral paths such as /tmp/, /dev/shm/, or hidden subdirectories under user home directories. Attackers may download them, compile them on the host, or drop them directly via a loader. This knowledge can be applied to the detection rule above to reduce noise.

As an example, in the telemetry shown above, we can see the threat actor using scp to download a shared object file into a hidden subdirectory within /tmp, then moving it to a library directory in an attempt to blend in. We detected this, and similar threats, via:

Once the shared object file is present on the system, the attacker has several options for activating it. The most commonly abused mechanisms are the LD_PRELOAD environment variable, the /etc/ld.so.preload file, and dynamic linker configuration paths such as /etc/ld.so.conf.

The LD_PRELOAD environment variable allows an attacker to specify a shared object that will be loaded before any other libraries during the execution of a dynamically linked binary. This allows for a complete override of libc functions, such as execve(), open(), or readdir(). This method works on a per-process basis and does not require root access.
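As a concrete illustration of this mechanism (a minimal sketch, not taken from any particular rootkit; the prefix is invented), an LD_PRELOAD hook that hides directory entries by wrapping readdir() could look like:

```c
#define _GNU_SOURCE
#include <dirent.h>
#include <dlfcn.h>
#include <string.h>

/* Entries whose names start with this prefix are hidden (illustrative). */
static const char *HIDE_PREFIX = "secret_";

/* Shadow libc's readdir(): resolve the real symbol via RTLD_NEXT, then
 * silently skip any entry matching the prefix. */
struct dirent *readdir(DIR *dirp) {
    static struct dirent *(*real_readdir)(DIR *) = NULL;
    if (!real_readdir)
        real_readdir = (struct dirent *(*)(DIR *))dlsym(RTLD_NEXT, "readdir");

    struct dirent *entry;
    while ((entry = real_readdir(dirp)) != NULL) {
        if (strncmp(entry->d_name, HIDE_PREFIX, strlen(HIDE_PREFIX)) != 0)
            return entry;  /* pass through non-matching entries */
    }
    return NULL;  /* end of directory, or only hidden entries remained */
}
```

Compiled with `gcc -shared -fPIC -o hook.so hook.c` and activated via `LD_PRELOAD=./hook.so ls`, any file beginning with `secret_` disappears from the listing, without the victim binary ever being modified on disk.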

To detect this technique, telemetry for the LD_PRELOAD environment variable is required. Once this is available, detection logic for uncommon LD_PRELOAD values can be written. For example:

process where event.type == "start" and event.action == "exec" and
process.env_vars != null

As shown in Figure 1, this was also the attackers' next step: they moved libz.so.1 from /tmp/.X12-unix/libz.so.1 to /usr/local/lib/libz.so.1.

To achieve higher fidelity, we implemented this logic using the new_terms rule type, flagging only previously unseen shared object entries within the LD_PRELOAD variable via:

Of course, if more than just the LD_PRELOAD and LD_LIBRARY_PATH environment variables are collected, the rule above should be altered to match these two variables specifically. To reduce noise, statistical analysis and/or baselining should be conducted.

Another method of activation is to leverage the /etc/ld.so.preload file. If present, this file forces the dynamic linker to inject the listed shared object into every dynamically linked binary on the system, resulting in global injection.

A similar method involves altering the dynamic linker’s configuration to prioritize malicious library paths. This can be achieved by modifying /etc/ld.so.conf or adding entries to /etc/ld.so.conf.d/, followed by executing ldconfig to update the cache. This changes the resolution path of critical libraries, such as libc.so.6.

These scenarios can be detected by monitoring the /etc/ld.so.preload and /etc/ld.so.conf files, as well as the /etc/ld.so.conf.d/ directory for creation/modification events. Using this raw telemetry, a detection rule to flag these events can be implemented:

file where event.action in ("creation", "rename") and
file.path like ("/etc/ld.so.preload", "/etc/ld.so.conf", "/etc/ld.so.conf.d/*")

We frequently see this chain, where a shared object is created and the dynamic linker is then modified. We detect it via the following detection rules:

Chaining these two alerts together on a single host warrants investigation.

Kernel-space rootkit loading detection techniques

Loading an LKM manually typically requires built-in command-line utilities such as modprobe, insmod, and kmod. Detecting the execution of these utilities covers the loading phase (when performed manually).

process where event.type == "start" and event.action == "exec" and (
  (process.name == "kmod" and process.args == "insmod" and
   process.args like~ "*.ko*") or
  (process.name == "kmod" and process.args == "modprobe" and
   not process.args in ("-r", "--remove")) or
  (process.name == "insmod" and process.args like~ "*.ko*") or
  (process.name == "modprobe" and not process.args in ("-r", "--remove"))
)

Many open-source rootkits are published without a loader and rely on pre-installed LKM-loading utilities. An example is Singularity, whose load_and_persistence.sh script performs several actions before eventually calling insmod "$MODULE_DIR/$MODULE_NAME.ko". Although insmod is invoked, it is in fact a symlink to kmod under the hood, with insmod appearing as a process argument. An example of a Singularity load:

This can easily be detected via the following detection rules:

This detection approach, however, is far from bulletproof, as many rootkits rely on a loader to load the LKM, thereby bypassing execution of these userland utilities.

For example, Reptile’s loader directly invokes the init_module syscall with an in-memory decrypted kernel blob:

#define init_module(module_image, len, param_values) syscall(__NR_init_module, module_image, len, param_values)

int main(void) {
    [...]
    do_decrypt(reptile_blob, len, DECRYPT_KEY);
    module_image = malloc(len);
    memcpy(module_image, reptile_blob, len);
    init_module(module_image, len, "");
    [...]
}

Additionally, Reptile’s kmatryoshka module acts as an in-kernel chainloader that decrypts and loads another hidden LKM using a direct function pointer to sys_init_module, located via kallsyms_on_each_symbol(). This further obscures the loading mechanism from userland visibility.

Because of this, it's essential to understand what these utilities do under the hood; they are merely wrappers around the init_module() and finit_module() system calls. Effective detection should therefore focus on tracing these syscalls directly, rather than the tooling that invokes them.
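To see why, consider a minimal loader sketch in the spirit of Reptile's (the module path is a placeholder): it never executes insmod or modprobe, yet syscall-level tracing still catches it.

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Load a kernel module directly via the finit_module(2) syscall, bypassing
 * insmod/modprobe entirely. Returns 0 on success, -errno on failure.
 * Userland process telemetry never sees a modprobe/insmod exec here, but an
 * auditd rule on the syscall itself still fires. */
static long load_module(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -errno;
    /* A glibc wrapper is not guaranteed for finit_module, so call it raw. */
    long ret = syscall(SYS_finit_module, fd, "", 0);
    long err = (ret == 0) ? 0 : -errno;
    close(fd);
    return err;
}
```

Running this requires root and a real .ko file; the point is that the only reliable choke point is the init_module()/finit_module() syscall boundary, not the wrapper utilities.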

To ensure the availability of the telemetry required to detect LKM loading, various security tools can be employed. Auditd or Auditd Manager are suitable choices. To collect the init_module() and finit_module() syscalls, the following configuration can be implemented:

-a always,exit -F arch=b64 -S finit_module -S init_module
-a always,exit -F arch=b32 -S finit_module -S init_module

Combining this raw telemetry with a detection rule that alerts when this event occurs allows for a strong defense.

driver where event.action == "loaded-kernel-module" and
auditd.data.syscall in ("init_module", "finit_module")

This strategy will allow detection of the kernel module loading, regardless of the utility being used for the loading event. In the example below, we see a true positive detection of the Diamorphine rootkit.

This pre-built rule is available here:

Additional Linux detection engineering guidance through Auditd is presented in the Linux detection engineering with Auditd research.

Out-of-tree and unsigned modules

Another sign of a malicious LKM is the kernel "taint" flag. When a loaded module is not part of the official kernel tree, lacks a valid signature, or uses a non-permissive license, the kernel marks itself as "tainted". This is a built-in integrity mechanism indicating that the kernel is in a potentially untrusted state. An example is shown below, where the reveng_rtkit module is loaded:

[ 2853.023215] reveng_rtkit: loading out-of-tree module taints kernel.
[ 2853.023219] reveng_rtkit: module license 'unspecified' taints kernel.
[ 2853.023220] Disabling lock debugging due to kernel taint
[ 2853.023297] reveng_rtkit: module verification failed: signature and/or required key missing - tainting kernel

The kernel identifies the module as out-of-tree, with an unspecified license, and missing cryptographic verification. This results in the kernel being marked tainted.
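These log lines correspond to bits in the kernel's taint mask, which can also be read directly from /proc/sys/kernel/tainted. A small triage sketch (bit positions taken from the kernel's tainted-kernels documentation; the helper names are our own):

```c
#include <stdio.h>

/* Taint bits most relevant to rootkit triage, per the kernel's
 * Documentation/admin-guide/tainted-kernels.rst. */
#define TAINT_PROPRIETARY (1UL << 0)   /* 'P': proprietary module loaded */
#define TAINT_OOT_MODULE  (1UL << 12)  /* 'O': out-of-tree module loaded */
#define TAINT_UNSIGNED    (1UL << 13)  /* 'E': unsigned module loaded */

/* Read /proc/sys/kernel/tainted; returns the raw taint mask (0 if unreadable). */
static unsigned long read_taint(void) {
    unsigned long taint = 0;
    FILE *f = fopen("/proc/sys/kernel/tainted", "r");
    if (f) {
        if (fscanf(f, "%lu", &taint) != 1)
            taint = 0;
        fclose(f);
    }
    return taint;
}

/* Return nonzero if the taint mask indicates a module-related taint. */
static int module_taint_suspicious(unsigned long taint) {
    return (taint & (TAINT_PROPRIETARY | TAINT_OOT_MODULE | TAINT_UNSIGNED)) != 0;
}
```

A nonzero module-related taint on a host where no out-of-tree drivers are expected is a cheap, high-signal check to fold into triage playbooks.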

To detect this behavior, system and kernel logging must be parsed and ingested. Once kernel log telemetry is available, simple pattern matching or rule-based detection can flag these events. Out-of-tree module loading can be detected through:

event.dataset:"system.syslog" and process.name:"kernel" and
message:"loading out-of-tree module taints kernel."

And similar detection logic can be implemented to detect unsigned module loading:

event.dataset:"system.syslog" and process.name:"kernel" and
message:"module verification failed: signature and/or required key missing - tainting kernel"

Using the detection logic above, we observed true positives in telemetry of attempts to load Singularity:

These rules are by default available in:

The log entry always includes the name of the module that triggered the event, enabling easy triage. If the LKM cannot be found on the system during a manual check triggered by this alert, the LKM may be hiding itself.

Kill signals

Many (open-source) rootkits leverage kill signals, specifically those in the higher, unassigned ranges (32+), as covert communication channels or triggers for malicious actions. For instance, a rootkit might intercept a specific high-numbered kill signal (e.g., kill -64 <pid>). Upon receiving this signal, the rootkit's payload could be configured to elevate privileges, execute commands, toggle hiding capabilities, or establish a backdoor.

To detect this, we can leverage Auditd and create a rule that collects all kill signals:

-a always,exit -F arch=b64 -S kill -k kill_rule

The arguments passed to kill() are kill(pid, sig). We can query a1 (the signal) to flag any kill signal above 32. Note that Auditd logs syscall arguments in hexadecimal, so the values below are hex (0x21 = 33 and upward).

process where event.action == "killed-pid" and
auditd.data.syscall == "kill" and auditd.data.a1 in (
"21", "22", "23", "24", "25", "26", "27", "28", "29", "2a",
"2b", "2c", "2d", "2e", "2f", "30", "31", "32", "33", "34",
"35", "36", "37", "38", "39", "3a", "3b", "3c", "3d", "3e",
"3f", "40", "41", "42", "43", "44", "45", "46", "47"
)

Analyzing the kill() syscall for unusual signal values via Auditd presents a strong detection opportunity against rootkits that utilize these signals, as seen in techniques such as those employed by Diamorphine. The kill-related pre-built rules are available at:

Segfaults

Finally, it’s essential to recognize that kernel-space rootkits are inherently fragile. LKMs are typically compiled for a specific kernel version and configuration. An incorrectly resolved symbol or a misaligned memory write may trigger a segmentation fault. While these failures may not immediately expose the rootkit’s functionality, they provide strong forensic signals.

To detect this, raw syslog collection must be enabled. From there, writing a detection rule to flag segfault messages can help identify either malicious behavior or kernel instability, both of which warrant investigation:

event.dataset:"system.syslog" and process.name:"kernel" and message:"segfault"

This detection rule is available out-of-the-box as a building block rule:

Combining syscall-level module-loading visibility with kernel taint, out-of-tree messages, kill-signal detection, and segfault alerts lays the foundation for a layered strategy to detect LKM-based rootkits.

eBPF rootkits

eBPF rootkits exploit the legitimate functionality of the Linux kernel’s BPF subsystem. Programs can be dynamically loaded and attached using utilities like bpftool or via custom loaders that abuse the bpf() syscall.

Detecting eBPF-based rootkits requires visibility into both bpf() syscalls and the use of sensitive eBPF helpers. Key indicators include:

  • bpf(BPF_MAP_CREATE, ...)
  • bpf(BPF_MAP_LOOKUP_ELEM, ...)
  • bpf(BPF_MAP_UPDATE_ELEM, ...)
  • bpf(BPF_PROG_LOAD, ...)
  • bpf(BPF_PROG_ATTACH, ...)

Leveraging Auditd, an audit rule can be created where a0 is leveraged to specify the specific BPF syscalls of interest:

-a always,exit -F arch=b64 -S bpf -F a0=0 -k bpf_map_create
-a always,exit -F arch=b64 -S bpf -F a0=1 -k bpf_map_lookup_elem
-a always,exit -F arch=b64 -S bpf -F a0=2 -k bpf_map_update_elem
-a always,exit -F arch=b64 -S bpf -F a0=5 -k bpf_prog_load
-a always,exit -F arch=b64 -S bpf -F a0=8 -k bpf_prog_attach
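For reference, the a0 filter values in these rules correspond to the enum bpf_cmd constants in the kernel UAPI header <linux/bpf.h>; the table below documents the mapping the rules rely on:

```c
#include <linux/bpf.h>

/* The -F a0=N filters above select these bpf(2) commands. */
static const int audited_bpf_cmds[] = {
    BPF_MAP_CREATE,       /* a0=0 */
    BPF_MAP_LOOKUP_ELEM,  /* a0=1 */
    BPF_MAP_UPDATE_ELEM,  /* a0=2 */
    BPF_PROG_LOAD,        /* a0=5 */
    BPF_PROG_ATTACH,      /* a0=8 */
};
```
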

These must be tuned on a per-environment basis to ensure that benign programs (e.g., EDRs or other observability tools) that leverage eBPF do not generate noise. Another important signal is the use of eBPF helper functions.

The bpf_probe_write_user helper function

The bpf_probe_write_user helper allows kernel-space eBPF programs to write directly to userland memory. Although intended for debugging, this function can be abused by rootkits.

Detection remains challenging, but Linux kernels commonly log the use of sensitive helpers, such as bpf_probe_write_user. Monitoring for these entries offers a detection opportunity, requiring raw syslog collection and specific detection rules, such as the following:

event.dataset:"system.syslog" and process.name:"kernel" and
message:"bpf_probe_write_user"

This rule will alert on any kernel log entry indicating the use of bpf_probe_write_user. While legitimate tools may occasionally invoke it, unexpected or frequent use, especially alongside suspicious process behavior, warrants investigation. Context, such as the eBPF program’s attachment point and the userland process involved, aids triage. This detection rule is available here:

Below are a few obvious examples of true positives detected by this logic:

The rule triggers on nysm (a stealthy post-exploitation container) and boopkit (a Linux eBPF backdoor).

io_uring rootkits

ARMO research (2025) introduced a new defense evasion technique that leverages io_uring, an asynchronous I/O interface, to reduce observable syscall activity and bypass standard telemetry. The technique requires kernel version 5.1 or above and avoids using hooks. Although only recently adopted by rootkit developers, it is actively being developed and the current tooling remains relatively immature in its feature set. An example tool that leverages this technique is RingReaper. Rootkits can batch file, network, and other I/O operations via io_uring_enter(). A code example is shown below.

struct io_uring ring;
io_uring_queue_init(8, &ring, 0);  /* set up the submission/completion rings */

struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
io_uring_prep_read(sqe, fd, buf, size, offset);
io_uring_submit(&ring);

These calls queue and submit a read request using io_uring, bypassing typical syscall telemetry paths.

Unlike syscall table hooking or LD_PRELOAD-based injection, io_uring is not a rootkit delivery mechanism itself but provides a stealthier means of interacting with the filesystem and devices post-compromise. While io_uring cannot directly execute binaries (due to the lack of execve-like capabilities), it enables malicious actions such as file creation, enumeration, and data exfiltration, while minimizing observability.

Detecting io_uring-based rootkits requires visibility into the syscalls that underpin their operation, such as io_uring_setup(), io_uring_enter(), and io_uring_register().

While EDR solutions may struggle to capture the indirect effects of io_uring, Auditd can trace these syscalls directly. The following audit rule captures relevant events for analysis:

-a always,exit -F arch=b64 -S io_uring_setup -S io_uring_enter -S io_uring_register -k io_uring

However, this only exposes the syscall usage itself, not the specific file or object being accessed. The real "magic" of io_uring occurs within userland libraries (e.g., liburing), making analysis of syscall arguments essential.

For example, monitoring io_uring_enter() with to_submit > 0 indicates that an I/O operation is being batched, while alternating calls with min_complete > 0 signals completion polling. Correlating with process attributes (e.g., UID=0, unusual paths such as /dev/shm, /tmp, or tmpfs-backed locations) enhances detection efficacy.

A practical method for tracing io_uring activity is eBPF tooling such as bpftrace, targeting tracepoints such as sys_enter_io_uring_enter. This allows analysts to monitor process behavior and active file descriptors during io_uring operations:

tracepoint:syscalls:sys_enter_io_uring_enter
{
    printf("\nPID %d (%s) called io_uring_enter with fd=%d, to_submit=%d, min_complete=%d, flags=%d\n",
        pid, comm, args->fd, args->to_submit, args->min_complete, args->flags);

    printf("Manually inspect with: ls -l /proc/%d/fd\n", pid);
}

To illustrate this, several techniques introduced by RingReaper were tested. Live tracing reveals the file descriptors in use, helping identify suspicious activity like reading from /run/utmp to detect what users are logged in:

The activity of writing to a file, in this example /root/test:

Or listing process information via ps by reading the comm contents for each active PID:

While syscall monitoring exposes io_uring usage, it does not directly reveal the nature of the I/O without additional correlation. io_uring abuse is a relatively new technique and therefore still stealthy; however, it also has several limitations. io_uring cannot directly execute code, but attackers may abuse file writes (e.g., cron jobs, udev rules) to achieve delayed or indirect execution, as demonstrated by persistence techniques used by the Reptile and Sedexp malware families.

Rootkit persistence techniques

Rootkits, whether in userland or kernel space, require some form of persistence to remain functional across reboots or user sessions. The methods vary depending on the type and privileges of the rootkit, but commonly involve abusing configuration files, service management, or system initialization scripts.

Userland rootkits – environment variable persistence

When using LD_PRELOAD to activate a userland rootkit, the behavior is not persistent by default. To achieve persistence, attackers may modify shell initialization files (e.g., ~/.bashrc, ~/.zshrc, or /etc/profile) to export environment variables such as LD_PRELOAD or LD_LIBRARY_PATH. These modifications ensure that every new shell session automatically inherits the environment required to activate the rootkit. Notably, these files exist for both user and root contexts. Therefore, even non-privileged users can introduce persistence that hijacks execution flow at their privilege level.
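As a triage aid, the same logic can be approximated host-side. The sketch below (the helper name and paths are invented for illustration) flags preload-related exports in a shell init file:

```c
#include <stdio.h>
#include <string.h>

/* Return 1 if the file at 'path' mentions LD_PRELOAD or LD_LIBRARY_PATH,
 * 0 otherwise (including when the file cannot be opened). A real triage
 * script would also report the matching line and check all rc files. */
static int file_has_preload_export(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f)
        return 0;
    char line[4096];
    int hit = 0;
    while (fgets(line, sizeof line, f)) {
        if (strstr(line, "LD_PRELOAD") || strstr(line, "LD_LIBRARY_PATH"))
            hit = 1;
    }
    fclose(f);
    return hit;
}
```

Legitimate software occasionally sets these variables in rc files, so a hit is a lead for investigation rather than a verdict.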

To detect this, a rule similar to the one displayed below can be used:

file where event.action in ("rename", "creation") and file.path like (
  // system-wide configurations
  "/etc/profile", "/etc/profile.d/*", "/etc/bash.bashrc",
  "/etc/bash.bash_logout", "/etc/zsh/*", "/etc/csh.cshrc",
  "/etc/csh.login", "/etc/fish/config.fish", "/etc/ksh.kshrc",

  // root and user configurations
  "/home/*/.profile", "/home/*/.bashrc", "/home/*/.bash_login",
  "/home/*/.bash_logout", "/home/*/.bash_profile", "/root/.profile",
  "/root/.bashrc", "/root/.bash_login", "/root/.bash_logout",
  "/root/.bash_profile", "/root/.bash_aliases", "/home/*/.bash_aliases",
  "/home/*/.zprofile", "/home/*/.zshrc", "/root/.zprofile", "/root/.zshrc",
  "/home/*/.cshrc", "/home/*/.login", "/home/*/.logout", "/root/.cshrc",
  "/root/.login", "/root/.logout", "/home/*/.config/fish/config.fish",
  "/root/.config/fish/config.fish", "/home/*/.kshrc", "/root/.kshrc"
)

Depending on the environment, several of these shells may not be in use, and a more tailored detection rule may be created, focusing only on bash or zsh, for example. The full detection logic using Elastic Defend and Elastic’s File Integrity Monitoring integration can be found here:

For more information, a full breakdown of this persistence technique, including several other ways to detect its abuse, is presented in Linux Detection Engineering - A primer on persistence mechanisms.

Userland rootkits – configuration-based persistence

Modifying the /etc/ld.so.preload or /etc/ld.so.conf files, or the /etc/ld.so.conf.d/ configuration directory, allows rootkits to persist globally across users and sessions (more information on this persistence vector is available in Linux Detection Engineering - A Continuation on Persistence Mechanisms). Once written, the dynamic linker will continue injecting the malicious shared object unless these configurations are explicitly reverted. These methods are persistent by design. Detection strategies mirror those described in the previous section and rely on monitoring file creation and modification events on these paths.

Kernel-space rootkits – LKM persistence

Similar to userland rootkits, LKMs are not persistent by default. An attacker must explicitly configure the system to reload the malicious module on boot. This is typically achieved by leveraging legitimate kernel module loading mechanisms:

Modules file: modules

This file lists kernel modules that should be loaded automatically during system startup. Adding a malicious .ko filename here ensures that modprobe will load it upon boot. This file is located at /etc/modules.

Configuration directory for modprobe

This directory contains configuration files for the modprobe utility. Attackers may use aliasing to disguise their rootkit or autoload it when a specific kernel event occurs (e.g., when a device is probed). These modprobe configuration files are located at /etc/modprobe.d/, /run/modprobe.d/, /usr/local/lib/modprobe.d/, /usr/lib/modprobe.d/, and /lib/modprobe.d/.
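As an illustration (the module and alias names are invented; the directives themselves are standard modprobe.d syntax), such a configuration file might look like:

```
# Hypothetical /etc/modprobe.d/usb_helper.conf:
# alias a benign-looking name to the rootkit module, so "modprobe usb_helper"
# loads it...
alias usb_helper rootkit_mod
# ...or autoload it whenever a common driver is loaded
softdep snd_hda_intel post: rootkit_mod
```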

Configure kernel modules to load at boot: modules-load.d

These configuration files specify which modules to load early in the boot process and are located at /etc/modules-load.d/, /run/modules-load.d/, /usr/local/lib/modules-load.d/, and /usr/lib/modules-load.d/.

To detect all of the persistence techniques listed above, a detection rule similar to the one below can be created:

file where event.action in ("rename", "creation") and file.path like (
  "/etc/modules",
  "/etc/modprobe.d/*",
  "/run/modprobe.d/*",
  "/usr/local/lib/modprobe.d/*",
  "/usr/lib/modprobe.d/*",
  "/lib/modprobe.d/*",
  "/etc/modules-load.d/*",
  "/run/modules-load.d/*",
  "/usr/local/lib/modules-load.d/*",
  "/usr/lib/modules-load.d/*"
)

A pre-built rule combining all of the paths listed above into a single detection rule is available here:

An example of a rootkit that automatically deploys persistence using this method is Singularity. Within its deployment, the following commands are executed:

read -p "Enter the module name (without .ko): " MODULE_NAME
CONF_DIR="/etc/modules-load.d"
mkdir -p "$CONF_DIR"
echo "[*] Setting up persistence..."
echo "$MODULE_NAME" > "$CONF_DIR/$MODULE_NAME.conf"

By default, this means that singularity.conf will be created as a new entry under /etc/modules-load.d/. Looking at telemetry, we detect this technique simply by monitoring for new file creations:

These directories are also used for benign LKMs and will therefore be prone to false positives. Another persistence method involves using a trigger- or schedule-based technique to load the kernel module by executing the loader.

Udev-based persistence – Reptile example

A less common but powerful persistence method involves abusing udev, the Linux device manager that handles dynamic device events. Udev executes rule-based scripts when specific conditions are met. A full breakdown of this technique is presented in Linux Detection Engineering - A Sequel on Persistence Mechanisms. The Reptile rootkit demonstrates this technique by installing a malicious udev rule under /etc/udev/rules.d/:

ACTION=="add", ENV{MAJOR}=="1", ENV{MINOR}=="8", RUN+="/lib/udev/reptile"

This rule was likely used as inspiration by the Sedexp malware discovered by Levelblue. Here’s how the rule works:

  • ACTION=="add": Triggers when a new device is added to the system.
  • ENV{MAJOR}=="1": Matches devices with major number “1”, typically memory-related devices such as /dev/mem, /dev/null, /dev/zero, and /dev/random.
  • ENV{MINOR}=="8": Further narrows the condition to /dev/random.
  • RUN+="/lib/udev/reptile": Executes the Reptile loader binary when the above device is detected.

This rule establishes persistence by executing a loader binary whenever the /dev/random device is loaded. Because /dev/random is a widely used random number generator, essential to numerous system applications and the boot process, this method is highly reliable. Activation occurs only upon specific device events, and execution happens with root privileges through the udev daemon. To detect this technique, a detection rule similar to the one below can be created:

file where event.action in ("rename", "creation") and file.extension == "rules" and file.path like (
  "/lib/udev/*",
  "/etc/udev/rules.d/*",
  "/usr/lib/udev/rules.d/*",
  "/run/udev/rules.d/*",
  "/usr/local/lib/udev/rules.d/*"
)

We cover the creation and modification of these files via the following pre-built rules:

General persistence mechanisms

In addition to kernel module loading paths, attackers may rely on more generic Linux persistence methods to reload userland or kernel-space rootkits via the loader:

Systemd: Create or append to a service/timer unit under any systemd unit directory (e.g., /etc/systemd/system/) so that the loader is executed at boot.

file where event.action in ("rename", "creation") and file.path like (
  "/etc/systemd/system/*", "/etc/systemd/user/*",
  "/usr/local/lib/systemd/system/*", "/lib/systemd/system/*",
  "/usr/lib/systemd/system/*", "/usr/lib/systemd/user/*",
  "/home/*.config/systemd/user/*", "/home/*.local/share/systemd/user/*",
  "/root/.config/systemd/user/*", "/root/.local/share/systemd/user/*"
) and file.extension in ("service", "timer")
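As an illustration, a malicious unit matching this rule might look like the following (all names and paths are invented):

```
# Hypothetical /etc/systemd/system/sysloghelper.service: an
# innocuous-looking unit that re-executes a rootkit loader at every boot.
[Unit]
Description=System Log Helper

[Service]
Type=oneshot
ExecStart=/usr/local/lib/loader

[Install]
WantedBy=multi-user.target
```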

Initialization scripts: Create or append to a malicious run-control (/etc/rc.local), SysVinit (/etc/init.d/), or Upstart (/etc/init/) script.

file where event.action in ("creation", "rename") and
file.path like (
  "/etc/init.d/*", "/etc/init/*", "/etc/rc.local", "/etc/rc.common"
)

Cron jobs: Create or append to a cron job that allows for repeated execution of a loader.

file where event.action in ("rename", "creation") and
file.path like (
  "/etc/cron.allow", "/etc/cron.deny", "/etc/cron.d/*",
  "/etc/cron.hourly/*", "/etc/cron.daily/*", "/etc/cron.weekly/*",
  "/etc/cron.monthly/*", "/etc/crontab", "/var/spool/cron/crontabs/*",
  "/var/spool/anacron/*"
)

Sudoers: Create or append to a malicious sudoers configuration as a backdoor.

file where event.type in ("creation", "change") and
file.path like "/etc/sudoers*"

These methods are widely used, flexible, and often easier to detect using process lineage or file-modification telemetry.

The list of pre-built detection rules to detect these persistence techniques is listed below:

Rootkit defense evasion techniques

Although rootkits are, by definition, tools for defense evasion, many implement additional techniques to remain undetected during and after deployment. These methods are designed to avoid visibility in logs, evade endpoint detection agents, and interfere with common investigation workflows. The following section outlines key evasion techniques employed by modern Linux rootkits, categorized by their operational targets.

Attempts to remain stealthy upon deployment

From a forensics perspective, threat actors commonly focus on stealthy execution tactics. For example, a threat actor may store and execute payloads from the /dev/shm shared-memory directory: because this is a fully virtual file system, the payloads never touch disk. This frustrates disk forensics, but to behavioral detection engineers, this behavior is highly suspicious and uncommon.

As an example, although not drawn from an actual intrusion, Singularity’s author suggests the following deployment method:

cd /dev/shm
git clone https://github.com/MatheuZSecurity/Singularity
cd Singularity
sudo bash setup.sh
sudo bash scripts/x.sh

There are several tripwires that can be installed to detect this behavior with a nearly zero false-positive rate, starting with the cloning of a GitHub repository into the /dev/shm directory.

sequence by process.entity_id, host.id with maxspan=10s
  [process where event.type == "start" and event.action == "exec" and (
     (process.name == "git" and process.args == "clone") or
     (
       process.name in ("wget", "curl") and
       process.command_line like~ "*github*"
     )
  )]
  [file where event.type == "creation" and
   file.path like ("/tmp/*", "/var/tmp/*", "/dev/shm/*")]

Cloning repositories into /tmp and /var/tmp is common, so these paths could be removed from the rule in environments where cloning repositories is routine. The same activity in /dev/shm, however, is very uncommon.

The setup.sh script, called by the loader, continues by compiling the LKM in a /dev/shm/ subdirectory. Real threat actors generally do not compile on the host itself; however, it is not that uncommon to see it happen.

sequence with maxspan=10s
  [process where event.type == "start" and event.action == "exec" and
   process.name like (
     "*gcc*", "*g++*", "c++", "cc", "c99", "c89", "cc1*", "clang*",
     "musl-clang", "tcc", "zig", "ccache", "distcc"
   )] as event0
  [file where event.action == "creation" and file.path like "/dev/shm/*" and
   process.name like (
     "ld", "ld.*", "lld", "ld.lld", "mold", "collect2", "*-linux-gnu-ld*", 
     "*-pc-linux-gnu-ld*"
   ) and
   stringcontains~(event0.process.command_line, file.name)]

This endpoint logic detects the execution of a compiler, followed by the linker creating a file in /dev/shm (or a subdirectory).

And finally, since the whole repository was cloned into /dev/shm, and setup.sh and x.sh were executed from it, we will observe process execution from the shared-memory directory, which is uncommon in most environments:

process where event.type == "start" and event.action == "exec" and
process.executable like ("/dev/shm/*", "/run/shm/*")

These rules are available within the detection-rules and protections-artifacts repositories.

Masquerading as legitimate processes

To avoid scrutiny during process enumeration or system monitoring, rootkits often rename their processes and threads to match benign system components. Common disguises include:

  • kworker, migration, or rcu_sched (kernel threads)
  • sshd, systemd, dbus-daemon, or bash (userland daemons)

These names are chosen to blend in with the output of tools like ps, top, or htop, making manual detection more difficult. Examples of rootkits that leverage this technique include Reptile and PUMAKIT. Reptile generates unusual network events through kworker upon initialization:

network where event.type == "start" and event.action == "connection_attempted" 
and process.name like~ ("kworker*", "kthreadd") and not (
  destination.ip == null or
  destination.ip == "0.0.0.0" or
  cidrmatch(
    destination.ip,
    "10.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16", "172.16.0.0/12",
    "192.0.0.0/24", "192.0.0.0/29", "192.0.0.8/32", "192.0.0.9/32",
    "192.0.0.10/32", "192.0.0.170/32", "192.0.0.171/32", "192.0.2.0/24", 
    "192.31.196.0/24", "192.52.193.0/24", "192.168.0.0/16", "192.88.99.0/24",
    "224.0.0.0/4", "100.64.0.0/10", "192.175.48.0/24","198.18.0.0/15", 
    "198.51.100.0/24", "203.0.113.0/24", "240.0.0.0/4", "::1",
    "FE80::/10", "FF00::/8"
  )
)

The example below shows Reptile’s port knocking functionality, where the kernel thread forks, changes its session ID to 0, and sets up the network connection:

Reptile has also been observed leveraging the same kworker process to create files:

file where event.type == "creation" and
process.name like~ ("kworker*", "kthreadd")

PUMAKIT spawns kernel threads to execute userland commands through kthreadd, but similar activity has been observed through a kworker process in other rootkits:

process where event.type == "start" and event.action == "exec" and
process.parent.name like~ ("kworker*", "kthreadd") and
process.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and
process.args == "-c"

These kworker and kthreadd rules may generate false positives due to the Linux kernel's internal operations. These can easily be excluded on a per-environment basis, or additional command-line arguments can be added to the logic.

These rules are available in the detection-rules and protections-artifacts repositories.

Additionally, malicious processes, such as an initial dropper or a persistence mechanism, may masquerade as kernel threads by leveraging a shell built-in: with exec -a, any process can be spawned under a name of the attacker’s choosing. Kernel process masquerading can be detected through the following query:

process where event.type == "start" and event.action == "exec" and 
process.command_line like "[*]" and process.args_count == 1

This behavior is shown below, where several pieces of malware tried to masquerade as either a kernel worker or a web service process.
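To see how little effort this forging takes, here is a harmless Python sketch (not taken from any of the samples above) that reproduces the effect of bash’s `exec -a`: it spawns `/bin/sleep` under a forged, kernel-thread-style argv[0]. It is Linux-only, since it reads `/proc`:

```python
import subprocess
import time

# Spawn /bin/sleep, but forge argv[0] to look like a kernel worker thread;
# this is equivalent to bash's: exec -a "[kworker/0:0]" sleep 5
proc = subprocess.Popen(["[kworker/0:0]", "5"], executable="/bin/sleep")
time.sleep(0.2)  # give the child a moment to exec

# ps, top, and /proc now report the forged name instead of "sleep".
with open(f"/proc/{proc.pid}/cmdline", "rb") as f:
    argv = f.read().split(b"\x00")
print(argv[0].decode())  # [kworker/0:0]

proc.terminate()
proc.wait()
```

A real single-argument payload spawned this way would match the `command_line like "[*]" and process.args_count == 1` logic above; the extra "5" here exists only so sleep has a duration argument.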

This technique is also commonly abused by threat actors leveraging The Hacker’s Choice (THC) toolkit, specifically upon deploying gsocket.

Rules related to kernel masquerading, and masquerading via exec -a more generally, are available in the protections-artifacts repository.

Another technique seen in the wild, and also in Horse Pill, is the use of prctl to stomp its process name. To ensure this telemetry is available, a custom Auditd rule can be created:

-a exit,always -F arch=b64 -S prctl -k prctl_detection

Accompanied by the following detection logic, this will allow for the detection of this technique:

process where host.os.type == "linux" and auditd.data.syscall == "prctl" and
auditd.data.a0 == "f"

In the screenshot below, we can see telemetry examples of this technique in use: the process.executable is gibberish, and prctl is then used to masquerade as a legitimate process on the system.
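The value f is simply hexadecimal for 15, the PR_SET_NAME operation. A minimal Python sketch of the stomping technique itself, using ctypes on Linux (the name kworker/u8:1 is an arbitrary example):

```python
import ctypes

PR_SET_NAME = 15  # auditd records this first argument as a0 == "f" (hex)

# Overwrite this process's comm name, the short name shown by ps/top;
# the executable path on disk is left untouched.
libc = ctypes.CDLL(None, use_errno=True)
libc.prctl(PR_SET_NAME, b"kworker/u8:1", 0, 0, 0)

with open("/proc/self/comm") as f:
    comm = f.read().strip()
print(comm)  # kworker/u8:1
```

The kernel truncates comm names to 15 characters, which is one reason stomped names tend to be short kernel-thread lookalikes.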

This rule, including its setup instructions, is available in the detection-rules repository.

Although there are many ways to masquerade, these are the most common ones observed.

Log and audit cleansing

Many rootkits include routines that erase traces of their installation or activity from logs. One of these techniques is to clear the victim’s shell history. This can be detected in two ways. One method is to detect the deletion of the shell history file:

file where event.type == "deletion" and file.name in (
  ".bash_history", ".zsh_history", ".sh_history", ".ksh_history",
  ".history", ".csh_history", ".tcsh_history", "fish_history"
)

The second method is to detect process executions with command line arguments related to clearing the shell history:

process where event.type == "start" and event.action == "exec" and (
  (
    process.args in ("rm", "echo") or
    (
      process.args == "ln" and process.args == "-sf" and
      process.args == "/dev/null"
    ) or
    (process.args == "truncate" and process.args == "-s0")
  )
  and process.command_line like~ (
    "*.bash_history*", "*.zsh_history*", "*.sh_history*", "*.ksh_history*",
    "*.history*", "*.csh_history*", "*.tcsh_history*", "*fish_history*"
  )
) or
(process.name == "history" and process.args == "-c") or
(
  process.args == "export" and
  process.args like~ ("HISTFILE=/dev/null", "HISTFILESIZE=0")
) or
(process.args == "unset" and process.args like~ "HISTFILE") or
(process.args == "set" and process.args == "history" and process.args == "+o")

Having both detection rules (process and file) active will enable a more robust defense-in-depth strategy.

Upon loading, rootkits may taint the kernel or generate out-of-tree messages that can be identified when parsing syslog and kernel logs. To erase their tracks, rootkits may delete these log files:

file where event.type == "deletion" and file.path in (
  "/var/log/syslog", "/var/log/messages", "/var/log/secure", 
  "/var/log/auth.log", "/var/log/boot.log", "/var/log/kern.log", 
  "/var/log/dmesg"
)

Or clear the kernel message buffer through dmesg:

process where event.type == "start" and event.action == "exec" and
process.name == "dmesg" and process.args in ("-c", "--clear")

An example of a rootkit that automatically clears dmesg is the bds rootkit, which loads by executing /opt/bds_elf/bds_start.sh:

Another means of clearing these logs is by using journalctl:

process where event.type == "start" and event.action == "exec" and
process.name == "journalctl" and
process.args like ("--vacuum-time=*", "--vacuum-size=*", "--vacuum-files=*")

This is a technique that was used by Singularity:

Another technique employed by Singularity’s loader script is the deletion of all files associated with the rootkit, either when it fails to load or once loading completes. For more thorough deletion, the author chose shred over rm. rm simply unlinks the file, removing its directory entry; this is fast, but the underlying data remains recoverable. shred overwrites the file’s contents multiple times with random data before removal, making recovery impractical. This makes the deletion more permanent but also noisier from a behavior-detection point of view, since shred is not commonly used on most Linux systems.

process where event.type == "start" and event.action == "exec" and
process.name == "shred" and (
// Any short-flag cluster containing at least one of u/z, 
// and containing no extra "-" after the first one
process.args regex~ "-[^-]*[uz][^-]*" or
process.args in ("--remove", "--zero")
) and
not process.parent.name == "logrotate"

The regex above makes it harder to evade detection by combining or reordering flags. Below is an example of Singularity looking for any files related to its deployment and shredding them:
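To make the regex’s behavior concrete, here is a small Python check (Python’s `re` syntax is compatible with the pattern above; EQL’s case-insensitive `regex~` is modeled with a case-insensitive full match):

```python
import re

# Same pattern as the EQL rule: a short-flag cluster that contains
# at least one of shred's -u (remove) or -z (zero) flags.
SHRED_FLAGS = re.compile(r"-[^-]*[uz][^-]*", re.IGNORECASE)

def is_suspicious_cluster(arg: str) -> bool:
    return SHRED_FLAGS.fullmatch(arg) is not None

print(is_suspicious_cluster("-u"))        # True
print(is_suspicious_cluster("-zvfU"))     # True: flags combined/reordered
print(is_suspicious_cluster("-n3"))       # False: no remove/zero flag
print(is_suspicious_cluster("--remove"))  # False: long form, caught by
                                          # the rule's `in` clause instead
```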

These file and log removal techniques can be detected via several out-of-the-box detection rules.

Once a rootkit is finished clearing its traces, it may timestomp the files it altered to ensure no file modification trace is left behind:

process where event.type == "start" and event.action == "exec" and
process.name == "touch" and
process.args like (
  "-t*", "-d*", "-a*", "-m*", "-r*", "--date=*", "--reference=*", "--time=*"
)

An example of this is shown here, where a threat actor uses the timestamp of /etc/ld.so.conf as a reference time for the files in /dev/shm, in an attempt to blend in:
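The same effect as `touch -r <reference> <target>` can be sketched in a few lines of Python with `os.utime`; the paths below are throwaway temp files rather than the real /etc/ld.so.conf:

```python
import os
import tempfile

# Stand-ins for the reference file (/etc/ld.so.conf in the observed case)
# and the attacker's freshly dropped payload.
ref_fd, ref_path = tempfile.mkstemp()
os.close(ref_fd)
os.utime(ref_path, (1000000000, 1000000000))  # give it an "old" timestamp

tgt_fd, tgt_path = tempfile.mkstemp()
os.close(tgt_fd)

# Equivalent of `touch -r ref_path tgt_path`: copy the reference file's
# access and modification times onto the new file.
ref_st = os.stat(ref_path)
os.utime(tgt_path, (ref_st.st_atime, ref_st.st_mtime))

tgt_st = os.stat(tgt_path)
print(tgt_st.st_mtime == ref_st.st_mtime)  # True
os.unlink(ref_path)
os.unlink(tgt_path)
```

Note that this only rewrites atime and mtime; the inode change time (ctime) is updated by the call itself, which is one reason timestomping remains forensically detectable.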

We have added coverage for this technique via both detection rules and protection artifacts.

Although there are more techniques than we could cover in this research, we are confident it will help deepen the understanding of the Linux rootkit landscape and its detection engineering.

Rootkit prevention techniques

Preventing Linux rootkits requires a layered defense strategy that combines kernel and userland hardening, strict access control, and continuous monitoring. Mandatory access control frameworks, such as SELinux and AppArmor, limit process behavior and userland persistence opportunities. Meanwhile, kernel hardening techniques, including Lockdown Mode, KASLR, SMEP/SMAP, and tools like LKRG, mitigate the risk of kernel-level compromise. Restricting kernel module usage by disabling dynamic loading or enforcing module signing further reduces common vectors for rootkit deployment.

Visibility into malicious behavior is enhanced through Auditd and file integrity monitoring for syscall and file activity, as well as through EDR solutions that identify and prevent suspicious runtime behaviors. Security is further strengthened by minimizing process privileges through seccomp-bpf, Linux capabilities, and the Landlock LSM, thereby restricting syscall access and filesystem interactions.

Timely kernel and software updates, supported by live patching when necessary, close known vulnerabilities before they are exploited. Additionally, filesystem and device configurations should be hardened by remounting sensitive filesystems with restrictive flags and disabling access to kernel memory interfaces, such as /dev/mem and /proc/kallsyms.

No single control can prevent rootkits outright. A layered defense, combining configuration hardening, static and dynamic detection, and forensic readiness, remains essential.

Conclusion

In part one of this series, we examined how Linux rootkits operate internally, exploring their evolution, taxonomy, and techniques for manipulating user space and kernel space. In this second part, we translated that knowledge into practical detection strategies, focusing on the behavioral signals and runtime telemetry that expose rootkit activity.

While Windows malware continues to dominate the focus of commercial security vendors and threat research communities, Linux remains comparatively under-researched, despite powering the majority of the world’s cloud infrastructure, high-performance computing environments, and internet services.

Our analysis highlights that Linux rootkits are evolving. The increasing adoption of technologies such as eBPF, io_uring, and containerized Linux workloads introduces new attack surfaces that are not yet well understood or widely protected.

We encourage the security community to:

  • Invest in Linux-focused detection engineering from both static and dynamic angles.
  • Share research findings, proofs of concept, and detection strategies openly to accelerate collective knowledge among defenders.
  • Collaborate across vendors, academia, and industry to push Linux rootkit defense toward the same maturity level achieved on Windows.

Only by collectively improving visibility, detection, and response capabilities can defenders stay ahead of this stealthy and rapidly evolving threat landscape.
