Preamble
The landscape of cybersecurity is evolving, and the role of the Detection Engineer (DE) is more critical and demanding than ever. Traditionally, this role involves a comprehensive, end-to-end workflow: from threat modeling and telemetry tuning to writing, testing, and maintaining performance-optimized detection rules to flag malicious behavior.
Elastic Security is purpose-built to streamline this entire workflow, empowering DEs - and anyone involved in security operations - to build, manage, and optimize detection rules at scale. This allows security teams to concentrate their efforts on the most critical task: protecting the organization.
The rise of generative AI and, more specifically, advanced AI coding agents like Claude and Cursor, is fundamentally changing and supercharging this workflow. These tools are no longer just for general software development; they are becoming expert partners for the Security Operations Center (SOC). By integrating the power of conversational AI, these agents can take high-level security requirements and instantly translate them into validated, workable detection logic.
From Generalist to Elastic Expert: Agent Skills
Elastic Security is embracing this shift in two ways: with native AI capabilities built into our agentic security operations platform, and by open-sourcing agent skills for third-party agentic IDEs that cover the entire Elastic ecosystem (Security, Observability, etc.). By loading these skills into any agent runtime, your AI assistant goes from being a generalist to an on-demand expert in Elastic's tooling. You can then ask your agent to triage alerts or, as in this walkthrough, expertly create and tune detection rules.
A Use Case Walkthrough: The Notepad++ Attack
To illustrate the agent’s power, let’s look at a real-world supply chain attack involving a backdoor targeting Notepad++ infrastructure, described in Elastic Security Labs’ blog, “Speeding APT Attack”.
Instant Conditional Rules
A detection engineer’s first step is often to create conditional rules based on known Indicators of Compromise (IOCs). To begin, we can instruct the agent to investigate data within Elastic Security, as evidence of the attack was present in our cluster.
"Can you help me create a detection rule that will detect malicious activity similar
to what I'm seeing in my Elastic Security deployment involving notepad++.exe
and BluetoothService.exe?"
The agent immediately went to work:
- It rapidly found process lineage and documented attack details.
- It extracted key IOCs and found the corresponding MITRE ATT&CK™ mappings.
- It generated two foundational rules: one for a suspicious child process spawned by Notepad++, and one focusing on the masqueraded executable.
- Crucially, the rules were immediately tested against threat emulation data, confirming multiple successful hits.
Each step happened quickly, and the built-in validation significantly accelerates the 'test and tune' phase.
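As a rough sketch, the first of those foundational rules might resemble the following EQL query. Field names follow the Elastic Common Schema, and the exclusion list is an illustrative assumption, not the agent's actual output:

```eql
// Alert on unexpected child processes of Notepad++
process where event.type == "start" and
  process.parent.name : "notepad++.exe" and
  // exclude the updater binaries Notepad++ legitimately spawns
  not process.name : ("notepad++.exe", "gup.exe", "updater.exe")
```

The masquerade-focused counterpart could instead key on a mismatch between `process.name` and `process.pe.original_file_name`.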
Let’s take a look at the agent-created rule in Elastic Security:
Diving into Advanced ES|QL Aggregation
Conditional logic is great, but modern threats require more behavioral and entity-focused detections. Using Elastic’s powerful piped query language, ES|QL (Elasticsearch Query Language), the agent was challenged to create an aggregation-based rule that looks for generic, suspicious characteristics across events, aggregates them, and assigns a dynamic risk score to host and user entities.
The agent delivered, creating an advanced query that looks for suspicious executables, negates benign directories, and assigns a score based on the riskiness of the activity. This demonstrates the agent's ability to create sophisticated detections unique to Elastic's capabilities, moving beyond simple lookups to complex entity analytics.
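In outline, such an aggregation query can look like the following ES|QL sketch. The index pattern, directory filters, and risk weights are illustrative assumptions, not the agent's actual output:

```esql
FROM logs-endpoint.events.process-*
| WHERE event.type == "start"
  // flag executables running from user-writable paths
  AND process.executable LIKE "*\\\\AppData\\\\*"
  // negate known-benign install locations
  AND NOT process.executable LIKE "*\\\\Program Files*"
// weight each event by how suspicious it looks
| EVAL risk = CASE(
    process.name != process.pe.original_file_name, 73,
    process.code_signature.trusted != true, 47,
    21)
// aggregate into a per-host, per-user risk score
| STATS risk_score = MAX(risk), events = COUNT(*) BY host.name, user.name
| WHERE risk_score >= 47
```

The key move is the final `STATS ... BY`, which turns per-event signals into entity-level scores that an ES|QL rule can alert on.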
Here’s the rule in Elastic Security:
Sequential Detections with EQL and Suppression
To detect multi-stage attacks, a sequential rule is essential: if Event A, then Event B, then Event C, then alert. Using the Event Query Language (EQL), the agent crafted a three-stage sequence matching the attack:
- Unsigned dropper activity.
- Service masquerade (implant deployed).
- Final execution for persistence.
To make the rule more reliable and reduce noise, suppression logic was then added, focusing on limiting alerts per unique Host ID. This quick iteration shows how an agent can help a detection engineer rapidly move from a basic detection to a highly robust, multi-stage rule.
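Under the same illustrative assumptions (ECS field names, the BluetoothService.exe implant from this walkthrough), the three stages could be expressed as an EQL sequence like the sketch below, with per-host alert suppression then configured in the rule's settings rather than in the query itself:

```eql
sequence by host.id with maxspan=30m
  // 1. unsigned dropper activity out of Notepad++
  [process where event.type == "start" and
     process.parent.name : "notepad++.exe" and
     process.code_signature.exists == false]
  // 2. service masquerade: the implant lands on disk
  [file where event.type == "creation" and
     file.name : "BluetoothService.exe"]
  // 3. final execution for persistence under services.exe
  [process where event.type == "start" and
     process.name : "BluetoothService.exe" and
     process.parent.name : "services.exe"]
```

`sequence by host.id` ties all three stages to the same host, and `maxspan` bounds how far apart in time they may occur.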
The LLM-Augmented Query: Summaries in the Alert
The ultimate demonstration of the new agentic workflow is Elastic’s ES|QL COMPLETION command. This feature allows an inference model to be referenced directly within the query.
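In outline, a COMPLETION-based query can look like the sketch below. The index pattern and the inference endpoint name are placeholders, and COMPLETION is a technical-preview feature whose syntax may differ across versions:

```esql
FROM logs-endpoint.events.process-*
| WHERE process.name == "BluetoothService.exe"
| LIMIT 10
// build a prompt from the fields the model should reason over
| EVAL prompt = CONCAT(
    "Summarize this process telemetry for a SOC analyst: name=",
    process.name, ", path=", process.executable,
    ", parent=", process.parent.name)
// `my-inference-endpoint` stands in for your default inference model
| COMPLETION esql.summary = prompt WITH `my-inference-endpoint`
```

The `EVAL` assembles a natural-language prompt per row, and `COMPLETION` sends it to the referenced model, writing the response into a new column that flows into the alert.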
The prompt asked the agent to:
Based off this recent elastic blog,
https://www.elastic.co/security-labs/beyond-behaviors-ai-augmented-detection-engineering-with-esql-completion,
create a rule that incorporates a COMPLETION command with my default inference
model that will summarize findings from attack into one "esql.summary"
The result? The generated rule didn't just fire an alert; it natively included an ES|QL summary row in the alert itself:
This telemetry shows a masquerading technique where a process named "BluetoothService.exe" is executing from a user's AppData directory with a PE original name of "BDSubWiz.exe" (a legitimate file mismatch), running as SYSTEM with service-like characteristics including spawning from services.exe, indicating persistence establishment (MITRE ATT&CK T1036.004 Masquerading and T1543 Service Persistence). The executable's location in a user directory, combined with SYSTEM-level execution, service persistence indicators, and the name/PE mismatch across multiple events, suggests Defense Evasion and Persistence stages. This represents high severity due to successful SYSTEM-level persistence with active defense evasion through masquerading.
This cuts triage time dramatically, as analysts no longer need to pivot to a separate runbook to understand the context and severity of the alert.
The Agentic SOC is Here
The collaboration between AI agents and the Elastic Security solution provides a glimpse into Elastic’s Agentic SOC of the future. It’s a world where detection engineers can have a conversation, define their intent, and instantly generate, test, and deploy highly sophisticated, context-rich detection rules. This is not about replacing the human expert, but about augmenting their knowledge and accelerating their workflow, allowing them to focus on high-value threat intelligence and modeling.
Setting Up DGA Detection
Before you get started: AI coding agents operate with real credentials, real shell access, and often the full permissions of the user running them. When those agents are pointed at security workflows, the stakes are higher: you're handing an automated system access to detection logic, response actions, and sensitive telemetry. Every organization's risk profile is different. Before enabling AI-driven security workflows, evaluate what data the agent can access, what actions it can take, and what happens if it behaves unexpectedly.
Don't have an Elasticsearch cluster yet? Start an Elastic Cloud free trial. It takes about a minute to get a fully configured environment.
