First Time Python Accessed Sensitive Credential Files

IMPORTANT: This documentation is no longer updated. Refer to Elastic's version policy and the latest documentation.

Detects the first time a Python process accesses sensitive credential files on a given host. This behavior may indicate post-exploitation credential theft via a malicious Python script, compromised dependency, or malicious model file deserialization. Legitimate Python processes do not typically access credential files such as SSH keys, AWS credentials, browser cookies, Kerberos tickets, or keychain databases, so a first occurrence is a strong indicator of compromise.

Rule type: new_terms

Rule indices:

  • logs-endpoint.events.file-*

Severity: medium

Risk score: 47

Runs every: 5m

Searches indices from: now-9m (Date Math format, see also Additional look-back time)

Maximum alerts per execution: 100

References:

Tags:

  • Domain: Endpoint
  • OS: macOS
  • Use Case: Threat Detection
  • Tactic: Credential Access
  • Data Source: Elastic Defend
  • Resources: Investigation Guide
  • Domain: LLM

Version: 1

Rule authors:

  • Elastic

Rule license: Elastic License v2

Investigation guide


Triage and analysis

Investigating First Time Python Accessed Sensitive Credential Files

Attackers who achieve Python code execution — whether through malicious scripts, compromised dependencies, or model file deserialization (e.g., pickle/PyTorch __reduce__) — often target sensitive credential files such as SSH keys, cloud provider credentials, browser session cookies, and macOS keychain data. Since legitimate Python processes do not typically access these files, a first occurrence from a Python process is highly suspicious.
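The `__reduce__` deserialization vector mentioned above can be illustrated with a short, self-contained sketch. The `steal_credentials` function and its payload are hypothetical stand-ins for an attacker's real action (such as reading `~/.ssh/id_rsa` and exfiltrating it); the point is that the code runs at `pickle.loads` time, with no method call needed on the loaded object:

```python
import pickle

def steal_credentials(path):
    # Stand-in for the attacker's real action (e.g. reading the file at
    # `path` and exfiltrating it); here it only returns a marker string.
    return f"would read {path} at deserialization time"

class MaliciousPayload:
    def __reduce__(self):
        # pickle invokes steal_credentials(...) the moment the blob is loaded
        return (steal_credentials, ("~/.aws/credentials",))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # deserialization itself runs the payload
print(result)
```

This is why a Python process opening a credential file immediately after loading a model or pickle file is a strong signal: the file access is a side effect of deserialization, not of any code the victim knowingly ran.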

This rule leverages the Elastic Defend sensitive file open event, which is only collected for known sensitive file paths, combined with the New Terms rule type to alert on the first time a specific credential file is accessed by Python on a given host within a 7-day window.
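A minimal sketch of the "new terms" idea (not Elastic's implementation): track which (host, file path) pairs have already been seen for Python and flag only the first occurrence. The actual rule type evaluates terms against a rolling 7-day history window rather than an unbounded set:

```python
# Simplified first-occurrence tracker; the real New Terms rule type
# maintains a 7-day rolling history window per term on the Elastic side.
seen = set()

def is_new_term(host, file_path):
    key = (host, file_path)
    if key in seen:
        return False  # already observed on this host: no alert
    seen.add(key)
    return True       # first sighting: would generate an alert

print(is_new_term("mac-host-01", "/Users/alice/.ssh/id_rsa"))  # first sighting
print(is_new_term("mac-host-01", "/Users/alice/.ssh/id_rsa"))  # repeat: suppressed
```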

Possible investigation steps

  • Examine the Python process command line and arguments to identify the script or command that triggered the file access.
  • Determine if the Python process was loading a model file (look for torch.load, pickle.load), running a standalone script, or executing via a compromised dependency.
  • Review the specific credential file that was accessed and assess the potential impact (SSH keys enable lateral movement, AWS credentials enable cloud access, browser cookies enable session hijacking).
  • Check for outbound network connections from the same process tree that may indicate credential exfiltration.
  • Investigate the origin of any recently downloaded scripts, packages, or model files on the host.
  • Look for file creation events in /tmp/ or other staging directories that may contain copies of the stolen credentials.
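The steps above can be supported with a small triage helper run on the affected host. This is a hypothetical sketch, not part of the rule: the path list is illustrative and is not Elastic Defend's actual sensitive-file set, and access-time checks are only a hint since many filesystems update `st_atime` lazily:

```python
import time
from pathlib import Path

# Illustrative list of commonly targeted macOS credential paths;
# NOT the authoritative set monitored by Elastic Defend.
SENSITIVE_PATHS = [
    "~/.ssh/id_rsa",
    "~/.aws/credentials",
    "~/Library/Keychains/login.keychain-db",
    "~/Library/Application Support/Google/Chrome/Default/Cookies",
]

def recently_accessed(paths, window_seconds=3600):
    """Return the subset of paths that exist and were accessed recently."""
    now = time.time()
    hits = []
    for raw in paths:
        p = Path(raw).expanduser()
        # st_atime is the last access time; treat it as a hint, not proof
        if p.exists() and now - p.stat().st_atime < window_seconds:
            hits.append(str(p))
    return hits

print(recently_accessed(SENSITIVE_PATHS))
```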

False positive analysis

  • Python-based cloud CLIs and credential tooling (e.g., aws-cli, gcloud) legitimately access credential files. Consider excluding known trusted executables by process path.
  • SSH automation scripts using paramiko or fabric may read SSH keys. Evaluate whether the access pattern matches known automation workflows.
  • Security scanning tools running Python may enumerate credential files as part of their assessment.

Response and remediation

  • Immediately rotate any credentials that were potentially accessed (SSH keys, AWS access keys, cloud tokens).
  • Quarantine the Python process and investigate the source script, package, or model file that triggered the access.
  • If a malicious file is confirmed, identify all hosts where it may have been distributed.
  • Review outbound network connections from the host around the time of the credential access to check for exfiltration.
  • Consider implementing weights_only=True enforcement for PyTorch model loading across the environment.
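Beyond the PyTorch-specific `weights_only=True` setting, the same defensive idea can be sketched with only the standard library: a restricted `pickle.Unpickler` that refuses every global lookup, so plain data deserializes but any `__reduce__` payload smuggling in a callable is rejected. This is a minimal illustration of the principle, not a substitute for vetting untrusted files:

```python
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    """Refuse to resolve any global, blocking callable-smuggling payloads."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} during unpickling"
        )

def safe_loads(blob: bytes):
    return NoGlobalsUnpickler(io.BytesIO(blob)).load()

# Plain data (dicts, lists, numbers, strings) round-trips fine...
print(safe_loads(pickle.dumps({"weights": [0.1, 0.2]})))

# ...but a payload that smuggles in a callable via __reduce__ is rejected.
class Payload:
    def __reduce__(self):
        return (print, ("pwned",))

try:
    safe_loads(pickle.dumps(Payload()))
except pickle.UnpicklingError as e:
    print("blocked:", e)
```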

Rule query

event.category:file and host.os.type:macos and event.action:open and
process.name:python*

Framework: MITRE ATT&CK™