Possible Consent Grant Attack via Azure-Registered Application

Detects when a user grants permissions to an Azure-registered application or when an administrator grants tenant-wide permissions to an application. An adversary may create an Azure-registered application that requests access to data such as contact information, email, or documents.

Rule type: query

Rule indices:

  • filebeat-*
  • logs-azure*
  • logs-o365*

Severity: medium

Risk score: 47

Runs every: 5m

Searches indices from: now-25m (Date Math format, see also Additional look-back time)

Maximum alerts per execution: 100

References:

Tags:

  • Domain: Cloud
  • Data Source: Azure
  • Data Source: Microsoft 365
  • Use Case: Identity and Access Audit
  • Resources: Investigation Guide
  • Tactic: Initial Access

Version: 213

Rule authors:

  • Elastic

Rule license: Elastic License v2

Investigation guide

Triage and analysis

Investigating Possible Consent Grant Attack via Azure-Registered Application

In an illicit consent grant attack, the attacker creates an Azure-registered application that requests access to data such as contact information, email, or documents. The attacker then tricks an end user into granting that application consent to access their data either through a phishing attack, or by injecting illicit code into a trusted website. After the illicit application has been granted consent, it has account-level access to data without the need for an organizational account. Normal remediation steps like resetting passwords for breached accounts or requiring multi-factor authentication (MFA) on accounts are not effective against this type of attack, since these are third-party applications and are external to the organization.

Official guidance for detecting and remediating this attack is available in Microsoft's documentation on illicit consent grants.
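
As a minimal triage sketch, the following listing uses the AzureAD PowerShell module (the same module whose cmdlets are referenced in the remediation steps below; it is deprecated in favor of Microsoft Graph PowerShell). It assumes you can run Connect-AzureAD with directory read permissions, and it surfaces delegated permission grants so single-user consents (ConsentType "Principal") can be distinguished from tenant-wide admin consents (ConsentType "AllPrincipals").

# Sketch: list delegated permission grants across the tenant and separate
# per-user consents from tenant-wide admin consents.
Connect-AzureAD

Get-AzureADOAuth2PermissionGrant -All $true |
    Select-Object ClientId, ConsentType, PrincipalId, Scope |
    Sort-Object ConsentType |
    Format-Table -AutoSize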

Possible investigation steps

  • From the Azure AD portal, review the application that was granted permissions:
      • Click the Review permissions button on the Permissions blade of the application.
      • An app should require only permissions related to its purpose. If that's not the case, the app might be risky.
      • Apps that require high privileges or admin consent are more likely to be risky.
  • Investigate the app and the publisher. The following characteristics can indicate suspicious apps:
      • A low number of downloads.
      • A low rating or score, or bad comments.
      • Apps whose last update is not recent, which might indicate an app that is no longer supported.
      • Apps with a suspicious publisher or website.
  • Export and examine the OAuth app audit log to identify affected users (see the PowerShell sketch after this list).
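
The following is a hedged investigation sketch, again using the deprecated AzureAD PowerShell module, for enumerating a specific application's delegated grants and the users who consented to it. "Suspicious App" is a hypothetical display name used only for illustration; replace it with the application under review.

# Sketch: inspect one suspicious application's consent grants and list affected users.
# "Suspicious App" is a placeholder display name, not a value produced by this rule.
Connect-AzureAD
$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'Suspicious App'"

# Delegated (OAuth) permission grants issued to this application.
$grants = Get-AzureADOAuth2PermissionGrant -All $true |
    Where-Object { $_.ClientId -eq $sp.ObjectId }
$grants | Select-Object ConsentType, Scope | Format-Table -AutoSize

# Users who individually consented to the application.
$grants | Where-Object { $_.ConsentType -eq 'Principal' } |
    ForEach-Object { Get-AzureADUser -ObjectId $_.PrincipalId } |
    Select-Object DisplayName, UserPrincipalName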

False positive analysis

  • This mechanism can be used legitimately. Malicious applications abuse the same workflow used by legitimate apps. Thus, analysts must review each app consent to ensure that only desired apps are granted access.

Response and remediation

  • Initiate the incident response process based on the outcome of the triage.
  • Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
      • Identify the account role in the cloud environment.
      • Assess the criticality of affected services and servers.
      • Work with your IT team to identify and minimize the impact on users.
      • Identify if the attacker is moving laterally and compromising other accounts, servers, or services.
      • Identify any regulatory or legal ramifications related to this activity.
  • Disable the malicious application to stop user access and the application's access to your data.
  • Revoke the application's OAuth consent grant. The Remove-AzureADOAuth2PermissionGrant cmdlet can be used to complete this task (see the PowerShell sketch after this list).
  • Remove the service principal's application role assignment. The Remove-AzureADServiceAppRoleAssignment cmdlet can be used to complete this task.
  • Revoke the refresh token for all users assigned to the application. Azure provides a playbook for this task.
  • Report the application as malicious to Microsoft.
  • Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker’s access to the environment. Work with your IT teams to minimize the impact on business operations during these actions.
  • Investigate the potential for data compromise from the user’s email and file sharing services. Activate your Data Loss incident response playbook.
  • Disable users' ability to grant consent to applications on their own behalf.
  • Enable the Admin consent request feature.
  • Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
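
The following is a hedged remediation sketch built around the cmdlets named in the steps above (AzureAD module). It assumes the malicious application's service principal can be resolved by display name ("Suspicious App" is a placeholder) and that the operator holds sufficient administrative rights; treat it as a starting point rather than a complete playbook.

# Sketch: revoke consent grants, remove app role assignments, and revoke refresh tokens
# for a malicious application. "Suspicious App" is a hypothetical display name.
Connect-AzureAD
$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'Suspicious App'"

# Snapshot delegated grants before deleting them so affected users can still be resolved.
$grants = Get-AzureADOAuth2PermissionGrant -All $true |
    Where-Object { $_.ClientId -eq $sp.ObjectId }

# Revoke refresh tokens for every user who individually consented.
$grants | Where-Object { $_.ConsentType -eq 'Principal' } |
    ForEach-Object { Revoke-AzureADUserAllRefreshToken -ObjectId $_.PrincipalId }

# Revoke the application's OAuth consent grants.
$grants | ForEach-Object { Remove-AzureADOAuth2PermissionGrant -ObjectId $_.ObjectId }

# Remove application (app role) permission assignments held by the service principal.
Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true |
    Where-Object { $_.PrincipalType -eq 'ServicePrincipal' } |
    ForEach-Object {
        Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
    }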

Setup

This rule requires data from the Azure Fleet integration, the Azure Filebeat module, or similarly structured data.

Rule query

event.dataset:(azure.activitylogs or azure.auditlogs or o365.audit) and
  (
    azure.activitylogs.operation_name:"Consent to application" or
    azure.auditlogs.operation_name:"Consent to application" or
    event.action:"Consent to application."
  ) and
  event.outcome:(Success or success)

Framework: MITRE ATT&CK™