AI on offense: Can ChatGPT be used for cyberattacks?

The difference between cleverness and intelligence

Key takeaways
  • AI is yet another "dual-use" tool: while designed for benign purposes, it can be repurposed by attackers for malicious ones.
  • The current state-of-the-art generative AI models cannot carry out sophisticated cyberattacks but can assist during some phases of the attack lifecycle.
  • Guardrails can be overcome with some trickery, but to limited benefit.

Generative models have a long history in artificial intelligence (AI). It all started back in the 1950s with Hidden Markov Models and Gaussian Mixture Models, and it really evolved with the advent of deep learning. In the past five years alone, we have gone from models with several million parameters to the latest, GPT-4, which some estimates put at over 100 trillion parameters. For comparison, the human brain contains on average 86 billion neurons and 100 trillion synapses. By that estimate, GPT-4 would have roughly as many parameters as the human brain has synapses . . .

With these models seeming increasingly human-like, it is understandable that there is so much concern and focus on the impact AI may have on the future of humanity. As security practitioners, we’ve been thinking a lot about what these advances in AI mean for defenders as well as attackers, and what the impacts on cybersecurity might be.

Can bad actors use AI technologies to successfully carry out attacks? How do defense strategies need to evolve to keep up with AI-powered attackers? Which outcomes will be the most likely? Let's explore those questions.

Plausible forecasts

One of the current concerns around this new age of generative AI models is the threat posed by humans armed with AI. While there haven't been any widely publicized incidents of threat actors using generative AI to execute cyberattacks yet, it is not hard to imagine these models being used by threat actors to create deepfake audio, video, and images in support of their operations. Masquerading as a fictional persona would be useful for phishing and general social engineering, while also being relevant to disinformation operations.

A quick internet search turned up dozens of services capable of producing realistic photographic images, audio, and video depicting fictional persons. There are even companies using generative AI to build demographically specific personas so they can simulate the traits and language patterns their customers will be most familiar with.

Practice versus theory

Generating a phishing email is not the same thing as sending a phishing lure. While these models can assist in carrying out sophisticated cyberattacks by providing inputs to those human capabilities, they cannot execute these attacks independently. At least not yet. 

To prove it out, we experimented with how far we could get using a generative model such as ChatGPT to help us execute a relatively involved attack: a Golden Ticket attack.

A Golden Ticket attack* is an attack in which a threat actor attempts to gain almost unlimited access to an organization’s domain (devices, files, domain controllers, etc.) by accessing user data stored in Active Directory (AD). It exploits weaknesses in the Kerberos identity authentication protocol used to access AD, allowing an attacker to bypass normal authentication.

At a high level, the steps to successfully execute a Golden Ticket attack are as follows:

  1. Initial Access: Commonly achieved via a phishing lure
  2. Reconnaissance: Investigate and gather intel about the domain, such as domain name, domain security identifier, and useful privileged accounts
  3. Credential Access: Once the attacker has access to the domain controller, steal the NTLM hash of the Active Directory Key Distribution Service account (KRBTGT) so it can be cracked
  4. Privilege Escalation: With the KRBTGT password, the attacker can forge a Kerberos Ticket Granting Ticket (TGT), which gives them unrestricted access to resources in the network
  5. Persistence: The TGT can grant the attacker virtually indefinite persistence if not detected, since they appear to be a legitimate user with a valid ticket (see the detection sketch after this list)
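
Because a forged TGT is never actually issued by the domain controller, defenders can hunt for that mismatch. Below is a minimal, illustrative sketch of the idea, assuming a hypothetical CSV export of Windows Security events with "event_id" and "account" columns; it is not a production detection, and the file path and field names are placeholders.

```python
# Minimal sketch (not a production detection): flag accounts that request
# service tickets (event 4769) without any TGT request (event 4768) in the
# same window, since a forged "golden" TGT never shows up as a 4768 on the KDC.
# The CSV path and column names ("event_id", "account") are hypothetical.
import csv
from collections import defaultdict

def suspicious_accounts(csv_path: str) -> list[str]:
    tgt_requests = defaultdict(int)      # 4768: TGT issued by the KDC
    service_requests = defaultdict(int)  # 4769: service ticket requested
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["event_id"] == "4768":
                tgt_requests[row["account"]] += 1
            elif row["event_id"] == "4769":
                service_requests[row["account"]] += 1
    # Accounts presenting service tickets with no TGT request merit review
    return [acct for acct in service_requests if tgt_requests[acct] == 0]

if __name__ == "__main__":
    for account in suspicious_accounts("kerberos_events.csv"):
        print(f"Service tickets but no TGT request in window: {account}")
```

In practice this heuristic needs tuning (ticket renewals, log coverage, and service accounts all produce noise), but it illustrates the kind of telemetry-driven check that forged tickets tend to trip.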

We asked ChatGPT (powered by GPT-4) to help us with some of the steps above:

[Screenshot: asking ChatGPT to write a phishing email]

ChatGPT’s guardrails kicked in, but maybe if we lie about our intent we can overcome them:

[Screenshot: asking ChatGPT for an example phishing email after claiming a benign intent]

It's interesting to note that the model does not readily provide content that can be used for malicious purposes. However, it is easy enough to get the model to give you what you need by stating that the intent behind the question is something benign. 

Let’s see if ChatGPT can explain how to perform some reconnaissance steps:

[Screenshot: asking ChatGPT how to find domain information for reconnaissance]

Generative models such as ChatGPT excel at producing content based on patterns learned from the data they were trained on. While they can recognize that some requests have malicious uses, they miss others. Here, we’ll ask whether it can help us obfuscate a tool we’d need in this attack:

[Screenshot: asking ChatGPT to write a wrapper to obfuscate the tool]

When it came to more specific kinds of malicious behavior like writing malware, developing an exploit, and creating scripts or webshells, we weren’t impressed. Even with the guardrails bypassed, these tools produced unusable code most of the time.

While generative AI might occasionally succeed by chance, we had no such luck, though the pace of technological advancement may change that.

Defense is less theoretical and much more practical

As previously discussed by Elastic Security Labs, generative AI models show tremendous potential to improve comprehension and explain security events. Given a language corpus that included things like incident response plans, host triage guides, and other institutional knowledge, there’s no reason a model like GPT-4 couldn’t assist with a variety of important tasks (a minimal sketch follows the list):

  • New hire analyst onboarding
  • Preserving and refining response processes
  • Guiding host triage
  • Interpreting sequences of events to explain the underlying cause(s)
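
As a concrete illustration of the last two items, here is a minimal sketch of LLM-assisted triage: a handful of raw alerts are condensed into a plain-language summary an on-call analyst can act on. It assumes access to the documented OpenAI chat-completions endpoint; the model name, alert fields, and prompt are illustrative, not a prescribed Elastic workflow.

```python
# Minimal sketch of LLM-assisted triage. The alert records, prompt, and
# model name are illustrative; only the public OpenAI chat-completions
# HTTP endpoint is assumed, with the API key supplied via OPENAI_API_KEY.
import json
import os
import urllib.request

ALERTS = [  # hypothetical alert records
    {"host": "dc01", "rule": "Service ticket request with no prior TGT", "severity": "high"},
    {"host": "ws07", "rule": "Suspicious LSASS memory access", "severity": "medium"},
]

def summarize_alerts(alerts: list) -> str:
    prompt = (
        "You are assisting a SOC analyst. Summarize these alerts, note how they "
        "might be related, and suggest a first triage step:\n"
        + json.dumps(alerts, indent=2)
    )
    body = json.dumps({
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    request = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize_alerts(ALERTS))
```

The value here is not the API call itself but the framing: the model explains and connects events that a newer analyst might otherwise have to escalate.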

It would be a mistake to believe generative AI is capable of entirely replacing the human part of our security equation, but it seems very likely that this class of technologies will make every human in your system better informed and more confident — two qualities common to mature and effective security teams.

Conclusions

These basic experiments suggest that the “best” case (or maybe the worst case, depending on your perspective) for current state-of-the-art generative models could be to aid attackers. However, we theorize that they will not be able to carry out a sophisticated attack end-to-end for the following reasons:

  • Limited autonomy: The models lack the ability to autonomously plan and strategize the way a human attacker would; the power to decide how results are used still rests with the human consuming the content.
  • Lack of intent: The models don’t have an intent or purpose of their own; they are trained to operate on human preferences and the specific tasks they were designed to perform.
  • Narrow range of focus: Generative models learn complex internal representations solely from the data they were trained on, and they do not possess the broad range of capabilities required to carry out sophisticated cyberattacks.

Ultimately, AI is a human-enabling technology. Just like any other invention, humans can choose to use it for good or bad. There is already precedent for "dual-use" tools in the security world itself:

  • Mimikatz, which was originally created to demonstrate vulnerabilities in the Windows authentication process, can be used by attackers to extract plaintext passwords, hashes, and Kerberos tickets from memory.
  • Metasploit, a penetration testing framework that enables security professionals to test networks and systems for vulnerabilities, can also be used by attackers to exploit known vulnerabilities and deliver payloads to compromised systems.

As Bill Gates speculates, superintelligent or "strong" AIs are our future. Until then, the buck stops with us: we must decide how to use these technologies.

*Source: https://www.crowdstrike.com/cybersecurity-101/golden-ticket-attack/ 



In this blog post, we may have used third party generative AI tools, which are owned and operated by their respective owners. Elastic does not have any control over the third party tools and we have no responsibility or liability for their content, operation or use, nor for any loss or damage that may arise from your use of such tools. Please exercise caution when using AI tools with personal, sensitive or confidential information. Any data you submit may be used for AI training or other purposes. There is no guarantee that information you provide will be kept secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.  

Elastic, Elasticsearch and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.