The State of Cybersecurity and Cybercriminals a Year After the Explosion of LLMs

Explore how cybercriminals exploit generative AI and how security teams can stay ahead of them.

June 26, 2024

The State of Cybersecurity
(Credits: Shutterstock)

It’s been a year since large language models (LLMs) dramatically improved. Threat actors quickly exploited generative AI for malicious purposes, while security teams began using AI to enhance their defenses, says Nick Ascoli, senior product strategist at Flare.

In the past year, the capabilities of large language models (LLMs) have expanded dramatically. OpenAI released the largest language model, GPT-4, in March 2023, which can, in some contexts, perform close to human capabilities. A blessing and a curse in cybersecurity. Therefore, it’s unsurprising that a Datanami report found that 67% of organizations already leverage generative AI.   

Threat actors swiftly began exploiting generative AI to bolster their malicious activities, while security teams and cybersecurity companies were also quick to embrace generative AI to enhance their security operations. 

How are threat actors applying LLMs to their criminal activities, and how are cyber teams pushing to evolve security operations ahead of them?

Cybercriminals and LLMs

Generative AI and LLMs are a double-edged sword, as threat actors and blue teamers have much to gain from them. Cybercriminals have abused open-source models and created malicious versions of AI chatbots such as DarkBard, FraudGPT, and WormGPT. Last year, researchers started identifying open-source LLMs that had been tweaked for cybercrime and were sold on the dark web. 

Relevant generative AI applications for threat actors include:

  • Writing variations of convincing phishing emails.
  • Social engineering with voice phishing (vishing).
  • Speeding up previously manual elements of cybercrime to increase their attacks in rate and scale. For example, threat actors can buy stealer logs that contain login credentials and can use an LLM to speed up the rate of trying each login and password. 

The use of open-source LLMs to facilitate targeted phishing is of particular concern because studies have shown that spear-phishing attacks are dramatically more effective than generic phishing emails. In fact, spear phishing emails account for less than 0.1% of all emails sent yet cause 66%Opens a new window of all breaches, according to Barracuda‘s 2023 Phishing Trends Report.

Even within days of GPT-4’s release, threat actors were testing out how to use the LLM for malicious purposes, including:

  • Voice spoofing to get MFA (multi-factor authentication) access or OTP codes
  • Bypassing safeguards built into LLMs and jailbreaking them

Outsmarting Threat Actors

Though threat actors can and do use AI and LLMs in nefarious ways, security teams can and must evolve ahead of cybercriminals’ AI applications to protect their organizations better. 

The cybersecurity industry, in general, faces challenges with secure coding, bad documentation, and a lack of guardrails and training. These issues, though preventable, can make organizations’ attack surfaces even more vulnerable to threat actors. 

However, AI can boost efforts and help secure entry points across the organization. AI is particularly useful for analyzing and synthesizing large amounts of information. This can improve an organization’s security posture and strengthen its infrastructure against threat actors’ attacks. Some ways AI can help cybersecurity teams include:

  • Creating up-to-date and consistent documentation makes it harder for attackers to exploit vulnerabilities.
  • Writing development guides with security in mind helps reduce the attack surface for malicious actors.
  • Implementing and deploying custom code across the organization helps flag potential security risks in real-time, allowing developers to address them immediately.
  • Identifying external threats faster than ever with automated risk monitoring before threat actors act on them.

With greater consistency in building coding infrastructure, fewer areas can be vulnerable to threat actor exploitation. 

It can also be helpful in more “casual settings.” For example, if a security analyst has a question that is not well answered in relevant forums, such as “How would I write a script to do [insert action], entering the question into an LLM could provide a helpful answer that is synthesized across many sources. 

See More: The Problem With Disparate Threat Actor Naming Taxonomies

For organizations looking to leverage AI models to protect themselves, the following can be a good start to help them use AI safely: 

  • Policies and processes: Establishing guidelines and implementing controls on how employees interact with AI models, such as not inputting sensitive internal information into an LLM and sharing data, can help prevent data leaks.
  • Tokenization: Masking sensitive information via tokenization allows organizations to securely use models in corporate applications, reducing potential data risks while ensuring consistent output.

A Peek into the Future of AI

New technology makes the future exciting but uncertain, and there are possibilities for short and medium-term AI risks that organizations should be aware of and implement protections for. 

Short term risks

  • Data leakage risks: Employees using LLMs may unintentionally disclose sensitive data into the models, potentially losing control of this information. Cybercriminals could exploit compromised accounts to access the user’s interaction history with the model and sell this data on the dark web. Security teams should conduct workshops explaining how LLMs work, their data practices, and the potential risks of entering proprietary information. They can also develop guidelines outlining what data is off-limits for LLMs, including examples like formulas, client details, or financial data.
  • Model attacks and training data exposure: Adversaries can attack models, causing them to reveal sensitive training data, including personally identifiable information or confidential user-provided prompts. Security teams can strengthen their organization’s infrastructure through pen testing. This involves sending specific prompts and inputs designed to expose vulnerabilities in the LLM and allows security teams to be proactive in their approach to cybersecurity. Discovering and fixing vulnerabilities before attackers can exploit them can significantly reduce the risk of data breaches and other security incidents. 

Medium-term risks

In the coming years, agential models could significantly alter the threat landscape. As increasingly capable models are integrated to create AI “agents,” language models could automate traditionally manual tasks, such as:

  • Vulnerability scanning: Language models can actively search for vulnerabilities and steal logs containing corporate access credentials and secrets on platforms like GitHub faster and more comprehensively than humans. The good news is that security teams can, too. Instead of falling behind with manual monitoring, security teams can monitor for external threats to remediate them before threat actors can exploit them.
  • Deepfake and vishing campaigns: AI models can already analyze authentic voices and videos to synthesize near-perfect imitations and analyze data breaches or social media information to help attackers craft personalized messages—helping them appear more trustworthy. Organizations can train their employees to better recognize these by showing examples of how realistic they can be and what cues to look for, such as unexpected urgency and requests for sensitive information. 

Moreover, these medium-term risks have substantial implications for ransomware, a trend that has reached epic proportions in recent years. Strengthening spear phishing training campaigns by increasing their frequency will become crucial as attackers leverage AI tools to enhance their tactics. 

Cybersecurity professionals can build a stronger defense by understanding how LLMs are being weaponized. With strategies including continuous monitoring for suspicious activity, user education on phishing tactics, and leveraging AI-powered security solutions specifically designed to detect LLM-generated content, security professionals can ensure they remain a step ahead of malicious actors in the ever-changing digital landscape.

MORE ON AI IN CYBERSECURITY

Nick Ascoli
Nick Ascoli

Senior Product Strategist, Flare

Nick Ascoli is a Senior Product Strategist at Flare and an experienced threat researcher who is recognized for his expertise in data leaks, reconnaissance, and detection engineering. Nick is an active member of the cybersecurity community contributing to open-source projects, regularly appearing on podcasts (Cyberwire, Simply Cyber, etc.) and speaking at conferences (GrrCON, B-Sides, DEFCON Villages, SANS, etc.)
Take me to Community
Do you still have questions? Head over to the Spiceworks Community to find answers.