SlashNext this week published a report detailing how cybercriminals are harnessing a generative artificial intelligence (AI) tool, dubbed WormGPT, to launch sophisticated phishing and business email compromise (BEC) attacks in greater volume.
WormGPT is an AI platform based on the GPT-J language model that has been specifically trained to aid the development of these types of attacks.
SlashNext CEO Patrick Harr said these attacks are becoming even more difficult to recognize because WormGPT can generate text with impeccable grammar. As such, no amount of end-user training is likely to help employees spot these attacks, he added.
Instead, cybersecurity is moving into an era in which defenders will need to employ AI to combat the AI that cybercriminals are using to launch more cyberattacks, faster.
The challenge is that cybercriminals appear to be adopting tools such as WormGPT faster than cybersecurity teams can embrace AI. Many cybersecurity teams have begun using general-purpose AI platforms, such as OpenAI’s ChatGPT, to more quickly summarize alerts and identify remediations for known vulnerabilities. However, cybersecurity teams will need access to generative AI platforms built on large language models (LLMs) trained on cybersecurity data, noted Harr.
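As a rough illustration of the alert-summarization use case, the sketch below turns a structured alert into a plain-English prompt that could be handed to a general-purpose LLM. The alert fields, prompt wording, and `build_alert_summary_prompt` helper are hypothetical assumptions for this example, not any vendor's actual workflow; the call to a model API is deliberately omitted.

```python
# Hypothetical sketch: rendering a security alert into a prompt that a
# general-purpose LLM (e.g., ChatGPT) could summarize for an analyst.
# The field names and prompt text are illustrative assumptions only.

def build_alert_summary_prompt(alert: dict) -> str:
    """Render a raw alert dict into a plain-English summarization request."""
    lines = [f"{key}: {value}" for key, value in sorted(alert.items())]
    return (
        "Summarize the following security alert in two sentences for a "
        "SOC analyst, and suggest one next remediation step:\n"
        + "\n".join(lines)
    )

alert = {
    "rule": "Suspicious login",
    "src_ip": "203.0.113.7",
    "user": "jdoe",
    "severity": "high",
}
prompt = build_alert_summary_prompt(alert)
print(prompt)
```

In practice the resulting string would be sent to the model via its API, with the caveat raised in the next paragraph: anything pasted into a publicly accessible platform may leave the organization's control.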
That approach is critical, because while generative AI platforms can augment cybersecurity teams, every organization needs to ensure that corporate data isn’t inadvertently incorporated into a platform that is generally accessible to anyone.
SlashNext has been making a case for using a mix of machine learning algorithms and LLMs it developed to identify and remove phishing and BEC attacks from email. Collectively, those capabilities make it possible to use AI to, for example, identify attacks based on tone, intent and other tactics. Those specific tactics are indicative of an attempt to use social engineering techniques to convince an end user that an email is legitimate. At the core of those capabilities is a system that analyzes 700,000 potential threats per day.
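SlashNext has not published its models, but the general idea of scoring an email on social-engineering signals such as urgency and authority can be sketched with a deliberately naive keyword heuristic. The cue lists and threshold below are invented for illustration; a production detector would rely on trained ML models and LLMs rather than word matching.

```python
# Naive illustration of scoring an email for social-engineering cues.
# The cue words and threshold are made-up assumptions for this sketch;
# real detectors use trained models, not keyword lists.

URGENCY_CUES = {"urgent", "immediately", "asap", "overdue"}
AUTHORITY_CUES = {"ceo", "wire", "invoice", "payment"}

def social_engineering_score(text: str) -> int:
    """Count how many urgency/authority cue words appear in the message."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & URGENCY_CUES) + len(words & AUTHORITY_CUES)

def looks_like_bec(text: str, threshold: int = 2) -> bool:
    """Flag a message when it trips at least `threshold` cues."""
    return social_engineering_score(text) >= threshold

email = "Please process this wire payment immediately, the invoice is overdue."
print(looks_like_bec(email))  # multiple urgency and authority cues fire
```

Even this toy version shows why grammar-perfect AI-generated text changes the game: the signals worth scoring are tone and intent, not the spelling mistakes that older filters and user training relied on.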
At this point, it’s not a question of if cybersecurity teams will rely more on AI but rather to what extent. AI was initially greeted with skepticism by many cybersecurity professionals, but as it evolves and the volume and sophistication of cyberattacks increase, it’s becoming apparent that few cybersecurity teams are going to want to do without it. It’s not likely there will be enough cybersecurity professionals available to fill all the existing open positions any time soon, so the only alternative is to rely more on automation.
In fact, given the inherent stress that securing IT environments generates, most cybersecurity professionals want access to tools that make it more likely they can succeed. Over time, most cybersecurity professionals will migrate to organizations that provide them with those tools rather than relying on antiquated platforms. Like it or not, organizations of all sizes are now locked into an AI arms race that shows no signs of abating.