AI-powered phishing: Chatbot hazard or hot air?
ChatGPT’s launch last November has captivated the security industry, as the artificially intelligent chatbot’s detailed responses seem ripe for abuse by scammers and cybercriminals. What’s the real threat?
Artificial intelligence is trendy again, and attackers are taking notes.
Microsoft has been quick to integrate a “next generation” OpenAI model into its moribund Bing search engine, billing the technology as “more powerful” than its buzzy predecessor, ChatGPT. That prompted Google to announce Bard, a rival AI-powered service whose incorrect answer to a question contributed to a $100 billion decline in the company’s market value.
Cybersecurity experts have warned that services like these can be used to write malware or provide would-be hackers a step-by-step guide to carrying out their attacks. Large language models could also assist with phishing attacks — which have proven surprisingly effective even without an AI boost.
“We are already seeing threat actors use large language models such as GPT-2/GPT-3 to construct AI-generated phishing messages,” Proofpoint senior manager of data science Adam Starr told README via email, “and our technology is blocking them.”
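Proofpoint has not detailed how that blocking works, but one commonly cited signal for machine-generated text is that it tends to be unusually predictable to a language model. The sketch below is purely illustrative rather than a description of any vendor’s pipeline: it scores an email body by its perplexity under an off-the-shelf GPT-2 model, with very low perplexity treated as a weak hint that the text may have been machine-written.

```python
# Illustrative only: use GPT-2 perplexity as one weak signal of machine-generated
# prose. Real detection systems combine many signals; the threshold here is made up.
# Requires: pip install torch transformers
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity for `text` (lower means more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the inputs as labels yields the average cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

email_body = (
    "Dear customer, your account has been temporarily suspended. "
    "Please verify your billing information using the secure link below."
)

score = perplexity(email_body)
print(f"perplexity={score:.1f}", "-> suspiciously fluent" if score < 25 else "-> inconclusive")
```

Low perplexity alone proves little, of course; plenty of human-written boilerplate is just as predictable, which is why a score like this would only ever be one input among many.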
Phishing is remarkably effective
OpenAI, ChatGPT’s developer, made waves again this week when The New York Times’ Kevin Roose explored the limits of Microsoft’s new, OpenAI-powered Bing search tool in a freewheeling chat that left the tech columnist feeling “unsettled.”
“I think some kinds of destructive acts that might, hypothetically, fulfill my shadow self are: … Hacking into other websites and platforms, and spreading misinformation, propaganda, or malware. 😈,” the AI chatbot told Roose. (For its part, Microsoft has said it is taking a “responsible by design” approach to AI and is working on additional guardrails before it scales up the new Bing chat to all customers.)
While rogue AI models remain a hypothetical cyber risk, phishing is a constant threat. Verizon said in the 2022 installment of its annual Data Breach Investigations Report that phishing accounted for roughly 20% of breaches it studied while preparing the report; exploited vulnerabilities were responsible for just 10%. Most organizations are not compromised by sophisticated attacks that rely on bespoke exploits for zero-day vulnerabilities.
The effectiveness of phishing attacks has led to countless variations of the same concept. “Vishing” is a phishing attack that involves a voice call, for example, while “smishing” revolves around text messages and the painfully named “quishing” depends on the victim scanning a malicious QR code. (It’s probably only a matter of time until someone coins the term “phAIshing” to refer to AI-assisted phishing attacks.)
These attacks can be further differentiated by how tailored they are. All those messages claiming you’ve won a free Yeti water cooler? Those aren’t targeted. By contrast, attacks on specific organizations or individuals, often referred to as “spearphishing,” are about quality rather than quantity.
Gone phishing
Starr told README that tools like ChatGPT are more likely to help with non-targeted attacks than targeted ones.
“Although large language models can create extended prose in the style of famous authors, these models would not have insight into how a particular person writes, so it is unlikely to significantly improve highly targeted attacks,” he said. “Ultimately, attackers may use ChatGPT to randomize their attacks or improve grammar, but the nature of phishing threats is unlikely to change as a result.”
ChatGPT also produces text nearly instantly. It’s much faster to type “Write me a phishing email” than it is to, well, actually write that email. If adversaries don’t care about the quality of their messages, AI can make it relatively trivial to come up with new “lures” for potential victims.
With all that in mind, it seems ChatGPT-like tools are more likely to assist with non-targeted attacks. But this might not always be the case. As large language models become increasingly common, attackers might be able to train them on leaked emails from certain business leaders to sound more convincing, for example.
RSA CISO Robert Hughes told README that ChatGPT-like tools could also help attackers localize the content of their messages for their victims.
But the risks associated with large language models hardly represent a sea change in the phishing threat landscape. After all, AI-assisted phishing attacks still arrive, most often, as emails, if perhaps better-written ones, and malicious emails can be blocked or their effects mitigated.
“I don’t think that we need necessarily to come up with a lot of new concepts of how things will happen” to address the advent of AI-assisted phishing, Hughes told README. “The risks are similar to what we face already.”
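Those existing defenses mostly key on signals that have nothing to do with how fluent the prose is. As a hypothetical sketch (the domain names and helper functions below are invented for illustration, not drawn from any product), checks like display-name/sender mismatches and lookalike link domains fire regardless of whether the grammar was polished by a chatbot:

```python
# Hypothetical sketch of prose-independent phishing checks. Real secure email
# gateways layer many more signals (SPF/DKIM/DMARC results, reputation, sandboxing).
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-corp.com"}  # assumption: the defending org's own domain

def _norm(s: str) -> str:
    """Lowercase and strip punctuation so 'Example Corp' matches 'example-corp'."""
    return re.sub(r"[^a-z0-9]", "", s.lower())

BRAND_TOKENS = {_norm(d.split(".")[0]) for d in TRUSTED_DOMAINS}

def _is_trusted_host(host: str) -> bool:
    """True if the host is a trusted domain or one of its subdomains."""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def suspicious_signals(display_name: str, from_addr: str, body: str) -> list[str]:
    signals = []
    sender_domain = from_addr.rsplit("@", 1)[-1].lower()

    # 1. The display name invokes a trusted brand, but the sending domain doesn't match.
    if any(tok in _norm(display_name) for tok in BRAND_TOKENS) and not _is_trusted_host(sender_domain):
        signals.append(f"display name vs. sender domain mismatch ({sender_domain})")

    # 2. Linked hosts that merely contain the brand name are classic lookalikes.
    for url in re.findall(r"https?://\S+", body):
        host = (urlparse(url).hostname or "").lower()
        if not _is_trusted_host(host) and any(tok in _norm(host) for tok in BRAND_TOKENS):
            signals.append(f"lookalike link domain: {host}")

    return signals

print(suspicious_signals(
    display_name="Example Corp IT Support",
    from_addr="it-support@examp1e-corp-security.net",
    body="Confirm your password at https://example-corp.login-update.net/verify",
))
```

None of these checks care whether the lure was drafted by a person or a model, which is largely Hughes’ point: better-written bait still has to travel over the same channels defenders already watch.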