How AI could inflame one of the costliest cyber scams

Adam Nir / Unsplash

Audio deepfakes, stolen internal email addresses and identity fraud are driving continued growth in business email compromise attacks. What do the latest, most advanced attacks reveal about the future, and how can defenders prepare?

The email reads as instructions from a CEO to pay a specific vendor as soon as possible, carries an urgent tone and is written in impeccable prose: "I understand that this might be short notice, but this payment is incredibly important and needs to be done over the next 24 hours."

Welcome to the next generation of business email compromise (BEC) scams, enabled by a large language model (LLM) — a generative AI system trained to mimic natural language — and available to anyone. While legitimate generative AI chatbots, such as OpenAI's ChatGPT and Google's Bard, have protections in place to guard against being used in malicious ways, crimeware developers have already created their own tools to offer similar capabilities to underground markets. A service known as WormGPT, for example, created the message excerpted above.

Generative AI systems not only improve the quality of these scams, but also make the generation of convincing fake emails, voices and videos much more scalable, putting the benefits of AI within reach of a far broader range of attackers, Brittany Allen, the trust and safety architect at Sift, a cybersecurity firm, told README.

"This tech will only make committing fraud at scale a simpler task," Allen said. "Beyond deepfakes and AI voice generators being used by the fraudsters who build the tools, those same tools can be packaged and sold to other fraudsters who lack the technological skills to build them on their own."

ChatGPT-like generative AI can create written content that easily passes for notes and memos penned by humans. Synthesized audio created by generative AI and deep neural networks, so-called deepfake audio, can fool unwary workers and employees. And similarly, manufactured videos can be inserted into online conferences to mimic executives and better advance fraud schemes.

The content is already finding its way into the costliest scam facing companies: BEC, also known as CEO fraud. In the scam, a cybercriminal inserts themselves into financial operations or the supply chain in a variety of ways — through a phone call using a CEO's voice, an email changing the bank account of a legitimate vendor or an invoice that appears to have been sent from a corporate executive. The ultimate goal is initiating a legitimate transfer of funds from the target company to the criminal's account.

In 2022, nearly 22,000 companies and individuals reported more than $2.8 billion in losses due to BEC scams, according to the annual report of the FBI's Internet Crime Complaint Center (IC3). While investment fraud — typically linked to cryptocurrency scams — topped BEC in 2022 for the first time as the costliest form of fraud, more than doubling to account for $3.3 billion in losses, BEC is likely to remain among the costliest threats to businesses over the long term.

Major incidents in the UK, UAE

AI is already being incorporated into such attacks. In 2021, attackers used a deepfake of a corporate executive's voice to steal $35 million from a United Arab Emirates company. In 2019, a similar voice deepfake mimicked a German CEO, leading a UK subsidiary to transfer 220,000 euros.

The trend will surely continue as threat actors gain more experience integrating AI with their schemes. In the past three years, more than 130,000 posts have been published to underground forums seeking information on deepfakes and tools to carry out attacks, threat intelligence firm Flashpoint revealed in a June blog post.

As cybercriminals learn to use these tools, they are also coming up with more uses for them, such as data-mining personal details to individualize email attacks, Flashpoint analyst Karly Kaliaskar told README.


Steve Johnson / Unsplash

"AI is definitely shifting the landscape," Kaliaskar said. "Threat actors are utilizing the data-gathering capability of AI — and specifically LLM language processing models — to not only collect, but provide detailed contextual details from social media profiles and other sources of open-source information. This helps threat actors to craft highly convincing, and tailored, spear phishing emails, which are getting harder to detect."

The U.S. Department of Homeland Security (DHS) has warned that deepfakes will have wide-ranging impacts and issued a whitepaper laying out scenarios that illustrate the threat, including cases of CEO fraud.

"If deepfakes become convincing enough and ubiquitous enough, companies may be at increased legal risk due to affected consumers’ seeking damages and compensation for financial loss due to ensuing breaches, identity theft, etc.," the DHS stated, adding: "The fact that deepfake technology will be accessible on a large scale to many people is a challenge."

Deepfakes' impact on business security

Generative AI is fooling not just humans but machines as well. Companies that rely on biometric authentication to allow employees to access systems or initiate financial transactions have already seen those systems bypassed by deepfakes, as when a synthesized voice created by a Wall Street Journal reporter fooled the voice biometric security system used by Chase.

Voice-based authentication should be used only with other forms of authentication, and companies should, at the very least, use adaptive authentication, which uses context to determine whether more rigorous identity checks are necessary, Malek Ben Salem, a managing director for emerging tech security at global consultancy Accenture, told README.

"It still can be used as another layer of defense, but don't rely on it by itself, definitely," she said. "You have to complement it with other systems."

Technology, training not enough

Like nuclear nonproliferation, part of the solution may be to protect the raw materials needed to make deepfake content. To harden their digital identities, business executives and consumers may have to take more care to minimize the information available about them online. Posting audio or video recordings, for example, could give attackers the data required to create deepfakes.

In a public advisory warning of attackers using deepfake technology to create pornographic videos of people as part of an extortion campaign, the FBI warned that "[i]mages, videos, or personal information posted online can be captured, manipulated, and distributed by malicious actors without your knowledge or consent."

Whether minimizing digital footprints is a reasonable approach, only time will tell, said Erika Sonntag, a cyberthreat intelligence analyst at Flashpoint, who pointed out that attackers will likely find other ways to collect voice samples.

"Those spam calls that come in, you don't know who's on the other side, they can grab your audio recording, and next thing you know, they could use that for their own purposes," she said. "So we expect a rise in audio deepfakes."

While humans are likely to be fooled, and an arms race to detect synthesized voices is likely to end badly, a combination of machines and humans seems poised to offer the most effective defense. A human can throw an adversary off a pre-recorded script by asking specific questions, including ones only the real person could answer. Machines can detect anomalies in the audio that could indicate fraud.
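
On the machine side, the toy sketch below shows the general shape of such a check: score incoming call audio against an expected range for a simple acoustic feature and flag outliers. The choice of spectral flatness and the assumed "natural speech" range are purely illustrative; production deepfake detectors rely on models trained over many acoustic cues.

```python
# Toy illustration of an audio anomaly check; not a real deepfake detector.
# The feature (spectral flatness) and the "speech band" are assumptions.
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the frame's power spectrum (0..1)."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def anomaly_flag(audio: np.ndarray, sr: int = 16000,
                 speech_band: tuple[float, float] = (0.05, 0.45)) -> bool:
    """Flag a clip whose average flatness falls outside an assumed speech range."""
    frame_len = int(sr * 0.032)                      # 32 ms frames
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, frame_len)]
    if not frames:
        return True                                  # nothing usable to verify
    mean_flatness = float(np.mean([spectral_flatness(f) for f in frames]))
    low, high = speech_band
    return not (low <= mean_flatness <= high)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noise = rng.normal(size=16000)                              # flatness well above the band
    tone = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)   # flatness near zero
    print(anomaly_flag(noise), anomaly_flag(tone))               # True True
```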

Companies should have processes in place that require verification through a separate channel, whether hanging up and calling an executive back directly or confirming changes to financial details offline.

As with many defense-in-depth measures, protecting against deepfakes will require people, processes and technologies, said Sift's Allen.

"We in the fraud prevention community must up our effort to educate the general public to protect themselves," she said. "But even that effort won't be enough to protect all consumers, so we also need to take action behind the scenes to identify and mitigate fraud. We can't just rely on [individuals] to be able to detect a deepfake."
