AI’s peril and promise for policymakers and cyber defenders
Tingey Injury Law Firm / Unsplash
At this year’s Billington Cybersecurity Summit, government officials and experts tackled questions surrounding the hottest topic in technology, artificial intelligence, highlighting the risks and benefits that the sudden emergence of rapidly developing AI platforms poses for national security and the cybersecurity sector.
Washington, DC—The rise of artificial intelligence (AI) apps such as ChatGPT, Microsoft's Bing chatbot and Google's Bard has created an abrupt shift in the tech sector. In the first two months after its introduction, ChatGPT soared to 100 million users, making it the fastest-growing consumer app in history and sparking the development of dozens, if not hundreds, of AI platforms specializing in content creation, conversation, coding, cybersecurity and more.
The sudden onset of widely available AI apps has forced the tech sector and the policy world to quickly clamber to ensure these technologies are deployed in safe and secure ways, with the focus and fears now expanding to generative AI that could approximate human-level capabilities capable of logical reasoning and possibly independent action.
So it’s little surprise that AI was a dominant theme at this year's Billington Cybersecurity Summit. Public sector and cybersecurity leaders gathered at the summit to discuss the threats and promises of the coming AI era, broadly concluding that the technology could be a boon to threat actors hoping to harness its potential for evil ends while also providing cyber defenders with new tools for spotting and thwarting those actors' activity.
Nakasone to US companies: Engage with us
Despite how AI has galvanized the tech world over the past year, cybersecurity defenders have used artificial intelligence and machine learning systems in some form for decades.
"The private sector has been doing artificial intelligence for quite a while," Gen. Paul M. Nakasone, Commander, United States Cyber Command, Director, National Security Agency, said in kicking off the event. "We've been doing it for a long time as well," particularly as part of NSA's role in boosting the nation’s cybersecurity readiness.
Owing to the swift rise of ChatGPT and successor apps, NSA recently concluded a 60-day study to develop a roadmap for AI and machine learning (ML). One conclusion Nakasone said came out of the study is that the NSA has a "tremendous responsibility" to engage with US companies with AI intellectual property so "that they understand that they are the targets of foreign entities. We've had the opportunity to talk to some of the leading experts among the leading corporations in America to say, 'This is what we're seeing.'"
Nakasone suggested that his joint command's AI remit could encompass more than NSA. "I would add not only are we looking at the agency, but Congress has also asked Cyber Command to develop a five-year plan for AI. And so, we have a five-year plan," he said, without elaboration on the plan's details. Nakasone will soon retire from his dual-hatted role, with Air Force Lt. Gen. Timothy D. Haugh nominated as his successor.
Hurtling into an AI future
Nakasone was not alone among government officials elevating AI's profile at the Summit. "We have, for the past 40 years, suffered the fact that technology has been developed, putting cost, performance features over security," Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), said. "And it's why we have an internet full of malware, why we have software full of vulnerabilities, why we have social media full of disinformation. And it's why I'm very concerned about the world of artificial intelligence that we're hurdling into with the rapid acceleration progression of large language models."
Flickr / Wally Gobetz
Nate Fick, U.S. Ambassador at Large for Cyberspace and Digital Policy at the State Department, said that the explosion of AI chatbots over the past year has left governments worldwide scrambling to get ahead of the curve in establishing policy and guardrails around the technology. "These foundation models kind of cracked into the popular consciousness last fall, I think. Not because they hit any particular plateau of capability or broke into some new capacity,” he said. “It's more or less a progression up an exponential curve. It's going to continue into the foreseeable future, but there was something about the user experience and the accessibility of it that just captured imaginations around the world."
But what governments should do is unclear given the fluid and rapidly advancing nature of the technology. "It has been a hundred miles an hour on generative AI," Fick said. "And I think that every government in the world said, 'We have to do something.' And then there was an uncomfortable pause, but nobody knew what to do."
In June, the European Union took a significant step forward by passing the AI Act to restrict what is seen as the technology's riskiest uses. The UK announced in June it will host what the government calls the first global Summit on AI. In July, China published new rules for generative AI. India, however, has opted not to impose regulations on the burgeoning technology lest it advertently stifle it.
The U.S. has taken a more cautious approach than its Western allies, with the Biden administration securing voluntary commitments from leading AI companies to manage the risks posed by the technology. "I give a lot of credit to the United States for rather than diving into the fray, waving the flag and saying, ‘follow us’ with no clear direction where to go, the White House invested the time and the energy with the companies to develop this set of voluntary commitments," Fick said.
Fast evolution and threat actor adaptation
Private sector speakers and other government representatives delved into the risks of AI's advent for the cybersecurity world. Dr. Viveca Pavon-Harr, Director of theAccenture Federal Services Discover Lab , suggested that cybersecurity workers might face difficulty keeping up with the speedy evolution of AI technologies. "Today, on average, a new LLM [large language model] is being released every six weeks, which means that we have to vet and test and look for security and biases and threats every six weeks and then do it all over again every time that they're released," she said.
Shing-hon Lau, Senior Cybersecurity Engineer, CERT Division, at CMU SEI, said that malicious actors could get wise to the AI-based systems that cyber defenders use and quickly pivot to evade them. "If they know that you're using some kind of artificial intelligence model in order to detect their behavior, whether that's network security or they're trying to put malware in your system or whatever else, the types of attacks that are being considered are far more targeted than they previously have been," Lau said.
The spread of easy-to-use AI technologies expands the pool of malicious actors who can cause damage, Fiaz Hossain, Distinguished Architect of Security at Salesforce, said. "Script kiddies can now actually use these tools and be fairly sophisticated. There are code generators out there that can generate good code as well, and we have to have a lot of checks in there."
Fortunately, some helpful tools are emerging that could help cyber defenders better incorporate AI systems into their threat detection practices. In June, the White House released an AI Bill of Rights, spelling out five principles that should guide automated systems' design, use, and deployment. This document is "really helpful for those who have to deal with and address these issues," Lakshmi Raman, Director of AI at the CIA, said.
AI as a force multiplier for cybersecurity
Despite the hype surrounding AI, cybersecurity has long relied on AI for detecting and blocking adversarial threats. André Murphy, Federal CTO at CrowdStrike, said at the summit, "We've been leaning into it since 2011, building AI and ML [machine learning] into different products, all products, whether it be next-gen AV [antivirus] or endpoint detection and response. And we've had significant success looking at third-party testing and things of that nature. So, you can have success with it."
But that doesn’t mean defenders can blindly trust AI to improve the security of their networks. Vinh Nguyen, Chief Data Scientist for Operations at NSA, cautioned the cybersecurity professionals in attendance not to rely on AI technologies as a wholesale solution to detecting and countering cyber threats.
"You need to be strategic on how you want to implement and what problems you're trying to solve, but at the same time, prepare for the future when the adversary is using it against you as well,” Nguyen said. “Right now, I hate to say there are really no killer apps in cybersecurity and the application of AI to cybersecurity. I think a lot of people want to have AI solve your cybersecurity problem, but to be honest, it's going to be really hard."