Artificial Intelligence

Q&A: Implications of AI in Cybersecurity

Just as there are two opposing sides in a traditional conflict, there are two in cyberspace as well. Those on offense exploit information system vulnerabilities to steal information, lie in wait, or cause business disruption. Meanwhile, those on defense strive to preserve and protect their technology resources and the real-world functions that depend on them. Together, the two form an “offense-defense balance,” the delicate combination of factors and forces that help influence, or even predict, the outcome.

Enter artificial intelligence. With the assistance of AI, defenders and offenders alike can analyze vast amounts of data at speed and scale. AI-powered deepfakes and sophisticated phishing can supercharge social engineering schemes, threaten user authentication, and challenge human-to-human interactions. AI-enhanced code writing, review, and vulnerability detection can bolster defenses, but can also introduce new risks. AI-assisted workflows can enhance security operations. And AI-enhanced reconnaissance can improve attackers’ ability to prioritize targets and penetrate defenses.

But how are AI capabilities altering the cyber offense-defense balance? IST’s latest report, The Implications of Artificial Intelligence in Cybersecurity, seeks to find out. This work was made possible by the generous support of Google.org as part of its Digital Futures Project to support responsible AI.

Director of Strategic Communications Sophia Mauro sat down with report authors Associate for Cybersecurity and Emerging Technologies Jennifer Tang, Adjunct Cyber and Artificial Intelligence Policy Fellow Tiffany Saade, and Chief Trust Officer Steve Kelly to learn more. 

Read the full interview in the October edition of IST’s newsletter, The TechnologIST. 

Q: Based on your findings, how is artificial intelligence already impacting the offense-defense balance, and who has the current advantage?  

Steve: “There are numerous use cases for which AI can assist both cyber defenders and offenders, but in the short- to mid-term, we concluded that the advantage goes to the defender. Why? AI is fundamentally data-driven, and defenders typically have the data advantage. But don’t take false comfort from these words; organizations that fail to capitalize on their home field advantage will soon be outpaced by sophisticated actors who are moving quickly to add AI capabilities to their quivers.”

Tiffany: “One of the most immediate benefits of AI—specifically large language models—is its incredible ability to analyze content. You might ask, how is this relevant to cybersecurity? On the defensive side, AI’s speed and efficiency help cybersecurity teams by rapidly analyzing massive datasets, detecting anomalies, and predicting potential threats before they escalate. On the flip side, attackers are using AI to enhance their reconnaissance and conduct faster data mining. For instance, AI can sift through stolen data at unprecedented speed, helping a ransomware gang prioritize victims and optimize its extortion negotiations.”
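To make the defensive use case Tiffany describes concrete, here is a minimal sketch of unsupervised anomaly detection over session telemetry using scikit-learn’s IsolationForest. The feature set, thresholds, and contamination rate are invented for illustration; they are not drawn from the report or from any tooling the interviewees mention.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" sessions: [bytes_sent, bytes_received, duration_seconds]
normal = rng.normal(loc=[5_000, 20_000, 60], scale=[1_000, 5_000, 15], size=(1_000, 3))

# A few exfiltration-like outliers: very large uploads over long sessions
outliers = rng.normal(loc=[500_000, 2_000, 1_800], scale=[50_000, 500, 300], size=(5, 3))

sessions = np.vstack([normal, outliers])

# Unsupervised model; `contamination` is a rough prior on the anomaly rate
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(sessions)

labels = model.predict(sessions)  # -1 = anomaly, 1 = normal
flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(sessions)} sessions for analyst review")
```

The point of the sketch is the workflow, not the model choice: the system surfaces a short list of unusual sessions for human review rather than classifying every event itself.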

Q: This report took a unique approach to the research process. Not only did you review the very nascent literature in the field of AI and cyber, but you also surveyed a number of professionals in the cybersecurity industry who are confronting the real-world implications of AI in cybersecurity on a daily basis. Did the two sources of information generally lead to the same conclusions?

Jenn: “Yes and no! At the outset of the study, we weren’t able to find literature that directly addressed our research question. This led us to the idea of interviewing innovators and frontline practitioners to put our finger on the pulse of current developments. But six months later, numerous on-point research articles, security blogs, and proofs of concept had been published. Taken together, these sources validated one another and gave us a sufficiently clear view of the state of play.”

Tiff: “For example, we found a Stanford University study on how humans interact with AI while generating software code and the adverse effects this can have on software security. This research complemented our industry interviews, in which we learned of the successes and limitations practitioners have encountered in their early use of AI code assistants. This approach gave us confidence to make several bold predictions for the future while staying grounded in the reality of current challenges and limitations that must be overcome.”

Q: What were some of your key takeaways or the most surprising findings from this research?

Steve: “Most have no doubt heard of the recent leaps in generative AI’s ability to create realistic ‘deepfake’ images and videos. As we dug into this topic, I was surprised by some of the examples we found in which GenAI has already complicated our ability to authenticate a human. For instance, AI-generated fakes tricked an employee on a video call into thinking he was talking with a group of coworkers, and other reports show GenAI being used to defeat facial recognition and fingerprint biometric authentication. This sent us down the rabbit hole of the significant improvements needed in Identity and Access Management to address these threats. Here’s a teaser… your state-level motor vehicle department or post office might be part of the answer!”

Q: Of the “watch list” trends that you identified as potential future issues for the intersection of AI and cybersecurity, which concerns you the most? Why? 

Jenn: “Definitely AI agents. We don’t get too far into the weeds on agents in this report, but agentic AI is a technology that’s developing incredibly quickly, with profound implications for a range of applications we cover in the piece. In short, AI agents are advanced software systems that can seek out and learn from information, make decisions, and plan autonomously in ways that affect their environment. Agents have shown a lot of promise for defenders in their ability to optimize workflows and execute otherwise burdensome tasks, but our team is increasingly interested in—and concerned by—how agents will be used maliciously. There’s a lot of work that we could do on AI agents looking to the future.”
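As a rough illustration of what “plan autonomously” means in practice, the sketch below shows the observe-plan-act loop at the core of most agent designs, cast here as a defender-side alert-triage assistant. The class, rules, and alert fields are hypothetical, and a production agent would delegate the planning step to an LLM and external tools.

```python
from dataclasses import dataclass, field

@dataclass
class TriageAgent:
    goal: str
    memory: list = field(default_factory=list)

    def observe(self, alert: dict) -> dict:
        # Retain context so later planning steps can reason over history.
        self.memory.append(alert)
        return alert

    def plan(self, alert: dict) -> str:
        # Stand-in for an LLM/tool call that decides the next action.
        if alert["severity"] >= 8 or alert["asset"] == "domain-controller":
            return "escalate to on-call analyst"
        return "auto-close with note"

    def act(self, action: str) -> None:
        print(f"[{self.goal}] {action}")

agent = TriageAgent(goal="alert triage")
for alert in [{"severity": 3, "asset": "laptop-042"},
              {"severity": 9, "asset": "domain-controller"}]:
    agent.act(agent.plan(agent.observe(alert)))
```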

Tiff: “In a multi-agent malicious threat model, AI agents would likely operate autonomously, yet with some level of collaboration; each agent would specialize in specific tasks that contribute to the overall attack strategy. For instance, one agent might be trained to focus on technical reconnaissance, probing the target’s network defenses, scanning endpoints, and mapping vulnerabilities. Meanwhile, another agent might focus on resource development based on the reconnaissance outputs, such as selecting malware most suited to exploit these vulnerabilities. Together, these malicious agents would continuously refine their tactics based on feedback from each other and from the target network. For defenders, this means that traditional, signature-based security measures could be insufficient. As a result, they must shift towards behavior-based detection and real-time threat intelligence to proactively address threats from multi-agent hacking scenarios.”
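A toy comparison of the two detection philosophies Tiffany contrasts might look like the following. The hash set, event names, and matching rule are invented for illustration and are not taken from any real product.

```python
# Placeholder signature database of known-bad file hashes.
KNOWN_BAD_HASHES = {"9f86d081884c7d65"}

def signature_detect(file_hash: str) -> bool:
    # Misses novel or regenerated payloads whose hashes were never catalogued.
    return file_hash in KNOWN_BAD_HASHES

# Ransomware-like behavioral pattern, expressed as an ordered event subsequence.
SUSPICIOUS_SEQUENCE = ["enumerate_files", "encrypt_files", "contact_new_domain"]

def behavior_detect(events: list[str]) -> bool:
    # Flags the pattern regardless of which binary produced it.
    it = iter(events)
    return all(step in it for step in SUSPICIOUS_SEQUENCE)

events = ["login", "enumerate_files", "read_config",
          "encrypt_files", "contact_new_domain"]

print(signature_detect("unseen-hash-1234"))  # False: signature miss
print(behavior_detect(events))               # True: behavioral hit
```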

Q: What’s your outlook for the future?

Steve: “AI’s use in cybersecurity will be a continued arms race, but unlike in traditional domains like military weapons technology, most of the innovation is happening within the private sector. It will be a tricky balance for responsible governments to prevent AI with national security-relevant capabilities from getting into the hands of bad actors—such as illiberal regimes—while at the same time not stifling innovation. It is this very innovation that will keep our defenders ahead of the attackers. While I cannot foretell the future, I’ll say this: cybersecurity in 2025 and beyond is not a ‘do-it-yourself’ endeavor. Organizations large and small will need highly capable cyber defenses, and from our research and interactions with numerous cybersecurity vendors, there is a competitive marketplace of capable providers ready to help.”