Archive: artificial intelligence
Blog
Design Obligation or Third-Party Liability? The White House Framework and AI’s Grey Area
The White House’s newly released National Policy Framework for Artificial Intelligence addresses the boundary between federal and state AI regulation, including a recommendation that states should not hold developers liable for third parties’ unlawful use of their models. Gabrielle Tran examines what this means for the emerging legal landscape around AI harm.
Tags: AI, artificial intelligence, chatbot, Executive Order, generative AI, liability, Section 230, third party
April 13, 2026
Blog
Meet the Andrew Carnegie AI-Nuclear Policy Accelerator Inaugural Cohort
IST introduces the 26 members of the inaugural Andrew Carnegie AI-Nuclear Policy Accelerator cohort, a program that will equip mid-career security practitioners with the tools to confront the intersection of AI and nuclear issues.
Tags: AI, announcement, arms control, artificial intelligence, fellows, ICBM, modernization, nuclear, nuclear weapons
April 9, 2026
Blog
Meet the Religious Voices and Responsible AI Steering Committee
Thirteen computer science, philosophy, theology, and public policy experts have joined IST and AI and Faith to provide strategic guidance on the Religious Voices and Responsible AI initiative.
Tags: AI, artificial intelligence, Christianity, faith, Islam, religion
March 19, 2026
Blog
Reflecting on the Munich Cybersecurity Conference Roller Coaster: Digital Sovereignty, Decoupling, and the Risks and Opportunities of AI
IST Chief Strategy Officer Megan Stifel visited Germany in February to join cybersecurity experts from around the globe at the Munich Cyber Security Conference. From digital sovereignty to the risks and opportunities of AI, she reflects on her experience for the NatSpecs blog.
Tags: artificial intelligence, cybersecurity, decoupling, digital sovereignty, Munich, transatlantic
March 5, 2026
Blog
Lessons from Moltbook: When Agents Talk to Agents
Moltbook, a Reddit-like forum meant solely for AI agents, had already garnered over 1.5 million users before research suggested that a multi-agent environment with weak cybersecurity guardrails could be a hotbed for scaling fraud and influence operations. For the IST blog, Gabrielle Tran outlines what this means for the cyber-enabled manipulation of tomorrow.
Tags: AI, AI agents, artificial intelligence, fraud, malware, phishing, prompt injection, trust
February 26, 2026
Memo, Primer
A Changing Export Control Landscape: H200 Exports, Remote Access Rules, and What Comes Next
January 2026 was, in many ways, an inflection point for export controls. From the resumption of Nvidia H200 sales to China to congressional discussion on restricting remote cloud access, IST's AI Chip Export Control Initiative team unpacks what these developments mean for the future.
Tags: AI, artificial intelligence, Bureau of Industry and Security, China, chip, export control, NVIDIA, semiconductor
February 24, 2026
Blog
Q&A: An AI Loss of Control “Warning Shot”
How might you lose control of a system as complex as a frontier AI model? In this month's newsletter, we speak to the authors of IST's newest report diving into the risk of AI Loss of Control (that is, models diverging from authorized constraints to the extent that a human operator can no longer prevent or constrain undesired outcomes) and presenting a framework for mitigation.
Tags: AI, AI Loss of Control, artificial intelligence, deception, model drift, risk reduction, scheming
February 19, 2026
Blog
Something Mysterious Is Happening
In a blog reflecting on her co-authored report, AI Loss of Control Risks, Mariami Tkeshelashvili writes, "There is a difference between embracing mystery as a driver of discovery and being incurious about risks that are already becoming visible. Monitoring AI behaviours that we cannot fully explain is not pessimism—it is responsibility. It is what keeps us safe."
Tags: AI, artificial intelligence, deception, Indications and Warning, loss of control, manipulation, monitoring, risk reduction, scheming, strategic foresight
February 19, 2026
Report
AI Loss of Control Risk: Indications & Warning
Though technologists and policymakers alike are eager to address AI Loss of Control, a state in which an AI system diverges from authorized constraints, there are significant gaps in the ways stakeholders understand, anticipate, and perceive this risk. "AI Loss of Control Risk" proposes applying the Indications & Warning (I&W) methodology used by the intelligence community to monitor this risk.
Tags: AI, AI Loss of Control, artificial intelligence, deception, model drift, risk reduction, scheming
February 19, 2026
Blog
IST Launches the Andrew Carnegie AI–Nuclear Policy Accelerator
The Institute for Security and Technology (IST) is pleased to announce the launch of the Andrew Carnegie AI–Nuclear Policy Accelerator, a new, practitioner-focused initiative designed to strengthen decision-making at the intersection of artificial intelligence (AI) and nuclear policy.
Tags: AI, announcement, artificial intelligence, fellowship, NC3, nuclear policy, nuclear systems
February 10, 2026
Blog
Autonomous Agents, Human Consequences: Key Insights from IST’s Workshop on AI Agents & Agency in the Internet Ecosystem
AI agents are shaping how decisions are made, how systems behave, and how humans navigate the digital world. To better understand the implications of this shift, IST hosted a closed-door workshop to explore the potential effects of AI agents on human agency.
Tags: Agentic AI, AI, AI agents, artificial intelligence
December 19, 2025
Blog
AI and NC3 Initiative Enters Phase 2: Bridging Perspectives on Risks and Opportunities
IST, with support from Longview Philanthropy, is entering phase 2 of our efforts on the integration of artificial intelligence into nuclear command, control, and communications. In its next phase, the AI and NC3 initiative will establish an executive committee and four working groups, driving further research on competitive AI dynamics, global perspectives on AI-NC3, AI technical development and trajectories, and AI norms and governance.
Tags: AI, artificial intelligence, competitive AI, governance, NC3, norms, nuclear, strategic stability
December 11, 2025
Blog
Giving Tuesday 2025: IST’s Impact
Every day, the work we do at IST makes a difference, from driving Congress to consider new approaches to intractable problems, to bringing national security leaders together to ensure they consider the existential threat of AI, to arming K-12 schools with the tools they need to defend against ransomware. As a 501(c)(3) nonprofit, this is only possible because of the generous support of our partners and donors.
Tags: AI, artificial intelligence, critical infrastructure, cybersecurity, Giving Tuesday, national security, Ransomware
December 2, 2025
Blog
Putting Research into Policy Action: IST and the Korea Artificial Intelligence Safety Institute Join Forces to Tackle AI Risk
By convening developers, deployers, national security professionals, and policymakers, IST's AI Risk Reduction Initiative maps both the opportunities and risks of frontier AI and designs corresponding mitigation strategies. A recent collaboration between IST and the Korea AI Safety Institute (AISI) exemplifies how international cooperation can advance our understanding of AI risks across multiple domains.
Tags: AI, AI safety, artificial intelligence, Korea, mapping, Republic of Korea, risk map
December 1, 2025
Blog
Q&A: Approaching the Nuclear Brink?
With nuclear tensions on the rise, nuclear weapons have been in the headlines, and on our screens, lately. In the latest edition of the TechnologIST, members of IST’s Nuclear Policy team (Sylvia Mishra, Sahil V. Shah, Brandon Cortino, and Catherine Murphy) discuss what the latest developments could mean for global security and stability.
Tags: AI, artificial intelligence, Moscow Treaty, NC3, New START Treaty, nuclear policy, nuclear weapons, strategic stability
November 24, 2025
Blog
Cybersecurity Awareness Month at IST: Spotlighting our cyber efforts across borders, at the intersection of AI, and throughout sectors
As technologies evolve across the world, so do cyber threats. For Cybersecurity Awareness Month, the Institute for Security and Technology shares practical resources, novel research, and critical insights from IST’s cadre of experts to help individuals, organizations, and communities strengthen their cybersecurity practices.
Tags: artificial intelligence, cybersecurity, K-12 Cyber Defense Coalition, Ransomware, Ransomware Task Force, UnDisruptable27
November 17, 2025