Archive: AI
Blog
Autonomous Agents, Human Consequences: Key Insights from IST’s Workshop on AI Agents & Agency in the Internet Ecosystem
AI agents are shaping how decisions are made, how systems behave, and how humans navigate the digital world. To better understand the implications of this shift, IST hosted a closed-door workshop to explore the potential effects of AI agents on human agency.
Tags: Agentic AI, AI, AI agents, artificial intelligence
December 19, 2025
Announcement
AI and NC3 Initiative Enters Phase 2: Bridging Perspectives on Risks and Opportunities
IST, with support from Longview Philanthropy, is entering phase 2 of our efforts on the integration of artificial intelligence into nuclear command, control, and communications. In its next phase, the AI and NC3 initiative will establish an executive committee and four working groups, driving further research on competitive AI dynamics, global perspectives on AI-NC3, AI technical development and trajectories, and AI norms and governance.
Tags: AI, artificial intelligence, competitive AI, governance, NC3, norms, nuclear, strategic stability
December 11, 2025
Blog
Giving Tuesday 2025: IST’s Impact
Every day, the work we do at IST makes a difference, from driving Congress to consider new approaches to intractable problems, to bringing national security leaders together to ensure they consider the existential threat of AI, to arming K-12 schools with the tools they need to defend against ransomware. As a 501(c)(3) nonprofit, this is only possible because of the generous support of our partners and donors.
Tags: AI, artificial intelligence, critical infrastructure, cybersecurity, Giving Tuesday, national security, Ransomware
December 2, 2025
Announcement
Putting Research into Policy Action: IST and the Korea Artificial Intelligence Safety Institute Join Forces to Tackle AI Risk
By convening developers, deployers, national security professionals, and policymakers, IST's AI Risk Reduction Initiative maps both the opportunities and risks of frontier AI and designs corresponding mitigation strategies. A recent collaboration between IST and the Korea AI Safety Institute (AISI) exemplifies how international cooperation can advance our understanding of AI risks across multiple domains.
Tags: AI, AI safety, artificial intelligence, Korea, mapping, Republic of Korea, risk map
December 1, 2025
Blog
Q&A: Approaching the Nuclear Brink?
With nuclear tensions on the rise, nuclear weapons have been in the headlines, and on our screens, lately. In the latest edition of the TechnologIST, IST’s Nuclear Policy team, Sylvia Mishra, Sahil V. Shah, Brandon Cortino, and Catherine Murphy discuss what the latest developments could mean for global security and stability.
Tags: AI, artificial intelligence, Moscow Treaty, NC3, New START Treaty, nuclear policy, nuclear weapons, strategic stability
November 24, 2025
Blog
The New Nuclear Age: At the Precipice of Armageddon – IST Hosts Book Talk with Author Ankit Panda
IST’s Nuclear Policy team hosted international security expert and author Ankit Panda in Palo Alto to learn more about his latest book unpacking the trilateral nuclear competition between the United States, China, and Russia. IST CEO Philip Reiner sat down for a fireside chat with Ankit, highlighting in their discussion the need to collectively identify political and technical solutions to address the nexus of emerging technologies and the new nuclear age.
Tags: AI, artificial intelligence, China, NC3, nuclear, nuclear command, control, and communications, nuclear risk, Russia, United States
November 17, 2025
Announcement
IST and AI and Faith launch Religious Voices and Responsible AI Initiative with support from the Future of Life Institute
Religious Voices and Responsible AI will explore the complex ethical, moral, and societal implications of artificial intelligence through a religious and spiritual lens. By bringing together leading religious scholars, AI ethicists, and technologists, IST, in partnership with AI and Faith, aims to foster a deeper, more inclusive conversation about the responsible development and deployment of AI.
Tags: AI, artificial intelligence, faith, religion, religious voices, responsible AI
October 30, 2025
Report
AI, NC3, and Strategic Stability: Scenario Exercise
How might the integration of AI into global NC3 systems transform strategic stability and deterrence dynamics? In Washington, D.C. this spring, IST convened more than 60 senior officials, technical experts, and civil society actors for an in-depth analysis of the risks and opportunities of AI in NC3 using this scenario exercise.
Tags: adaptive targeting, AI, decision support, nuclear, nuclear command, scenario, strategic warning, TTX
October 16, 2025
Announcement
IST launches the AI Risk Barometer project in partnership with the Future of Life Institute
IST, with support from and in partnership with the Future of Life Institute, is launching the AI Risk Barometer project to elucidate AGI and ASI capability thresholds; potential benefits and harms, including a catastrophic AI loss of control scenario; timelines; the efficacy of potential governance approaches to mitigate risk; and policymakers’ risk appetites given tradeoffs.
Tags: AGI, AI, artificial intelligence, ASI, capability threshold, governance, loss of control, national security, risk
October 14, 2025
Event
October 23, 2025 2:00 pm
Book Talk | The New Nuclear Age: At the Precipice of Armageddon
In a world with increasing nuclear dyads, can emerging technologies make us safer – or are we opening Pandora's Box? Join IST in Palo Alto, CA or virtually for a book talk with Ankit Panda.
Tags: AI, China, international security, NC3, nuclear, nuclear dyad, nuclear security, Russia
October 6, 2025
Report
Artificial Intelligence in Nuclear Command, Control & Communications: A Technical Primer
AI has been integral to nuclear weapons systems and operations for decades, but rapidly advancing “novel” models present unforeseen technical opportunities. For the Institute for Security and Technology’s scenario-driven workshop on the integration of AI into NC3 systems, IST’s Sylvia Mishra and Philip Reiner co-authored a short primer to establish what constitutes “novel” AI in the context of nuclear weapons decision-making.
Tags: AI, artificial intelligence, NC3, nuclear, system of systems
September 10, 2025
Blog
Q&A: National security, competition, and risk reduction — AI at IST
As AI models continue to proliferate and gain prominence, what are the cross-sector implications for national security, strategic competitiveness, and more? In this month's edition of the TechnologIST, Lillian Ilsley-Greene sat down with IST AI experts Jennifer Tang, Gabrielle Tran, and Fatima Faisal Khan to hear about how their work addresses the potential risks and opportunities of AI.
Tags: AI, antitrust, CFIUS, China, chips, competition, national security, NVIDIA, outbound investment, TechnologIST, venture capital
August 29, 2025
Announcement
IST Statement on President Trump’s AI Action Plan and Related Executive Orders
In February, IST submitted comments in response to the Office of Science and Technology Policy’s request for information on the development of an AI Action Plan. We were heartened to see many of the key issues we raised reflected in the Trump Administration’s recent AI executive orders.
Tags: AI, artificial intelligence, chips, compute, distributed energy, energy security, export control, national security, risk reduction
July 25, 2025
Blog
Q&A: Separating the hype from reality
Nuclear command, control, and communications systems are often called the “fourth leg” of the U.S. nuclear enterprise. In this month’s edition of the TechnologIST, IST Deputy Director of Nuclear Policy Sylvia Mishra joined us for a Q&A about IST’s recent primer on NC3 systems, drafted for a workshop focused on the integration of AI and NC3.
Tags: AI, NC3, nuclear
July 8, 2025
Event
June 3, 2025 10:30 am
Securing Our Autonomy: How AI Fuels Cognitive Warfare – and How We Can Fight Back
In a side event at the SCSP AI Expo, experts on the front lines of policy, industry, and neuroscience unpacked how AI is being weaponized for psychological and influence operations — and how we can build resilience across society.
Tags: AI, artificial intelligence, cognitive warfare, disinformation, information operations, misinformation, neuroscience, psychological warfare
June 3, 2025
Podcast
The Coming Age of Agentic AI
In this episode of TechnologIST Talks, IST CEO Philip Reiner is joined by Dr. Margaret Mitchell, a computer scientist and researcher focused on machine learning and ethics-informed AI development, to discuss agentic, autonomous, and transparent models – and the pathway to truly secure AI.
Tags: Agentic AI, AI, Autonomous AI, ethical AI, machine learning
May 29, 2025