Archive: machine learning
Podcast
The Coming Age of Agentic AI
In this episode of TechnologIST Talks, IST CEO Philip Reiner is joined by Dr. Margaret Mitchell, a computer scientist and researcher focused on machine learning and ethics-informed AI development, to discuss agentic, autonomous, and transparent models, and the pathway to truly secure AI.
Tags: Agentic AI, AI, Autonomous AI, ethical AI, machine learning
May 29, 2025
Blog
Catalyzing Security in AI Governance
At IST, we have been observing recent developments in the nascent AI revolution with interest; conversations around AI governance align with our mission to harness the opportunities enabled by emerging technologies while mitigating their attendant risks.
Tags: AI governance, artificial intelligence, cybersecurity, digital cognition, machine learning
June 26, 2023
Event
March 23, 2021 2:00 pm
JADC2: The Complexity of Military Capabilities
Complexity is the enemy of security. So why do we keep building ever more complex systems to solve some of our most sensitive challenges? On March 23, IST hosted a panel discussion on JADC2 and how to secure complex systems of systems.
Tags: artificial intelligence, Department of Defense, JADC2, machine learning
March 23, 2021
Event
December 15, 2020 2:30 pm
Cyber Pop-Up: Kubernetes, DevSecOps, and Security in the DoD
What does cybersecurity look like in a legacy warfighter? On December 15, 2020, IST hosted an in-depth discussion with security experts from industry and government on the use of emerging technologies in the DoD, how to bake security into software development, and how to create effective collaborations between public and private partners.
Tags: Department of Defense, DevSecOps, emerging technologies, machine learning
December 15, 2020
Op-ed
When machine learning comes to nuclear communication systems
In an op-ed for C4ISRNET, Philip Reiner, Alexa Wehsener, and M. Nina Miller underline the importance of credible NC3 systems.
Tags: CATALINK, crisis communications, machine learning, NC3, nuclear, stability
April 30, 2020
Op-ed
The Real Value of Artificial Intelligence in Nuclear Command and Control
In an article in War on the Rocks, Philip Reiner and Alexa Wehsener make the case for a nuanced discussion about the integration of artificial intelligence in nuclear command, control, and communication systems.
Tags: artificial intelligence, machine learning, NC3, nuclear, stability
November 4, 2019
AAR, Report
AI and the Military: Forever Altering Strategic Stability
In 2018, IST and the Center for Global Security Research convened workshops to explore the role of AI in the 21st-century security context. Should we be concerned about an “AI arms race”? What are the risks of unintended consequences and strategic surprise driven by AI?
Tags: arms race, artificial intelligence, drone, electromagnetic pulse, machine learning, national security, strategic stability
February 13, 2019