Archive: AI
Op-ed
ROOST Reminds Us Why Open Source Tools Matter
In an op-ed for Tech Policy Press, IST Ecosystem Trust and Safety Associate Fatima Faisal Khan explains why the Trust and Safety community needs ROOST, a newly launched non-profit open source tooling hub.
Tags: AI, artificial intelligence, open source, open source software, tooling, Trust and Safety
March 20, 2025
Report
Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures
"Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures" presents 39 risk mitigation strategies, co-created by a multistakeholder working group of experts, that aim to prevent institutional, procedural, and performance failures of AI systems.
Tags: AI, artificial intelligence, compliance failure, infrastructure protection, institutional failures, performance failures, procedural failures, risk reduction
March 20, 2025
Blog
Setting the Foundation of a New National Strategy on AI: IST Submits Comments on an AI Action Plan
The Institute for Security and Technology (IST) last week submitted comments to the White House Office of Science and Technology Policy (OSTP) in response to its request for information on the development of an AI Action Plan. Informed by eight years of working groups, private roundtables, and other engagements with policymakers, technologists, and researchers on AI risk reduction, our response puts forth six strategic objectives that would serve as the foundation of a new national strategy on AI.
Tags: AGI, AI, artificial intelligence, critical infrastructure, genAI, national security, NIST, public safety
March 18, 2025
Blog
Managing Misuse Risk for Dual-Use Foundation Models: IST Submits Comments to a NIST Request for Information
Last week, the Institute for Security and Technology (IST) submitted a response to NIST's Request for Comments on the U.S. Artificial Intelligence Safety Institute's draft guidelines for identifying and mitigating the risks to public safety and national security present across the AI lifecycle.
Tags: AI, artificial intelligence, deployment, development, malicious use, misuse, national security, NIST, risk reduction
March 18, 2025
Blog
Q&A: Reflecting on IST’s Mission in Action
In the February 2025 newsletter, on the heels of recent gatherings of the world's most prominent technology and policy leaders, members of IST leadership reflected on what they heard, what seems to be missing from global dialogue, and what IST can do about it.
Tags: AI, critical infrastructure, cyber, cybersecurity, Munich Cyber Security Conference, Paris Peace Forum, Public-private partnerships, Ransomware, WEF
February 28, 2025
Report
Deterring the Abuse of U.S. IaaS Products: Recommendations for a Consortium Approach
Informed by a year of working group discussions with IaaS providers and other experts, authors Steve Kelly and Tiffany Saade recommend how a consortium joining the Abuse of IaaS Products Deterrence Program, powered by AI and privacy-preserving technologies such as federated learning, could be shaped to best accomplish the government's overall objective of deterring abuse.
Tags: AI, cloud, federated learning, Infrastructure-as-a-Service, Trust and Safety
February 27, 2025
Blog
Cybersecurity Awareness Month at IST: Tips, Tricks, and Takes from the Team
Last month, we shared the IST team’s best tips, tricks, and takes in honor of Cybersecurity Awareness Month. IST Comms Associate Lillian Ilsley-Greene sat down with the team to get their insights on the most pressing cyber issues facing our world.
Tags: AI, cybersecurity, incentives, MFA, online safety, Ransomware, software
November 4, 2024
Report
The Implications of Artificial Intelligence in Cybersecurity: Shifting the Offense-Defense Balance
Advances in AI present key cybersecurity opportunities, but how might malicious actors utilize the same technology? IST's latest report investigates the state of existing and potential integrations of AI in cybersecurity, based on our research and interviews with industry stakeholders, and puts forward seven priority recommendations.
Tags: agentic, AI, artificial intelligence, cyber, cybersecurity, deepfakes, offense-defense balance, software
October 10, 2024
One Pager
IST’s Efforts in an Age of AI: An Overview
IST began exploring the transformative impact of artificial intelligence on national security and global stability in 2017. Since then, we have engaged with a diverse array of stakeholders across the AI ecosystem to produce risk mitigation strategies, useful tools and frameworks, and recommendations.
Tags: AI, national security, recommendations, risk-mitigation
September 19, 2024
Event
September 9, 2024 3:00 pm
Artificial Intelligence Integration in NC3: Implications for the Global Nuclear Order
IST hosted a side event at the 2024 REAIM Summit in Seoul, South Korea on the intersection of AI and nuclear weapons and its implications for nuclear deterrence and strategic stability.
Tags: AI, artificial intelligence, decision making, deterrence, escalation, miscommunication, NC3, nuclear, strategic stability
September 9, 2024
Blog
IST unveils new project, engages with White House at “Hacker Summer Camp”
The Institute for Security and Technology (IST)’s Steve Kelly and Josh Corman joined thousands of infosec experts, security professionals, journalists, government officials, and cyber enthusiasts at “hacker summer camp,” the cybersecurity conference trio of Black Hat, BSides, and DEF CON—amongst many other bespoke gatherings and meetings.
Tags: AI, BSIDESLV, critical infrastructure, cybersecurity, DEF CON
August 22, 2024
Blog
IST’s Open-Source Software Security Initiative Submits Response to Request for Information
This week, the IST team submitted a response to a request for information (RFI) from the Office of the National Cyber Director (ONCD), the Cybersecurity and Infrastructure Security Agency (CISA), the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), and the Office of Management and Budget (OMB) on areas of long-term focus and prioritization in open-source software security.
Tags: AI, market incentives, ONCD, open source software, RFI, secure-by-design
October 11, 2023
Blog
IST announces Steve Kelly as its first Chief Trust Officer
The Institute for Security and Technology announced today the addition of Steve Kelly as its first Chief Trust Officer. At IST, Steve will establish a new effort to advance the trust, safety, and security of artificial intelligence and help lead other aspects of the organization’s work.
Tags: AI, announcements, hiring, Trust and Safety
August 7, 2023
Blog
IST advances Applied Trust and Safety work in partnership with the Patrick J. McGovern Foundation
As the Institute for Security and Technology scales up its work in the field of Applied Trust and Safety, we are thrilled to work in partnership with the Patrick J. McGovern Foundation (PJMF), a global, 21st century philanthropy focused on bridging the frontiers of artificial intelligence, data science, and social impact.
Tags: AI, announcements, privacy, technology policy, Trust and Safety
December 20, 2022
Report
Forecasting the AI and Nuclear Landscape
This report, the product of a partnership between IST and Metaculus, aims to assess the risks of escalation between the U.S. and China, including risks arising from the integration of AI into NC3.
Tags: AI, China, command, communications, control, Innovation and Catastrophic Risk, nuclear
September 21, 2022
Blog
Psychological Cyber Warfare: The Human Factor of Cybersecurity Breaches
Hackers and malign actors have extensively used phishing, pretexting, social proof, and other social engineering tools since the early days of the Internet. What is truly pernicious is how social engineering is fusing with ransomware attacks for incredibly destabilizing effects.
Tags: AI, cybersecurity, human error, RTF, social engineering
September 20, 2021