We unite technology and policy leaders to create actionable solutions to emerging security challenges.

The Institute for Security and Technology

The Institute for Security and Technology (IST) is the 501(c)(3) critical action think tank. We take collaborative action to advance national security and global stability through technology built on trust, guiding businesses and governments with hands-on expertise, in-depth analysis, and a global network.


Critical Effect DC 25

ICS Village, in partnership with the Institute for Security and Technology, Crowell LLP, the National Security Institute, and the Wilson Center, proudly presents Critical Effect DC! Formerly known as Hack the Capitol, Critical Effect DC is a multi-track conference committed to bridging the gap between technical experts and policy professionals. Join us June 12-13, 2025 in Washington, D.C. Registration is now live, and the CFP closes on April 4.

Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures

In this second report in a two-part series, authors Mariami Tkeshelashvili and Tiffany Saade present 39 risk mitigation strategies, co-created with a working group of experts, for avoiding institutional, procedural, and performance failures of AI systems. These strategies aim to strengthen user trust in AI systems and maximize product utilization. AI builders and users, including AI labs, enterprises deploying AI systems, and state and local governments, can select and implement the 22 technical and 17 policy-oriented risk mitigation strategies presented in this report according to their needs and risk thresholds.

A new IST podcast: Introducing TechnologIST Talks

Technology revolutionizes the way we live. Insecure, negligent, or exploitative advancements, however, can threaten global security and stability. TechnologIST Talks is dedicated to bringing conversations about these risks, and opportunities, to the forefront. In the season premiere, CEO Philip Reiner asks: does the U.S. have what it takes to win the techno-industrial competition with China? IST Senior Fellows Michael Brown and Pavneet Singh say yes. By harnessing the power of venture capital, they have a plan to help build the critical technologies essential for national security, economic prosperity, and global leadership.

Exploring Generative AI’s Impact on Cognition, Society, and the Future

AI has surged to the fore, and generative AI represents a profound evolution in technology: it can shape and manipulate cognition and outsource cognitive functions. The Generative Identity Initiative's inaugural report asks: how will this emerging technology affect social cohesion? With the generous support of Omidyar Network, GII engaged more than 25 working group members and contributors from across industry, academia, and civil society over the course of seven months. The report presents a comprehensive research agenda, identifying 27 areas of exploration for addressing these challenges.

Announcing UnDisruptable27: A new IST effort driving resilience of lifeline critical infrastructure

The systems that we rely on every day, for everything from water and food to emergency medical care and power, are subject to escalating harms from accidents, bad actors, and nation-state adversaries. Preventing these harms is a matter of public safety, and we must act now. UnDisruptable27 will prioritize the safety, security, and resilience of four lifeline basic human needs: water and wastewater, emergency medical care and hospital services, food supply chains, and power. The new project will be led by Joshua Corman, Executive in Residence for Public Safety & Resilience, with support from Craig Newmark Philanthropies.

How can you support the Institute for Security and Technology?


NEWS

The NatSpecs Blog

Artificial Intelligence

Setting the Foundation of a New National Strategy on AI: IST Submits Comments on an AI Action Plan

Last week, the Institute for Security and Technology (IST) submitted comments in response to the Request for Information on the development of an AI Action Plan, helping set the foundation of a new national strategy on AI.

Artificial Intelligence

Managing Misuse Risk for Dual-Use Foundation Models: IST Submits Comments to a NIST Request for Information 

Last week, the Institute for Security and Technology (IST) submitted a response to NIST's request for comment on the U.S. Artificial Intelligence Safety Institute's draft guidelines for managing misuse risk for dual-use foundation models, which aim to identify and mitigate risks to public safety and national security across the AI lifecycle.

Artificial Intelligence

Patrick J. McGovern Foundation Renews Commitment to Supporting IST’s AI Risk Reduction Efforts

Over the last two years, with the support of the Patrick J. McGovern Foundation, the Institute for Security and Technology (IST) has been on a mission to assess the risks and opportunities associated with the development and deployment of cutting-edge AI foundation models. IST is excited to announce renewed support from the Patrick J. McGovern Foundation to further advance this vital work.

Future of Digital Security

Hit ‘Em Where it Hurts: Understanding and Disrupting the Resourcing Phase in the Ransomware Payment Ecosystem

Since April 2021, IST's Ransomware Task Force has investigated how to disrupt the infrastructure that ransomware groups rely on to receive payments. Trevaughn Smith presents new strategies for targeting the resourcing phase of the ransomware payment ecosystem.
