In 2022, IST’s Digital Cognition & Democracy Initiative (DCDI) published its cumulative report, Rewired: How Digital Technologies Shape Cognition and Democracy, which explored how digitally influenced and impaired cognition affects individuals’ capacity for independent, critical thinking. The DCDI team, drawing on collaborative insights from the working group, attributed these effects to two forms of digital technologies: those that affect and manipulate cognition and those that outsource cognitive functions.
However, it has become increasingly clear that Generative AI (GenAI) represents a significant leap forward in both of these categories. Large language models (LLMs), the foundation of GenAI conversational agents, are now being harnessed in ways that amplify these effects. On one hand, malicious actors can exploit these models to produce content at an unprecedented scale, heightening the risks of misinformation and fueling polarization campaigns. On the other hand, these models are being fine-tuned to mimic human-like speech and emotional capacity with remarkable accuracy, enabling them to serve as convincing emotional companions. The broad applicability of this technology presents newfound opportunities, efficiencies, and gains in collective productivity. At the same time, it raises concerns that this technology could reshape human cognition on an unprecedented scale, altering how individuals process information, how they relate to one another, and ultimately the collective reality that binds us together.
In response to these emerging realities, and with generous support from the Omidyar Network, IST is building upon its initial DCDI effort with the Generative Identity Initiative (GII), a coalition of dedicated members from academia, industry, and civil society who have been meeting to discuss how generative AI, particularly conversational agents, might impact social cohesion and how to protect the public interest in the face of these challenges.
Featured Content

The Generative Identity Initiative: Exploring Generative AI’s Impact on Cognition, Society, and the Future
AI has surged to the fore, and GenAI represents a profound evolution in technology, one that can affect and manipulate cognition and outsource cognitive functions. Building on the findings of IST’s Digital Cognition and Democracy Initiative, GII’s inaugural report asks the question: How will this emerging technology affect social cohesion? With the generous support of Omidyar Network, GII engaged more than 25 working group members and contributors from across industry, academia, and civil society over the course of seven months. Author Gabrielle Tran, IST Policy Analyst for Technology and Society, and principal investigator Eric Davis, Senior VP for Special Projects, present a comprehensive research agenda, noting 27 areas of exploration for addressing these challenges.
December 2024 | Report
AI, Therefore I Am: Exploring Cognition in the Age of GenAI
“Advanced chatbots are challenging our perceptions of learning, relationships, and even the boundaries between life and death.” IST Policy Analyst Gabrielle Tran shares a first look at findings from the Generative Identity Initiative (GII). In this first installment in the GII series, she explores two of the key cognitive implications that GII Working Group members identified as central to GenAI’s impact on social cohesion: challenges in metacognition and the modulation of the socialization process.
September 2024 | Blog

IST launches Generative Identity Initiative with the support of Omidyar Network
With the generous support of Omidyar Network, IST in January announced the launch of the Generative Identity Initiative, a new effort to address the complex questions around generative AI’s impact on social identities, norms, and belonging.
January 2024 | Announcement
About the Initiative
GenAI is advancing rapidly, leaving limited time to anticipate both its risks and its opportunities. We must therefore ask the right questions now to ensure policy decisions deliver the greatest benefit.
Based on working group meetings with experts from across academia, industry, government, and civil society, as well as staff research, GII will publish an open-access, peer-reviewed report with a literature review of current research on the socio-psychological effects of generative technologies. With the input of our working group members, the report will also include a research agenda designed to explore more precisely how GenAI will influence social structures and public welfare, and how to shape potential policy and/or private sector responses.
In response to the main research question of how GenAI will affect social cohesion, the GII team and working group members identified five high-level themes:
- The metacognitive challenges posed by GenAI
- The modulations in the traditional socialization process caused by GenAI
- GenAI’s effect on social trust
- Shifts in institutional responsibilities pertaining to GenAI
- Technical complexities in achieving socially beneficial GenAI
Over subsequent meetings, the group expanded on these themes, developing insights and actionable strategies to address the multifaceted implications of GenAI on society. The resulting report and research agenda will serve as a roadmap for policymakers, industry leaders, and researchers to navigate the rapid advancement of generative technologies. By fostering an informed, collaborative approach, IST aims to shape the future of GenAI in a way that prioritizes societal well-being, strengthens public trust, and ensures that the benefits of these innovations are widely and equitably shared.
GII Working Group Members and other contributors
Thank you to our dedicated working group members, as well as other participants, who over the course of seven months have provided invaluable and novel insights to our initiative. Their expertise and willingness to contribute have been critical to the depth, rigor, and interdisciplinary nature of the inaugural GII report. While the individuals listed do not necessarily endorse everything written in the report, we extend our gratitude to each of them.
Nichole Argo
Strategy & Research Consulting on Belonging & Democracy
Chloe Autio
Founder & CEO, Autio Strategies
Michelle Barsa
Principal, Belonging, Omidyar Network
Rachel Bowen
Senior Technical Advisor for Technology Facilitated Gender-Based Violence, IREX
Lauren Buitta
Founder & CEO, Girl Security
Adam Fivenson
Senior Program Officer for Information Space Integrity, International Forum for Democratic Studies, National Endowment for Democracy
Shuman Ghosemajumder
CEO, Reken
Olya Gurevich
Co-founder, Stealth
Jodi Halpern
Chancellor’s Chair and Professor of Bioethics, UC Berkeley, Co-Founder and Co-Director, Kavli Center for Ethics, Science and the Public
Maxi Heitmayer
Assistant Professor, University of the Arts London
Bernie Hogan
Associate Professor, Oxford Internet Institute
Mounir Ibrahim
Vice President of Public Affairs & Impact, Truepic
Vaishnavi J
Founder, Vyanams Strategies
Herb Lin
Senior Research Scholar, Center for International Security and Cooperation, and Hank J. Holland Fellow in Cyber Policy and Security, Hoover Institution at Stanford University
Megan McBride
Senior Research Scientist, CNA’s Institute for Public Research
Amanda McCroskery
Applied AI Ethics and Governance Researcher, Google DeepMind
Mickey McManus
Senior Advisor, Boston Consulting Group
Vivienne Ming
Founder and Executive Chair, Socos Labs
Sarah Papazoglakis
Public Policy, Privacy Policy, and Product Strategy
Michael Parent, PhD, MBA
Principal Researcher, Hopelab
Beatrice (Bea) Reaud
Senior Advisor, USAID
Michael Rich
Associate Professor of Pediatrics, Harvard Medical School, and Founder & Director, Digital Wellness Lab at Boston Children’s Hospital
Henry Roediger
James S. McDonnell Distinguished University Professor of Psychological & Brain Sciences, Washington University in St. Louis
Aaron Shull
Managing Director & General Counsel, Centre for International Governance Innovation
Andrea Stocco
Associate Professor, University of Washington
Sherry Turkle
Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology in the Program in Science, Technology, and Society, MIT