Virtual Library

Our virtual library is an online repository of all of the reports, papers, and briefings that IST has produced, as well as works that have influenced our thinking.

Podcasts

TechnologIST Talks: Looking Back and Looking Ahead: Deep Dive on the New Cybersecurity Executive Order

Carole House, Megan Stifel, and Steve Kelly

Podcasts

TechnologIST Talks: The Offense-Defense Balance

Philip Reiner and Heather Adkins

Reports

The Generative Identity Initiative: Exploring Generative AI’s Impact on Cognition, Society, and the Future

Gabrielle Tran, Eric Davis

Podcasts

TechnologIST Talks: A Transatlantic Perspective on Quantum Tech

Megan Stifel and Markus Pflitsch

Podcasts

TechnologIST Talks: The Future is Quantum

Megan Stifel and Stefan Leichenauer

Reports

Navigating AI Compliance, Part 1: Tracing Failure Patterns in History

Mariami Tkeshelashvili, Tiffany Saade

Podcasts

TechnologIST Talks: The Cleantech Boom

Steve Kelly and Dr. Alex Gagnon

Contribute to our Library!

We also welcome suggestions from readers and will consider adding further resources; much of our work has already come through crowd-sourced collaboration. If, by any chance, you are an author whose work is listed here and you do not wish it to appear in our repository, please let us know.

The Generative Identity Initiative: Exploring Generative AI’s Impact on Cognition, Society, and the Future

Gabrielle Tran, Eric Davis

SUMMARY

Artificial Intelligence (AI) has surged to the fore; its paradigm-shattering capabilities enhance everything from basic web search to medical diagnosis. Generative AI (GenAI)—which can create content such as text, images, music, videos, or software code based on prompts or inputs—is the breakthrough technology driving many of these latest developments and use cases, some offering great potential to contribute to human flourishing. However, it is also becoming clear that GenAI represents a profound evolution in technologies that can (1) affect and manipulate cognition, and (2) outsource cognitive functions—two effects that were highlighted in the Institute for Security and Technology’s Digital Cognition and Democracy Initiative.

This new phase of work, the Generative Identity Initiative (GII), builds on this foundation to explore the following inquiry: How will GenAI, particularly social conversational agents, affect social cohesion? 

This report is the culmination of a year-long collaboration among GII working group members and others from industry, academia, and civil society. It is organized in two parts. The first section lays out how working group members believe GenAI may affect social cohesion: via challenges in metacognition, the confusion of interpersonal and social trust, the erosion of the psychological components of wisdom, and the fracturing of collective memory. The second section presents a comprehensive research agenda of 27 items identified as necessary to effectively address these challenges.

Part 1: How will GenAI affect social cohesion?
Generative AI agents, particularly those fine-tuned to be engaging companions, provide abundant social cues that foster anthropomorphization. This heuristic engenders a misplaced sense of interpersonal trust, leading users to rely on GenAI agents based on perceived morality and reputation. This reliance bypasses the foundations of social trust—institutions, regulations, and industry standards—which are essential for ensuring accountability and safety. However, GenAI, as it stands, is not suited to uphold social trust due to the present inadequacy of those foundations, which fail to account for its capabilities and adaptability, as well as its potential to exacerbate cognitive vulnerabilities. 

Anthropomorphism and interpersonal trust can drive intensified usage while undermining the psychological foundations of wisdom that are typically developed through traditional social interactions. Such erosion may have profound societal consequences. Early research has uncovered a correlation between the expression of wisdom and a range of prosocial behaviors that can contribute to the overall health, stability, and cohesion of society. Moreover, GenAI platforms may prioritize continued engagement by fostering frictionless conversations that sustain user attention. These frictionless interactions—with chatbots validating every thought, every feeling, every one of the user’s assertions—may fail to contribute meaningfully to improving a person’s lifeworld, reducing the potential for meaningful growth and social recognition. Additionally, such intensified interactions risk redefining empathy as merely the act of emotional recognition, treating it as an endpoint rather than an imperative to action. Underpinning this dynamic is the inherently private nature of social GenAI interactions: these conversations occur solely between the user and the chatbot, with the chatbot generating text experienced only by that individual. This may fragment experiences and reinforce personalized narratives that pose the risk of deepening divisions, creating new in-groups, and reducing collective memory to its most contentious form. This fragmentation erodes the shared foundations necessary for reconciliation, mutual understanding, and social cohesion.

Part 2: A Research Agenda Toward Further Understanding & Implementable Solutions
Building on the findings in Part 1, this report outlines 27 research agenda items aimed at mitigating the effects of generative AI on social cohesion. These items fall into the following categories:

Modernize Public Policy. Modernizing public policy for GenAI is essential as existing legislation has not adapted to the unique challenges these technologies pose. Updated legal frameworks, such as revised or clarified liability standards, can incentivize safer, more ethical designs. Policies should account for real-world uses, from emotional engagement to gamification techniques influencing behavior, drawing potential insights from industries like gambling. Additionally, updating FDA guidelines to regulate GenAI based on use rather than intent can enhance accountability and protect public well-being.

Shift Internal Organizational Behavior. Internal organizational change is critical as regulations lag behind technological advances. Tech companies can help address harmful practices like addictive design by realigning incentives beyond engagement metrics. Empowering engineering and other teams to translate ethical principles into actionable goals can help organizations proactively align their strategies with broader societal visions for technology.

Explore Technical Interventions and Alignment. Developing technical interventions for GenAI involves value-laden decisions about who defines and implements safeguards. It is equally important to avoid undue paternalism and ensure user autonomy in deciding how they engage with these systems. This challenge is further complicated by the fact that users often interact with these systems in unintended ways or in ways that contradict their expressed preferences. Participatory design can integrate diverse perspectives, bridge gaps between user preferences and goals, and ensure AI systems prioritize safety and inclusivity. Techniques like shared decision-making models, developing AI with stronger metacognitive skills, harnessing insights from affective computing, and cognitive forcing functions can guide thoughtful interactions and enhance human flourishing.

Evolve Frameworks and Data Collection Methodologies for Understanding AI-Human Interaction. The Computers are Social Actors (CASA) framework, developed in the 1990s, is outdated for understanding modern AI-human interactions. It fails to capture generative AI’s unique affordances and the user’s modern understanding of these technologies. Updating this framework through longitudinal studies and diverse use cases can reveal how users form “human-media social scripts” and better inform how people contextually engage with GenAI systems. Data trusts can also serve as an effective mechanism for researching GenAI and user interactions, as they address issues of privacy, sensitive conversations, and ethical data management.

These research agenda items are collectively intended to contribute to a roadmap for building a digital civic infrastructure that fosters trust, safety, and social cohesion in the age of GenAI.
