Imagine you have a virtual companion at your side, at all times, available with the swipe of a finger or a voice prompt. What if they could chat with you, offer advice, serve as a sounding board for your thoughts and feelings, give health-related guidance, or even offer a virtual relationship? This is the promise—and potential peril—of social generative AI (GenAI), which uses advanced large language models to engage in human-like interactions with social or emotional dimensions. Importantly, social GenAI can be purpose-built for these roles, or it can arise from unintended uses of general-purpose AI assistants like ChatGPT and Claude.
IST’s Generative Identity Initiative (GII) spent the last year working with tech developers, psychologists and behavioral health experts, policymakers, and civil society leaders to understand how GenAI affects cognition, society, and the future.
In December, GII published its cornerstone report, investigating the impacts of GenAI and putting forward a research agenda with 27 recommendations aimed at building a digital civic infrastructure that fosters trust, safety, and social cohesion in the age of GenAI.
To learn more, I sat down with report author Gabrielle Tran and principal investigator Eric Davis. How did they conduct their research? What’s at stake? And what should industry, civil society, and government be doing to modernize public policy and develop a bottom-up ethical approach?
Walk us through the process of convening the working group. What perspectives did you want to make sure to incorporate? How did you “convene”? What came out of it?
Gabrielle: We started with a simple premise: bring together a mix of voices that could challenge, refine, and elevate our inquiry into the impacts of generative AI on social cohesion. We wanted technologists, policy experts, academics, and civil society leaders at the table—folks who could represent diverse disciplines and lived experiences. This interdisciplinary blend was crucial because the questions we’re tackling about trust, socialization, and wisdom in a GenAI-driven world don’t have easy answers or fit neatly into one domain.
The process kicked off with a plenary session, which set the tone and identified key themes. From there, we dove into structured working group meetings, each tackling a specific question, like: What metacognitive challenges does GenAI introduce? How might it modulate socialization processes? And what does this mean for trust? Later sessions shifted toward solutions, asking how laws, institutions, and technical systems could adapt to address these challenges.
Eric: When convening a working group, you’re also curating, trying to assemble the right combination of talent, expertise, and personalities. That last part is especially important when there are so many powerhouses involved. We were very fortunate not just with the extraordinary cross-section of talent and expertise, but also with the chemistry of the group, which was so conducive to thoughtful, rich discussion. I miss the working group sessions!
With this group, even the discussion’s rabbit holes were fascinating. As a result, one of my challenges in moderating was balancing keeping the conversation on track with giving people enough latitude to go in unexpected, rewarding directions.
In the first part of your report, you investigate the question “How will GenAI affect social cohesion?” What did you find?
Gabrielle: We’ve identified several ways GenAI may significantly impact social cohesion, or the way that we as a society relate to one another, hold shared values, and trust one another. Each of these impacts carries implications for our cognitive security—maintaining agency over the way we think, trust, and make decisions.
First, GenAI presents challenges to our metacognitive processes. Its human-like interactions are often fine-tuned to feel warm and affirming, making it easy for users to anthropomorphize these systems. This can lead people to overestimate GenAI’s capabilities, and even become reliant on it. Many users may even trust these AI companions on a personal level, forming emotional attachments and mistaking their programmed “empathy” for genuine care. Compounding this issue, GenAI is often perceived as more objective than it really is, and users may unquestioningly accept its responses as the “right” answer, even when they are not.
This dependency raises concerns. Many AI companions prioritize engagement, and are therefore designed to offer only surface-level affirmations. Enough interaction with the model can erode critical thinking, as users increasingly outsource reflection to GenAI’s homogeneous ideas. Over time, this reliance could replace traditional social interactions that are essential for building empathy, staying open to the possibility that our beliefs, assumptions, and biases may be wrong, handling diverse perspectives, and engaging with “productive friction.” Without exercising these capacities, users risk an overconfidence that leaves them more vulnerable to disinformation and less open to considering contrary arguments and views.
Finally, hyper-personalized content poses another challenge. Traditional social media algorithms exposed us to content that was “curated for you”: content that may have been biased but at least formed part of a shared reality. GenAI, however, is able to generate hyper-personalized content that is not only “created for you” but seen only by you. These tailored realities risk altering our collective memory, the shared narratives that are essential for reconciliation and unity. This undermines cognitive security, creates siloed realities that deepen societal divides, and makes resolving conflicts or building a collective identity increasingly difficult.
What is at stake? Where are we in the GenAI development process? What might happen in a worst case scenario?
Eric: First – and it’s critical to underscore this – GenAI is just a technology. It’s neutral. Our report is not a critique of GenAI; rather, the report is concerned with the risks of particular ways we’re using the tech to get what we want. Nor are we critiquing our very human inclination to anthropomorphize or reflexively build connections.
Discussions of AI risks or threats often gravitate towards dramatic, Hollywood-esque culminations that you can see coming from a mile away. The GenAI risks we describe have enormous implications but are comparatively “low and slow,” to borrow a cybersecurity term. They sneak up on us — what’s wrong with having a companion bot that validates everything you say and mirrors whatever you like? What’s wrong with being the hero of every story?
Over time our brains adapt – not just to information filter bubbles but also, effectively, to behavioral ones – diminishing our capacity for nuance, disagreement, and engagement with the world outside our experiences. Consequently, in an era where discourse often amounts to “I’m right and you’re evil,” forming relationships, fostering social cohesion, and engaging in civic life are all profoundly affected.
What do you see as some of the most pressing or impactful research agenda items to tackle next? Why?
Eric: Public policy needs to catch up, especially in the United States. And it’s in the private sector’s interest to move rapidly to avoid broad-brush regulations. It’s troubling to see some AI relationship platforms making dubious assertions about mental health benefits or about how they use and secure user data. There’s a lot of potential for abuse, especially emotional manipulation of users by certain companies.
Gabrielle: I agree: our current laws haven’t caught up with the realities of GenAI.
For instance, laws like Section 230 of the Communications Decency Act in the United States don’t clearly address who’s responsible when AI-generated content causes harm. Research needs to focus on updating these frameworks to clarify who is accountable for ensuring that AI systems are designed with safety in mind. Another big issue is how some GenAI systems use manipulative techniques, like gamification, to keep users hooked. These practices need to be regulated to protect users and maintain trust in these technologies.
Another area that I think is important to focus on is developing a bottom-up ethical approach, which here refers to empowering engineers and other frontline practitioners to integrate ethical considerations in their work from the outset, rather than rely solely on top-down directives from management or policymakers. This is essential for bridging the gap between the rapid deployment of GenAI and the slower pace of regulation. One such approach, the Moral Imagination exercise, is flexible, respecting the entrepreneurial and autonomous nature of engineering teams while embedding ethics into daily workflows. By aligning incentives with societal well-being, fostering participatory design, and training teams to translate principles into action, companies can proactively address risks like emotional dependency and cognitive security. This helps organizations contribute to trust and safety while maintaining innovation.
Eric: As a first step toward progress on our recommendations, and recognizing that some policy changes will take much longer than others, I’d look towards quicker wins, such as clear disclosure and transparency measures. The leadership could come from industry, policymakers, or both. For example, we recommend that platforms distinguish AI companion content and interactions designed to influence user purchases, opinions, or other behavior beyond the app’s stated purpose. Additionally, a viable risk and impact assessment framework that specifically accounts for AI companions would be a valuable step forward.
What’s next for the Generative Identity Initiative?
Gabrielle: At this stage in GenAI’s development, the choices we make will define its impact. If we act now, we have the opportunity to guide GenAI toward enhancing human flourishing, strengthening communities, and fostering a future where this technology supports—not undermines—our shared humanity.
Currently, we’re engaging with stakeholders and actively discussing strategy with our partners to determine the best path forward for turning the report’s findings into actionable outcomes. By outlining a structured agenda, we hope to focus collective efforts on the most pressing questions, ensuring that research, policy, and innovation keep pace with the tech’s rapid development. This agenda serves as a guidepost, helping direct attention and resources toward areas with the greatest potential for impact. Stay tuned—we’re just getting started!
Eric: It’s also important to note that this remains a collective effort – the solutions will not all come from a single source or approach (and we don’t claim to have all the answers!). Additionally, there are no panaceas. We highlighted areas that warrant more research, which others can build on as well. Private sector approaches such as moral imagination can be adapted and iterated right now by individual platforms to fit their company culture and development environment.