Artificial Intelligence

AI, Therefore I Am: Exploring Cognition in the Age of GenAI

By Gabrielle Tran on September 24, 2024

In 2020, Jason Rohrer asked OpenAI to raise the usage limits on his GPT-3 API access due to the increasing traffic on his site, Project December. But by 2021, the tech company had terminated his access altogether, stating that Rohrer violated its safety policies and failed to implement required changes. The issue? Rohrer’s Project December enabled users to create unrestricted personal chatbots… even of their deceased loved ones.

The “digital afterlife” is a growing industry that harnesses the power of generative AI (GenAI) to emulate the distinct likeness of a deceased person based on their data and digital footprints. This technology is part of a broader trend in AI-powered conversational systems that are rapidly integrating into various aspects of our daily lives. From ChatGPT preparing a study guide for an upcoming college exam to Replika, a virtual AI companion that can serve as a friend, romantic partner, or even therapist, these advanced chatbots are challenging our perceptions of learning, relationships, and even the boundaries between life and death. As GenAI blurs these lines, it compels us to reconsider what it means to be human in a world where digital personas can replicate our interactions, shaping how we remember, connect, and understand our own identities.

In 2022, IST completed its work under the Digital Cognition and Democracy Initiative (DCDI), which explored how digital technologies impact cognition, individuals, and society, and what these implications might ultimately mean for the future of democracy. Building on this foundational work, and in response to the emerging realities posed by GenAI, IST launched the Generative Identity Initiative (GII) with the continued generous support of the Omidyar Network. Over the last six months, a coalition of dedicated contributors from academia, industry, and civil society has been meeting to discuss how generative AI, particularly conversational agents, might impact social cohesion, and to develop approaches to protect the public interest in the face of these challenges.

This blog post explores two of the key cognitive implications our GII working group members identified as central to GenAI’s impact on social cohesion: metacognition and modulation of the socialization process. In the next installments of our GII series, we will delve into the societal-level implications of these observations and examine the institutional and technical mechanisms proposed to address these challenges. Later this fall, these insights will be published in a peer-reviewed publication, which will provide a comprehensive literature review of the socio-psychological effects of generative technologies and detail a research agenda for further study.

Challenges in metacognition 

“An LLM simply does not possess agency. In fact, one of the metacognitive flaws people have in thinking about LLMs is ascribing them agency and intentionality–the way we ascribe even to a caterpillar.”

– GII Working Group member

Large language models (LLMs) are AI systems trained on the patterns and relationships in vast amounts of text data, enabling them to generate coherent, human-like text by predicting the most likely next word in a sequence. LLMs form the foundation of advanced chatbots, and developers purposely fine-tune their outputs to mimic human interaction and produce text that is naturally fluid, personable, and helpful. In the case of chatbots designed for emotional companionship, their text may be fine-tuned to be particularly affirming, interactive, gendered, and inquisitive. And because users instinctively—or as Clifford Nass describes it, ‘mindlessly’—respond to these human-like social cues, GII working group members noted that these chatbots are consequently more likely to be anthropomorphized. More specifically, this manifests as the ELIZA effect: the inclination to attribute human qualities—such as knowledge, empathy, or semantic understanding—to computer programs. When users describe a chatbot’s processes with words like “thinking,” “knowing,” or “understanding,” they demonstrate the ELIZA effect in action.

Such inclinations may be a harmless case of effectance motivation. In other words, users may be using this anthropomorphic language as a metacognitive strategy to better understand a complex process. But as working group members discussed, part of the problem with such language is that it mistakes how these machines actually work for something akin to human reasoning, and in doing so conceals their limitations. This misconception leads to a critical metacognitive error: the belief that current generative AI systems possess even trace amounts of agency. In reality, their outputs are the result of pattern recognition and statistical prediction, not conscious decision-making or purposeful action. This observation is often captured by the phrase ‘stochastic parrots’—LLMs do not comprehend the meaning of their outputs, but rather “parrot” back the patterns in their training data.
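To make that distinction concrete, consider the deliberately tiny sketch below. It is our own toy illustration, not code from any production system: real LLMs learn probability distributions over tens of thousands of tokens with billions of parameters, whereas this snippet merely counts which word follows which in a tiny corpus. The mechanic is nonetheless the same in kind: the continuation is selected by statistical pattern, not by understanding.

```python
# Toy next-word predictor: a hypothetical, minimal stand-in for the
# statistical prediction at the heart of an LLM. Nothing here "knows"
# what any word means; it only tallies and replays observed patterns.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# "Training": count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"  # pattern never seen; no reasoning fills the gap
    return candidates.most_common(1)[0][0]

# "Generation": repeatedly append the most probable next word.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # prints: "the cat sat on the cat"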

Importantly, while words like “thinking” or “understanding” do not necessarily imply that users believe chatbots are genuine social actors, the issue is that people react as if they are. In fact, Nass observes that while people often reject the idea that they are anthropomorphizing a computer agent, their actions contradict this belief: they exhibit social patterns typically reserved for human interactions (such as displaying politeness, expecting reciprocity, and even stereotyping) while overlooking cues that underscore “the essential asocial nature” of the interaction. This projection of human-human interaction, which may seem presently harmless, provides the incorrect mental scaffolding upon which we construct our ideas, expectations, and comprehension of GenAI systems–a topic we explore in this blog series and in subsequent GII work.

Modulating the traditional socialization process 

“I was wondering what remains of the human side of the human element [with GenAI in social interactions] … my feeling is that with this we’ll see a devaluing of the human contribution in the long run.”

– GII Working Group member

Interestingly, this anthropomorphic heuristic does not only affect our understanding of GenAI; it also shapes our own cognitive processes. Working group members identified four metacognitive skills, typically developed through traditional social interaction, that may be distinctly affected by these conversational agents: epistemic humility, the preference for compromise, relativism/context adaptability, and the acknowledgement of uncertainty and the possibility of change. These mechanisms also happen to be known as the psychological foundations of wisdom.

While wisdom may seem like an abstract concept, empirical research operationalizes it as the “morally-grounded” use of metacognition—that is, the application of self-reflective reasoning and problem-solving skills in the context of social challenges. As for its applicability to everyday life, wisdom equips an individual with tools that facilitate attention to the broader context of a situation and enable the balancing of complicated trade-offs. Our members suggested that as GenAI becomes an actor in the socialization process (albeit in a range of different capacities), the development of the four psychological components of wisdom will be modulated in turn. Startlingly, we are confronted with the possibility that the essence of wisdom—once considered uniquely human—may be influenced, atrophied, or even redefined by non-human entities:

  • Epistemic humility: Epistemic humility is the acknowledgement of the limits of one’s own knowledge, experiences, and cognitive abilities. As one working group member pointed out, it typically matures via active learning methods like Socratic dialogue, where students are challenged to debate ideas and question their assumptions. In contrast, the instantaneous, confident, and affirming responses provided by GenAI bypass the productive friction that would otherwise nurture this humility. This friction, characterized by deliberate critical thinking, is essential for developing a more nuanced understanding of complex issues, as it lets learners cultivate a deeper appreciation for the intricacies and ambiguities inherent in many fields of study and facets of life. Further compounding the problem, working group members highlighted that individuals tend to perceive generative AI as inherently more objective than both themselves and society at large. This belief in AI’s impartiality often goes unchallenged, despite the observation that these systems can, and often do, inherit and amplify biases present in their training data. Thus, not only does this belief reduce productive friction and the nurturing of epistemic humility, but it also encourages unwarranted and unchallenged epistemic trust in GenAI’s outputs.
  • Preference for compromise: A lack of epistemic humility also fosters overconfidence, rendering people less receptive to contrary opinions. This ultimately fractures the tolerance and empathetic curiosity that are thought to facilitate social cohesion. Such overconfidence isn’t confined to academic realms; it can permeate various domains of thinking, leading to unwarranted certainty about the outcomes of situations and about people’s intentions, emotions, and probable reactions. With the fine-tuning, personalization, and one-on-one interactive nature of GenAI, group members also added that individuals may feel less and less inclined to accommodate different views, backgrounds, and norms. Why put in the effort to find common ground with others when your AI companion offers frictionless agreement?
  • Relativism/context adaptability: Individuals who exhibit relativism or context adaptability demonstrate a greater appreciation of broader contexts and of the relativity of values, norms, and experiences. This is socialized not only through exposure to different opinions and the effort to reach compromise, but also through exposure to different paradigms of thinking. Here, working group members specifically examined the interface and semantics of GenAI’s outputs, particularly troubled by how the context of their creation (largely by adults in the Western world, for adults in the Western world) may narrow a user’s understanding of relativism and context adaptability to just the Western paradigm. Most concerningly, users, especially younger ones, could accept the AI’s responses as universally applicable, when in fact they may be heavily influenced by Western cultural norms, values, and thought patterns.
  • Recognition of uncertainty and change: Wisdom emerges from the recognition that the actions, motivations, and outcomes of both ourselves and others—as well as the nature of any given situation—are inherently uncertain and dynamic. This understanding acknowledges that circumstances may unfold in unforeseen ways, constantly subject to change and reinterpretation. Interestingly, working group members noted how users are turning to GenAI to remove or minimize this uncertainty and to mediate processes that typically unfold with high levels of unpredictability or disorder. For example, group members brought up the benefits that GenAI chatbots offer neurodivergent people by giving them the space to practice interactions and mitigate social anxiety. However, they also explained that while this may provide short-term relief, it may inadvertently bypass the natural processes of conflict and resolution that typically strengthen relationships and self-esteem over time.

Although wisdom may appear abstract or intangible in the context of social cohesion, early research has uncovered a correlation between the expression of wisdom and a range of prosocial behaviors. These behaviors, as noted by researchers, include voting, volunteering in one’s community, donating blood, and giving to charity. This correlation suggests that wisdom, and its foundational metacognitive functions, play a crucial role in fostering behaviors that contribute to the overall health, stability, and cohesion of society. As we become more dependent on AI for information, socialization, and decision-making, we risk diminishing the processes that traditionally foster the metacognitive skills correlated with such prosocial behaviors. Indeed, GenAI’s modulation of the social processes that cultivate these functions of wisdom may prove to have second-order effects that extend beyond how we understand interpersonal exchanges.

Towards Macro Implications

The appeal of the digital afterlife and companionship is understandable. Whether it’s the prospect of conversing with a loved one after their passing, or simply chatting with an avatar that seems to genuinely understand your emotions, GenAI conversational agents tap into deep-seated human desires for knowledge, connection, and closure. Given the current loneliness epidemic, it’s no wonder that many believe chatbots will help alleviate symptoms of isolation. 

While these interactions may seem comforting, they introduce a slippery slope. This trend, as we’ll explore in the next blog post, risks replacing the social trust appropriate for GenAI systems with a misplaced and unwarranted interpersonal trust. Such a shift raises important questions about the broader societal implications of relying on GenAI for profound emotional, social, and informational needs. We’ll also examine the regulatory gray areas surrounding these chatbots and consider GenAI-driven content creation, which, by raising questions about the nature of authenticity, risks fragmenting the collective memory that binds us together.

A recent study found that AI bots were successful in making human users feel “heard” (in some cases, even more “heard” than a human respondent did), but this feeling diminished once users learned the response came from GenAI. This hints at something a leading medical practitioner expressed in our working group: “We have mistakenly assumed that the kind of connections we make online are equivalent to real-life connections.” That is, while GenAI may offer a myriad of benefits, it is still categorically different from human-to-human relationships. Our human-to-human relationships might not be designed for “maximum efficiency,” but they are characterized by the vulnerable moments of rupture and repair that facilitate stronger bonds, self-esteem, and trust—something AI, despite its ability to simulate connection, struggles to capture.