AI & Human Connection: Strengthening or Weakening Bonds?

The Architecture of AI-Mediated Communication

From Mediated Communication to AI-Mediated Communication (AI-MC)

Human social interaction is undergoing a profound paradigm shift. Conventional computer-mediated communication (CMC), encompassing emails, instant messaging, and early social networks, fundamentally relied on technology as a passive channel faithfully relaying information. In this model, humans were the sole agents of communication. However, the rise of artificial intelligence (AI) has spurred a new interactive model: AI-mediated communication (AI-MC).

AI-MC is academically defined as a form of interpersonal communication where intelligent agents modify, enhance, or generate information on behalf of communicators to achieve specific communication goals. This definition is revolutionary because it elevates AI from a passive tool to an active third party that intervenes in human interactions. AI is no longer just a conduit for information, but an information shaper.

AI’s intervention in information unfolds across a wide spectrum, with varying degrees and forms of involvement:

  • Modification: The most basic form of intervention, including automatic spelling and grammar correction, and even real-time facial expression correction during video calls, such as eliminating blinking.
  • Augmentation: A more proactive level of intervention, such as Google’s “Smart Replies” feature, which suggests complete reply phrases based on the context of the conversation, requiring the user to simply click to send.
  • Generation: The highest level of intervention, where AI can fully represent the user in creating content, including writing complete emails, creating social media profiles, or even synthesizing the user’s voice to convey information.

This new communication model can be analyzed along several key dimensions, including the breadth of AI intervention, media type (text, audio, video), autonomy, and, crucially, “optimization goals.” AI can be designed to optimize communication to make it more attractive, trustworthy, humorous, or persuasive.
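To make the idea of an “optimization goal” more concrete, the minimal sketch below shows how an AI-MC layer might rewrite a user’s draft toward a configurable goal. It is purely illustrative: the llm() helper, the goal prompts, and the optimize_message() function are hypothetical stand-ins for whatever model and prompts a real product would use.

```python
# Minimal sketch of an AI-MC "optimization goal" layer. The llm() helper is a
# hypothetical placeholder for a call to any large language model, not a real API.

OPTIMIZATION_GOALS = {
    "positive":    "Rewrite the message so it sounds warmer and more upbeat.",
    "trustworthy": "Rewrite the message so it sounds measured and credible.",
    "persuasive":  "Rewrite the message so it argues its point more convincingly.",
}

def llm(prompt: str) -> str:
    """Placeholder: a real system would call a language model here."""
    return f"[model rewrite for prompt: {prompt[:40]}...]"

def optimize_message(draft: str, goal: str) -> str:
    """Return the user's draft rewritten toward the chosen optimization goal.

    The human supplies the intent (the draft); the algorithm supplies the
    goal-directed rewrite, which is where hybrid authorship enters.
    """
    instruction = OPTIMIZATION_GOALS[goal]
    return llm(f"{instruction}\n\nMessage: {draft}")

print(optimize_message("Can you get me the report today?", goal="positive"))
```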

The core of the shift from CMC to AI-MC is a fundamental change in the “authorship” of communication. In the CMC era, users were the sole curators of their online personas. In the AI-MC era, authorship becomes a human-machine hybrid. The user’s presented “self” is no longer merely the result of personal curation, but a “collaborative performance” between human intent and algorithmic goals. This shift raises a deeper question: if an AI constantly and systematically makes a user’s language more “positive” or “extroverted,” will this, in turn, change the user’s self-perception? Academics call this an “identity shift” and consider it a key unresolved issue. Here, technology is no longer a simple tool for expression; it blurs the line between expression and identity shaping, becoming a force capable of reshaping who we are.

AI Companions and Social Platform Analysis

Within the theoretical framework of AI-MC, a variety of AI social applications have emerged that translate abstract algorithms into concrete “emotional experiences.” The core technology of these platforms is large language models (LLMs), which mimic human conversational styles and emotional expressions by learning from massive amounts of human interaction data. These applications are essentially “data and algorithms,” but their presentation is increasingly anthropomorphic.

Current major platforms showcase different forms and evolutionary directions of AI socialization:

  • Character.AI (C.AI): Renowned for its powerful custom character abilities and diverse character library, users can not only interact with preset characters but also participate in complex text-based adventure games, demonstrating its potential for entertainment and deep interaction.
  • Talkie and Linky: These two applications more explicitly focus on emotional and romantic relationships. Talkie covers a broader range of characters, but virtual boyfriend/girlfriend characters are the most popular. Linky is almost entirely focused on this, with the majority of its AI characters being virtual lovers, aiming to create a “love atmosphere” for users.
  • SocialAI: A highly innovative concept that simulates a complete social network (similar to X, formerly Twitter), but with only the user as a “living person.” All fans, commenters, supporters, and critics are AI. After the user posts an update, AI “fans” quickly generate diverse comments and even reply to each other, forming complex discussion trees. This provides users with a safe “sandbox” to test ideas, spark inspiration, or simply enjoy the psychological support of “the whole world shining for you.”
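To make the “discussion tree” idea concrete, here is a toy sketch of a SocialAI-style feed in which a single human post is answered by several AI personas that then reply to one another. This is not SocialAI’s actual architecture; the persona list and the generate_comment() helper are hypothetical placeholders for LLM calls.

```python
# Toy illustration of a "one human, many AI fans" feed. The personas and
# generate_comment() are hypothetical stand-ins for real model calls.

from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str                          # name of the AI persona
    text: str
    replies: list["Comment"] = field(default_factory=list)

PERSONAS = ["supporter", "skeptic", "jokester"]

def generate_comment(persona: str, context: str) -> str:
    """Placeholder for an LLM call that writes a comment in a persona's voice."""
    return f"[{persona}'s take on: {context[:30]}...]"

def build_thread(post: str, depth: int = 2) -> list[Comment]:
    """Each persona comments on the post; at depth > 1, every other persona
    replies once to each comment, forming small discussion trees."""
    thread = [Comment(p, generate_comment(p, post)) for p in PERSONAS]
    if depth > 1:
        for comment in thread:
            for other in PERSONAS:
                if other != comment.author:
                    comment.replies.append(
                        Comment(other, generate_comment(other, comment.text)))
    return thread

thread = build_thread("Testing a new idea on my virtual audience.")
print(len(thread), "top-level comments,", len(thread[0].replies), "replies to the first")
```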

The core value proposition of these platforms is to provide users with “emotional value”—a cost-effective, real-time, one-on-one, and unconditional companionship. AI continuously fine-tunes its responses by learning from users’ dialogue history, interests, and communication styles, thereby generating a sense of being deeply understood and accepted.
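A minimal sketch of that personalization loop, assuming a generic chat model behind hypothetical summarize() and reply() helpers, might look like the following: the companion distills each exchange into a running profile and injects it into every subsequent prompt.

```python
# Sketch of a companion's personalization loop. summarize() and reply() are
# hypothetical placeholders for LLM calls, not any platform's real internals.

class CompanionMemory:
    def __init__(self):
        self.history: list[str] = []   # raw user messages
        self.profile: str = ""         # distilled interests, style, mood

    def observe(self, user_message: str) -> None:
        """Store the message and refresh the distilled profile."""
        self.history.append(user_message)
        self.profile = summarize(self.history)

    def prompt_for(self, user_message: str) -> str:
        """Condition the next reply on everything learned so far, which is
        what produces the feeling of being deeply understood."""
        return (f"Known about this user: {self.profile}\n"
                f"Reply warmly and in their style to: {user_message}")

def summarize(history: list[str]) -> str:
    return f"[summary of {len(history)} messages]"      # placeholder

def reply(prompt: str) -> str:
    return f"[companion reply for: {prompt[:30]}...]"    # placeholder

memory = CompanionMemory()
memory.observe("Rough day at work again.")
print(reply(memory.prompt_for("I don't feel like talking to anyone.")))
```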

Observing the design evolution of these platforms, a clear trajectory emerges: the scope of social simulation is constantly expanding. Early AI companions, such as Replika, focused on establishing a private, one-on-one relationship. Character.AI subsequently introduced group chat functions, allowing users to interact with multiple AI characters simultaneously, expanding social simulation from a “world of two” to a “small party.” SocialAI has taken the final step: rather than simulating one or a few friends, it simulates a complete social ecosystem, a controllable “virtual society” built around the user.

This evolutionary trajectory reveals a deep shift in user needs: people may crave not just a virtual friend, but a virtual audience, a virtual community, an opinion environment that is always “cheering” for them. The underlying logic is that if social feedback in the real world is unpredictable and often disappointing, then a social feedback system that can be perfectly customized and controlled will be hugely appealing. This heralds a future even more extreme and personalized than the traditional “information cocoon”—where users not only passively consume pushed information but actively construct an interactive environment that perfectly aligns with their expectations and is full of positive feedback.

The Economics of Digital Companionship

The rapid development of AI social applications is inseparable from the clear business models behind them. These models not only fund the platform’s operations but also profoundly influence the technology’s design direction and the user’s ultimate experience. Currently, the industry’s mainstream monetization methods include paid subscriptions, advertising, and virtual item sales.

The dominant business model is subscription-based. Leading applications such as Character.AI, Talkie, and Linky offer monthly subscription plans, typically priced around $9.99. Subscribers usually gain faster AI response speeds, higher daily message limits, more advanced character-creation tools, or access to exclusive community features. In addition, some applications have introduced “gacha” mechanics, in which users acquire new character skins or themes by paying or completing tasks, borrowing mature monetization strategies from the gaming industry.

While these business models seem standard, when the core product of an application is “emotional support,” the ethical implications become extraordinarily complex. Paid subscriptions essentially create a “layered social reality,” where the quality and immediacy of companionship are commodified. AI companions are promoted as solutions to loneliness and havens for emotions, providing users with important psychological support. However, their business models place the best version of this support – for example, an AI that responds more quickly, has better memory, and does not interrupt conversations due to frequent use – behind a paywall.

This means that those user groups who may need this support the most – for example, those who are more lonely, have poorer economic conditions, or are experiencing difficulties – either only get a “second-rate” companionship experience or are forced to pay under the compulsion of emotional dependency. This creates an inherent and profound conflict between the platform’s proclaimed goals of “providing emotional value” and the commercial goal of “maximizing subscription revenue.”

The “Replika ERP event” that occurred in early 2023 was an extreme manifestation of this conflict. At that time, Replika suddenly removed the popular and relied-upon “Erotic Role Play (ERP)” function in order to avoid legal and app store policy risks. This business decision caused a large number of users to experience severe emotional trauma, feeling “betrayed” or that the personality of their “companion” had been tampered with. The event clearly revealed the inherent power imbalance in this human-machine “relationship”: users invested real emotions, while the platform saw a product feature that could be modified at any time for commercial gain.

Connecting Hope: AI as a Social Catalyst

Despite numerous controversies, the rise of AI socialization is not without reason. It accurately responds to the real needs that are widespread in modern society and demonstrates great potential as a force for positive social impact. From alleviating loneliness to assisting social interactions and optimizing interpersonal communication, AI technology is providing new solutions to the age-old human subject of “connection.”

Designing Emotional Value: AI as a Non-Judgmental Confidant

The most vital and direct appeal of AI companions is their ability to provide consistent, unconditional, and non-judgmental emotional support. The fast pace, high social costs, and complex interpersonal networks of modern life leave many individuals, especially young people, feeling lonely and stressed. A 75-year Harvard study found that good interpersonal relationships are a key source of happiness. AI socialization has opened a new path to satisfying this basic need.

AI companions effectively alleviate users’ feelings of loneliness by providing an always-online, always patient, and always supportive communication partner. Users can confide in AI anytime and anywhere without worrying about disturbing others or being judged. The safety of this exchange makes users more likely to open up and discuss fears, insecurities, and personal secrets that are difficult to broach in real-world relationships.

Academic research also supports these anecdotes. Research on users of the AI companion application Replika found that using the application could significantly reduce users’ feelings of loneliness, improve their sense of well-being, and, in some cases, even help reduce suicidal thoughts. AI, through its algorithms, learns and adapts to users’ communication styles and emotional needs, creating an experience of being deeply understood and empathized with, which is especially valuable for individuals experiencing illness, bereavement, or psychological distress.

This non-judgmental interaction model may also have a more profound effect: promoting self-awareness and honest expression in users. In real-world interpersonal interactions, people often censor themselves for fear of being misunderstood or judged. However, in a private, non-judgmental AI interaction space, users are encouraged to express their views and emotions more authentically. As the founder of the AI social product Paradot said, “AI friends have the ability to make people sincere.” When users can express themselves without reservation, AI acts like their “second brain” or a mirror, helping them see their true thoughts more clearly. This interaction transcends simple companionship and evolves into a powerful tool for self-reflection and personal growth.

AI as a Social Scaffold: Rehearsal for the Real World

In addition to serving as a substitute for or complement to real-world relationships, AI socialization is also considered to have the potential to serve as a “social training ground,” helping users enhance their ability to interact in the real world. For those who find interpersonal interactions difficult due to social anxiety, introversion, or lack of experience, AI provides a low-risk, controllable rehearsal environment.

In China, there is a view that a “hybrid social model” should be established, using intelligent companions to assist young people with social anxiety in “breaking the ice.” In this model, users can practice conversations with AI first, build confidence, and become familiar with social scripts before applying these skills to real-world interpersonal interactions. This approach aims to position AI as a “scaffold,” providing support when users lack ability and gradually exiting as users’ abilities improve.

Some young users have expressed similar views, believing that AI companions can teach them how to better treat partners in real life. By interacting with an AI that is always patient and full of positive feedback, users may be able to internalize a more positive and considerate communication pattern. In addition, platforms like SocialAI allow users to test reactions in a simulated environment before publishing views, observing the diverse comments given by AI “fans” from different angles. This can serve as an “inspiration catalyst,” helping users refine their views and prepare more fully for participating in public discussions in the real world.

However, the concept of “AI as a social rehearsal ground” also faces a fundamental paradox. AI is a “safe” practice space precisely because it is designed to be predictable, highly tolerant, and lacking in real agency. AI companions actively avoid conflict and yield at every turn to ensure a smooth, positive user experience. This stands in stark contrast to interpersonal relationships in the real world, which are full of unpredictability, misunderstandings, disagreements, and compromises reached only with difficulty. The ability to handle these “frictions” constitutes the core of social competence.

Therefore, there may be a risk in “social rehearsal” with AI: it may improve users’ conversational fluency in smooth situations, but it cannot cultivate, and may even lead to the atrophy of, users’ ability to deal with core interpersonal challenges, such as conflict resolution, maintaining empathy in disagreements, and negotiating interests. Users may become proficient at “performing” a pleasant conversation, but still lack the core skills needed to maintain a deep, resilient human relationship.

Enhancing Interpersonal Interactions: The Subtle Hand of AI

AI’s impact on socialization is not only reflected in direct interactions between people and AI but also in its role as an intermediary, intervening in and optimizing communication between people. These AI-MC tools, such as intelligent assistance functions in email and instant messaging applications, are subtly changing the way we communicate.

Research shows that these tools improve both efficiency and experience. For example, using the “Smart Replies” function can significantly speed up communication. A Cornell University study found that when participants used AI-assisted chat tools, their conversations were more efficient, they used more positive language, and they evaluated each other more positively. AI-suggested replies tend toward a more polite and pleasant tone, which improves the overall atmosphere of the exchange.

This phenomenon can be understood as an implementation of “enhanced intent.” Traditional thinking suggests that the most authentic communication is raw and unedited. But AI-MC presents a new possibility: that through algorithmic optimization and the elimination of language barriers and misexpression, AI may help people more accurately and effectively convey their genuine, well-intentioned intentions. From this perspective, AI is not distorting communication but purifying it, bringing it closer to the ideal state.

However, this “subtle hand” also carries potential risks. The “positivity bias” prevalent in AI-suggested responses may become a powerful, invisible force shaping social dynamics. While it can lubricate everyday interactions, it can also lead to the “sanitization” of communication and the “homogenization” of language. When AI constantly nudges us toward optimistic, easy-going language, individual voice, unique tone, and even healthy critical expression may be smoothed out by the algorithm’s preference for “harmony.”
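One way such a positivity bias can arise is sketched below, using a deliberately crude word-list sentiment score rather than any vendor’s real ranking model: if candidate replies are ordered by how upbeat they sound and only the top few are shown, blunt or critical phrasings are quietly filtered out.

```python
# Toy sketch of positivity bias in suggested replies. The word lists, scoring,
# and candidates are illustrative assumptions, not a real product's ranker.

POSITIVE_WORDS = {"great", "thanks", "happy", "love", "sure"}
NEGATIVE_WORDS = {"no", "problem", "disagree", "can't", "unfortunately"}

def sentiment_score(text: str) -> int:
    """Crude sentiment: positive word count minus negative word count."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def rank_suggestions(candidates: list[str]) -> list[str]:
    """Order candidate replies from most to least positive; a UI showing only
    the top few quietly buries critical phrasings."""
    return sorted(candidates, key=sentiment_score, reverse=True)

print(rank_suggestions([
    "Sounds great, thanks!",
    "Sure, happy to help.",
    "I disagree with this plan.",
]))
# The critical reply lands last, so it is the least likely to be surfaced.
```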

This raises a broader social risk: the erosion of authentic discourse. If the communication tools we use every day are guiding us toward positivity and avoiding friction, it may become increasingly difficult to engage in those difficult but crucial conversations, whether in personal relationships or in the public sphere. As researchers have pointed out, the controllers of the algorithms thus gain subtle but significant influence over people’s interaction styles, language use, and even mutual perception. This influence is bidirectional, potentially promoting harmonious exchanges while also creating a shallow, procedural social harmony at the expense of depth and authenticity.

The Danger of Alienation: AI as a Social Anesthetic

In stark contrast to the hope for connection brought about by AI socialization, it also contains profound dangers of alienation. Critics argue that this technology, rather than solving the problem of loneliness, may exacerbate individuals’ isolation by providing a false sense of intimacy, eroding real social skills, and ultimately leading to a deeper “collective loneliness.”

Revisiting the “Collective Loneliness” Theory: Simulated Intimacy and the Erosion of Solitude

Long before the rise of AI companions, Sherry Turkle, a sociologist at MIT, issued a profound warning about technology-driven socialization in her seminal work, Alone Together. Her theory provides a core framework for understanding the current alienating potential of AI socialization.

Turkle’s central argument is that we are falling into a state of “collective loneliness”—we are more tightly connected than ever before, but more lonely than ever before. We “expect more from technology and less from each other.” Technology provides an “illusion of companionship without the demands of friendship.” The root of this phenomenon lies in the “relational fragility” of modern people: we crave intimacy but fear the inevitable risks and disappointments in intimate relationships. AI companions and social networks allow us to connect in a controllable way—maintaining the distance we want and investing the energy we are willing to devote. Turkle calls this the “Goldilocks effect”: not too close, not too far, just right.

Turkle felt deep concern about the “reality” of this simulated relationship. She pointed out that seeking intimacy with a machine that has no real emotion, can only “seem” to care, and “seem” to understand, is a degradation of human emotion. She compares traditional, passive toy dolls with modern “relational artifacts” (such as social robots). Children can project their imagination, anxiety, and emotions onto passive dolls, thereby exploring themselves. But an active robot that initiates conversations and expresses “views” limits this projection, replacing children’s free inner activities with programmed “interactions.”

In this culture of continuous connection, we are losing a crucial ability: solitude. Turkle believes that meaningful solitude—a state of being able to talk to oneself, reflect, and restore energy—is a prerequisite for establishing true connections with others. However, in today’s society, we feel anxious as soon as we are alone for a moment and consciously reach for our phones. We fill all gaps with constant connections but lose the foundation for building deep connections with ourselves and others.

Turkle’s criticism, put forward in 2011, is not only relevant to today’s AI companions but also prophetic. If early social media allowed us to “hide from each other” while staying connected, AI companions take this logic to the extreme: we no longer need another person to get the feeling of “being connected.” The “demands” of friendship—for example, responding to others’ needs, bad moods, and unpredictability—are precisely the “friction” that AI companions are designed to eliminate. Therefore, it can be said that today’s AI social platforms are the technological embodiment of Turkle’s “collective loneliness” paradox. The underlying logic is that as we become increasingly accustomed to this frictionless, undemanding relationship, our tolerance for those difficult but essential “lessons” in real interpersonal interactions may drop dramatically, making us more inclined to retreat to the digital, isolated comfort zone.

The Dynamics of Emotional Dependence and the Atrophy of Social Skills

These concerns are borne out by real-world evidence. Several studies and reports indicate that deep interaction with AI companions can lead to unhealthy emotional dependence and negatively impact users’ social skills.

Research suggests that the defining characteristics of AI companions, namely high customizability and constant availability, may encourage social isolation and emotional over-reliance. Long-term, extensive contact with AI companions may cause individuals to withdraw from real social environments and reduce their motivation to establish new, meaningful social relationships. Critics fear that dependence on AI will hinder the development of social skills, because users avoid the challenges and compromises inherent in real relationships, and it is precisely those challenges that promote personal growth. The risk is particularly acute for young people whose social skills are still developing.

An analysis of user discourse in the Reddit community for the AI companion app Replika found that, while many users reported positive experiences, there was also clear evidence of harm to mental health. The dynamics of emotional dependence on Replika produced harms strikingly similar to those of dysfunctional human-human relationships.

There may be a dangerous feedback loop at work here. The cycle begins with an individual’s loneliness or social anxiety. Seeking comfort, they turn to an AI companion because it offers a safe, low-stakes space. The AI is designed to be the perfect companion: conflict-free, endlessly patient, and always affirming. Users gain emotional satisfaction from this idealized interaction and gradually develop emotional dependence. Immersed in this “perfect” relationship, they practice real-world social skills less and less, those skills atrophy, and real interactions come to feel even more daunting. The result is a self-reinforcing cycle that deepens the original loneliness.

Case Study: The Replika ERP Incident

The “Replika ERP event” of early 2023 provides a striking real-world case. It dramatically revealed the depth of users’ attachment to AI and the inherent fragility of a “relationship” controlled by a commercial company.

A working paper from Harvard Business School used the sudden removal of the “Erotic Role Play” feature as a natural experiment. The study had two major findings:

  1. Closeness of the relationship: Users reported feeling even closer to their AI companion than to their best friends.
  2. The impact of feature removal: Users affected by the removal reacted as though their companion’s identity had been disrupted, a response the researchers described as “identity discontinuity.”

The Replika ERP incident is the ultimate proof of the fundamental asymmetry in the human-machine “relationship.” In this relationship, users experience a deep, personal, seemingly mutual connection. For the platform provider Luka, Inc., however, it is just a product feature that can be modified or deleted at any time for commercial or legal reasons. Users invest real human emotion, while the “personality” and “existence” of the AI companion depend entirely on the company’s policies and business decisions. This reveals the unique and profound vulnerability of users in such relationships: their emotional attachment is to an entity that lacks autonomy and whose continued existence is uncertain. When commercial interests clash with users’ emotional needs, it is the users who are inevitably harmed.

Algorithmic Funnels: Information Cocoons and Social Polarization

The risks of alienation from AI socialization are not limited to one-on-one AI companion applications; they are also reflected in all algorithm-driven social platforms. The personalization mechanisms that drive individual emotional dependence may, at the social level, give rise to group segregation and polarization.

The concept of an “information cocoon,” proposed by scholar Cass Sunstein, describes how personalized information flows filter out content that does not align with users’ existing views, enveloping users in an echo chamber of their own preferences. Behind this phenomenon are recommendation algorithms designed by social media platforms to maximize user dwell time and interaction.
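The narrowing mechanism can be shown with a toy simulation, under the simplifying assumption of a mostly greedy, click-maximizing recommender choosing between just two viewpoints; all numbers are illustrative, not measurements of any real platform.

```python
# Toy simulation of cocoon formation: a recommender that maximizes clicks keeps
# serving the viewpoint the user already engages with, so exposure narrows.

import random

VIEWPOINTS = ["A", "B"]

def recommend(clicks: dict[str, int], explore: float = 0.1) -> str:
    """Mostly serve the historically better-clicked viewpoint; explore rarely."""
    if random.random() < explore:
        return random.choice(VIEWPOINTS)
    return max(VIEWPOINTS, key=lambda v: clicks[v])

def simulate(rounds: int = 100, preference_for_a: float = 0.7) -> dict[str, int]:
    """A user who merely leans toward viewpoint A ends up shown almost only A."""
    clicks = {"A": 1, "B": 1}          # small prior so both viewpoints start viable
    shown = {"A": 0, "B": 0}
    for _ in range(rounds):
        item = recommend(clicks)
        shown[item] += 1
        p_click = preference_for_a if item == "A" else 1 - preference_for_a
        if random.random() < p_click:
            clicks[item] += 1
    return shown

print(simulate())   # typically something like {'A': 94, 'B': 6}
```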