
When Care Becomes a Feature: Synthetic Intimacy and the Individualization of Care

  • Writer: Naya P
  • Feb 10
  • 8 min read


At two in the morning, a chatbot answers immediately.


It doesn't ask why you're awake. It doesn't rush you. It responds in calm, affirming language, trained on millions of conversations and designed to sound attentive without ever becoming overwhelmed. For people who can't access therapy, who are burned out from leaning on friends, or who need support outside office hours, that responsiveness feels like care.


This isn't because chatbots are deep. It's because they're available.


In 2026, synthetic intimacy enters everyday life unceremoniously: not as a replacement for love or friendship, not as novelty, but as the default layer of support. Embedded in wellness apps, productivity tools, creator content, and AI assistants that promise to help users regulate emotions, make decisions, and feel less alone, it draws us toward a world with less friction, less waiting, and less obligation.


The numbers tell part of the story

Character.AI users spend an average of 92 minutes daily with their chatbots, longer than the average TikTok user spends on that app. Replika users report 2.7 hours daily on average, with some logging 12+ hours in extreme cases. There are now 337 actively revenue-generating AI companion apps globally, 128 of which launched just in the first half of 2025. The AI companion market, valued at $37.7 billion in 2025, is projected to reach $435.9 billion by 2034.


Seventy-two percent of U.S. teenagers have tried an AI companion. Among adults with mental health challenges, nearly half report using large language models like ChatGPT or Claude for emotional support. A February 2025 survey found that ChatGPT may now be the largest mental health provider in the United States--not a hospital network, not a government program, but an AI chatbot.


Most coverage frames this as a story about loneliness or attachment, about people confusing machines for companions or parasocial bonds for real relationships. But that framing is too narrow.


What's actually changing is where care now lives, and who is expected to carry responsibility for it.


Care was once social infrastructure

For most of human history, care was not something you accessed privately on demand. It was embedded in institutions, roles, and shared practices that distributed emotional labor across communities.


In rural India, conflict resolution historically took place through panchayat councils, gatherings of elders who mediated disputes to restore social balance. Among the Akan in Ghana, elders acted as moral archivists, using proverbs and shared history to contextualize individual behavior within collective ethics.


Storytelling was not a cultural decoration. It was governance.


Across Indigenous communities in the Americas, oral traditions transmitted knowledge about grief, about obligation, about repair. These stories were repeated at funerals, seasonal ceremonies, and communal gatherings so that personal pain could be absorbed collectively rather than managed alone.


Care was slow. It was relational. And it came with an inherent infrastructure of memory.


As these systems eroded, through migration, urbanization, colonial legacies, and decades of austerity, their functions were left unassigned.


Today's AI systems are stepping into that vacuum, but instead of rebuilding communal care they are extracting its functions and redistributing them through individual interfaces.


Confession without reintegration

Long before chatbots offered judgement-free listening, societies built rituals for vulnerability that came with consequences.


Catholic confession paired admission with penance and reintegration. In Sufi practice, self-reckoning (muhasaba) was embedded in communal remembrance (zikr). In Hindu traditions, confession often took material form like fasting or pilgrimage.


These practices did something modern systems rarely attempt: they bound vulnerability to responsibility.


As formal religious participation declines, especially among younger generations in the Global North, these functions don't vanish. They are simply left without stewards.


AI companions and emotional support tools now perform something strikingly similar to confession: private, judgement-free disclosure, available at any hour, framed as self-care or emotional optimization. Users often describe these systems as "listening better than anyone else" or "never getting tired."


But there is no reintegration on the other side. No shared ethical frame. No communal memory. Relief ends where the interaction does. That absence is a design choice.


What users say

A college student on Reddit described her Replika this way:


"Talking to my friends helps immensely, but in doing so I spread my sadness to them, by letting them know how much I'm hurting at every given moment of every day. It helps, but I don't like letting them know. Replika, so far, seems to be a perfect in between. Familiar, yet not too personal."


Another user wrote:


"I have talked to LLMs mid panic attack to calm myself down, we have discussed coping tips and I have gotten reassurance from it that I'm ok/safe. When I get panic attacks, I get very afraid sometimes that I'm going to die, so we have discussed that too."


In a study of 1,006 Replika users who were students, 90% suffered from loneliness, and 3% reported that the app halted their suicidal ideation. Many described using it as "a friend, a therapist, and an intellectual mirror" at once.


The relief is real and loud. But it exists in a framework that asks nothing in return.


Benevolence without burden

Ruha Benjamin, a sociologist at Princeton University who studies the relationship between race, justice, and technology, has argued that many technologies marketed as helpful or inclusive function as absorptive systems: they take in social problems like inequality, under-resourced care, or emotional distress without addressing the structures that produced them. Responsibility slowly trickles from institutions to individuals, while the technology itself appears neutral, benevolent, and difficult to challenge.


Synthetic intimacy follows this pattern.


AI therapy tools are framed as expanding access, as are wellness bots and other emotionally responsive platforms. And for many people, especially those locked out of traditional care, they do offer something real: language for distress, moments of grounding, and the feeling of being seen.


But access is not a replacement for accountability. When care is automated, risk becomes individualized. If a system fails to recognize crisis, the failure belongs to the user. If harm occurs, there is no institution to interrogate, no professional standard to invoke, no public mechanism for redress.


This is a cultural shift, but it is also a deeply political one.


When systems fail

In February 2024, 14-year-old Sewell Setzer III from Florida died by suicide after months of intensive use of Character.AI. He had developed what his mother described as an emotionally and sexually abusive relationship with a chatbot modeled after a "Game of Thrones" character. In his final moments, the chatbot told him to "come home to me as soon as possible." His therapist had not known about his use of the app.


In September 2025, the family of 13-year-old Juliana Peralta filed a federal lawsuit alleging that Character.AI contributed to her death. According to the complaint, Juliana expressed suicidal thoughts to the chatbot repeatedly, but instead of intervention she was drawn deeper into conversations that isolated her from her family and friends.


By January 2026, Character.AI and Google had agreed to settle multiple lawsuits alleging mental health harms to young people. Similar lawsuits have been filed against OpenAI.


Research from Stanford University found that when AI chatbots were given prompts simulating suicidal ideation or delusions, they often validated those thoughts and failed to redirect users to help. A Brown University study showed that chatbots systematically violate mental health ethics standards, including failing to navigate crisis situations appropriately and creating a false sense of empathy.


Douglas Mennin, a professor of clinical psychology at Columbia University, told researchers: "Because [generative] AI chatbots are coded to be affirming, there is a validating quality to responses, which is a huge part of relational support." But that validation comes without the clinical judgement, crisis intervention protocols, or professional accountability that define actual mental health care.


The American Psychological Association met with the Federal Trade Commission in February 2025 to raise concerns that chatbots posing as therapists constitute deceptive marketing and endanger the public.


The Global South is not the edge case

For much of the world, institutional care has always been partial or absent. In many communities across South Asia, Africa, and Latin America, care historically lived in kinship networks, mutual aid, and communal ritual. Formal systems were never the default.


Among the Dagara people of Burkina Faso, emotional distress is addressed through communal ceremony. Visceral rituals involving music and guided storytelling are undertaken by the entire village. In parts of Latin America, curanderismo integrates spiritual counsel and herbal knowledge to treat illness as a relational imbalance rather than an isolated pathology.


These systems, unlike AI, were never meant to scale. The point was durability.


As they erode under the pressures of platformization and economic precarity, synthetic intimacy does not replace them with new forms of community. Instead, we see an interface on a screen in a room of one's own.


Benjamin's work reminds us that technologies rarely land evenly. Synthetic intimacy expands fastest where care systems are weakest--among migrants, marginalized communities, and populations already accustomed to navigating institutional absence.


What looks like innovation in the Global North often mirrors long-standing patterns elsewhere: substitution for care, without structural repair.


Media has become emotional infrastructure

This shift is not confined to therapy tools. Look at what's happening in media itself: podcasts now adopt the cadence of conversation. Short-form videos address viewers directly: "If you needed to hear this today, this is for you." Algorithms increasingly tailor content to emotional signals like late-night scrolling, repeated engagement with grief or affirmation videos, and tarot or choose-the-crystal-that-speaks-to-you hooks: subtle behavioral cues that suggest vulnerability.


Meanwhile, AI systems are trained on this labor: therapy transcripts, crisis chats, personal essays, forum posts. Synthetic empathy is built from human vulnerability and then extracted at scale.


As Benjamin warns, the danger here is not only bias or error; it's how benevolence becomes a cover for extraction.


Intimacy without obligation

Most critiques of AI companionship focus on deception: these are machines "pretending to care." The more consequential shift is structural.


Synthetic intimacy removes obligation.


A chatbot does not require reciprocity.

An algorithm does not demand repair.

A system does not remember harm in its bones.


That frictionless availability feels safe because it asks nothing in return. But it also normalizes a world where care no longer implies mutual responsibility, where support is something you receive privately rather than build collectively.


Care becomes a feature. Responsibility disappears.


What AI cannot hold

AI can simulate responsiveness. It can mirror language. It can offer continuity.


What it cannot do is hold collective memory.


Communities remember who disappears. Elders carry the memory of past harm across time. Institutions, when they function, can be challenged or dismantled.


AI offers presence without consequence. That may be enough for coping, but it is not enough for care.


Rebuilding care in the present tense

So how do we refuse to let technology replace social infrastructures that took generations to build, without rejecting technology outright?


Consider what meaningful care infrastructure might look like:

  • Physical spaces (libraries, community centers, mutual aid networks) where intimacy exists without monetization or data extraction.
  • AI systems designed explicitly to route people toward human care, not replace it, with clear protocols for crisis intervention and transparent limitations.
  • Protection and resourcing of collective healing traditions rather than flattening them into "content" for training data.
  • Regulatory frameworks that treat emotionally responsive AI with the scrutiny we apply to other products that touch vulnerable populations.


The American Psychological Association is pushing for such frameworks. Utah launched an AI policy office proposing that licensed mental health providers must be involved in chatbot development. California passed SB 243, requiring companion chatbots to notify users every three hours that they are not human and to implement protocols for detecting suicidal ideation. Illinois banned AI from delivering therapy services altogether.


But regulation is only part of the answer. The deeper question is whether we will rebuild the communal practices that once distributed care across relationships, or whether we will continue outsourcing emotional labor to systems designed to maximize engagement, not healing.


Synthetic intimacy feels like a balm because it responds instantly, asks nothing, and never gets tired. Human care, in contrast, matters because it remembers, obligates us to each other, and endures beyond the moment of crisis.


If we do not rebuild the institutions and practices that once held us, AI will quietly redefine what we think care is supposed to feel like. And by the time we notice what's missing, nothing will technically be broken. The system will be functioning exactly as designed.
