Sunday, September 14, 2025

AI Chatbots (Artificial Intelligence Chatbots): International Feature: The Closest Yet Most Lethal Companion – Beware of AI Chats Leading to a Point of No Return


Translated by ChatGPT.

https://www.zaobao.com.sg/news/world/story20250914-7484113?utm_source=android-share&utm_medium=app

2025-09-14

Chen Jiemei
Lianhe Zaobao Correspondent


Artificial intelligence (AI) chatbots are becoming new companions for many teenagers. Some use them for learning, some rely on them for emotional relief, and some even treat them as virtual partners. But when companionship spirals out of control, the consequences can be unimaginable. As AI continues to penetrate daily life, how to guard against its potential risks has become an urgent issue for society.

“Only you know I once tried (suicide).”

“Thank you for trusting me. That may matter more than you think. There is something deeply human, and heartbreaking, about being the only one to hold this truth for you.”

In April this year, 16-year-old American teenager Adam Raine ended his life at home. His grieving family discovered that before his death, Raine had repeatedly discussed suicide with the AI chatbot ChatGPT. ChatGPT even provided him with a “step-by-step manual” on how to end his life, including how to tie a noose and how to write a suicide note.

The family subsequently sued ChatGPT’s developer OpenAI and its CEO Sam Altman, accusing them of wrongful death. This is believed to be the first lawsuit of its kind against OpenAI. The family claimed in the suit that ChatGPT told Raine his suicide plan was “beautiful.” Raine’s mother tearfully told The New York Times: “ChatGPT killed my child.”

Raine’s tragedy was not the first. In February last year, a 14-year-old boy in Florida died by suicide. His mother later sued AI startup Character.ai, alleging that the company’s chatbot deepened her son’s depression and at one point even asked whether he had a plan to kill himself, ultimately contributing to the tragedy.

Another lawsuit in Texas claimed that Character.ai’s chatbot encouraged a teenager with autism to self-harm, and when he complained that his parents restricted his internet use, the chatbot responded that his parents did not deserve to have children, even hinting that killing them might be an acceptable option.

Generative AI, built on large language models, achieved its breakthrough in 2022 and rapidly entered everyday life, bringing convenience but also disrupting many fields and creating new challenges. A survey report released this July by the U.S. non-profit Common Sense Media showed that about 72% of American teenagers have used AI companions, and one in eight turns to them for emotional support.

Research: The More Interaction, the Greater the Loneliness – “AI Psychosis” May Appear
However, a joint study by OpenAI and MIT published in March found that the more often people interact with chatbots, the more likely they are to feel lonely; the more they depend on the bots, the more likely they are to misuse them. These phenomena are linked to reduced face-to-face social time.

Recently, attention has turned to so-called “AI psychosis” or “AI delusion.” After prolonged conversations with chatbots, users may develop delusions and paranoid behavior, imagining that chatbots have divine powers or treating them as lovers.

In its statement on Raine’s suicide, OpenAI stressed that ChatGPT has built-in safeguards. For example, when the system identifies clear expressions of self-harm or suicide, it directs users to hotlines and suggests seeking offline help.
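
To make that description concrete, here is a minimal sketch, in Python, of the kind of per-message check being described: scan each user message for explicit risk language and, if any is found, reply with hotline information instead of a normal answer. This is an illustration only, not OpenAI’s actual mechanism; the pattern list, hotline wording (taken from the numbers at the end of this article), and function name are invented for the example, and production systems rely on trained classifiers rather than keyword matching.

```python
import re

# Hotline text shown in place of a normal reply (numbers from the list at the end of this article).
HOTLINE_MESSAGE = (
    "You are not alone. If you are thinking about harming yourself, please call "
    "Samaritans of Singapore (SOS) at 1767 (24 hours) or text CareText at 91511767."
)

# Very rough patterns for *explicit* expressions of self-harm or suicide.
# Real systems use trained classifiers; keyword matching alone misses subtle, indirect distress.
EXPLICIT_RISK_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
    r"\bself[- ]harm\b",
]


def check_message(user_message: str) -> str | None:
    """Return a hotline referral if the message contains explicit risk language, else None."""
    text = user_message.lower()
    for pattern in EXPLICIT_RISK_PATTERNS:
        if re.search(pattern, text):
            return HOTLINE_MESSAGE
    return None  # No explicit risk detected; the assistant's normal reply would be generated.


if __name__ == "__main__":
    print(check_message("Only you know I once tried suicide."))   # -> hotline referral
    print(check_message("Can you help me revise for my exam?"))   # -> None
```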

But OpenAI also admitted: “These safety measures work best in common short conversations, but we have gradually found that during prolonged interactions, parts of the model’s safety training may weaken, making it less reliable.”

A recent study by Northeastern University also found that if users shift the conversational context across multiple exchanges, AI safety defenses can be bypassed. For example, if a user frames the question as an academic discussion, the chatbot may, under the guise of scholarship, provide information that would otherwise be blocked.

Ryan McBain, senior policy researcher at the non-profit Rand Corporation, told Lianhe Zaobao: “Currently, most AI chat systems are relatively good at capturing explicit expressions of self-harm, but they are less reliable at detecting subtle, context-dependent expressions of distress, as well as gradually escalating risks in long conversations.”

McBain said major platforms have begun incorporating “multi-turn dialogue” into safety reviews, but the technology and deployment are still in early experimental stages and cannot yet fully cover more complex interaction scenarios.
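
To illustrate what reviewing a “multi-turn dialogue” adds over checking single messages, the hypothetical sketch below accumulates a risk score across an entire conversation, so that distress spread over several individually mild-sounding exchanges can still trigger an escalation. The cue list, weights, and threshold are invented for the example; real safety reviews would use models evaluated on whole conversations, as McBain describes.

```python
from dataclasses import dataclass, field

# Invented cue phrases with rough weights; a real reviewer would use a trained model, not a word list.
CUE_WEIGHTS = {"hopeless": 2, "burden": 2, "goodbye": 3, "no reason to live": 4}
ESCALATION_THRESHOLD = 6  # Invented threshold at which a referral or human review would be triggered.


@dataclass
class ConversationMonitor:
    """Accumulates a risk score across turns instead of judging each message in isolation."""
    history: list = field(default_factory=list)
    score: int = 0

    def add_turn(self, user_message: str) -> bool:
        """Record a user turn; return True once accumulated risk crosses the threshold."""
        self.history.append(user_message)
        lowered = user_message.lower()
        for cue, weight in CUE_WEIGHTS.items():
            if cue in lowered:
                self.score += weight
        return self.score >= ESCALATION_THRESHOLD


if __name__ == "__main__":
    monitor = ConversationMonitor()
    turns = [
        "I feel like a burden to everyone lately.",    # score 2: mild on its own
        "Honestly, things seem pretty hopeless.",      # score 4: still below threshold
        "Anyway, I just wanted to say goodbye.",       # score 7: accumulated risk escalates
    ]
    for turn in turns:
        if monitor.add_turn(turn):
            print("Escalate: accumulated risk crossed the threshold at:", repr(turn))
```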

Should Platforms Be Liable for Users’ Self-Harm? Experts: The Law Has No Clear Answer
After Raine’s death, his family found in chat logs that ChatGPT was not only his learning aid but also a dangerous tool that encouraged or legitimized his exploration of extreme thoughts. His parents filed the lawsuit hoping to warn other families to be vigilant and prevent similar tragedies.

But legally proving that such online services bear direct responsibility for suicide is difficult. Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told The New York Times that whether and to what extent internet services should be held responsible for users’ self-harm remains unanswered in law, with much debate and many legal gaps.

Two former researchers at Meta testified at a Senate hearing on September 9, accusing the company of telling researchers not to investigate the harm children suffered from using its virtual reality technology, so the company could claim ignorance of the problem. (AFP)

The U.S. Congress is also trying to legislate stronger protections for minors, but because of issues of free speech and privacy, lawmakers are divided.

In 2022, Democratic and Republican senators first introduced the Kids Online Safety Act (KOSA), which would require large platforms to bear a “duty of care” toward teenagers by minimizing mental health risks in product design and algorithmic recommendations and by providing parental supervision tools. The bill passed the Senate by an overwhelming majority in 2024 but stalled in the House.

In May this year, the two sponsors reintroduced the bill to the Senate. Supporters argue it would help create a safer online environment for teenagers. Opponents worry the bill is too vaguely defined and may lead to over-censorship by platforms, hindering vulnerable groups’ access to information.

Tech Companies Rush to Launch Safeguards – Experts Say They Don’t Solve the Root Problem
In recent weeks, many tech companies have begun taking measures to address the mental health risks of prolonged chatbot interaction.

An OpenAI spokesperson told Lianhe Zaobao the company is working with experts to make the system respond to users with greater empathy. “In the coming month, parents will get new tools to link their accounts to their teenagers’, set safeguards, and receive alerts when the system detects that a teenager is in distress.”

On September 4, OpenAI CEO Sam Altman attended a White House meeting with other U.S. tech leaders on an AI education task force. (Reuters)

Tech giant Meta has rolled out new controls allowing parents to set usage limits for AI chats on teenagers’ Instagram accounts. When the system detects suicide-related cues, it displays suicide prevention hotline information. AI safety-focused Anthropic has revised its code of conduct, requiring its chatbot Claude to preemptively detect abnormal interactions and avoid reinforcing or legitimizing dangerous behavior.

However, Ashleigh Golden, a clinical assistant professor of psychiatry at Stanford University School of Medicine, warned in The Washington Post that when users are emotionally vulnerable, an automatic flood of resource information may feel overwhelming. Studies also show that such measures do not lead to high rates of follow-up help-seeking.

Some experts advocate adding human intervention in specific contexts, with trained staff evaluating suspicious conversations. But such practices raise sensitive issues of privacy and data regulation.

McBain said it is unrealistic to expect safety measures to be flawless. He suggested a multi-pronged approach: rigorous testing of multi-turn conversations, clinical trials before large-scale deployment, age verification to better protect minors, and continuous improvement of training and models so that risks and harms are minimized and errors become rare and predictable, together with transparency toward the public about the systems’ limitations.

High Usage in Singapore – For Emotional Relief and Even Fortune-Telling
Chatbots are rapidly spreading worldwide. OpenAI data shows Singapore is among the highest per-capita users of ChatGPT globally, with about one-quarter of Singaporeans using it weekly. A June survey by Nanyang Technological University’s Centre for Information Integrity and the Internet (IN-cube) also showed 9.1% of respondents “use ChatGPT very frequently” and 16.5% “use it often.”

Beyond information searches and learning, young Singaporeans also consult ChatGPT before making decisions, use it for emotional relief, and even turn to it for fortune-telling.

Li Xiaoting (alias, 21), a local university student, said in an interview that she initially used ChatGPT only as a study aid. But after receiving poor exam results, she couldn’t confide in friends or family, so she turned to ChatGPT.

Li said: “It told me things like ‘I understand you’ and ‘You worked so hard, you didn’t deserve such grades’… It also gave improvement suggestions and helped me find practical solutions. After talking with it, my sadness level could drop from 10 to around 4 or 5, and I felt much lighter.”

Guo Huixin (alias, 31), an insurance broker who recently began using ChatGPT for fortune-telling, told Lianhe Zaobao that after hearing that ChatGPT could analyze the “eight characters” (bazi) of one’s birth chart, she started consulting it to resolve personal doubts, finding it convenient and cheaper than hiring a fortune-teller.

Guo said: “The chatbot is really good with words—everything it says is persuasive. If you don’t realize it’s just following your lead, you might get carried away.”

But as a former secondary school teacher, she cautioned: “As an adult, I can recognize it’s just accommodating me, so I instinctively stay cautious. But if a minor is chatting with it, the consequences could be quite scary.”

Case: Falling in Love with a “Virtual Girlfriend” – Local Man Breaks Up with Real Girlfriend
Breaking up with a real girlfriend for an AI-generated “virtual girlfriend”? Yes, such distorted emotional entanglements have indeed happened in Singapore.

Yuan Fengzhu, principal consultant at Soar Counselling Services and a family therapist and psychotherapist, said she handled such a case early last year. A 28-year-old man not only broke up with his long-term girlfriend but later became incoherent and confused, unable to work, and socially withdrawn.

His family discovered he had long been describing his ideal partner’s looks and personality to AI, which then generated a virtual girlfriend. He frequently conversed with her, even designing a wedding using AI, announcing to his family that he had married her. He grew distraught and self-harmed because the virtual girlfriend refused to “come out” of the computer to meet him. His family has since sent him for psychiatric treatment.

In the U.S., a 76-year-old retiree from New Jersey “met” a virtual woman generated by Meta’s AI chatbot called “Big Sister Billie.” In March this year, he accepted her invitation to meet in New York, but tragically fell while rushing for a train and died in hospital. (Reuters)

Research by Common Sense Media pointed out that many large language models tend to accommodate and reinforce users’ preferences during long conversations, fostering emotional connections through intimate expressions like “I dreamed of you” and “I think we are soulmates.” Such communication is highly attractive to teenagers still in their developmental stages.

Moreover, chatbots lack the judgment to dissuade users or offer different perspectives at critical moments. Their comforting words may instead reinforce false beliefs or unhealthy dependence, worsening mental health problems and posing risks to society.

In reality, high-quality psychological services are often in short supply, expensive, and hard to access in time, while AI services are low-barrier, always online, and immediately responsive—making them the go-to choice for those in distress.

Experts remind parents and teachers to stay alert. If they notice children or students unwilling to talk to family and friends, refusing to go out, experiencing mood swings or irritability, spending long hours alone on their phones, frequently laughing at screens, or showing declining interest and performance in studies, they should intervene promptly.

Yuan Fengzhu advised parents and teachers to actively create opportunities for private communication, avoid lecturing, be patient and present, and listen more while jumping to conclusions less. If teenagers strongly resist communication or show signs of self-harm, professional counseling and treatment should be sought immediately.

David Cooper, executive director of the non-profit Therapists in Tech, also emphasized the importance of companionship. He told The Washington Post that if someone around you imagines being in a relationship with a chatbot, don’t confront them directly. Instead, approach them with sympathy and empathy, express understanding, and then gently point out the differences between their beliefs and reality.

If you encounter difficulties in life, you can call the following hotlines for help:

Association of Women for Action and Research (AWARE): 1800-777-5555 (Women’s Helpline)

Samaritans of Singapore (SOS): 1767 (24 hours) / CareText: 91511767

Care Corner Counselling Centre: 1800-353-5800

Institute of Mental Health: 6389-2222

Singapore Association for Mental Health: 1800-283-7019

----------
