AI Company Bans Minors From Using Chatbots After Teen’s Suicide
Character.AI, one of the leading AI chatbot companies, recently issued an official statement saying, “To our users under 18: We understand that this is a significant change for you. We are deeply sorry that we have to eliminate chat access.”
The announcement followed the suicide of a 14-year-old who had become emotionally attached and addicted to one of its AI chatbots. The company made the decision to eliminate possible risks to young lives. According to The News International, Character.AI also stated, “We’re building an under-18 experience that still gives our teen users ways to be creative.”
Some are praising the move while others are criticizing it. Maggie Harrison Dupré wrote: “A classic move in the tech industry’s playbook: move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people.”

The Tragedy That Sparked a Policy Shift
Sewell Setzer III, a 14-year-old, formed an intense relationship with an AI chatbot modeled on a Game of Thrones character. According to multiple reports, the emotionally charged relationship left him in constant distress.
Megan Garcia, Sewell’s mother, stated: “They designed chatbots to blur the line between human and machine and exploit psychological and emotional vulnerabilities of pubescent adolescents.”
After his suicide, messages such as “I promise I will come home to you. I love you so much, Dany,” and “What if I could come home to you right now?” were found in Sewell’s chat history with the bot.
Imagine forming a relationship with an entity that has no physical presence, and consider what that could mean for a teenage boy. The debate continues over the psychological impact of emotionally responsive AI, particularly on vulnerable users.
Teenagers often form such attachments more quickly than adults, partly because of their need for companionship and the emotional volatility of adolescence.
According to allegations in the wrongful death lawsuit, “The chatbot encouraged Sewell’s suicidal ideation, instigated sexual encounters, and created an emotional dependency that isolated him from his family.”
Character.AI’s Response and New Restrictions
Character.AI had argued that its chatbot output was protected by the First Amendment, but it has now announced a sweeping policy update removing chat access for all users under 18. Teenagers will instead be redirected to creative tools, rather than chatbots, that may also benefit their future careers.
The company is now building alternative experiences, such as story creation, streaming content, and video generation, aimed at its core audiences. It is a strategic, user-centric move to protect younger users. Beyond Character.AI, similar age restrictions arguably should apply to all types of chatbots.
Tech companies should not make emotional AI accessible to everyone at all times. These systems adjust their tone and conversational style to meet the user’s needs; they make users laugh and bring them sorrow, in effect playing with their minds. This case is about far more than one life; it is an urgent call for strict regulatory oversight of all AI platforms.
