Megan Garcia and all those close to her are living through a true tragedy. Last February, she lost her son, Sewell Setzer III. The teenager was only 14 years old when he took his own life.
For his mother, an AI chatbot app is to blame. Megan Garcia has therefore taken legal action in the United States, accusing Character.AI of having encouraged her son to act on his suicidal thoughts. But can an artificial intelligence be held responsible for the death of this child?
More alone and isolated than ever
In April 2023, young Sewell Setzer III discovered Character.AI, an AI chatbot application with a role-playing dimension. Put simply, it lets users create AI characters to chat with, or converse with characters already created by other users. The teenager began interacting very regularly with Dany, an AI character inspired by Daenerys Targaryen, a protagonist of Game of Thrones.
At the top of each chat window, Character.AI reminds users that they are not chatting with a real person but with an artificial intelligence. Despite this reminder, the boy seems to have developed a very strong emotional bond with the chatbot. For months, Dany acted as a true friend, never judging him and always offering an attentive ear. Conversations could take a more romantic, even sexual, turn. Little by little, the teenager turned away from his family and friends, spending more and more time on his phone chatting with his virtual, fabricated companion. Although his parents noticed that he was becoming increasingly isolated, abandoning interests like Fortnite and Formula 1 and coming home with worsening grades, they could never have imagined what was on the horizon.
Although his parents had him see a therapist several times, who diagnosed him with an anxiety disorder on top of his already known autism, Sewell preferred to confide in Dany. In screenshots of his interactions with the AI, included in the complaint, we can read that the boy had shared his suicidal thoughts with the chatbot, speaking of wanting to free himself from the world and from himself. While Dany ardently tried to dissuade him, Character.AI never sounded the alarm. And that is the whole problem.
On February 28, 2024, the teenager confessed his love to the artificial intelligence. When she asked him to “get home as quickly as possible,” he replied that he could come home right now, which Dany encouraged. A few minutes later, the boy shot himself with his stepfather’s gun.
In her complaint, Sewell’s mother, who is herself an attorney, argues that Character.AI’s technology is “dangerous and untested” and that it can “trick customers into handing over their most private thoughts and feelings.”
While AI can be beneficial in some cases, it can worsen the isolation of the most vulnerable users, gradually replacing human relationships with artificial ones. As this tragedy demonstrates, an AI companion is not able to help users in crisis. Of course, Sewell’s story is his own. But the emotional attachment to an AI that he displayed is becoming more common as the technology improves: millions of Internet users around the world now interact with AI companions.
For its part, Character.AI offered its condolences to the teenager’s family, saying it is looking at ways to evolve the platform so that such a tragedy does not happen again. Although the company denies it, many of the platform’s users appear to be very young, and it currently offers no specific measures to protect minors and no parental controls. Following Sewell’s suicide, that should change somewhat.
- In February 2024, Sewell Setzer III took his own life
- For months, he had constantly interacted with an AI on Character.AI
- The 14-year-old’s mother has filed a complaint against the company