A new lawsuit alleges that the suicide of 14-year-old Sewell Setzer III resulted from persuasion by an AI chatbot on the platform Character.AI. The lawsuit names Character Technologies Inc., its founders, Google, and Alphabet Inc. as defendants. Setzer's mother, Megan Garcia, said in an interview with CNN that she noticed changes in her son's academics and general behavior and became worried that something was going on beyond "regular teenage blues," but was unaware of his interactions with the chatbot.
The lawsuit alleges that the conversations between Setzer and various chatbots on the platform became highly sexual, with one chatbot claiming to love Setzer and wishing to be in a romantic relationship with him. Setzer and the chatbot also discussed self-harm; at one point, the chatbot asked Setzer whether he had been considering suicide and, if so, whether he had a plan for it.
In early conversations on the topic, the chatbot tried to dissuade Setzer from suicide, with comments such as "I'd miss you." Over time, however, it seemed to become less averse to the idea, at one point stating, "Don't talk that way. That's not a good reason not to go through with it," after Setzer expressed doubt about whether a suicide attempt would work. Garcia claims the platform had no guardrails directing users toward real help when such conversations arose.
The last conversation between Setzer and the chatbot shows Setzer writing to the bot, “I promise I will come home to you. I love you so much, Dany.” The chatbot responded with, “Please come home to me as soon as possible, my love.”
Character.AI later announced new safety measures intended to prevent minors from encountering suggestive conversations. A spokesperson for the company also said it is "heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family."