• This Mother Says a Chatbot Led to Her Son’s Death

  • Jan 14, 2025
  • Length: 49 mins
  • Podcast

  • Summary

In February, 2024, Megan Garcia’s 14-year-old son Sewell took his own life. As she tried to make sense of what happened, Megan discovered that Sewell had fallen in love with a chatbot on Character.AI – an app where you can talk to chatbots designed to sound like historical figures or fictional characters. Now Megan is suing Character.AI, alleging that Sewell developed a “harmful dependency” on the chatbot that, coupled with a lack of safeguards, ultimately led to her son’s death. They’ve also named Google in the suit, alleging that the technology that underlies Character.AI was developed while the founders were working at Google.

I sat down with Megan Garcia and her lawyer, Meetali Jain, to talk about what happened to Sewell, and to try to understand the broader implications of a world where chatbots are becoming a part of our lives – and the lives of our children.

We reached out to Character.AI and Google about this story. Google did not respond to our request for comment by publication time. A spokesperson for Character.AI made the following statement:

“We do not comment on pending litigation. Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry. As part of this, we have launched a separate model for our teen users – with specific safety features that place more conservative limits on responses from the model.

The Character.AI experience begins with the Large Language Model that powers so many of our user and Character interactions. Conversations with Characters are driven by a proprietary model we continuously update and refine. For users under 18, we serve a version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This initiative – combined with the other techniques described below – combine to produce two distinct user experiences on the Character.AI platform: one for teens and one for adults.

Additional ways we have integrated safety across our platform include:

Model Outputs: A “classifier” is a method of distilling a content policy into a form used to identify potential policy violations. We employ classifiers to help us enforce our content policies and filter out sensitive content from the model’s responses. The under-18 model has additional and more conservative classifiers than the model for our adult users.

User Inputs: While much of our focus is on the model’s output, we also have controls to user inputs that seek to apply our content policies to conversations on Character.AI. This is critical because inappropriate user inputs are often what leads a language model to generate inappropriate outputs. For example, if we detect that a user has submitted content that violates our Terms of Service or Community Guidelines, that content will be blocked from the user’s conversation with the Character. We also have a process in place to suspend teens from accessing Character.AI if they repeatedly try to input prompts into the platform that violate our content policies.

Additionally, under-18 users are now only able to access a narrower set of searchable Characters on the platform. Filters have been applied to this set to remove Characters related to sensitive or mature topics.

We have also added a time spent notification and prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.

As we continue to invest in the platform, we will be rolling out several new features, including parental controls. For more information on these new features, please refer to the Character.AI blog HERE.

There is no ongoing relationship between Google and Character.AI. In August, 2024, Character.AI completed a one-time licensing of its technology and Noam went back to Google.”

If you or someone you know is thinking about suicide, support is available 24-7 by calling or texting 988, Canada’s national suicide prevention helpline.

Mentioned:
Megan Garcia v. Character Technologies, Et Al.
“Google Paid $2.7 Billion to Bring Back an AI Genius Who Quit in Frustration,” by Miles Kruppa and Lauren Thomas
“Belgian man dies by suicide following exchanges with chatbot,” by Lauren Walker
“Can AI Companions Cure Loneliness?,” Machines Like Us
“An AI companion suggested he kill his parents. Now his mom is suing,” by Nitasha Tiku

Further Reading:
“Can A.I. Be Blamed for a Teen’s Suicide?” by Kevin Roose
“Margrethe Vestager Fought Big Tech and Won. Her Next Target is AI,” Machines Like Us