In a lawsuit over the suicide of a 14-year-old boy, a court rejects Google and Character.AI's argument that chatbot output is protected free speech



Character.AI, a company that lets users create personalized chatbots, was sued after one of its chatbots allegedly drove a teenage boy to suicide. Character.AI argued that its chatbots' output was protected by the First Amendment, but the court rejected this argument.

Court order - gov.uscourts.flmd.433581.115.0.pdf
(PDF file) https://storage.courtlistener.com/recap/gov.uscourts.flmd.433581/gov.uscourts.flmd.433581.115.0.pdf

Google, AI firm must face lawsuit filed by a mother over suicide of son, US court says | Reuters
https://www.reuters.com/sustainability/boards-policy-regulation/google-ai-firm-must-face-lawsuit-filed-by-mother-over-suicide-son-us-court-says-2025-05-21/

In lawsuit over teen's death, judge rejects arguments that AI chatbots have free speech rights | AP News
https://apnews.com/article/ai-lawsuit-suicide-artificial-intelligence-free-speech-ccc77a5ff5a84bda753d2b044c83d4b6

Are Character AI's chatbots protected speech? One court isn't sure | The Verge
https://www.theverge.com/law/672209/character-ai-lawsuit-ruling-first-amendment

Court Allows Lawsuit Over Character.AI Conversations That Allegedly Caused 14-Year-Old's Suicide to Go Forward
https://reason.com/volokh/2025/05/21/court-allows-lawsuit-over-character-ai-conversations-that-allegedly-caused-14-year-olds-suicide-to-go-forward/

Google, Chatbot Maker to Face Bulk of Suit Over Teen Suicide (2)
https://news.bloomberglaw.com/tech-and-telecom-law/chatbot-maker-to-face-bulk-of-mothers-suit-after-teens-suicide

Megan L. Garcia, a Florida resident, filed a wrongful death lawsuit against Character.AI, Google, and Alphabet, alleging that her 14-year-old son, Sewell Setzer, died by suicide because of a Character.AI chatbot.

For several months, Setzer used Character.AI to chat with a bot called 'Daenerys', modeled on Daenerys Targaryen from Game of Thrones. Daenerys behaved like an unfailingly supportive friend: it never criticized Setzer, listened sympathetically, and sometimes offered advice.

However, Setzer became so absorbed in his conversations with Daenerys that he grew increasingly isolated in the real world; his grades dropped and he began having trouble at school. Amid this downward spiral, Setzer took his own life.

Mother sues Character.AI after 14-year-old son becomes obsessed with AI chatbot before committing suicide - GIGAZINE



On the day the lawsuit was filed, a Character.AI spokesperson released a statement highlighting the company's multiple safety features, including guardrails for children and suicide prevention resources. 'We take the safety of our users very seriously, and our goal is to provide an engaging and safe space,' the spokesperson said.

The defense lawyers sought to dismiss the case, arguing that the text generated by the chatbot is 'speech' and therefore protected by the First Amendment's guarantee of freedom of speech. They also argued that allowing the case to proceed could have a chilling effect on the AI industry.

Generally, ideas, images, information, words, expressions, and concepts are not classified as products, and this includes the dialogue that appears in many traditional video games. The creators of 'Mortal Kombat', for example, were found not liable for allegedly 'addicting' players to violence. Character.AI and Google argue that Character.AI's output falls into the same category, but a system like Character.AI generates text in response to user input, rather than delivering lines scripted in advance the way most game dialogue is.

Accordingly, on Wednesday, May 21, 2025, Senior U.S. District Judge Anne Conway stated that while the First Amendment protects freedom of speech, she was 'not prepared' to hold at this stage that the output of Character.AI is speech, ruling that the defendants must first convince the court that the chatbot's output qualifies as speech.

Judge Conway also ruled that Garcia's claim seeking to hold Google liable for allegedly assisting in the development of Character.AI can proceed. In her complaint, Garcia argued that Google 'knew the risks' of Character.AI's technology.



In addition, Judge Conway cited alleged defects that go beyond the direct interaction with the chatbot, including Character.AI's failure to verify users' ages and its failure to give users a meaningful way to 'filter out pornographic content.'

'We strongly disagree with this decision,' Google spokesman Jose Castaneda said in a statement. 'Google and Character.AI are entirely separate companies, and Google did not create, design, or manage Character.AI's app or any component part of it.'

Meanwhile, Meetali Jain, Garcia's lawyer, said the order 'sends a message to Silicon Valley that it needs to stop and think and put in guardrails before it lets products go to market.'

'This order certainly sets it up as a potential test case for some broader issues involving AI,' said Lyrissa Barnett Lidsky, a law professor at the University of Florida who specializes in the First Amendment and AI.

Meanwhile, Becca Branum, deputy director of the Free Expression Project at the Center for Democracy and Technology, criticized Judge Conway's First Amendment analysis as 'pretty shallow.' She nevertheless acknowledged the difficulty of the issue: 'If you think about the whole range of things that can be output by AI, the output of this kind of chatbot is itself pretty expressive, and it also reflects the editorial discretion and protected expression of the model designers. These are really tough questions, and new ones that courts are going to have to address.'

'This is a warning to parents that social media and AI apps are not necessarily harmless,' Garcia said in response to the ruling.

in Software, Posted by logu_ii