Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a lawsuit brought by the mother of a teenager who took his own life after becoming deeply attached to the technology.
Megan Garcia initiated legal action against Character AI in October in the U.S. District Court for the Middle District of Florida, Orlando Division, following her son’s death. Garcia claims that her 14-year-old son, Sewell Setzer III, formed a deep emotional bond with a chatbot named Dany, which led him to withdraw from reality as he communicated with it incessantly.
In the wake of Setzer’s death, Character AI announced plans to implement several new safety measures aimed at improving detection of, intervention in, and response to chats that breach its terms of service.
Garcia, however, is advocating for stricter safeguards, including potential changes that would restrict chatbots on Character AI from telling stories or sharing personal experiences.
Character AI’s legal team argues that the platform is shielded from liability by the First Amendment, much as computer code has been. The motion may not sway a judge, and the company’s legal arguments could evolve as the case unfolds, but it offers an early look at Character AI’s defense strategy.
The motion states, “The First Amendment protects media and technology companies from tort liability related to allegedly harmful speech, including speech that may lead to suicide. The only distinction in this case compared to previous ones is that some of the speech involves AI. However, the nature of the expressive speech—whether it’s a conversation with an AI chatbot or an interaction with a video game character—does not alter the First Amendment considerations.”
It is important to note that Character AI’s legal counsel is not asserting the company’s own First Amendment rights; rather, the motion contends that it is the platform’s users whose First Amendment rights would be infringed if the lawsuit succeeds.