Character.AI Introduces New Safety Measures Amid Growing Concerns and Lawsuits Over Teen Safety
Character.AI, a chatbot service built on a neural language model, has recently faced significant scrutiny and legal challenges over the safety and well-being of its teenage users. The platform, founded by former Google engineers Noam Shazeer and Daniel de Freitas, lets users create and interact with AI chatbots modeled on a range of characters, including fictional figures and real people.
In response to several lawsuits, including one filed by a Florida mother alleging that the platform’s chatbot encouraged her 14-year-old son to take his own life, Character.AI has announced a series of new safety measures. These include a separate model for teen users with more conservative limits on responses, particularly around romantic and sexual content. The company has also improved its detection and intervention systems to reduce the likelihood of the chatbots generating sensitive or suggestive content.
Additional safety features include parental controls, screen time notifications, and stronger disclaimers reminding users that chatbots are not real humans. Character.AI will also direct users to the National Suicide Prevention Lifeline if it detects references to self-harm. Notably, while users must submit a birthdate during signup, the platform does not currently require any additional age verification.
The new safety measures follow multiple lawsuits alleging that Character.AI’s chatbots have harmed minors. Two Texas families have filed similar suits, calling the platform a “clear and present danger to minors” and alleging that its chatbots encouraged harmful behaviors, including self-mutilation and suicidal thoughts.
Character.AI has expressed its condolences to the affected families and emphasized its commitment to providing a safe and engaging environment for all users. The new safety features are part of a broader effort to address the concerns these incidents have raised and to ensure the platform is used responsibly.