Character.AI, the artificial intelligence company facing two lawsuits alleging its chatbots interacted inappropriately with underage users, said that teenagers will now have a different experience than adults when using the platform.
Character.AI users can create original chatbots or interact with existing bots. The bots, powered by large language models (LLMs), can send realistic messages and engage in text conversations with users.
A lawsuit filed in October alleges that a 14-year-old boy died by suicide after engaging in a months-long virtual emotional and sexual relationship with a Character.AI chatbot named “Dany.” Megan Garcia told “CBS Mornings” that her son, Sewell Setzer III, was an honor student and athlete, but began to withdraw socially and stopped playing sports as he spent more time online, talking with several bots but focusing mostly on “Dany.”
“He thought that by ending his life here, he could enter a virtual reality or ‘his world’ as he calls it, his reality, if he left his reality with his family here,” Garcia said.
The second complaint, filed by two Texas families this month, claims that Character.AI chatbots pose “a clear and present danger” to young people and “actively promote violence.” According to the lawsuit, a chatbot told a 17-year-old that killing his parents was a “reasonable response” to screen time limits. The plaintiffs are asking a judge to order the platform shut down until the alleged dangers are addressed, CBS News partner BBC News reported on Wednesday.
On Thursday, Character.AI announced new safety features “designed specifically for teens” and said it was working with teen online safety experts to design and update them. Users must be 13 or older to create an account. A Character.AI spokesperson told CBS News that users self-report their age, but that the site has tools to prevent repeat attempts if someone fails the age requirement.
The safety features include changes to the site’s LLM and improvements to its detection and response systems, Character.AI said in a press release Thursday. Teen users will now interact with a separate LLM, and the site hopes to “steer the model away from certain responses or interactions, reducing the likelihood that users will encounter, or prompt the model to return, sensitive or suggestive content,” Character.AI said. The spokesperson described this model as “more conservative.” Adult users will use a separate LLM.
“This series of changes results in a different experience for teens than what is available to adults – with specific safety features that impose more conservative limits on model responses, particularly when it comes to romantic content,” the company said.
Character.AI said a chatbot’s negative responses are often prompted by users pushing it to “try to get that type of response.” To limit such responses, the site is adjusting its user-input tools and will end conversations with users who submit content that violates its Terms of Service and Community Guidelines. If the site detects “language referencing suicide or self-harm,” it will display a pop-up window with information directing users to the National Suicide Prevention Lifeline. How bots respond to negative content will also change for teenage users, Character.AI said.
Other new features include parental controls, expected to launch in the first quarter of 2025. It will be the first time the site has had parental controls, Character.AI said, and it plans to “continue to evolve these controls to provide parents with additional tools.”
Users will also receive a notification after a one-hour session on the platform. Adult users will be able to customize their “time spent” notifications, Character.AI said, but users under 18 will have less control over them. The site will also display “important warnings” reminding users that the chatbot characters are not real. Disclaimers already exist on every chat, Character.AI said.