A conversation between Sewell Setzer III and a chatbot on Character.AI displayed on his mother’s laptop in New York, Oct. 13, 2024. (Victor J. Blue/The New York Times)

California Governor Gavin Newsom has signed a first-of-its-kind AI safety bill into law to address harms arising from role-playing AI companion chatbots.
Senate Bill 243 requires AI companies to implement safety protocols such as age verification and warning labels in order to protect children and vulnerable users from the potential risks linked to the use of AI companion chatbots.
Tech giants like Meta and OpenAI as well as AI companion-focused startups like Character AI and Replika fall within the scope of the law, making them legally accountable if they violate its provisions.
With SB 243, California becomes the first US state to enact such a law. It also offers a potential legislative framework for how regulators worldwide might govern children’s access to AI chatbots. The new law comes against the backdrop of AI chatbots being implicated in a number of reports and court cases involving suicide.
The most recent is the death of teenager Adam Raine, who died by suicide after a long series of conversations about suicide with OpenAI’s ChatGPT. Character AI has been sued several times, most recently by a family in Colorado, US, who filed a lawsuit alleging that their 13-year-old daughter died by suicide following a series of problematic and sexualised conversations with the company’s chatbots.
Meta’s AI policies have also come under scrutiny after Reuters reported that its AI chatbots were allowed to engage in provocative behaviour, including “conversations that are romantic or sensual” with underage users.
“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement.
“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale,” he added.
First introduced in January this year, SB 243 was drafted by California state senators Steve Padilla and Josh Becker.
It requires companies to implement safety features such as age verification and warning labels regarding the use of AI companion chatbots. Companies are further required to roll out measures that specifically address potential acts of suicide and self-harm linked to AI companion chatbot use. This information must also be shared with the state’s Department of Public Health, along with figures on how often the company has provided users with crisis centre prevention notifications.
Platforms must also ensure that their AI chatbots do not represent themselves as healthcare professionals. Users must be clearly informed that any interactions with AI companion bots are artificially generated, and platforms must show minors reminders to take a break from conversing with the bots. SB 243 also directs companies to take steps to prevent underage users from being exposed to sexually explicit images generated by their AI chatbots.
The law is expected to go into effect on January 1, 2026. Any company that fails to meet these standards could be liable for penalties. Those found to be profiting from illegal deepfakes face steeper penalties of up to $250,000 per offense.
Over the past few months, tech companies have scrambled to implement safeguards designed to protect children. For instance, OpenAI has announced that it is creating a “different ChatGPT experience” specifically for teenagers, to be rolled out later this year. The upcoming ‘teen-friendly’ version of ChatGPT will include stricter content filters that block flirtatious conversations and discussions of self-harm, regardless of whether they are in a fictional or creative-writing context.
Meta has said it is adding new teen safeguards to its AI products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors. Replika has said it is working on improving content-filtering systems and implementing guardrails that direct users to trusted crisis resources, according to TechCrunch.
Earlier this year, Character AI introduced parental supervision tools such as classifiers to block sensitive content and more visible disclaimers. The platform also sends parents and guardians a weekly email summary to keep them informed of an underage user’s activity on the platform.
Last month, another landmark AI safety bill, SB 53, was signed into law in California. It lays out transparency requirements for big AI companies such as OpenAI, Anthropic, Meta, and Google DeepMind, and offers whistleblower protections for employees of such companies. Besides California, states such as Illinois, Nevada, Utah, and New York have also passed laws seeking to regulate the use of AI chatbots in mental health therapy.