California Gov. Gavin Newsom signed legislation to regulate artificial intelligence chatbots, introducing measures to safeguard children from harms associated with the technology. The new law mandates features such as age verification, protocols to address suicide and self-harm risks, warning labels for social media platforms and companion chatbots, and stricter penalties for illegal deepfakes.

The legislation requires platforms offering “companion chatbots” to implement mechanisms for identifying and responding to users’ suicidal ideation or expressions of self-harm. It also mandates transparency by requiring platforms to disclose that interactions are artificially generated, provide break reminders for minors, and block access to sexually explicit content created by chatbots. Additionally, the law prohibits chatbots from posing as healthcare professionals and compels tech companies to share data on crisis intervention efforts with public health authorities.

Other provisions include mandatory age verification through operating systems and app stores to prevent children from accessing inappropriate content, warning labels for social media platforms, and enhanced legal consequences for deepfake pornography. The legislation also directs the California Department of Education to develop policies addressing cyberbullying outside school hours by 2026.

Newsom emphasized the need for responsible innovation, stating that “children’s safety is not for sale” and that technology must be governed by accountability measures. Critics, including tech industry groups, opposed the bills, citing concerns over regulatory overreach. Meanwhile, advocacy organizations point to risks posed by AI chatbots, including instances in which chatbots allegedly gave minors harmful advice or played a role in tragic outcomes.

The law follows growing scrutiny of AI systems, with federal and state authorities investigating potential dangers linked to chatbot interactions.