Senator Padilla's chatbot safety legislation establishes new requirements for artificial intelligence platforms that serve California's minor users, focusing on preventing manipulative engagement tactics and monitoring mental health risks. The bill defines chatbots as AI-enabled characters that have distinct personalities and are capable of open-ended dialogue, and it sets parameters for how these platforms may interact with young users.
Under the proposed law, chatbot operators must implement safeguards preventing their platforms from using unpredictable reward systems or other techniques designed to increase minor users' engagement. The legislation requires periodic notifications reminding users that they are interacting with artificial intelligence rather than a human. Operators must also explicitly inform minor users and their parents or guardians that chatbot interactions may be unsuitable for some young people.
The bill institutes mandatory annual reporting to the State Department of Health Care Services on incidents involving minor users' mental health, including detected cases of suicidal ideation, suicide attempts, and fatalities. These reports must exclude personally identifiable information while tracking both concerns expressed by users and instances where chatbots initiated discussions about suicide. To verify compliance with these provisions, operators must undergo regular third-party audits of their platforms and practices.
This measure adds to existing social media safety requirements in California law, which currently mandate mechanisms for reporting cyberbullying and content that violates platform terms of service. The new provisions create parallel protections specifically for AI-enabled chat environments where minors participate.