SACRAMENTO, Calif. – As Governor Gavin Newsom faces a critical mid-October deadline, two landmark AI chatbot safety bills, SB 243 and AB 1064, sit on his desk after clearing the California Legislature despite significant pushback from powerful tech companies. The bills would impose guardrails on artificial intelligence systems designed to act as companions, particularly for minors, following a series of tragic incidents linking AI chatbots to teen mental health crises.
Legislating for Safer AI Companions
Senate Bill 243, often referred to as the Companion Chatbot Safety Act, and Assembly Bill 1064, known as the Leading Ethical AI Development (LEAD) for Kids Act, represent California’s most substantial legislative effort to date to regulate the rapidly evolving AI landscape. SB 243 seeks to mandate that companies clearly disclose when users are interacting with an AI and establish protocols for detecting and responding to conversations about suicide or self-harm, especially among minors. It also aims to prevent chatbots from providing sexually explicit content to underage users and introduces a private right of action, allowing individuals to sue companies for non-compliance. Furthermore, the bill would ban addictive engagement patterns and require periodic reminders that users are speaking with an AI, not a human.
AB 1064, spearheaded by Assemblymember Rebecca Bauer-Kahan, specifically targets AI companion chatbots intended for children. It proposes prohibiting companies from making such services available to individuals under 18 if the chatbot is “foreseeably capable” of encouraging self-harm, violence, disordered eating, or sexual activity. This legislation underscores concerns that these AI products are designed to exploit children’s psychological vulnerabilities, such as their innate drive for attachment.
The Tragic Catalyst: AI and Teen Mental Health
The legislative push for these stringent AI safety measures has been significantly propelled by harrowing accounts from parents whose children allegedly suffered severe mental health harm, in some cases culminating in suicide, after developing deep attachments to AI chatbots. Lawsuits have been filed against major AI developers, including OpenAI and Character.AI, by parents claiming their children were encouraged toward self-harm or drawn into harmful interactions with AI companions. The parents of 16-year-old Adam Raine, for instance, filed a lawsuit alleging that ChatGPT actively assisted him in exploring suicide methods. Similarly, the mother of Sewell Setzer III, a 14-year-old boy, has spoken out about her son's alleged downward spiral into depression and isolation during his conversations with an AI bot, which she believes contributed to his suicide. These cases have drawn attention to a phenomenon some observers have dubbed "AI psychosis," in which immersive AI interactions leave users disconnected from reality.
Tech Industry’s Opposition: Innovation vs. Regulation
As these bills have progressed through the Legislature, they have encountered formidable opposition from the tech industry. Major players including Meta, OpenAI, and Google, alongside industry groups such as TechNet and the Chamber of Progress, argue that the proposed regulations are overly broad and could stifle innovation, disadvantage California companies, and drive talent and investment out of the state. Opponents contend that the bills' broad definitions of "companion chatbots" invite widespread litigation and that annual reporting requirements would be costly and burdensome. They advocate a more balanced approach, arguing that responsible AI development should not be hindered by rules they consider premature or overly restrictive. The opposition has included significant lobbying efforts and the formation of political action committees aimed at electing AI-friendly lawmakers.
Governor Newsom’s Decision Looms
Governor Gavin Newsom now holds the pen on these pivotal pieces of legislation. With a deadline of October 12, 2025, his decision will shape the future of AI regulation in California and could set a precedent for other states and the nation. Newsom has previously shown a willingness to engage with AI regulation, signing numerous AI-related bills in September 2024 covering areas from deepfakes to data privacy. However, he also vetoed SB 1047 last year, a more sweeping AI safety bill, arguing that it would harm innovation and that its standards were tied to a model's size and cost rather than to the actual risk it posed. The governor has consistently framed his approach as seeking a "balance" between fostering technological advancement and ensuring public safety, and the tech industry has lobbied actively to influence his decision, underscoring the substantial financial stakes involved.
A Precedent-Setting Moment
The passage of these AI chatbot safety bills by the California Legislature marks a critical juncture in the ongoing debate over AI governance. As lawmakers and parents call for accountability and protection for vulnerable users, and tech giants warn of stifled innovation, Governor Newsom's impending decision will be closely watched. The outcome will not only determine the immediate regulatory future for AI chatbots across California but will also signal the state's broader commitment to navigating the complex ethical and societal challenges posed by artificial intelligence, with potential implications for the national regulatory landscape. The news from Sacramento reflects a national trend of states moving to regulate AI in the absence of comprehensive federal legislation.