California’s New AI Laws: What SB 243 and SB 53 Mean for Safety and Accountability

October 19, 2025

Over the past two weeks, California made history in AI regulation, passing two landmark bills that could reshape how companies build, deploy, and safeguard AI systems.

On October 13, 2025, Governor Gavin Newsom signed SB 243, just two weeks after signing SB 53 into law.

Together, these laws mark a significant step in AI governance, particularly around transparency, accountability, and user protection. Their implications reach well beyond California.

This new legislation adds to the growing momentum around AI safety laws. We are seeing major federal efforts such as the Take It Down Act, along with state-level measures like New York’s RAISE Act, which focuses on model transparency and disclosure obligations.

First, let’s review the main concepts in these two bills. Then, I’ll share my perspective on what they mean for our industry.

SB 243: Regulating “AI Companions”

The intent behind SB 243 is closely related to the public concerns highlighted in cases such as this recent lawsuit involving a minor and an AI chatbot.

The law focuses on AI companion chatbots, systems designed to provide emotional or social support rather than functional assistance. In other words, it regulates “virtual friends,” not other types of chatbots, like customer service bots.

Key requirements include:

  • AI disclosure: Chatbots must clearly state that they are not human and, for known minors, repeat that reminder every three hours (see the sketch after this list).
  • Harm prevention: Operators are required to have protocols to prevent the chatbot from encouraging self-harm or suicide and must refer users to crisis services when necessary. They must also take reasonable measures to prevent the generation of sexually explicit content for minors.
  • Annual reporting: Beginning July 1, 2027, companies must file yearly reports with California’s Office of Suicide Prevention that describe their crisis-detection and intervention protocols.
  • Enforcement: The law also establishes a private right of action, allowing individuals who suffer harm from non-compliance to file a civil lawsuit seeking injunctive relief and damages of either $1,000 per violation or actual damages, whichever is greater. 

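To make the disclosure cadence concrete, here is a minimal sketch of how an operator might track the three-hour reminder. The interval comes from the statute; the session model, class, and method names are hypothetical illustrations, not a real API.

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=3)  # interval set by SB 243 for known minors
DISCLOSURE = "Reminder: I am an AI chatbot, not a human."

class CompanionSession:
    """Tracks when the AI disclosure was last shown to a user (illustrative)."""

    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_disclosure: datetime | None = None

    def maybe_disclose(self, now: datetime) -> str | None:
        """Return the disclosure text if it is due on this turn, else None."""
        if self.last_disclosure is None:
            # Every user gets an initial, clear AI disclosure.
            self.last_disclosure = now
            return DISCLOSURE
        if self.user_is_minor and now - self.last_disclosure >= REMINDER_INTERVAL:
            # Known minors get the reminder repeated every three hours.
            self.last_disclosure = now
            return DISCLOSURE
        return None
```

In a setup like this, the operator would call maybe_disclose() on each conversational turn and prepend any returned text to the chatbot’s reply.
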
Effective date: SB 243 will take effect on January 1, 2026, and the reporting requirements will begin on July 1, 2027.

SB 53: Transparency in Frontier AI

SB 53 applies mainly to developers of frontier AI models, which are systems trained using more than 10²⁶ computational operations. In simpler terms, this law applies to the largest and most powerful AI models that underpin next-generation technologies. It places additional requirements on “large frontier developers” with annual revenues over $500 million.
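
To put the 10²⁶ figure in perspective, here is a rough back-of-envelope calculation. The per-accelerator throughput and cluster size below are illustrative assumptions, not values from the statute.

```python
# Rough scale check for SB 53's 10^26-operation threshold (illustrative only).
threshold_ops = 1e26

# Assumption: one modern training accelerator sustains roughly 1e15 FLOP/s
# of usable throughput; real figures vary widely by hardware and precision.
ops_per_accelerator = 1e15
cluster_size = 10_000  # assumed accelerator count

cluster_throughput = ops_per_accelerator * cluster_size  # 1e19 FLOP/s
days_needed = threshold_ops / cluster_throughput / 86_400
print(f"~{days_needed:.0f} days of continuous training")  # ~116 days
```

Under these assumptions, crossing the threshold takes months of continuous training on a ten-thousand-accelerator cluster, which is why the law reaches only a handful of frontier developers.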

Main provisions include:

  • Public risk frameworks: Developers must publicly disclose how they identify and mitigate catastrophic risks (similar to the EU AI Act’s requirements). 
  • Incident reporting: Any “critical safety incident” must be reported to California’s Office of Emergency Services within 15 days, or within 24 hours if lives are at risk (see the sketch after this list).
  • Whistleblower protections: Employees can raise safety concerns without fear of retaliation.
  • Penalties: Violators can face civil fines of up to $1 million per violation.

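As a small illustration of the two reporting windows, the sketch below computes the notification deadline. The deadlines come from the bill; the function name and severity flag are hypothetical.

```python
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime,
                       imminent_risk_to_life: bool) -> datetime:
    """Deadline for notifying California's Office of Emergency Services.

    Windows per SB 53: 24 hours when there is imminent risk to life,
    otherwise 15 days from discovery of the critical safety incident.
    """
    window = timedelta(hours=24) if imminent_risk_to_life else timedelta(days=15)
    return discovered_at + window
```
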
Some Thoughts on the New Regulation

California may have written these laws, but their impact will be global. Both SB 243 and SB 53 apply to any company offering AI products or services to users in the state, much like the GDPR and EU AI Act extended European influence far beyond Europe.

As both a lawyer and a parent, I welcome legislation that puts user protection, especially for minors, at the center. Their scope remains limited, and enforcement mechanisms are not yet fully defined. Even within these constraints, though, the bills represent a meaningful step toward accountability, setting a baseline for safety and transparency in an industry that often evolves faster than oversight.

If these laws apply to you, make sure you are prepared to comply. If your organization develops or integrates the types of chatbots they cover, now is the time to raise the issue internally. Evaluate your exposure, review your safeguards, and embed safety-by-design principles. The technology to meet these requirements already exists; what is needed is a mindset that prioritizes responsible deployment.

Even if these specific laws do not yet apply to your company, any enterprise developing or deploying AI systems should act now and meet the same standard.

Companies that move early by reviewing their safety frameworks, documenting risk-mitigation processes, strengthening internal reporting, and red-teaming their models will be far better prepared when compliance becomes mandatory. These laws are a small but essential step toward a safer AI ecosystem. Let us make them the floor, not the ceiling.

Want to stay up to date on every new AI safety and compliance law worldwide? Download our latest GenAI Regulations Report to explore how governments are shaping the future of responsible AI.

You can also browse our Compliance & Regulations blog series for more insights on the fast-evolving regulatory landscape.

Need help preparing for the next era of AI and internet safety regulation?

Talk to an expert