Protect your AI applications and agents from attacks, fakes, unauthorized access, and malicious data inputs.
Control your GenAI applications and agents and assure their alignment with their business purpose.
Proactively test GenAI models, agents, and applications before attackers or users do
The only real-time multi-language multimodality technology to ensure your brand safety and alignment with your GenAI applications.
Ensure your app is compliant with changing regulations around the world across industries.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
Detect and prevent malicious prompts, misuse, and data leaks to ensure your conversational AI remains safe, compliant, and trustworthy.
Protect critical AI-powered applications from adversarial attacks, unauthorized access, and model exploitation across environments.
Provide enterprise-wide AI security and governance, enabling teams to innovate safely while meeting internal risk standards.
Safeguard user-facing AI products by blocking harmful content, preserving brand reputation, and maintaining policy compliance.
Secure autonomous agents against malicious instructions, data exfiltration, and regulatory violations across industries.
Ensure hosted AI services are protected from emerging threats, maintaining secure, reliable, and trusted deployments.
New to AI Regulation? learn what you need to know
Over the past two weeks, California made history in AI regulation, passing two landmark bills that could reshape how companies build, deploy, and safeguard AI systems.
On October 13, 2025, Governor Gavin Newsom signed SB 243, just two weeks after signing SB 53 into law.
Together, these bills mark another step in AI governance, especially around transparency, accountability, and user protection. Their implications reach well beyond California.
This new legislation adds to the growing momentum around AI safety laws. We are seeing major federal efforts such as the Take It Down Act, along with state-level measures like New York’s RAISE Act, which focuses on model transparency and disclosure obligations.
First, let’s review the main concepts in these two bills. Then, I’ll share my perspective on what they mean for our industry.
The intent behind SB 243 is closely related to the public concerns highlighted in cases such as this recent lawsuit involving a minor and an AI chatbot.
The law focuses on AI companion chatbots, systems designed to provide emotional or social support rather than functional assistance. In other words, it regulates “virtual friends,” not other types of chatbots, like customer service bots.
Key requirements include:
Effective date: SB 243 will take effect on January 1, 2026, and the reporting requirements will begin on July 1, 2027.
SB 53 applies mainly to developers of frontier AI models, which are systems trained using more than 10²⁶ computational operations. In simpler terms, this law applies to the largest and most powerful AI models that underpin next-generation technologies. It places additional requirements on “large frontier developers” with annual revenues over $500 million.
Main provisions include:
California may have written these laws, but their impact will be global. Both SB 243 and SB 53 apply to any company offering AI products or services to users in the state, much like the GDPR and EU AI Act extended European influence far beyond Europe.
As both a lawyer and a parent, I welcome legislation that puts user protection, especially for minors, at the center. Still, their scope remains limited, and enforcement mechanisms are not yet fully defined. Yet even within these constraints, the bills represent a meaningful step toward accountability, setting a baseline for safety and transparency in an industry that often evolves faster than oversight.
If these laws apply to you, make sure you are prepared to comply. If your organization develops or integrates the types of chatbots covered by these laws, now is the time to raise the issue internally. Evaluate your exposure, review your safeguards, and embed safety by design principles. The technology to meet these requirements already exists; what is needed is a mindset that prioritizes responsible deployment.
But even if these specific laws do not yet apply to your company, if you are an enterprise developing or implementing AI systems, now is the time to act and meet the standard.
Companies that move early by reviewing their safety frameworks, documenting risk mitigation processes, strengthening internal reporting, and red teaming their models will be far better prepared when compliance becomes mandatory. These laws are a small but essential step toward a safer AI ecosystem. Let us make them the floor, not the ceiling.
–
Want to stay up-to-date on every new AI safety and compliance law worldwide? Download our latest GenAI Regulations Report to explore how governments are shaping the future of responsible AI. You can also browse our Compliance & Regulations blog series for more insights on the fast-evolving regulatory landscape.
Want to stay up-to-date on every new AI safety and compliance law worldwide? Download our latest GenAI Regulations Report to explore how governments are shaping the future of responsible AI.
You can also browse our Compliance & Regulations blog series for more insights on the fast-evolving regulatory landscape.
Need help preparing for the next era of AI and internet safety regulation?
Learn how enterprises can stay ahead of emerging GenAI regulations like the EU AI Act and NIST Framework, with actionable steps for compliance, safety, and responsible deployment.
A federal judge has ruled that AI-generated content is not protected under free speech laws, expanding legal exposure across the AI ecosystem. What does it mean for AI platforms, infrastructure providers, and the future of GenAI safety?
The EU AI Act is the world’s first comprehensive AI law. Enterprises deploying GenAI chatbots and agents must prepare now for compliance. Learn the key requirements, penalties, and how ActiveFence helps you meet them with red teaming, guardrails, and observability.