Protect your AI applications and agents from attacks, fakes, unauthorized access, and malicious data inputs.
Control your GenAI applications and agents and assure their alignment with their business purpose.
Proactively test GenAI models, agents, and applications before attackers or users do
The only real-time multi-language multimodality technology to ensure your brand safety and alignment with your GenAI applications.
Ensure your app is compliant with changing regulations around the world across industries.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
Detect and prevent malicious prompts, misuse, and data leaks to ensure your conversational AI remains safe, compliant, and trustworthy.
Protect critical AI-powered applications from adversarial attacks, unauthorized access, and model exploitation across environments.
Provide enterprise-wide AI security and governance, enabling teams to innovate safely while meeting internal risk standards.
Safeguard user-facing AI products by blocking harmful content, preserving brand reputation, and maintaining policy compliance.
Secure autonomous agents against malicious instructions, data exfiltration, and regulatory violations across industries.
Ensure hosted AI services are protected from emerging threats, maintaining secure, reliable, and trusted deployments.
New to AI Regulation? learn what you need to know
On January 1, 2026, two new California laws will come into force, and with them, a quiet but meaningful shift in how AI systems are expected to behave in the real world.
The state-level SB 243 and AB 489 will become enforceable. These are not policy statements or future frameworks. They impose concrete, enforceable expectations on product behavior that apply de facto to any AI system accessible to users in California, regardless of where the company is headquartered.
For years, AI governance has lived mostly in whitepapers, voluntary standards, and internal risk frameworks. Now, California law moves AI safety out of the abstract and into production. What your system says, how it presents itself, and how it responds to vulnerable users are no longer just design choices. They are legal obligations.
First, let’s review the main concepts in these two bills. Then, I’ll share my perspective on what they mean for our industry.
The intent behind SB 243 is closely related to the public concerns highlighted in cases such as this recent lawsuit involving a minor and an AI chatbot.
SB 243 responds to a growing category of AI systems designed for emotional or social interaction. These are not customer service bots or productivity tools. They are systems that present themselves as companions, confidants, or sources of support.
The law reflects concerns highlighted in recent public cases involving minors and AI chatbots, where users formed emotional reliance on systems that were never designed to manage crisis-level situations.
SB 243 focuses specifically on AI companion chatbots, meaning systems built to meet emotional or social needs. Its key requirements include:
SB 243 makes one thing clear: if an AI system is designed to feel human, it must also be designed to protect humans.
AB 489 addresses a subtle but increasingly common problem in AI-powered health and wellness tools: systems that don’t claim to be medical professionals, yet communicate in ways that feel clinically authoritative.
As AI becomes more embedded in wellness apps, symptom checkers, and health-adjacent chatbots, many systems use confident language, medical terminology, and reassuring design cues to appear helpful. In practice, users often interpret this as expertise, regardless of disclaimers buried in the interface.
Beginning Jan. 1, 2026, AB 489 restricts how AI systems operating in healthcare or wellness contexts can present themselves. These systems may not:
Importantly, enforcement is not limited to consumer protection authorities. Professional licensing boards are empowered to act, and each misleading interaction may be treated as a separate violation.
For teams building health-related AI products, AB 489 turns a long-standing design tension into a compliance issue. Helpful guidance must now be clearly distinguishable from medical advice, and product teams will need to be deliberate about how tone, terminology, and interface choices shape user perception.
What makes SB 243 and AB 489 especially significant is not just what they regulate, but when.
These laws take effect on January 1, 2026. That leaves no room for long-term roadmaps or phased interpretations. AI governance is no longer a future-state discussion mapped through frameworks like the NIST AI Risk Management Framework. It is a production deadline.
For AI builders, like engineers, product managers, and founders, this is not merely a legal concern. It is a fundamental shift in product requirements:
These are now compliance issues.
All of this is unfolding against a growing political tension at the federal level.
On December 11, 2025, President Trump signed an Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order frames state-level AI regulation as a potential obstacle to American AI competitiveness and directs federal agencies to scrutinize and, where appropriate, challenge what it characterizes as “onerous” state requirements, particularly those that could constrain model outputs.
The result is a form of product limbo. At the federal level, the signal is to move fast and minimize friction. At the state level, California is making clear that if AI systems cause harm (especially to minors or patients), there will be consequences.
For product teams, the operational reality is straightforward: executive orders do not override state law. Unless and until a federal court invalidates California’s statutes, January 1, 2026 remains the compliance deadline for any AI system accessible to users in California.
California may have written these laws, but their reach is far broader. Like the GDPR and the EU AI Act, they apply to any company offering AI services to users in California, regardless of geography. In practice, they set expectations that will influence product design far beyond state borders.
As both a lawyer and a parent, I welcome legislation that puts user protection, especially for minors, at the center. At the same time, these laws are not perfect. Their scope is limited, and many enforcement details will only become clear through practice. But within those constraints, they represent a meaningful shift: safety and accountability are no longer optional principles. They are becoming enforceable design standards.
The message across both laws is consistent. If an AI system mimics human interaction, it must clearly disclose what it is. If it operates in emotionally sensitive contexts, it must intervene to prevent harm. If it touches health or wellness, it must avoid implying expertise it does not have.
In other words, trust is no longer an outcome to hope for. It is a feature that must be built.
Companies that act early by reviewing safety frameworks, documenting risk mitigation, strengthening internal escalation paths, and stress-testing user-facing behaviors will be far better positioned—not just to comply, but to earn and sustain user trust. These laws are not the ceiling for responsible AI. They are the floor.
–
Want to stay up-to-date on every new AI safety and compliance law worldwide? Download our latest GenAI Regulations Report to explore how governments are shaping the future of responsible AI. You can also browse our Compliance & Regulations blog series for more insights on the fast-evolving regulatory landscape.
Want to stay up-to-date on every new AI safety and compliance law worldwide? Download our latest GenAI Regulations Report to explore how governments are shaping the future of responsible AI.
You can also browse our Compliance & Regulations blog series for more insights on the fast-evolving regulatory landscape.
Need help preparing for the next era of AI and internet safety regulation?
Learn how enterprises can stay ahead of emerging GenAI regulations like the EU AI Act and NIST Framework, with actionable steps for compliance, safety, and responsible deployment.
A federal judge has ruled that AI-generated content is not protected under free speech laws, expanding legal exposure across the AI ecosystem. What does it mean for AI platforms, infrastructure providers, and the future of GenAI safety?
The EU AI Act is the world’s first comprehensive AI law. Enterprises deploying GenAI chatbots and agents must prepare now for compliance. Learn the key requirements, penalties, and how ActiveFence helps you meet them with red teaming, guardrails, and observability.