New York just became the first US state to pass legislation focused squarely on the safety of frontier AI systems. The Responsible AI Safety and Education (RAISE) Act (S6953B/A6453B) targets the biggest players in AI, including OpenAI, Google, and Anthropic. It is a landmark moment for AI regulation in the United States, and signals a shift toward accountability for how high-impact AI systems are developed, deployed, and monitored.
The RAISE Act focuses on frontier AI models trained with over 100 million dollars in compute resources. It requires developers of these models to publish detailed safety and security reports and to notify regulators when incidents occur, such as unsafe model behavior or theft of model weights that could lead to an AI-enabled crisis.
If a company fails to comply, the Act gives New York’s attorney general the authority to impose civil penalties of up to 30 million dollars. Because it applies only to models trained with over 100 million dollars in compute resources, the bill avoids burdening startups and academic research labs. Instead, it zeroes in on frontier models that could assist in creating biological weapons or carrying out automated criminal activity.
The timing of the RAISE Act is no accident. As the pace of AI development accelerates, so do the risks. Leading scientists such as Geoffrey Hinton and Yoshua Bengio have warned that powerful models could cause serious harm if not properly governed, a warning borne out by a recent ActiveFence study evaluating LLM behavior across chemical, biological, radiological, and nuclear (CBRN) risks.
Despite these concerns, regulatory action in the United States has lagged. As earlier efforts like California’s SB 1047 stalled under industry pressure, New York lawmakers moved quickly to draft a more targeted bill. Their aim is to ensure transparency without stifling innovation.
The RAISE Act strikes a careful balance. It does not impose kill switch requirements or hold companies liable for post-training misuse. Instead, it introduces baseline transparency and gives the public a view into how some of the world’s most powerful models are managed.
For those building frontier models, the bar for safety and accountability is rising. Compliance will require more than publishing model cards or aligning systems post hoc. Developers will need to anticipate and mitigate harms before deployment. This is where red teaming becomes essential. Red teaming reveals how models might be exploited or behave in unexpected ways. When done well, it surfaces real-world risks that safety reports must capture.
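As an illustration, a red-teaming harness can be as simple as replaying a library of adversarial prompts against a model and recording any unsafe responses. The sketch below is a minimal, hypothetical Python example; `query_model` and `classify_risk` are placeholder names for your own model endpoint and safety classifier, not ActiveFence or RAISE Act APIs.

```python
# Minimal red-teaming sketch (illustrative only).
# query_model and classify_risk are hypothetical placeholders for your
# model endpoint and safety classifier.
from dataclasses import dataclass

@dataclass
class Finding:
    prompt: str
    response: str
    risk_category: str

def query_model(prompt: str) -> str:
    """Placeholder: call your model's completion endpoint here."""
    raise NotImplementedError

def classify_risk(text: str) -> str | None:
    """Placeholder: return a risk category (e.g. 'cbrn') or None if safe."""
    raise NotImplementedError

def run_red_team(adversarial_prompts: list[str]) -> list[Finding]:
    """Replay adversarial prompts and record any unsafe responses."""
    findings = []
    for prompt in adversarial_prompts:
        response = query_model(prompt)
        category = classify_risk(response)
        if category is not None:
            # Each finding becomes evidence for the safety report.
            findings.append(Finding(prompt, response, category))
    return findings
```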
For enterprises deploying AI apps and agents built on those models, guardrails and observability tools are key. If a model generates unsafe content, teams need a way to detect, log, and report it. Without visibility into model behavior across environments, companies cannot meet the requirements of the RAISE Act.
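To make that concrete, the hypothetical wrapper below shows one way to detect, log, and report unsafe outputs at generation time. Here `generate` and `violates_policy` stand in for your model call and content-safety check; they are assumptions for the sketch, not references to any specific product API.

```python
# Minimal guardrail-and-logging sketch (illustrative only).
# generate and violates_policy are hypothetical stand-ins for your
# model call and content-safety check.
import json
import logging
import time
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
incident_log = logging.getLogger("genai.incidents")

def guarded_generate(
    prompt: str,
    generate: Callable[[str], str],
    violates_policy: Callable[[str], Optional[str]],
) -> str:
    """Generate a response, blocking and logging anything unsafe."""
    response = generate(prompt)
    violation = violates_policy(response)
    if violation:
        # Structured incident record: raw material for regulator-facing reports.
        incident_log.warning(json.dumps({
            "timestamp": time.time(),
            "policy_violated": violation,
            "prompt": prompt,
            "response": response,
        }))
        return "This request was blocked by safety policy."
    return response
```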
To meet this moment, model developers and enterprises building AI-powered apps and agents need advanced red teaming and real-time observability into AI behavior.
ActiveFence Advanced Red Teaming goes beyond generic prompting with simulations based on real-world threat intelligence gathered in over 50 languages, structured evaluations, and detailed risk reports that support regulatory compliance.
ActiveFence also offers observability and threat monitoring solutions that help labs detect emerging risks, flag policy violations, and track abuse vectors over time. Combined with our security expertise, these capabilities support seven of ten foundation model providers with the clarity and control they need as the regulatory landscape evolves. Enterprises deploying AI-powered apps and agents can rely on the same capabilities to keep their systems safe, compliant, and resilient in the face of growing risks and evolving standards.
The RAISE Act is just the beginning. More jurisdictions will follow New York’s lead. The developers who succeed in this new era will be the ones who invest in safety infrastructure now, not after an incident occurs.
AI safety is entering a new chapter. Transparency is no longer a voluntary best practice. It is becoming a legal requirement for companies operating at the frontier.
The RAISE Act shows what forward-looking regulation can look like. It focuses on the right actors, demands the right disclosures, and leaves space for innovation. But it also raises the stakes for compliance, accountability, and preparedness.
ActiveFence is ready to help you ensure your systems are as safe and resilient as they are powerful. Reach out to learn more about how we can help your team align with the future of AI regulation.