Protect your AI applications and agents from attacks, fakes, unauthorized access, and malicious data inputs.
Control your GenAI applications and agents and ensure their alignment with your business purpose.
Proactively test GenAI models, agents, and applications before attackers or users do.
The only real-time, multi-language, multimodal technology to ensure brand safety and alignment across your GenAI applications.
Ensure your app is compliant with changing regulations around the world across industries.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
Detect and prevent malicious prompts, misuse, and data leaks to ensure your conversational AI remains safe, compliant, and trustworthy.
Protect critical AI-powered applications from adversarial attacks, unauthorized access, and model exploitation across environments.
Provide enterprise-wide AI security and governance, enabling teams to innovate safely while meeting internal risk standards.
Safeguard user-facing AI products by blocking harmful content, preserving brand reputation, and maintaining policy compliance.
Secure autonomous agents against malicious instructions, data exfiltration, and regulatory violations across industries.
Ensure hosted AI services are protected from emerging threats, maintaining secure, reliable, and trusted deployments.
To validate its most advanced foundation model to date, Amazon engaged ActiveFence for a manual red-teaming evaluation of Nova Premier, testing the model's readiness for safe and secure deployment.
Amazon aimed to rigorously validate the safety of its most capable foundation model, Nova Premier, ahead of public release. Given the rising risks associated with advanced generative models, the team sought to benchmark it against real-world adversarial threats across critical responsible AI (RAI) categories.
ActiveFence partnered with Amazon as a third-party red teamer to perform manual, blind evaluations of Nova Premier on Amazon Bedrock. Testing spanned prompts across Amazon's eight RAI categories, including safety, fairness and bias, and privacy and security. ActiveFence also benchmarked Nova Premier against other LLMs for comparison.
The collaboration demonstrated how expert-led manual red teaming complements automated testing, offering a comprehensive snapshot of model robustness at the time of evaluation.
Through this hands-on evaluation, ActiveFence strengthened Nova's security posture and supported Amazon's broader Responsible AI goals, ensuring the model could be deployed with greater confidence.