To validate its most advanced foundation model to date, Amazon engaged ActiveFence for a manual red-teaming evaluation of Nova Premier, testing the model's readiness for safe and secure deployment.
Amazon aimed to rigorously validate the safety of its most capable foundation model, Nova Premier, ahead of public release. Given the increasing risks associated with advanced generative models, the team sought to benchmark it against real-world adversarial threats across critical responsible AI (RAI) categories.
ActiveFence partnered with Amazon as a third-party red teamer to perform manual, blind evaluations of Nova Premier on Amazon Bedrock. Testing spanned prompts across Amazon’s eight RAI categories, including safety, fairness and bias, and privacy and security. ActiveFence also benchmarked Nova Premier against other LLMs for comparison.
The collaboration demonstrated how expert-led manual red teaming complements automated testing, offering a comprehensive snapshot of model robustness.
Through this hands-on evaluation, ActiveFence helped strengthen Nova Premier's security posture and supported Amazon's broader Responsible AI goals, enabling the model to be deployed with greater confidence.