Get the latest on global AI regulations, legal risk, and safety-by-design strategies. Read the Report
Protect your AI applications and agents from attacks, fakes, unauthorized access, and malicious data inputs.
Control your GenAI applications and agents and assure their alignment with their business purpose.
Proactively test GenAI models, agents, and applications before attackers or users do
The only real-time multi-language multimodality technology to ensure your brand safety and alignment with your GenAI applications.
Ensure your app is compliant with changing regulations around the world across industries.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
Detect and prevent malicious prompts, misuse, and data leaks to ensure your conversational AI remains safe, compliant, and trustworthy.
Protect critical AI-powered applications from adversarial attacks, unauthorized access, and model exploitation across environments.
Provide enterprise-wide AI security and governance, enabling teams to innovate safely while meeting internal risk standards.
Safeguard user-facing AI products by blocking harmful content, preserving brand reputation, and maintaining policy compliance.
Secure autonomous agents against malicious instructions, data exfiltration, and regulatory violations across industries.
Ensure hosted AI services are protected from emerging threats, maintaining secure, reliable, and trusted deployments.
Video game footage is increasingly misused as fake war content online, shaping conflict narratives and misleading even mainstream outlets. ActiveFence explains how and why, and how trust & safety teams can respond.
Discover how ActiveFence Guardrails now provides real-time AI safety with low latency, and no-code controls, in secure, scalable AWS enterprise deployments.
Discover what really keeps CISOs up at night from our very own Guy Stern, who shares frontline insights into GenAI risk in 2025, exposing hidden vulnerabilities, internal misuse, and how enterprise security must adapt.
LLMs with RAG bring powerful personalization, but also new security risks. Explore how ActiveFenceโs Red Team uncovered ways attackers can exfiltrate secrets from AI memory.
Discover how ActiveFence and Databricks are partnering to build safer AI agents. Learn how ActiveFence Guardrails integrate with Databricksโ Mosaic AI Agent Framework to mitigate risks like prompt injection, toxic outputs, and policy violations, ensuring secure, compliant AI deployment at scale
From deepfake investment scams to AI-generated catfishing, GenAI is making impersonation easier and more dangerous. Explore how impersonation abuse works, real-world examples, and what AI teams can do to protect their systems from being misused.
LLM guardrails are being bypassed through roleplay. Learn how these hacks work and what it means for AI safety. Read the full post now.
Expose how disinformation networks exploit crowdsourced fact-checking like Community Notes to push propaganda and suppress truth, and what platforms must do now.
See how the RAISE Act aims to stop AI-enabled crises.