Every year, people go to Black Hat to build something, break something, or figure out what's going on in the industry, and this year was no different… almost. I don't think anyone was surprised by the amount of AI content at Black Hat, but rather by how and where AI was showing up.
Over my week at Black Hat it became very clear how much the AI landscape has shifted – from generative to agentic AI. Not too long ago we were concerned with AI that generates content; now we are talking about AI that creates outcomes. And as AI makes more decisions on its own, engineers are finding themselves with less busy work and more time to dig deep.
But the question on my mind, which I'll dig into in this blog, is how agentic AI has changed other landscapes, namely the threat landscape: which threats are becoming more serious, how solutions are evolving, and how the security practitioner's environment has changed as a whole.
One thing is certain: this year it wasn't enough just to watch the talks online. The insights I'll share here are gleaned from talks, panels, and, mostly, informal conversations with real practitioners working in the agentic AI security space.
The buzz has moved on from generative AI to a new target: agentic AI. This second wave of AI hype has been a driver of many exciting and worrisome conversations. As with anything new and shiny, it seems like everyone, from enterprises to vendors, is racing to enable AI workflows.
The real issues with AI are still foundational security problems, echoing the early days of cloud adoption. We are laser-focused on emerging threats like deepfakes, synthetic identity, and prompt injection, but long-standing vulnerabilities like Role-Based Access Control (RBAC) gaps, API security issues, and poor session management have only worsened under AI implementations. Attackers are actively exploiting them, just as they did during the early cloud boom.
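To make that concrete, here is a minimal sketch, in Python with purely illustrative role names and a made-up ToolCall type, of the kind of unglamorous deny-by-default authorization check that still has to sit in front of every tool call an agent makes on a user's behalf. None of it is AI-specific, which is exactly the point.

```python
# Hypothetical sketch: the same RBAC check you'd apply to a human user must
# gate every tool call an agent makes on that user's behalf. Role names and
# the ToolCall type are illustrative only.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "analyst": {"search_tickets", "read_logs"},
    "admin": {"search_tickets", "read_logs", "delete_records"},
}

@dataclass
class ToolCall:
    tool: str            # e.g. "delete_records"
    principal_role: str  # role of the user the agent is acting for

def authorize(call: ToolCall) -> bool:
    """Deny by default: an agent may only invoke tools its principal's role allows."""
    return call.tool in ROLE_PERMISSIONS.get(call.principal_role, set())

# Even an agent that has been prompt-injected into attempting a destructive
# action is stopped at the boundary, not by the model's good behavior.
assert authorize(ToolCall("read_logs", "analyst"))
assert not authorize(ToolCall("delete_records", "analyst"))
```

Skip that boundary and a prompt-injected agent simply inherits whatever the backend allows, much like an over-permissioned service account did in the early cloud days.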
As enterprises continue to onboard third parties that are using AI in one form or another, supply chain security has never been more critical. This fourth-party risk category continues to expand and is top of mind for many professionals I have spoken with. As your favorite vendor's vendor starts to use AI, it's important that customers stay rigorous during the review process.
One of the more exciting parts of the conference was seeing the massive strides AI red teaming has taken in the last year. I am not just talking about vendors. There were a ton of excellent presentations that demonstrated how AI is used in offensive testing and what automated attack chains look like. One of my favorite talks covered how 'virtual twin' organizations could be used to simulate attacks against an enterprise, testing policies, identity and access management configurations, and infrastructure. While we are still far from AI running a full penetration test, this creates a lot of opportunity for red teaming to shift from a quarterly event into a more continuous process.
Something that stuck with me throughout the week came from the AI Summit on Tuesday: AI does tasks, not jobs. It excels at specific functions like tearing through massive datasets for insights, identifying anomalies humans might miss, and categorizing data with incredible accuracy. However, we are still several quarters away from AI matching a tier-1 SOC analyst in the holistic job of threat detection and incident response.
This matters because it addresses an elephant in the room: the widespread belief that AI will replace junior engineers and entry-level positions. I don't think this is true, nor do I think it should be. While we should continue making tasks easier for entry-level and junior talent, we cannot skip foundational information security training in favor of automation. Just as most of us had to write regex to sift logs or create custom Burp Suite extensions, we must ensure that incoming talent fully understands the 'why' behind automation and the significance of patterns and findings. On the flip side, we've seen senior engineers given dozens of agents to 'manage' as part of their team. While this direction makes sense from a productivity standpoint, it becomes a missed opportunity if that same engineer is not able to mentor and upskill more junior talent.
There was even a discussion at Black Hat highlighting that these tools don't necessarily make engineers more productive. It can feel akin to 'walking through mud': more frequent debugging, more time spent reviewing PRs, and the occasional detour to work on the agents themselves.
During the AI Summit, one panel raised an interesting point: in security, we often refer to humans, our first line of defense, as the weakest link. Humans get phished, jot passwords on sticky notes, and get tailgated. Much of our security budget goes towards training humans to be better at keeping data safe and following security policies.
Yet in 2025, 'human-in-the-loop' is being touted as our strongest defense against AI risks. I am skeptical, especially at the speed and scale at which we are adopting AI, for three main reasons:
Threat modeling is now more complex and necessary than ever before. As someone with a product security background, I have never been more excited about the need for rigorous testing requirements and architecture security reviews. Incredible frameworks such as MAESTRO, NIST's AI Risk Management Framework, and OWASP's Agentic Security Top 10 project, which launched during Black Hat, are quickly becoming essential tools.
The dark cloud looming over all of this is that everyone is using AI, and security teams are being worked to the bone to ensure they're enabling teams to do their best work safely. Security can no longer be a rubber stamp at the end of the process; it's time for it to become an active design partner, working alongside teams to create new products and features.
I don't believe we're in an age where 'shift left' is agile enough to ensure we are building secure products. The concept of 'earlier' shouldn't exist at a time like this. Security needs to be continuous, embedded, and present, transparently providing guardrails so teams can engage and create AI solutions unencumbered.
As I reflect on my week at Black Hat, it's clear that the organizations that will succeed in this AI-driven future are not the ones with the most advanced AI capabilities or the coolest demos. They are the ones that have secure foundations while reaching for the sky, the ones applying hard-learned lessons about security principles to this new paradigm.
This is why I'm so glad to be part of ActiveFence, where we aren't just slapping AI into a product. We are using our long-established expertise and working with the biggest teams in the industry to consider AI architecture and design in a holistic way. Our experts are working with teams like Amazon to ensure that models are secure from day one – paving the way for better, safer products. At the end of the day, humans will continue to create opportunities for other humans, and I'm excited to be working on products that use high-quality, human data to create better AI solutions.
Strong foundations are what let you move fast without breaking the things that matter. And from what I saw, the smartest orgs are already treating security and safety like a feature, not a fix.