ActiveFence at Hacker Summer Camp: Black Hat 2025 Key Takeaways

By
August 13, 2025
Team of ActiveFence leadership in front of blackhat sign, conference cyber security 2025

Ready to test your AI like an attacker would?

Get your AI risk scored.

Black Hat 2025 in Vegas was wild. I was there with the ActiveFence crew, camera rolling, caffeine running low, adrenaline through the roof. From non-stop hallway chats and late-night tool demos, this wasn’t just another industry event. It felt like hacker summer camp turned real world war room, where the next wave of AI threats was being built, tested, and torn down in real time.

We linked up with red teamers, cyber ops geeks, and AI builders who are deep in the trenches; heard stories of shadow agents, red‑teaming chaos, and zero‑click attack chains. 

Here’s what really stuck with me from the week. The moments, insights, and ideas that made me stop, think, and see where the future of AI security is actually heading.

Agentic AI and MCP Are Accelerating Everything

The buzz has moved on from generative AI and has shifted to a new target: agentic AI. This second wave of AI hype has driven many exciting and worrisome conversations. As with anything new and shiny, everyone, from enterprises to vendors, is racing to enable AI workflows.

New Dog, Same Problems

The real issues with AI are still foundational security problems, echoing the early days of cloud adoption. We are laser-focused on emerging threats like deepfakes, synthetic identity, and prompt injection. Still, long-standing vulnerabilities such as Role-Based Access Control (RBAC) gaps, API security issues, and poor session management have only worsened under AI implementations. Attackers are exploiting them as they did during the early cloud boom.

An Explosion of Fourth-Party Risk

As enterprises onboard third parties using AI in one form or another, supply chain security has never been more critical. This fourth-party risk category continues to expand and is top of mind for many professionals I have spoken with. As your favorite vendor’s vendor starts to use AI, customers must remain critical during the review process.

Red Teaming Painting the Town… Red

One of the more exciting parts of the conference was seeing the massive strides AI red teaming has taken in the last year. I am not just talking about vendors; several excellent presentations demonstrated how AI is used in offensive testing and what automated attack chains look like. One of my favorite talks covered how “virtual twin” organizations could simulate attacks against an enterprise, testing policies, identity and access management configurations, and infrastructure. While we are still far from AI running a full penetration test, this opens the door for red teaming to shift from a quarterly event into a continuous process.

That said, the best results still come from blending machine speed with human intuition. Automation can scale the noise, but human experts still catch what AI misses, especially when you’re testing complex, evolving systems. At ActiveFence, this is exactly how we’ve built our hybrid red teaming approach: part human ingenuity, part automation muscle, and fully aligned with how real adversaries operate. It’s not just about coverage. It’s about testing like it matters.

“AI Does Tasks, Not Jobs”

Something that stuck with me throughout the week came from the AI Summit on Tuesday: AI does tasks, not jobs. It excels at specific functions such as tearing through massive datasets for insights, catching anomalies humans might miss, and categorizing information accurately. However, we are still several quarters away from AI matching a tier-1 SOC analyst in the holistic job of threat detection and incident response.

This matters because it challenges the belief that AI will replace junior engineers and entry-level roles. Automation can make their work easier, but skipping foundational security training is a mistake. Just as many of us learned by writing regex to sift logs or building custom Burp Suite extensions, new talent must understand the “why” behind automation. On the other hand, giving senior engineers a fleet of agents to manage may improve productivity, but it also reduces opportunities for mentoring and skill-building. Black Hat discussions highlighted that these tools do not always make engineers more productive. Sometimes, they slow progress with extra debugging, more PR reviews, and the need to fix the agents themselves.

The Human Paradox

During the AI Summit, a panel raised interesting tensions. In security, humans, our first line of defense, are often seen as the weakest link. Humans get phished, jot passwords on sticky notes, and get tailgated. Much of our budget goes to training them to do better.

Yet in 2025, “human-in-the-loop” is being touted as our strongest defense against AI risks. I am skeptical, especially at the speed and scale we are adopting AI, for three main reasons:

  • Fatigue from constant AI-generated alerts and decisions
  • False confidence when we shift the burden instead of solving the problem
  • Persistent cognitive biases in AI-human collaboration, such as confirmation bias, automation bias, and cognitive overload

But here’s the thing: it’s not that humans don’t matter. They do. But it’s the combination of human judgment and smart automation that actually works.

That’s why at ActiveFence, we don’t pick sides. Our whole approach is built on fusion;  from security tools that blend context-aware automation with expert review, to red teaming that pairs AI-driven attack chains with human-led strategy

When the stakes are high, “human in the loop” is only half the loop. You need the right loop.

 

So What’s Next?

Threat Modeling Has Never Been More Critical

Threat modeling is more complex and more necessary than ever. As someone with a product security background, I have never been more excited about the need for rigorous testing requirements and architecture security reviews. Frameworks such as MAESTRO, NIST’s AI Risk Management Framework, and the kick-off of OWASP’s Agentic Security Top 10 (launched at Black Hat) are quickly becoming essential tools.

Security Must Be Continuous, Not Sequential

Everyone is using AI, and security teams are working to the bone to keep it safe. Security can no longer be a rubber stamp at the end of a process and must be an active design partner role. Traditionally, this has relied on teams. 

We are past the point where “shift left” is enough. The concept of “earlier” no longer applies. Security must be continuous, embedded, and transparent, providing guardrails so teams can build AI solutions without friction.

 

Final Note

After a week neck-deep in Black Hat chaos, one thing’s obvious: the winners in this AI race aren’t the ones with the flashiest demos or the most polished decks. It’s the teams that build with security at the core. The ones who think like hackers, plan like engineers, and test like something’s always broken – because it probably is.

Strong foundations are what let you move fast without breaking things that matter. And from what I saw, the smartest orgs are already treating security like a feature, not a fix.

Catch you next year in the desert.

Table of Contents

Think your AI stack is secure? Let’s find out

Book a demo today.