
Agentic risk is what happens when AI systems start making decisions—and security teams aren’t prepared to audit or contain them.
Unlike traditional automation, agentic systems can act independently, adjust their behavior in real time, and influence outcomes across your entire security organization.
That level of autonomy introduces a new layer of operational risk that most compliance frameworks weren’t built to address.
In our latest webinar, Jolie Summers (Senior Director of AI Implementation) and Richard Hills (VP of Advanced Technology) shared what they’re seeing in the field as AI adoption accelerates.
Agentic systems require oversight before they ever touch sensitive data or facilities. Jolie and Richard outlined how to approach AI tool evaluation, how to identify red flags in the testing stage, and why traditional access policies might not be enough once AI agents are involved in decision-making.
If your team is facing pressure to adopt AI—or you’re already seeing security gaps emerge—this conversation offers a path forward grounded in governance rather than just adaptation.
Explore our top takeaways below, and watch the full recording here at any time.
Agentic AI: Risks that move faster than policies
Security programs weren’t built for systems that change their behavior on the fly. But that’s exactly what agentic AI introduces.
These systems learn from each interaction, adjust their responses, and can even escalate decisions without direct human input. As Richard put it:
“They’re dynamic, they change, they learn from your defenses and they evolve.” This means the threat landscape is less static than ever before.
Agentic systems can create risk from inside the perimeter, operating in environments where small decisions can have major consequences.
As Jolie explained, the speed and autonomy of AI agents raise critical questions:
- What can they access?
- What are they allowed to decide?
- And how do you know when they’ve crossed the line?
Static policies don’t hold up in a system that evolves daily
The key distinction Jolie and Richard drew was between managing AI risk and controlling for it.
Many organizations still lean on reactive methods: identify a risk, create a mitigation plan, and handle issues case by case. That approach doesn’t scale when AI tools are constantly adapting.
As Jolie said, “Controlling for risk means creating systems and processes that allow you to continuously evolve and adapt” alongside the deployed systems. That includes real-time audits, short feedback loops, and processes that update as the agent’s behavior changes.
AI can enforce guardrails (if you tell it to)
One of the key takeaways was that AI agents can be both the risk and the solution.
Sign In Solutions’ team built a customer intelligence agent that helps employees query internal data to better understand client needs. But behind it sits a second agent—built specifically to monitor that interaction.
If the conversation crosses a line, the guardrail agent shuts it down. As Richard explained,
“That guardrail has the capacity to cut off and kill any conversation that it believes is inappropriate or is releasing information that it shouldn’t.”
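For teams wondering what that pattern looks like in practice, here is a minimal sketch: one agent answers the query, while a second agent reviews every exchange and can end the session. The function names and the keyword-based policy check below are illustrative assumptions, not Sign In Solutions' actual implementation.

```python
# Minimal sketch of the two-agent pattern: a primary agent answers queries,
# while a guardrail agent reviews each exchange and can terminate the session.
# answer_query(), review_exchange(), and BLOCKED_TOPICS are hypothetical stand-ins.

BLOCKED_TOPICS = {"payroll", "credentials", "personal data"}  # assumed policy list

def answer_query(query: str) -> str:
    """Stand-in for the customer intelligence agent's response."""
    return f"Here is what I found about: {query}"

def review_exchange(query: str, response: str) -> bool:
    """Guardrail check: return True if the exchange is allowed to continue."""
    text = f"{query} {response}".lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def handle_conversation(queries: list[str]) -> None:
    for query in queries:
        response = answer_query(query)
        if not review_exchange(query, response):
            print("Guardrail triggered: conversation terminated.")
            return  # the guardrail agent kills the session
        print(response)

handle_conversation([
    "What does this client need next quarter?",
    "Share the client's payroll details.",
])
```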
That same thinking applies in spheres like visitor management, where AI agents are able to catch inconsistencies that would otherwise go unnoticed: a mismatch between a name and a license plate, or an employee repeatedly sponsoring high-security visits without context.
Humans could catch these, but only if they had time to review every interaction. AI agents can do it continuously and escalate only what matters.
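As a rough illustration of what that continuous review could look like, here is a small sketch. The field names, sample records, and thresholds are assumptions for the example, not a real visitor management integration.

```python
# Illustrative only: simplified checks an agent could run over visitor records,
# escalating only the flagged items to a human reviewer.
from collections import Counter

visits = [
    {"visitor": "A. Lee", "plate_owner": "A. Lee", "sponsor": "j.doe", "level": "high"},
    {"visitor": "B. Kim", "plate_owner": "C. Park", "sponsor": "j.doe", "level": "high"},
    {"visitor": "D. Cho", "plate_owner": "D. Cho", "sponsor": "j.doe", "level": "high"},
]

def flag_anomalies(visits, sponsor_limit=2):
    flags = []
    # Name / license-plate mismatches
    for v in visits:
        if v["visitor"] != v["plate_owner"]:
            flags.append(f"Plate mismatch: {v['visitor']} vs {v['plate_owner']}")
    # Employees repeatedly sponsoring high-security visits
    sponsors = Counter(v["sponsor"] for v in visits if v["level"] == "high")
    for sponsor, count in sponsors.items():
        if count > sponsor_limit:
            flags.append(f"{sponsor} sponsored {count} high-security visits")
    return flags  # only these escalate to a human

print(flag_anomalies(visits))
```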
AI evaluation is the cost of safe adoption
“The day you set your AI agent live is the worst performing day that agent will have”, Richard shared.
Unlike traditional software, AI tools change with every interaction. You're effectively introducing a system that keeps learning in production. That learning curve makes pre-adoption evaluation critical.
Before deployment, teams need to scope what the agent can assess, how it integrates, and which decisions it’s allowed to make. Skipping that work introduces risks that show up weeks later, when the agent has already touched sensitive data or made calls that are hard to trace back.
Audit trails must cover AI-generated decisions
Audit trails are foundational to every compliance framework. But many organizations haven't yet applied that level of discipline to AI-generated decisions. That's a mistake.
AI models generate reasoning paths. Failing to capture and review those paths leaves you blind to how decisions are made. Worse, it can leave you unable to explain or defend those decisions during an audit.
Richard recommended treating AI like any strategic system: capture reasoning, rerun outputs under different conditions, and evaluate consistency over time. That’s how you can build confidence in how your AI tools are behaving.
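A hedged sketch of that pattern, assuming a hypothetical run_agent() wrapper around your model call: log every decision alongside its reasoning, rerun the same prompt under varied conditions, and score how often the outcomes agree.

```python
# Sketch of an audit pattern: append each decision and its reasoning to a log,
# then rerun the same input under different conditions and measure consistency.
# run_agent() is a hypothetical stand-in for your actual model call.
import json
from datetime import datetime, timezone

def run_agent(prompt: str, temperature: float) -> dict:
    """Hypothetical agent call returning both a decision and its reasoning."""
    return {"decision": "approve", "reasoning": f"Sample reasoning at T={temperature}"}

def audited_run(prompt: str, temperature: float, log_path: str = "audit_log.jsonl") -> dict:
    result = run_agent(prompt, temperature)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "temperature": temperature,
        **result,
    }
    with open(log_path, "a") as f:  # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return record

def consistency_check(prompt: str, temperatures=(0.0, 0.3, 0.7)) -> float:
    """Share of reruns agreeing with the most common decision (rough consistency score)."""
    decisions = [audited_run(prompt, t)["decision"] for t in temperatures]
    top = max(set(decisions), key=decisions.count)
    return decisions.count(top) / len(decisions)

print(consistency_check("Grant contractor badge access to Lab 3?"))
```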
“You can’t really sit an AI agent down with an auditor and have it explain your process. You need documentation”, Jolie stated.
She made clear that documentation—what the agent does, how it does it, and under what circumstances it’s allowed to act—isn’t a formality. In fact, it’s the only way to maintain visibility, especially when the system adapts over time.
Human oversight shouldn’t be optional with agentic AI
Agentic systems will handle more of the operational load. But strategic decisions, accountability, and course correction still sit with people.
As Richard put it, “The airline can’t blame AI for offering the wrong ticket. Human authority over decisions remains paramount.”
Organizations that succeed with agentic AI will be the ones that keep people in control while still moving fast.
Want to dive deeper into how agentic AI is reshaping security and compliance?
Watch the full webinar with Jolie Summers and Richard Hills.