For decades, enterprise security was built around predictable systems: static applications, clear user boundaries, and deterministic behavior. But over the past year, AI has broken these assumptions at an unprecedented pace.
What started with employees experimenting with chat interfaces has become something far more consequential: teams using AI for sensitive workflows, and autonomous agents beginning to take action across systems. This shift changes where risk originates, how it propagates, and how it must be governed.
Many enterprises are discovering a hard truth: their existing security stack was never designed to understand or control AI behavior. Gartner’s September 2025 report “Innovation Insight for AI Usage Control,” by John Watts and Jeremy D’Hoinne, captured it well:
“Existing controls, such as firewalls, web proxies, and endpoint protection platforms, provide a basic way to block high-risk AI applications used by end users but are generally not optimized for specific AI threats or controls.”
Let’s break down where traditional controls fail and what the new architecture must provide.
The Firewall Fallacy
Firewalls assume that risk can be managed by controlling where traffic comes from or goes. This model worked when behavior was tied to domains, applications, and ports.
AI breaks this model immediately:
- Risk lives in interactions, not destinations
- Safe and risky AI usage share identical endpoints
- Sensitive content can flow through approved services
- Firewalls cannot infer what content means
- Firewalls operate on technical source/destination rules
Here’s an example: A developer pastes proprietary authentication logic into a browser-based AI coding assistant. The firewall sees a request to an allowed domain, but it cannot determine whether the content was sensitive, whether the use was appropriate, or whether the output introduces vulnerabilities.
AI risk is semantic. Firewalls govern the transport, not the meaning.
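To make this concrete, here is a minimal sketch in Python. The allowed domain is hypothetical, and the keyword check is a deliberately crude stand-in for the semantic analysis a firewall never performs; the point is that the firewall's decision and the actual risk are computed from entirely different inputs.

```python
# Minimal sketch of the gap described above. The domain allowlist and the
# keyword "classifier" are hypothetical stand-ins: a firewall evaluates only
# the former, while the risk lives entirely in the latter.

ALLOWED_DOMAINS = {"assistant.example-ai.com"}  # hypothetical approved AI tool

def firewall_allows(domain: str) -> bool:
    # The only question a firewall can answer: is the destination permitted?
    return domain in ALLOWED_DOMAINS

def looks_like_proprietary_code(prompt: str) -> bool:
    # Crude placeholder for the content analysis a firewall never performs.
    markers = ("SECRET_KEY", "verify_token", "internal_auth")
    return any(marker in prompt for marker in markers)

prompt = "Why does this fail? def internal_auth(user, SECRET_KEY): ..."
print(firewall_allows("assistant.example-ai.com"))  # True  -> traffic passes
print(looks_like_proprietary_code(prompt))          # True  -> invisible to the firewall
```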
Web Proxies Can’t Interpret Prompts
Proxies were designed to filter URLs, classify sites, and apply compliance rules. None of these capabilities map cleanly to AI behavior.
AI breaks proxy assumptions because:
- Risk depends on what the user is doing, not the site being accessed
- Proxies cannot interpret semantic meaning in prompts or model outputs
- They cannot confidently identify sensitive concepts (PHI, unreleased financials, source code)
- They cannot assess intent, topic, or context
- They cannot enforce role-based AI usage policy
Take this example: A healthcare employee uploads patient case notes to an AI tool to “summarize findings.” Even with full payload inspection, a proxy cannot determine:
- whether the content contains regulated PHI,
- whether the employee is authorized to share that specific information,
- whether the interaction aligns with policy,
- or whether the generated output introduces compliance risk.
Traditional proxies see content but cannot understand the meaning or risk within it.
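Here is a minimal sketch of that interaction seen two ways. The destination check is roughly what a proxy evaluates; the PHI check and role policy are hypothetical placeholders for the semantic classification a proxy lacks.

```python
# Sketch: the same interaction evaluated by destination vs. by meaning.
# The role labels and the crude PHI check are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Interaction:
    user_role: str    # e.g. "clinician", "billing"
    destination: str  # the AI tool's domain
    payload: str      # the prompt or uploaded content

def proxy_decision(i: Interaction) -> str:
    # URL/category filtering: the payload's meaning never enters the decision.
    return "allow" if i.destination.endswith("approved-ai.example.com") else "block"

def looks_like_phi(text: str) -> bool:
    # Crude stand-in for a real PHI classifier.
    return any(term in text.lower() for term in ("patient", "diagnosis", "mrn"))

def semantic_decision(i: Interaction) -> str:
    # The questions listed above, expressed as policy: content sensitivity
    # plus whether this role may share it with this tool.
    if looks_like_phi(i.payload) and i.user_role not in {"clinician"}:
        return "block"
    return "allow"

case = Interaction("billing", "approved-ai.example.com",
                   "Summarize findings: patient presents with ...")
print(proxy_decision(case))     # allow -> the proxy sees an approved site
print(semantic_decision(case))  # block -> role plus content make it a violation
```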
The Agent Problem
Agents represent the largest architectural break. They are neither users nor applications, and their behavior does not follow predictable patterns.
Agents can:
- read, write, and update data
- orchestrate multi-step tasks across systems
- take actions without human clicks
- operate asynchronously
- make decisions based on context
This is where traditional tools break down:
- EDR relies on binary execution and user-driven events; agents generate API-driven workflows.
- UEBA models human behavior; agents have no stable baseline and are designed to act dynamically.
- IAM assumes static permissions; agents’ decisions evolve across steps.
- SIEM logs individual events; it cannot infer the reasoning or intent behind a sequence of actions.
Take this scenario as an example: An onboarding agent updates Salesforce, drafts welcome emails, creates channels in Slack, and pulls data from internal systems.
Traditional tools cannot determine:
- whether the sequence of actions matches the intended onboarding workflow,
- whether the agent misunderstood the task,
- whether cross-system interactions are sensitive under this specific context,
- or whether the behavior is appropriate given the user’s role and permissions.
The problem isn’t data movement alone. It’s whether the interactions and decisions behind that movement are appropriate and safe.
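A minimal sketch of what “appropriate” means here: comparing the agent’s observed action sequence against the intended workflow. The workflow steps and action names below are hypothetical; in practice they would come from instrumentation of the agent’s tool calls.

```python
# Sketch: evaluating an agent's action sequence against the intended workflow.
# Event-level tools (SIEM/EDR) log each step in isolation; the risk signal
# only appears when the sequence is compared to the workflow as a whole.

EXPECTED_ONBOARDING = {
    ("salesforce", "update_record"),
    ("email", "draft_welcome"),
    ("slack", "create_channel"),
    ("hr_db", "read_new_hire"),
}

def out_of_scope_steps(observed):
    # Flag any step the intended workflow does not account for.
    return [step for step in observed if step not in EXPECTED_ONBOARDING]

observed = [
    ("salesforce", "update_record"),
    ("slack", "create_channel"),
    ("hr_db", "export_all_employees"),  # each event looks routine on its own
]
print(out_of_scope_steps(observed))  # [('hr_db', 'export_all_employees')]
```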
Why This Is Happening: AI Introduces New Actors
This challenge isn’t a gap in features. It’s a fundamental architectural mismatch.
Traditional security was built to secure systems and users.
AI introduces new actors - models and agents - that interpret, decide, and act.
These actors break core assumptions:
- They are non-deterministic
- They behave based on semantic meaning
- They blend content, context, and actions across multiple systems
- They create workflows not visible to infrastructure-level tools
- They operate in layers where network, endpoint, and identity tools lack visibility
Legacy controls model access, events, traffic, and systems.
AI requires modeling intent, sensitivity, reasoning, workflow chains, and emergent behavior.
When the entity performing the action changes, the entire architecture must evolve.
What a Modern AI Security Architecture Must Provide
Enterprises now need a dedicated AI security layer: one that understands and governs how AI is used, rather than how traffic flows.
Such an architecture must:
- Understand the content, context, and intent behind prompts, outputs, and agent actions
- Assess risk semantically in real time
- Govern AI interactions across all applications and agents
- Provide accountability for autonomous behaviors
- Operate without endpoint dependence
- Scale across thousands of AI tools and modalities
- Enforce policy based on meaning and intent, not just technical patterns
This is the foundation enterprises need to adopt AI safely and at scale.
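As an illustration of that last requirement, here is a sketch of what intent-based policy evaluation could look like, assuming upstream classifiers have already labeled each interaction’s intent and sensitivity. The roles, labels, and policy table are hypothetical, not a product API.

```python
# Illustrative sketch only: policy keyed on meaning (role, sensitivity,
# intent) rather than on domains or ports. All labels are hypothetical.

from dataclasses import dataclass

@dataclass
class AIInteraction:
    actor: str        # human user or agent identity
    role: str         # e.g. "developer", "finance"
    intent: str       # classified from the prompt/action, e.g. "refactor"
    sensitivity: str  # classified from the content, e.g. "source_code"

POLICY = {
    # (role, sensitivity) -> intents permitted for that combination
    ("developer", "source_code"): {"explain", "refactor"},
    ("finance", "unreleased_financials"): set(),  # never shareable with AI tools
}

def evaluate(i: AIInteraction) -> str:
    allowed = POLICY.get((i.role, i.sensitivity))
    if allowed is None:
        return "review"  # unseen combination: escalate rather than guess
    return "allow" if i.intent in allowed else "block"

print(evaluate(AIInteraction("jdoe", "developer", "refactor", "source_code")))  # allow
print(evaluate(AIInteraction("agent-7", "finance", "summarize",
                             "unreleased_financials")))                         # block
```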
Why I Started Lumia
Across my career, from the Israel Defense Forces’ Unit 8200 to leading innovation at Team8 VC, I’ve seen how security architectures must evolve to match shifts in technology. AI represents the most significant shift yet: a move from application behavior to autonomous decision-making.
We built Lumia to give enterprises a security layer designed for this reality so that they can adopt AI and agents responsibly, confidently, and at scale.
AI is accelerating.
Our architecture must meet it.
Exclusive Webinar with Admiral Mike Rogers
“Admiral Mike Rogers on the New AI Reality: Control, Risk, and Resilience”
January 8, 2025, at 11am EST / 8am PST.
Join us as we host Admiral Mike Rogers, former Director of the NSA and Commander of U.S. Cyber Command, to discuss how enterprise leaders can stay in control as AI accelerates.
Register here.
