During our recent webinar, The New AI Reality: Control, Risk & Resilience, we were joined by Admiral Mike Rogers, former Director of the NSA and Commander of U.S. Cyber Command, for a discussion on how enterprises can stay in control as AI adoption accelerates.
Several thoughtful questions came in from the audience. Many of them centered on how organizations govern AI when sensitive data, from source code to customer information, is increasingly flowing into AI systems. We didn’t have time to go deep on all of them live, so we’re addressing some of the most common ones here.
1. How do you design AI governance that accelerates innovation?
AI governance slows innovation only when it’s designed as a gate.
When governance focuses on approvals, static rules, and upfront restrictions, teams work around it. That’s when AI usage goes underground and risk increases. Effective governance operates in the flow of work, not against it.
The goal isn’t to pre-approve every use case. It’s to define clear boundaries around how AI can be used, where autonomy is acceptable, and when human judgment is required. When expectations are clear, teams move faster because they don’t have to guess what’s allowed.
In practice, this means being explicit about sensitive data. Teams need clarity on what data can be shared with AI, what must never leave enterprise boundaries, and how AI-generated outputs can be reused or retained. When those rules are clear, innovation speeds up because uncertainty disappears.
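As a purely illustrative sketch (not something covered in the webinar, and no substitute for your own data classification), rules that explicit can even be expressed in machine-readable form; the data classes and flags below are hypothetical:

```python
# Hypothetical data-handling rules, for illustration only; a real rule set comes from
# an organization's own data classification and legal/compliance requirements.
AI_DATA_RULES = {
    "public_docs":      {"share_with_ai": True,  "approved_tools_only": False},
    "source_code":      {"share_with_ai": True,  "approved_tools_only": True},
    "customer_records": {"share_with_ai": False, "approved_tools_only": True},
    "credentials":      {"share_with_ai": False, "approved_tools_only": True},
}

def is_allowed(data_class: str, tool_is_approved: bool) -> bool:
    """Return True if this class of data may be sent to the given AI tool."""
    rule = AI_DATA_RULES.get(data_class)
    if rule is None:
        return False  # unknown data defaults to "not allowed"
    if not rule["share_with_ai"]:
        return False
    return tool_is_approved or not rule["approved_tools_only"]

print(is_allowed("source_code", tool_is_approved=True))       # True
print(is_allowed("customer_records", tool_is_approved=True))  # False
```

The point isn't the code; it's that rules specific enough to encode are rules specific enough for teams to actually follow.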
2. What signals show whether leaders are truly “in control” of AI usage?
Control isn’t measured by how many tools are blocked. It’s measured by how few surprises leaders encounter.
Some practical signals:
- Leaders can clearly explain where AI is being used in critical workflows
- There’s clarity around which decisions AI can influence or execute
- Accountability for AI-driven outcomes is explicitly owned by humans
- Policy violations are identified early, not after damage is done
- Leaders know what types of sensitive data are being shared with AI systems, and under what conditions
When organizations can’t answer basic questions about how sensitive data flows into and out of AI, control is already compromised.
3. Why do AI usage policies fail in practice?
Most AI policies fail because they’re disconnected from reality.
They’re written as static documents, while AI usage is dynamic, contextual, and embedded in daily work. Employees rarely intend to violate policy. They simply operate faster than policy enforcement can keep up.
This gap becomes most visible around sensitive information. Policies often say “don’t share confidential data,” but provide no guidance on what that means when employees paste code, credentials, or customer data into AI tools as part of daily work.
Without visibility, reinforcement, and feedback, policies become aspirational instead of operational.
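One way to close that gap, sketched here purely for illustration (the patterns are hypothetical and far from exhaustive), is to check content in the flow of work before it ever reaches an AI tool:

```python
import re

# Illustrative patterns only; real controls use purpose-built secret scanners
# and data classification, not a short list of regexes.
SENSITIVE_PATTERNS = {
    "AWS access key":    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Email address":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Debug this: AKIAIOSFODNN7EXAMPLE fails when billing jane@example.com"
    findings = flag_sensitive(prompt)
    if findings:
        print("Flagged before reaching the AI tool:", ", ".join(findings))
    else:
        print("No sensitive patterns detected")
```

Real deployments rely on far more than a handful of regexes, but the principle holds: enforcement has to live where the work happens, not in a document nobody reads mid-task.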
4. Where should organizations start rolling out AI governance?
Start where AI already influences outcomes, not where it’s easiest to control.
For many organizations, that means:
- Engineering and development
- Data, analytics, and research teams
- Business operations and automation workflows
These are also the areas where sensitive data and business logic most often intersect with AI, making early governance critical. Governance can expand over time, but it should begin where AI already has agency.
Want to go deeper?
Watch the full webinar recording to hear Mike Rogers and Omri Iluz discuss AI control, accountability, and the growing challenge of governing sensitive data as AI adoption accelerates.

