AI Risk Governance: 10 Hard Questions CISOs Should Be Asking Now
TL;DR — Your AI tools are live. Do you know how they’re governed? AI moves fast, but most organizations haven’t embedded governance: only ~7% have...
AI governance is having a moment.
Boards are asking for it. Regulators are expecting it. Policies, principles, and frameworks are being written at speed.
And yet, many organizations quietly sense a problem:
Despite all this activity, AI still feels under‑governed in real work.
This is not because governance frameworks are weak. It is because governance, on its own, does not shape how work actually happens.
This article explains why AI governance fails when it lives only in policy, why Human–AI Work Design is the missing execution layer, and why HR, IT, and Security must co‑own governance if AI is to be used safely, responsibly, and at scale.
Most organizations are early in their AI governance maturity journey.
Recent studies suggest that while many large organizations have begun establishing AI principles and aligning to frameworks such as NIST, OECD, or ISO, relatively few have progressed beyond foundational steps. Analyses from Gartner and Deloitte indicate that most enterprises remain in the initial or developing stages of AI governance maturity, focused on policy definition, oversight structures, and high-level controls rather than operational execution, with limited embedding into day-to-day operations (see: https://www.gartner.com/en/articles/ai-governance-why-it-matters).
As a result, governance often appears robust on paper — even while practical implementation is still emerging.
But when governance lives primarily as documentation, it can create a sense of control that hasn’t yet made its way into day-to-day work.
Traditional governance assumes that once rules are set, people will follow them.
That assumption is fragile in any environment.
It is especially fragile in AI‑enabled work, because AI does not behave like traditional technology. Unlike systems we have governed in the past — which followed deterministic rules, required explicit human input, and failed in relatively predictable ways — AI systems adapt, learn, recommend, and act with a degree of autonomy. That fundamentally changes the nature of control, shifting governance away from static rules and toward how humans interpret, trust, challenge, and act on AI outputs in real time.
AI systems operate inside workflows, not outside them. They influence decisions in real time, under pressure, and often invisibly. When governance is not translated into roles, decision boundaries, escalation paths, and verification behaviors, it has little impact on day‑to‑day outcomes.
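To make that translation concrete, here is a minimal sketch of a governance rule expressed as a decision boundary in code rather than in policy text. The workflow name, risk threshold, and escalation contact are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of a governance rule translated into a decision
# boundary. The workflow name, threshold, and escalation contact are
# hypothetical; real boundaries would come from your own risk assessment.
from dataclasses import dataclass

@dataclass
class DecisionBoundary:
    workflow: str               # where the AI output is consumed
    max_autonomous_risk: float  # above this score, a human must act
    escalation_contact: str     # who owns the exception path

def route_ai_decision(boundary: DecisionBoundary, risk_score: float) -> str:
    """Decide whether the AI output may stand or a human must verify."""
    if risk_score <= boundary.max_autonomous_risk:
        return "proceed"  # within the boundary, the AI output stands
    return f"escalate to {boundary.escalation_contact}"  # human judgment required

refunds = DecisionBoundary("refund_approval", 0.3, "duty-manager")
print(route_ai_decision(refunds, risk_score=0.7))  # -> escalate to duty-manager
```

The point is not the code itself; it is that the boundary, the threshold, and the escalation owner are explicit and live where the work happens, rather than in a policy document.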
Governance rarely struggles because people deliberately ignore policy.
More often, it struggles because policy is too distant from the moment where decisions are actually made — especially in fast‑moving, AI‑enabled work, where judgment is exercised under pressure and outside formal review points.
Human–AI Work Design is the practical layer that sits between governance intent and operational reality.
It answers the questions governance frameworks deliberately leave open: who verifies AI output, who can override it, where decision boundaries sit, and when to escalate.
These are not compliance questions. They sit closer to disciplines organizations already know, but rarely connect: human factors engineering, safety management, change management, and operating‑model design. In high‑reliability industries like aviation, healthcare, and industrial safety, similar challenges are addressed through structured approaches that make human judgment, escalation, and accountability explicit under real conditions. The same logic applies here — organizations need a deliberate methodology to design how humans and AI share work, rather than assuming governance will naturally translate into practice.
They are work questions — the kind that show up in the flow of real work, not in policy reviews.
And that naturally raises the next question leaders ask: where do we start?
For most organizations, the answer is not to design everything at once. It is to identify where AI is already shaping decisions, accelerating work, or operating with limited visibility — and to focus first on the highest-risk moments: where judgment matters most, where escalation is unclear, and where errors would have the greatest human, operational, or reputational impact. These are the seams where governance gaps surface earliest, and where work design delivers the fastest return.
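As a rough illustration of that triage, the sketch below scores hypothetical AI touchpoints on the three dimensions named above. The touchpoints and the 1-to-5 scores are invented for the example; a real assessment would use your own inventory and scale.

```python
# A hedged sketch of the triage above: score each AI touchpoint on
# judgment criticality, escalation ambiguity, and error impact (1-5),
# then design the riskiest seams first. All data here is illustrative.
ai_touchpoints = {
    "ai_drafted_customer_emails":  (2, 3, 2),
    "ai_scored_loan_referrals":    (5, 4, 5),
    "ai_summarized_incident_logs": (4, 5, 3),
}

def triage(touchpoints: dict[str, tuple]) -> list[str]:
    """Rank touchpoints by combined risk, highest first."""
    return sorted(touchpoints, key=lambda name: sum(touchpoints[name]), reverse=True)

for name in triage(ai_touchpoints):
    print(f"{name}: {sum(ai_touchpoints[name])}")
```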
Effective AI governance is not centralized — it is distributed but designed.
Distributed, because AI-enabled decisions happen everywhere work happens: across teams, functions, and roles. Designed, because without clear intent, those local decisions quickly diverge, creating uneven risk and invisible gaps. The challenge for leaders is not choosing between central control and local autonomy, but ensuring that distributed decision-making follows shared principles, expectations, and guardrails that hold under real working conditions.
It lives inside:
- workflows and the decisions made within them
- roles and decision boundaries
- escalation paths
- verification behaviors
When governance is embedded here, it shapes behavior.
When it is not, it depends heavily on good intentions — which are rarely enough on their own.
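One way to picture "distributed but designed" in software terms: a single shared guardrail that each team applies to its own AI-calling code, so local autonomy still follows a common escalation rule. Everything here, the decorator, the threshold, and the invoice classifier, is a hypothetical sketch.

```python
# A sketch of "distributed but designed": one shared guardrail, adopted
# locally wherever teams call AI. Names and thresholds are assumptions.
from functools import wraps

def requires_human_verification(threshold: float):
    """Shared guardrail: low-confidence AI output goes to a human."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if result["confidence"] < threshold:
                result["status"] = "pending_human_review"  # common escalation path
            else:
                result["status"] = "auto_approved"  # within the shared boundary
            return result
        return wrapper
    return decorator

@requires_human_verification(threshold=0.8)  # central principle, local adoption
def classify_vendor_invoice(text: str) -> dict:
    return {"label": "approve", "confidence": 0.6}  # stand-in for a model call

print(classify_vendor_invoice("invoice text")["status"])  # -> pending_human_review
```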
Frameworks like the NIST AI Risk Management Framework define outcomes.
They intentionally avoid prescribing how work should be designed.
Human–AI Work Design is how those outcomes become real — translating governance principles into everyday decisions, behaviors, and accountability.
Without that translation, frameworks remain aspirational.
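As one illustration of that translation step, the sketch below pairs the four NIST AI RMF core functions (Govern, Map, Measure, Manage) with work-design questions in this article's framing. The function names are NIST's; the questions are ours, not NIST's.

```python
# The four NIST AI RMF core functions paired with work-design questions.
# The function names are NIST's; the questions are this article's framing.
NIST_AI_RMF_TO_WORK_DESIGN = {
    "Govern":  "Who owns each AI-influenced decision, and who can override it?",
    "Map":     "Which workflows does AI actually touch, and where are the handoffs?",
    "Measure": "How do people verify AI outputs at the moment of use?",
    "Manage":  "What is the escalation path when an output looks wrong?",
}

for function, question in NIST_AI_RMF_TO_WORK_DESIGN.items():
    print(f"{function}: {question}")
```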
AI governance cuts across domains no single function fully owns, an insight that tends to surface as organizations mature. Frameworks define intent. Policies set direction. Work design determines whether any of it survives contact with reality.
When any one of these is missing, governance weakens.
As organizations move forward with AI, many are discovering that governance works best when it is approached as a shared learning effort, not a compliance exercise. The practical question leaders are beginning to explore is not simply whether policies are in place, but how governance is experienced in everyday work — in workflows, roles, decisions, and behaviors. Seen this way, AI governance becomes a genuinely holistic and multifunctional endeavor, drawing on HR, IT, and Security, while staying grounded in the realities of how humans and AI work together. When governance is present at the point of work, it quietly guides behavior; when it is distant, its impact is harder to sustain.
Why does policy alone fail to govern AI?
Because AI influences decisions inside workflows, where policy rarely reaches without deliberate work design.
Who should own AI governance?
It must be co-owned by HR, IT, and Security, each contributing a distinct perspective on systems, risk, and human behavior.
Does embedding governance slow AI adoption?
No. Embedded governance enables safer, faster adoption by reducing rework and downstream risk.
This article is part of the AI Workforce Transformation series. Next:
Because in practice, AI is governed less by what is written than by how work actually happens.