The AI Operating Model Problem

Many organizations believe they are building an AI operating model — and that belief is understandable.

After all, AI tools are being deployed, pilots are running, policies are emerging, and governance conversations are happening. But an AI operating model is more than deploying technology. It requires redesigning how decisions, escalation, accountability, and verification actually work when AI becomes part of everyday operations.

This gap is subtle, dangerous, and increasingly common. While AI adoption accelerates, decision-making, accountability, escalation, and verification often remain anchored in pre-AI assumptions. The result is not transformation — it is silent risk accumulation.

This article explores why AI operating models fail when the human layer is ignored, why this failure is rarely visible until something goes wrong, and why CISOs, CIOs, and Human Risk Management leaders must treat work design as core infrastructure — not an afterthought.


The AI Operating Model Problem Nobody Wants to Admit

There is no shortage of AI activity inside enterprises today. Recent surveys underline just how widespread this has become: McKinsey reports that nearly 90% of organizations now use AI in at least one business function, IDC estimates global spending on AI will exceed $500 billion annually by 2027, and Gartner forecasts sustained double‑digit growth in enterprise AI investment over the next several years (MIT Sloan Management Review; IDC Worldwide AI Spending Guide; Gartner AI market forecasts).

Tools are licensed. Pilots are launched. Policies are drafted. Dashboards light up. Executives speak confidently about “becoming AI-first.”

And yet, when you look closely at how work actually gets done, a different picture emerges.

Decisions still rely heavily on informal judgment. Escalation paths are often unclear. Verification is inconsistent. Accountability is often assumed, not designed.

Most organizations are not running an AI operating model. They are running legacy operating models with AI bolted on.

Quotable takeaway: You can’t modernize work by modernizing tools alone.


What an Operating Model Really Is (and Why AI Breaks It)

An operating model defines how an organization turns strategy into action. It is the practical bridge between ambition and execution — the way decisions are made, work is coordinated, and risk is managed day to day. As Michael Porter famously put it, “Operational effectiveness is not strategy,” but without a coherent operating model, strategy never makes it out of the boardroom.

In practice, an operating model determines:

  • Who makes decisions

  • How work flows

  • Where authority sits

  • How risk is detected and managed

  • How issues escalate when something goes wrong

Traditional operating models assume:

  • Humans perform the work

  • Humans generate judgment

  • Humans notice failure

AI quietly violates all three assumptions.

Outputs appear instantly. Confidence is high. Errors are plausible. And responsibility often sits somewhere between the system, the individual, and “the process.”

This is not a tooling issue. It is a design mismatch.

As Peter Drucker put it decades ago, “The best way to predict the future is to create it.” Operating models are how organizations do exactly that — by deliberately shaping how work, decisions, and accountability function in practice. AI simply accelerates the consequences when that work is left unfinished.


Why AI Operating Models Fail in Practice

When organizations struggle with AI at scale, the root causes are often less about chips, models, or data-center cooling (fascinating as all of that is) and more about execution: operating model design, workflow upheaval, and human adoption.

Multiple research programs point to the same pattern. MIT Sloan Management Review research shows that most AI initiatives stall not because of model performance or infrastructure limits, but because workflows, decision rights, incentives, and ways of working are not redesigned. McKinsey echoes this, noting that successful AI transformations require fundamental changes to end‑to‑end processes and operating models — not just faster or more automated versions of existing workflows.

In other words, this is where AI diverges sharply from earlier waves of automation. Automation and early digital transformation largely took existing workflows and made them faster, cheaper, and more trackable. AI reshapes who decides, who verifies, and who is accountable — which means real value usually requires workflow redesign, new decision rights, and new human norms. That level of change creates organizational upheaval, and it is precisely where most initiatives slow down or fail. Senior leaders consistently say the same thing: the technology works, but the organization does not move with it.

Failures, then, cluster around the human layer:

1. Decision Ownership Becomes Ambiguous

Who owns a decision that was informed, shaped, or generated by AI? Without explicit design, accountability diffuses:

  • Individuals assume the system is right

  • Teams assume someone else is responsible

  • Leaders assume controls exist

They often don’t.

2. Escalation Stops Working

AI systems do not escalate concerns — people do.

But when humans are unsure whether to challenge AI outputs, escalation slows or disappears entirely. Issues surface late, if at all.

Quiet risk is the most dangerous kind.

3. Verification Becomes Optional

In many organizations, verification shifts from required behavior to personal preference.

Some teams check. Others trust. Few align.

This inconsistency is not a training failure — it is an operating model failure.

4. Risk Accumulates Where No One Is Looking

AI-related incidents rarely announce themselves.

They emerge through:

  • Small judgment errors

  • Subtle data misuse

  • Gradual overreliance

  • Cultural drift

By the time they are visible, the damage is already done.

Gartner, NIST, and the Illusion of Structure

Frameworks matter. Gartner, NIST, and others provide essential guidance for AI governance and risk management.

But frameworks do not design operating models. They assume organizations will translate principles into real work — into roles, behaviors, and escalation paths. (In theory, that translation happens. In practice? Rarely at the speed or depth required.)

Without intentional work design:

  • Governance remains abstract

  • Controls remain aspirational

  • Risk remains human and unmanaged

The Missing Human Layer in AI Operating Models

An effective AI operating model answers uncomfortable questions upfront:

  • When must a human intervene?

  • Who can challenge an AI output?

  • What happens when AI is wrong — quietly?

  • How do we preserve judgment, not bypass it?

These are not technical questions. They are human risk questions. And they sit squarely at the intersection of security, IT, and culture — which is precisely where Human Risk Management as a function and program has been evolving.

What began years ago as cybersecurity awareness training and phishing simulations has steadily matured into something far more strategic: the quantification of human factors and risk, the understanding of cultural drivers, and the orchestration of organizational change required to make behavioral, knowledge, and psychological shifts stick.

Modern Human Risk Management is no longer about telling people what not to do. It is about understanding how people actually behave, why they behave that way, and what level of human risk an organization is willing to accept — then designing systems, workflows, and culture to make that risk visible, recognized, and resilient over time.

Why CISOs and CIOs Can’t Solve This Alone

CISOs see the risk. CIOs enable the platforms.

In most organizations, the question of how humans behave with technology — and the risk inherent in that behavior — has always been a shared concern across security, IT, and the business. 

Depending on culture, norms, and leadership styles, this shared ownership can become a powerful accelerator for AI adoption and transformation. When information security, IT, and HRM are aligned and simpatico, organizations move faster with clearer accountability and fewer blind spots. When they are misaligned — or working at cross‑purposes — friction increases, progress slows, and unseen human risk quietly accumulates.

That responsibility increasingly and logically falls to Human Risk Management — not as awareness training, but as operational design — supported by business architects (BAs), business information security officers (BISOs), and adjacent security and transformation roles that help translate intent into execution. In practice, that work includes (a brief sketch follows this list):

  • Defining decision boundaries

  • Normalizing verification behaviors

  • Designing escalation norms

  • Aligning accountability with reality
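To make this concrete, here is a minimal, illustrative sketch of how decision boundaries, verification expectations, and escalation norms could be written down as an explicit, reviewable artifact rather than left implicit. The workflow names, roles, and thresholds are assumptions for illustration, not a prescribed standard or any particular vendor's schema.

```python
# Illustrative sketch only. All names, roles, and thresholds are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    AUTO_PROCEED = "auto_proceed"   # AI output may be used without review
    HUMAN_VERIFY = "human_verify"   # a named human must check before use
    ESCALATE = "escalate"           # route to the accountable owner for a decision


@dataclass
class DecisionBoundary:
    workflow: str                 # e.g. "customer_refunds"
    decision_owner: str           # the accountable human role, not the tool
    verifier_role: str            # who is expected to check AI output
    escalation_path: str          # where concerns go, by name, not "the process"
    max_autonomous_impact: float  # above this impact, a human decides


def required_action(boundary: DecisionBoundary,
                    estimated_impact: float,
                    model_confidence: float) -> Action:
    """Return the minimum human involvement required for an AI-assisted decision.
    Thresholds are placeholders; the point is that they are explicit and owned."""
    if estimated_impact > boundary.max_autonomous_impact:
        return Action.ESCALATE
    if model_confidence < 0.90:
        return Action.HUMAN_VERIFY
    return Action.AUTO_PROCEED
```

The value is not in the code itself. It is that decision rights, verification, and escalation stop being tribal knowledge and become something that can be reviewed, audited, and changed deliberately.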

What a Real AI Operating Model Looks Like

It is important to be explicit about where the market actually is.

There is no evidence that a large — or even representative — proportion of organizations are operating this way yet. What we are describing here is emergent, not mainstream. It reflects early patterns observed across a small set of more mature organizations, specific functions within enterprises, and forward-looking teams experimenting at the edge.

In other words, this is not the current norm. It is a direction of travel.

Organizations that are beginning to make AI work at scale do not look radically different on the surface. What changes — often quietly — is how work is designed:

  • Decision points become explicit rather than implicit

  • Verification is expected, not optional

  • Escalation is safe, fast, and culturally reinforced

  • Accountability remains human, even when execution is not

AI accelerates execution — but humans still own outcomes.
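As a companion illustration (again a hypothetical sketch under assumed names, not a reference implementation), the same idea can be applied to the record of a decision itself: verification is expected rather than optional, and a named human stays on record for the outcome even when execution is automated.

```python
# Illustrative sketch only. Field names and checks are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIDecisionRecord:
    workflow: str
    ai_output_summary: str
    accountable_owner: str = "unassigned"   # the human who owns the outcome
    verified_by: Optional[str] = None       # the human who checked the output
    verified_at: Optional[datetime] = None


def sign_off(record: AIDecisionRecord, verifier: str, owner: str) -> AIDecisionRecord:
    """Record an explicit human verification and the accountable owner."""
    record.verified_by = verifier
    record.accountable_owner = owner
    record.verified_at = datetime.now(timezone.utc)
    return record


def can_proceed(record: AIDecisionRecord) -> bool:
    # Execution may be automated, but the decision does not move forward
    # until a named person has verified it and accountability is assigned.
    return record.verified_by is not None and record.accountable_owner != "unassigned"
```

Whether this lives in code, a workflow tool, or a simple register matters less than the norm it encodes: outputs move fast, but accountability stays human and visible.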

Why This Matters for AI Workforce Transformation

AI workforce transformation fails when organizations focus on tools, skills, and adoption metrics alone — a lesson many large enterprises learned the hard way during earlier waves of digital transformation.

Previous digital initiatives often stalled not because the technology was inadequate, but because organizations underestimated the upheaval required to rework end‑to‑end workflows, decision rights, incentives, and ways of working. AI amplifies this challenge. Where automation largely made existing processes faster and more efficient, AI — and especially agentic AI — introduces new forms of autonomous decision‑making, judgment delegation, and compounding risk.

Senior leaders increasingly acknowledge that pushing for AI tool adoption alone, without deliberate and wise redesign of how work, accountability, and escalation function, creates exposure that is difficult to see and harder to unwind. In this context, value creation, risk management, and competitive advantage hinge not on speed of deployment, but on the discipline of work design.

The operating model determines whether AI creates:

  • Competitive advantage

  • Silent fragility

  • Or cultural erosion

This is why Human–AI Work Design sits at the center of successful AI transformation.

(If you haven’t read the foundational piece on AI Workforce Transformation and Human–AI Work Design, start there — it frames the system this operating model depends on.)


Frequently Asked Questions

What is an AI operating model?

An AI operating model defines how decisions, accountability, escalation, and verification work when AI systems are embedded into everyday workflows.

Why do AI operating models fail?

They fail when organizations deploy AI tools without redesigning how humans interact with those tools in real work.

Who owns the AI operating model?

Effective AI operating models are co-owned by IT, security, and Human Risk Management — because human behavior determines outcomes.

How does this connect to AI governance frameworks?

Frameworks define principles. Operating models make those principles executable.


Where This Series Goes Next

This article is part of our AI Workforce Transformation series, and upcoming articles will build on it. Each article links back to the foundational pillar — because work design is where AI either succeeds or quietly fails.


AI doesn’t break operating models. It exposes the ones you never finished designing.
