The Mapping Problem: Why Enterprise AI Fails Before It Starts

A field experiment from INSEAD and Harvard Business School shows that the bottleneck in AI adoption is not the technology; it is discovering where to deploy it. 515 startups, causal evidence: 44% more use cases, 1.9x revenue, 39.5% less capital. And why the problem compounds in enterprises with organizational inertia, compliance constraints, and legacy data estates.
Nidhi Vichare · April 4, 2026
10 min read
The Inference · AI Strategy · Enterprise AI · CDO · Leadership · Data Strategy · Data Governance · Healthcare · Retail
+44% AI use cases · 1.9x revenue · +18% customers · -39.5% capital needed
INSEAD + Harvard · 515 firms · RCT · Kim, Kim & Koning (2026)


TL;DR

The bottleneck in AI adoption is not the technology. It is the discovery of where to deploy it. A randomized controlled trial of 515 startups found that firms shown how to search broadly across their production chain discovered 44% more AI use cases, completed 12% more tasks, were 18% more likely to acquire paying customers, and generated 1.9x higher revenue — while demanding 39.5% less external capital.

For enterprises, this problem compounds. Organizational inertia, compliance constraints, legacy data estates, and partial automation all make the mapping problem harder. Discovery without governance is just faster experimentation with unauditable results.


The research

The paper, "Mapping AI into Production" by Hyunjin Kim, Dahyeon Kim, and Rembrand Koning (INSEAD / Harvard Business School, March 2026), studied 515 high-growth startups in a randomized controlled trial. The treated group received one thing the control group did not: case studies showing how other firms had reorganized their production processes around AI.

The results should change how every enterprise leader thinks about AI investment.

The five headline results from the INSEAD/HBS field experiment on AI mapping

And here is the number that will change boardroom conversations: treated firms demanded 39.5% less external capital than the control group, with no change in workforce size. They achieved more with less.

The researchers call the core friction the mapping problem: the challenge of discovering where and how AI creates value within a firm's production process. It is not an access problem. Both groups had the same tools, the same API credits, the same technical training. The difference was that treated firms were shown how to search more broadly across their production chain for where AI could reorganize work — not just assist with individual tasks.

The mapping problem: narrow search vs broad mapping across the production chain

Two firms with identical tools, training, and budgets can realize vastly different returns, depending on how broadly each searches its production process for where AI creates value.


Why this is harder in enterprises than startups

The paper studied early-stage startups, where organizational inertia is low. The researchers acknowledge this directly: "if these frictions bind even in early-stage ventures with minimal organizational complexity, the challenge is likely greater still for established firms."

Having spent 20 years building enterprise data platforms at Cisco, Samsung, and across healthcare and retail clients, I can confirm: the mapping problem in enterprises is not just harder. It compounds.

Four compounding factors that make the mapping problem harder in enterprises

Organizational inertia multiplies the search cost

In a startup, the founder can rethink the full production chain in a week. In an enterprise with thousands of employees, dozens of teams, and years of established processes, mapping AI requires navigating organizational politics, legacy system dependencies, and teams that have been doing things one way for a decade. The search space is not just vast — it is defended.

Compliance constraints narrow the viable set

In healthcare, I build HIPAA-ready data platforms where PHI-aware data zoning, row-level security, and full audit lineage are prerequisites before any AI workload can run. The mapping problem is not just "where can AI create value?" It is "where can it create value within regulatory constraints?"

Legacy data estates create hidden bottlenecks

At Samsung, the mapping problem started with 70 petabytes of fragmented consumer data across multiple divisions. The AI use cases that eventually enabled $1B+ in ad revenue were invisible until the data foundation existed. You cannot map AI into production when the data underneath is fragmented and untrustworthy.

Partial automation preserves bottlenecks

Automate report generation but leave data collection manual. The report arrives faster, but the data still takes three days to assemble. The bottleneck moved. The outcome did not improve. Only full-chain automation transforms the process.
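The moved-bottleneck effect is easy to see with a back-of-the-envelope sketch. The step durations below are hypothetical, not figures from the paper:

```python
# Hypothetical step durations (hours) for a reporting chain.
# All numbers are illustrative.
chain_before = {"collect_data": 72, "assemble": 8, "generate_report": 6, "review": 2}

# Automate only report generation: that step collapses, the rest are unchanged.
chain_after = {**chain_before, "generate_report": 0.1}

def cycle_time(chain):
    """End-to-end time of a sequential chain is the sum of its steps."""
    return sum(chain.values())

before = cycle_time(chain_before)   # 88 hours
after = cycle_time(chain_after)     # 82.1 hours

# The task-level speedup looks dramatic (6h -> 0.1h), but the firm-level
# outcome barely moves, because manual data collection still dominates.
print(f"before: {before}h, after: {after}h, saved: {before - after:.1f}h")
```

The task improved sixty-fold; the outcome improved by about seven percent. That gap is the whole argument for mapping the full chain before automating anything.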

The FazeShift case: replacing human glue with AI

The paper illustrates this beautifully. In accounts receivable, the conventional process alternates between software systems and human clerks who bridge between them. Automating one step still leaves the clerk manually entering data and sending emails. Only when the full chain is automated does the process transform from a labor-intensive service into a scalable system.

FazeShift case: replacing human glue with AI across the full accounts receivable chain


What mapping actually looks like in practice

The paper names what enterprise architects do intuitively: search across the full production process, not just the obvious pain point. In my work, mapping follows four patterns I have applied at every scale:

PATTERN 1

Walk the full production chain

At Cisco, the highest-value AI application was not in the analytics layer where leadership expected it. It was in the data ingestion layer, where automated quality controls reduced downstream errors by 30%. That use case was invisible until we walked the full chain.

PATTERN 2

Find the "human glue"

In every enterprise, there are people whose job is to bridge between software systems. They are not knowledge workers — they are human middleware. AI doesn't just automate their task. It eliminates the need for the bridge entirely.

PATTERN 3

Test whether the bottleneck actually moves

After deploying AI in one step, measure downstream. If the outcome didn't improve, the bottleneck shifted but didn't disappear. Most enterprise AI projects stall here: they automate a task, declare success, and never measure the firm-level outcome.

PATTERN 4

Build the data foundation first

At Samsung, we built the Snowflake and Databricks lakehouse before any AI workload ran on it. Governed feature stores. Full lineage. That foundation is why the AI use cases that followed were trustworthy enough to attribute $1B+ in causal revenue.

The data platform is the AI platform. If your data is fragmented, ungoverned, and untrustworthy, no amount of AI mapping will produce firm-level gains.


The governance gap the paper does not address

The paper identifies discovery as the bottleneck. I would add a second bottleneck that binds immediately after discovery: governance.

Once a firm discovers that AI can reorganize a production process, it needs to answer a harder question: how do we deploy this safely, auditably, and in compliance with regulatory requirements?

In healthcare, discovering that an AI agent can conduct clinical intake in eight minutes instead of an hour is the mapping problem solved. But deploying that agent requires PHI-aware data zoning, row-level security scoped by provider and site, audit lineage documenting every decision the agent made, and conformance profiles defining what the agent is allowed to do before it runs.
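As a rough illustration of what a conformance profile can look like as code, here is a minimal policy gate. The action names, profile shape, and function signature are all hypothetical, not the framework described elsewhere in this post:

```python
# Illustrative conformance-profile gate; names are hypothetical.
# The allowlist defines what the intake agent may do before it runs.
ALLOWED = {"read_intake_form", "summarize_symptoms", "schedule_followup"}

def gate(action, provider_id, site_id, audit_log):
    """Permit an agent action only if the profile allows it, and record
    every decision (allowed or denied) for audit lineage."""
    allowed = action in ALLOWED
    audit_log.append({
        "action": action,
        "provider": provider_id,   # row-level scope: who
        "site": site_id,           # row-level scope: where
        "allowed": allowed,
    })
    return allowed

log = []
gate("summarize_symptoms", "dr-17", "site-3", log)   # allowed
gate("export_phi", "dr-17", "site-3", log)           # denied, but still logged
```

The point of the sketch is the ordering: the policy and the audit trail exist before the agent does anything, which is the opposite of bolting governance onto a working prototype.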

RELATED

Your Agents Need a Contract →

I designed a spec-driven governance framework for enterprise AI agents. Declarative YAML schemas modeled as Kubernetes-style CRDs. Conformance profiles. Policy-as-code enforcement. GitOps deployment. A nine-framework ADK comparison scorecard.

Discovery without governance is just faster experimentation with unauditable results. The mapping problem gets you to the use cases. Governance gets them to production.


What this means for enterprise leaders

The paper's practical implication is clear: "Teaching managers and entrepreneurs how to solve the mapping problem may be at least as important as ensuring they have access to the technology."

For enterprise leaders, I would extend this:

Stop starting with the technology. Do not ask "how can we use AI?" Ask "where in our production process are humans bridging between systems, and what happens if we automate the full chain, not just one step?"

Show your teams how others reorganized. The paper's treatment was simply showing firms case studies of how other firms reorganized around AI. That intervention alone produced 44% more use cases and 1.9x higher revenue. In enterprises, this means sharing concrete examples of AI-driven reorganization across your industry — not generic AI demos.

Invest in the data foundation first. The paper assumes tool access is equal. In enterprises, it is not. If your data is fragmented, ungoverned, and untrustworthy, no amount of AI mapping will produce firm-level gains. The data platform is the AI platform.

Build governance into the mapping process. Every AI use case discovered through mapping should be evaluated not just for value but for deployability: can we govern this? Can we audit it? Can we explain it to a regulator? If not, it is not ready for production regardless of how much value it promises.

Measure firm-level outcomes, not task-level productivity. The paper's most important contribution is showing that task-level gains do not automatically aggregate to firm-level performance. If you are measuring AI success by how fast a report generates or how many emails an LLM drafts, you are measuring the wrong thing. Measure revenue. Measure customer acquisition. Measure capital efficiency. Those are the outcomes that tell you whether your mapping is working.
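The measurement shift described above can be sketched as a firm-level scorecard. The inputs here are invented, chosen only so the ratios mirror the paper's headline results:

```python
# Illustrative firm-level scorecard; the raw figures are made up to show
# which metrics matter, and are not data from the study.
control = {"revenue": 100_000, "customers": 50, "capital_raised": 200_000}
treated = {"revenue": 190_000, "customers": 59, "capital_raised": 121_000}

def firm_level_delta(t, c):
    """Compare treated vs control on outcomes, not task speed."""
    return {
        "revenue_multiple": t["revenue"] / c["revenue"],            # 1.9x
        "customer_lift": t["customers"] / c["customers"] - 1,       # +18%
        "capital_reduction": 1 - t["capital_raised"] / c["capital_raised"],  # -39.5%
        # Revenue per dollar of external capital: the efficiency story.
        "capital_efficiency": t["revenue"] / t["capital_raised"],
    }

print(firm_level_delta(treated, control))
```

Nothing in that scorecard mentions how fast a report generates. If your AI dashboard cannot produce these four numbers, you are tracking activity, not impact.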


The mapping problem is not going away

The paper closes with an insight that should keep every enterprise leader awake:

"As models grow more capable, the set of activities they can perform expands, making the search space larger and the mapping problem harder. A firm that has figured out where to use today's AI will likely face the same discovery problem again when next year's models can do more."

This is not a one-time exercise. It is a continuous discipline. The firms that build the organizational muscle to map AI into production, govern it, and measure firm-level outcomes will compound their advantage. The firms that wait for the technology to become obvious will always be one cycle behind.

The mapping problem is not a technology problem. It is a leadership problem. And it starts with walking the full chain.




This post is part of "the inference," a series on enterprise AI strategy and architecture.

Build with conviction. Govern with discipline.

Nidhi Vichare