If you have ever run an enterprise AI pilot, you have probably watched it die at day 90. Not because the model stopped working. Because something else gave out first.

The research on enterprise AI names the pattern, even if it does not always name the causes. MIT's NANDA project puts the failure rate around 95 percent. S&P Global Market Intelligence reports 42 percent of initiatives abandoned in a single year. BCG finds that only 25 percent of firms generate material value. The studies measure different things, but the curve is the same: strong early progress, plateau around day 60, stall around day 90.

The five failure modes below are the ones that show up in nearly every stalled pilot we have been called into. We will name each one, describe what it looks like from the inside, and give the structural fix. Structural, not motivational.

1. The pilot is scoped around a demo, not a workflow

What it looks like: The POC works in a sandbox. Somebody gave a great internal demo. Leadership was impressed. Then the team tries to embed it in the actual workflow and realizes the inputs are messy, the outputs do not route anywhere, and nobody owns the step where a human checks the result.

Why it fails: A demo proves a capability. A workflow proves an outcome. The gap between them is 80 percent of the work and zero percent of the presentation.

The fix: Scope the pilot around a workflow from day one. Before any model is touched, write down the end-to-end path: who triggers it, what data it reads, where the output goes, who reviews it, and what happens on the next iteration. If you cannot describe this in one page, you are not ready to build. You are ready to design.
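For illustration, here is what that one page can look like when forced into a data structure. The field names and the invoice example are assumptions, not a standard; the point is that every field must have a concrete answer before the build starts.

```python
from dataclasses import dataclass

@dataclass
class WorkflowSpec:
    """One-page pilot scope. All fields are illustrative, not a standard."""
    trigger: str         # who or what starts a run
    inputs: list[str]    # data sources the system reads, by name
    output_target: str   # the system the result must land in
    reviewer: str        # the named human who checks the result
    on_rejection: str    # what happens when the reviewer says no

# Hypothetical example for an invoice-coding pilot:
spec = WorkflowSpec(
    trigger="new invoice hits the AP inbox",
    inputs=["invoice PDF", "vendor master", "GL code history"],
    output_target="accounting system draft entry",
    reviewer="AP lead",
    on_rejection="route to manual queue with the model's reasoning attached",
)
```

If any field comes back as "TBD," that is the design work the demo skipped.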

2. Nobody owns the last-mile integration

What it looks like: The model returns a good answer. The answer has to land in a CRM, a case management tool, an accounting system, a ticketing queue, or a calendar. That last hop is always described as a "small integration." It is never small. The "small integration" is where the project quietly stops.

Why it fails: Integration work requires access, permissions, schema knowledge, error handling, and ongoing maintenance. If no single person has both the authority and the engineering skill to finish it, it will not finish.

The fix: Name the integration owner on day one, with protected budget and calendar time. The owner must be an engineer, not a PM. If you do not have an internal engineer who can own it, this is one of the clearest cases where the Applied AI Firm shape wins. We run the last mile as part of the work, not as overhead.
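To make "never small" concrete, here is a minimal sketch of that last hop, assuming a hypothetical ticketing API. The endpoint URL, payload fields, queue name, auth, and retry policy are all illustrative assumptions, not a real system's contract.

```python
import time
import requests

TICKET_API = "https://tickets.example.internal/api/v2/tickets"  # hypothetical endpoint

def push_answer(answer: dict, max_retries: int = 3) -> str:
    """Land a model output in the ticketing queue.
    Schema mapping, queue name, and retry policy are assumptions."""
    payload = {
        "title": answer["summary"][:120],   # downstream field limit: assumed
        "body": answer["text"],
        "queue": "ai-review",               # the human-review step, made explicit
        "source": "pilot-classifier-v1",
    }
    for attempt in range(1, max_retries + 1):
        resp = requests.post(TICKET_API, json=payload, timeout=10)
        if resp.ok:
            return resp.json()["id"]        # ticket id, per the assumed schema
        if resp.status_code < 500:
            # 4xx: bad payload or missing permissions; retrying will not help
            raise RuntimeError(f"ticket API rejected payload: {resp.status_code}")
        time.sleep(2 ** attempt)            # back off on transient 5xx errors
    raise RuntimeError("ticket API unavailable after retries")
```

Even this toy version forces the questions that stall pilots: who holds the credentials, what the field limits are, and where a rejected payload goes.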

3. The business case was never dollarized

What it looks like: The pilot was justified with words like "efficiency," "productivity," and "insights." Nobody built a number. When leadership asks at day 90 whether the pilot is working, nobody can answer in a way that maps to the P&L.

Why it fails: Pilots that cannot report a dollar impact at day 90 get killed, even if they are working. The reason is structural: CFOs cannot defend a budget line without a number.

The fix: Before kickoff, write a one-paragraph dollarized baseline: how much this work is costing the business today, on what measurement, with what source of truth. Then write the target outcome in the same unit. At day 30, day 60, and day 90, report against that baseline. Even a rough number beats a qualitative anecdote at budget time.
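As a worked example, here is the arithmetic for a hypothetical document-triage pilot. Every number is made up; the structure, one baseline and one target in the same unit, is the part that carries over.

```python
# Dollarized baseline for a hypothetical document-triage pilot.
# Every number below is an assumption; replace with measured values.
cases_per_month = 1_200        # from the case management system (source of truth)
minutes_per_case = 18          # from a two-week time study: assumed
loaded_rate_per_hour = 85.0    # fully loaded analyst cost: assumed

baseline_monthly_cost = cases_per_month * (minutes_per_case / 60) * loaded_rate_per_hour
# 1,200 cases * 0.3 h * $85/h = $30,600 per month

target_minutes_per_case = 7    # pilot target, stated in the same unit as the baseline
target_monthly_cost = cases_per_month * (target_minutes_per_case / 60) * loaded_rate_per_hour

print(f"baseline: ${baseline_monthly_cost:,.0f}/mo, target: ${target_monthly_cost:,.0f}/mo")
print(f"claimed impact if target is hit: ${baseline_monthly_cost - target_monthly_cost:,.0f}/mo")
```

The paragraph version of this fits in three sentences. The point is that at day 90 the answer to "is it working?" is a number against $30,600, not an adjective.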

4. Nobody owns operating the system once it ships

What it looks like: The build team finishes the pilot, claims the win, and moves to the next project. Ownership of the running system is vaguely assigned to "the business" or "IT." Within four to six weeks, something upstream changes — a data source, a vendor API, a schema, a policy — and the system quietly stops working. Nobody notices for a while. By the time somebody does, the pilot is labeled a failure.

Why it fails: AI systems are not software in the 1998 sense. They drift. They assume stable inputs. They assume a calibrated human in the loop. They need operational discipline that nobody handed off.

The fix: Treat "operate" as a separate funded phase, not a postscript. Assign a named owner. Instrument the system with enough monitoring to know when it is drifting. Write the runbook before the system ships, not after. This is the third leg of the Applied AI Firm shape — diagnose, build, operate — and it is the leg most delivery models skip.
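As one example of "enough monitoring," here is a minimal sketch of a single drift signal: the rate at which the human reviewer rejects outputs, compared against the rate measured at launch. The window size, threshold, and choice of metric are assumptions; a real system would track several signals.

```python
from collections import deque

class RejectionDriftMonitor:
    """Alert when the human-rejection rate drifts above the launch baseline.
    Window size and tolerance are illustrative assumptions."""
    def __init__(self, baseline_rate: float, window: int = 200, tolerance: float = 0.10):
        self.baseline = baseline_rate       # rejection rate measured at launch
        self.recent = deque(maxlen=window)  # rolling window of review outcomes
        self.tolerance = tolerance          # allowed drift above the baseline

    def record(self, rejected: bool) -> None:
        self.recent.append(rejected)

    def drifting(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data to judge yet
        rate = sum(self.recent) / len(self.recent)
        return rate > self.baseline + self.tolerance

# Usage: feed it every reviewed output; page the named owner when drifting() flips.
monitor = RejectionDriftMonitor(baseline_rate=0.08)
```

Ten lines of monitoring and a named pager beat a postmortem at week six.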

5. The change management required was never acknowledged

What it looks like: The tool works. The integration is in. The dollar baseline is written. But the team does not use the tool. Or they use it performatively. Or they use it and then do everything the old way afterward, so nothing actually compounds.

Why it fails: AI changes where judgment lives and which tasks carry status. Both are politically charged. A pilot that does not engage directly with the change it is asking of people will not survive contact with the actual team.

The fix: Identify the one person inside the company who owns the change. Not the sponsor. The owner. Someone who will sit with the team, answer questions, adjust the process, and absorb the friction. If you cannot name this person, do not start the pilot. The research is consistent on this: the firms that capture value from AI have internal owners. The firms that do not, do not.

The common thread

Every one of these failure modes has the same shape: a piece of work that is essential to the outcome, not glamorous, not demo-able, and therefore not owned. When something essential is not owned, it does not happen. The pilot stalls.

This is why the Applied AI Firm model we argue for in the Implementation Gap is structured as diagnose → build → operate: a single engagement with a single owner. The failure modes live in the seams between phases. Collapsing the seams collapses the failures.

If you are running a pilot that looks like it is heading toward the day-90 wall, a short conversation is usually enough to name which of the five is doing the damage. That is a call we are always willing to have.