Why most AI projects fail before a line of code is written
The model is almost never the problem. The problem is almost always the problem.
I've watched three separate teams spend a year on the same AI project and fail for the same reason. Each team blamed something different. One blamed the model. One blamed the data. One blamed "organizational readiness." All three had framed the problem wrong on day one and never went back to fix it.
The hidden cost of a vague brief
Most AI engagements start with a sentence like "we want to use AI to improve customer retention." That sentence has four words doing work — AI, improve, customer, retention — and not one of them is actually defined.
What kind of retention? Logo churn, seat churn, revenue churn? Measured how? Over what window? With what baseline? Would a 1% absolute lift count as success, or a 10% relative lift, and over what cohort? And why do we think AI is the right lever versus onboarding, or product, or pricing?
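The gap between those two success criteria is larger than it sounds. A quick sketch of the arithmetic, assuming a hypothetical 80% baseline retention rate:

```python
# Hypothetical baseline: 80% of customers retained over the window.
baseline = 0.80

# "1% absolute lift" means adding one percentage point to the rate.
absolute_target = baseline + 0.01          # 0.81

# "10% relative lift" means multiplying the rate by 1.10.
relative_target = baseline * 1.10          # 0.88

# The relative framing demands an improvement eight times larger.
gap_absolute = absolute_target - baseline  # one point
gap_relative = relative_target - baseline  # eight points
```

Until the brief says which of these it means, "improve retention" can be off by nearly an order of magnitude.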

The easiest thing a team can do in month one is agree to be vague on purpose. The framing is deferred to the engineering team, who — being engineers — focus on what they can control: the model. A year later, the model works. Nobody can prove it made any difference, because the metric was never pinned down.
Where I've seen this go wrong
At a media company I'll leave nameless, we spent eight months on a recommendation engine that shipped a 0.6% lift in click-through on homepage stories. The engineers did good work. The model was sound. But the real problem — that the homepage was the wrong surface to optimize, because 70% of reading happened from email — never got argued out at the brief stage. We were answering a different question than the one that mattered.
At an industrial services company, a team built a fantastic predictive-maintenance model. It was never used. Technicians didn't trust the output because it was delivered as a probability percentage and their language was "red, yellow, green." One framing conversation in week one would have saved twelve months.
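The fix is almost embarrassingly small once the mismatch is named. A sketch of the missing translation layer, with cutoffs invented for illustration (in practice they'd be set with the technicians, not for them):

```python
def traffic_light(failure_probability: float) -> str:
    """Translate a model's failure probability into the operators' language.

    The 0.7 / 0.3 cutoffs here are illustrative assumptions, not values
    from any real deployment.
    """
    if failure_probability >= 0.7:
        return "red"      # schedule maintenance now
    if failure_probability >= 0.3:
        return "yellow"   # inspect at next opportunity
    return "green"        # no action needed
```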
How to frame a problem so an AI project doesn't die
Three tests I now run on every engagement before we commit to a build.
- State the metric as a number with a direction and a window. "Net revenue retention, trailing-12-month, from 107% to 115%, by end of FY."
- State the counterfactual. If we don't do this, what happens? If the answer is "same as today," you probably don't have a problem worth AI.
- State who changes behavior when the system is live. If the answer is "nobody, it just runs in the background," great — that's an autonomous agent. If the answer is "a human operator reads the output," then the UX is 60% of the project and you should budget accordingly.
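The three tests above amount to a brief that refuses to exist until it's complete. A minimal sketch, with field names invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class ProblemBrief:
    """A project brief that fails loudly when any of the three tests is skipped."""

    metric: str           # e.g. "net revenue retention, trailing-12-month"
    current_value: float  # e.g. 1.07
    target_value: float   # e.g. 1.15
    window: str           # e.g. "by end of FY"
    counterfactual: str   # what happens if we do nothing
    behavior_change: str  # who acts differently when the system is live

    def __post_init__(self) -> None:
        # Test 1 and the window: a metric with a direction needs all four parts.
        for name in ("metric", "window", "counterfactual", "behavior_change"):
            if not getattr(self, name).strip():
                raise ValueError(f"brief is incomplete: '{name}' is empty")
        # Test 2: a do-nothing world identical to today fails the brief.
        if self.counterfactual.strip().lower() == "same as today":
            raise ValueError("counterfactual is 'same as today'; not a problem worth AI")
```

Nothing about this requires code, of course. The point is that every field is a sentence someone has to write in week one, and an empty field is a decision being deferred, not avoided.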
The one rule
Nobody regrets spending more time on the problem statement. Every team that's burned me has burned me by moving off it too fast.
If you want to talk about a problem that isn't framed yet, that's what we're for. Fifteen minutes, no deck, honest answer.