Most AI strategy decks are written backwards
Starting from the technology and working back to the problem is why most AI features miss
The AI strategy deck has a standard format by now. Slide one: some version of "AI is changing everything about [your industry]." Slides two through eight: a list of features (AI-powered search, intelligent recommendations, automated summaries, a conversational interface). Slide nine: a roadmap. Slide ten: projected revenue impact.
What you will not find: a single slide naming a specific problem a specific user has, with a number attached to how often it happens or how much it costs.
The features are described in terms of what the model does, not what the user gets, not what changes for them on a Tuesday afternoon when they are behind on a deadline. The AI is the subject of every sentence.
This is not a presentation problem. It is a thinking problem. The deck reflects how the roadmap was actually built: by asking what AI makes possible, then working backward to justify why it matters.
That ordering (technology first, problem later or never) is why most AI features produce good demos and poor retention. A PwC survey from early 2026 found that only 39% of organisations report any bottom-line impact from AI investment. The issue almost never turns out to be the model. It is problem selection.
Why the order produces the result
When you start from what the technology can do, you gravitate toward use cases where the technology performs well in a 10-minute pitch. The features you put on the roadmap are optimised for a live demo with a clean input, not for a user who has been on the platform for 90 days with messy, real-world data.
There is a subtler issue: capability-first thinking conflates "the AI can do X" with "users need X done." Autocomplete is technically impressive. Whether it makes users faster depends on whether they are bottlenecked on keystrokes, and most knowledge workers are not.
There is also a selection effect on who builds these features. Teams starting from capability are usually responding to a model release or a demo they saw at a conference. The problem they are solving is often implicit: a vague assumption that users will find a use case once the feature ships. Sometimes that happens. More often, the usage data six months out tells a different story.
Product decisions that stick are defined by a problem, then a desired outcome, then, if warranted, a technology. AI is an implementation detail. A good AI feature should be describable without mentioning AI at all: "users find the answer they need in under 30 seconds", or "managers complete performance reviews in half the time." If the feature description requires you to mention AI to explain why it is valuable, you have described the implementation, not the outcome.
The tell
Here is a simple test. Take any AI feature on your roadmap and write its success metric without using the word "AI."
- "Users who engage with our AI assistant have higher retention." That is a proxy, not a metric.
- "The AI summarises contracts in under 10 seconds." That is model performance, not a user outcome.
- "Users resolve support tickets without escalating." That is a user outcome.
If you cannot write the metric without referencing the technology, the problem statement is probably still missing.
The second tell: the feature was easy to scope because it was a UI surface. "Add a chat panel on the right." Difficulty in scoping usually tracks how well you understand the problem. Features that felt easy to spec may just be cases where you skipped the hard part.
One feature that went backwards, and two that did not
The backwards one: a legal-tech product adds AI contract review because demo convention has made it a standard feature. The model highlights risk clauses and generates summaries. Users try it, find it confident about clauses it misreads, and quietly stop using it. Nobody had measured how long contract review was actually taking, what specifically users were missing, or whether "faster" and "accurate" were in tension for their case load. They were.
A forward one: an ops workflow product notices from support tickets that users repeatedly ask "why did this automation trigger?" They are not confused about how to use the product — they are confused about their own data, and they want an explanation in plain English. The team writes the success metric first: "users can explain a rule trigger to a colleague without opening a second tab." They build an AI explanation feature aimed directly at that. Adoption is immediate because the problem existed before the solution.
Another forward one: an HR product observes that managers who use review templates complete reviews 40% faster than those who do not, yet only 30% of managers use templates, because setting one up takes longer than just typing. They build a feature that drafts a template from calendar data and recent notes. The problem was measured before any model was involved. AI turned out to be the right tool, but that decision started from the problem, not the capability.
What the two forward examples share: AI could have been replaced by a different approach — better documentation, a simpler UI, a curated template library — and the team would have been fine with that if it worked. The technology was chosen because it was the best tool for a specific, already-measured problem.
| Dimension | Technology-first | Problem-first |
|---|---|---|
| Starting point | What can the model do? | What is the user struggling with? |
| Feature definition | What the AI does | What changes for the user |
| Success metric | References the technology | Could be met without AI |
| Demo vs. retention | Strong demo, weak retention | Modest demo, high retention |
| Alternative paths | Not considered | Explicitly evaluated before choosing AI |
| Failure mode | Users try it once and stop | Feature takes longer to scope; ships better |
What a forward AI strategy document looks like
It starts differently. Not with what AI can do — with a list of the top reasons users do not get value from the product. Support tickets, churn interviews, session recordings, whatever your data source is.
For each problem: what does 10x better look like? Not "add AI." What does the user's working week look like if this is fully resolved? Write that as a success metric. Then ask: could AI help? Is it necessary, or would a simpler approach work as well? If AI is the answer, what does it need to get right 95% of the time for users to trust it?
The forward AI strategy document is less impressive to read. It does not have slide headers like "The Future of [Product Category]." It has tables: "Problem: managers do not know what to write in performance reviews. Frequency: 60% in our last survey. Current workaround: ask direct reports. Success metric: time to submit a review, down from 45 minutes to under 20. AI needed: yes — the data is unstructured and varies per user."
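Teams that keep the roadmap in a repo rather than a deck sometimes capture the same entry as structured data, which makes the "could this metric be written without AI?" check easy to run in review. A minimal sketch in Python; the `RoadmapEntry` class and its field names are invented here for illustration, not a prescribed format:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RoadmapEntry:
    """One problem-first roadmap item; fields mirror the table described above."""
    problem: str               # what the user is struggling with
    frequency: str             # how often it happens, with the data source
    current_workaround: str    # what users do today instead
    success_metric: str        # should be writable without the word "AI"
    ai_needed: bool            # decided last, after the problem is measured
    rationale: Optional[str] = None  # why AI (or a simpler approach) was chosen


# The performance-review example from this section, expressed as data.
review_templates = RoadmapEntry(
    problem="Managers do not know what to write in performance reviews",
    frequency="60% of managers in the last survey",
    current_workaround="Ask direct reports what to write",
    success_metric="Time to submit a review drops from 45 minutes to under 20",
    ai_needed=True,
    rationale="The input data is unstructured and varies per user",
)
```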
The board presentation is less exciting. The sprint planning is much cleaner. When a feature is defined by a user outcome rather than a model capability, the acceptance criteria write themselves. You know what done looks like. You know what failure looks like. You do not need to wait six months to find out whether anyone cares.
Less compelling as a deck. More useful as a spec — which is the only document that matters once the quarter starts.
The one question worth asking before the next deck
"If we did not have access to any AI technology, would this feature still be a priority?"
If yes: you have a real problem. AI might make the solution better — or it might not, and you will find that out quickly either way.
If no: you have a demo. It might be a good demo. It might win a deal or get a press mention. It probably will not be something users keep using six months in.
The 39% figure from PwC is not an indictment of the technology. The models are capable enough for most workflows teams want to automate. The gap is in how teams decide which workflows to pick. Start there, and the AI strategy deck mostly writes itself.