A Head of Product I coach ran a two-hour strategic planning session with her engineering directors. She had prepared. Had slides. Had a clear framework for moving from business goals to prioritized work.
It fell apart in the first twenty minutes.
One director questioned whether the business goals were real because leadership had not explained the context behind them. Another wanted to jump straight from goals to solutions, skipping problem identification entirely. A third was visibly disengaged. The session produced two hours of frustration and no decisions.
This is what ideation loops look like inside teams. Not the lone founder spinning at 2am, though that happens too. More often it is a room full of smart people who have lost the thread between thinking and deciding, and nobody has named it yet.
The defeatism trap vs. the skills gap
People resist planning and stall in ideation for two different reasons, and they require completely different fixes.
The first is defeatism. 'Everything changes anyway, why plan?' This is not a planning problem. It is a trust problem. The team has been through enough reorganizations, priority pivots, and abandoned roadmaps that planning feels like theater. They are not stuck because they cannot decide. They are stuck because they do not believe the decision will hold.
The second is a genuine skills gap. They do not know how to move from a business goal to a problem statement to a testable hypothesis. They want to jump from 'we need to grow revenue' to 'let's build this feature' because the middle step, identifying the specific friction blocking growth, is hard and uncomfortable and exposes what you do not know.
The tell: if someone says 'we have tried that before,' they are usually expressing defeatism, not a legitimate objection to the approach. If someone says 'I do not understand how this goal connects to what I should build,' that is a skills gap. Both are real. Neither gets fixed by thinking harder.
The 9-month project that worked on paper
I am working with a gaming company on AI integration. Nine months ago, one of their teams ran an AI copilot project. They shipped it. Accuracy came in at 85%. Adoption came in at 20.5%.
The project succeeded on the metric it was built to optimize and failed completely in practice.
The post-mortem revealed the team had been evaluating non-deterministic AI output with deterministic success criteria. They had measured 'is the output accurate?' and stopped there. They had not measured 'does anyone use this, and if not, why?' They ran the ideation loop with the wrong feedback signal, got a green light, and shipped something that did not work.
That is not a story about AI. It is a story about what happens when you optimize for outputs instead of outcomes. The ideation loop runs cleanly. The feedback loop never closes.
What 'editor, not author' looks like
When a team is stuck, spinning, resistant, swimming in abstraction, the worst thing you can do is ask them to start from scratch. Blank documents do not break ideation loops. They deepen them.
What works: give people something at 30% and ask them to bring it to 80%. Advance the document between sessions. Let them edit rather than create. The psychology is different. Editing is concrete. Creating is exposed. A team that is editing a rough draft is generating feedback, not generating ideas, which is what you need to move.
The corollary for individuals: the thing that breaks the loop is almost never another thinking session. It is external contact with reality. A customer conversation, a live test, a decision forced by a deadline.
At EverQuote, when we were building the case for expanding beyond auto insurance, the question 'which vertical is best?' could not be answered from the inside. We ran small paid acquisition tests in home and life before we had built anything, to see whether consumers would engage with us outside auto. The signal from those tests answered in three weeks a question that had been theoretically open for months.
When to slow down
Not everything is reversible. There are decisions where the cost of being wrong is genuinely high and genuinely hard to undo: entering a market that requires regulatory licensing, a platform architecture choice that constrains you for three years, a leadership hire into a role that is hard to unwind.
The problem is not that people are reckless with those decisions. The problem is that they treat reversible decisions like irreversible ones. The analysis applied to 'should we rebuild our core data infrastructure?' gets applied to 'should we test a new onboarding headline?' Those are not the same decision.
Casey Winters, who has evaluated hundreds of marketplace and product businesses, frames it simply: what is the actual cost if this is wrong? If the answer is 'we learn something and adjust,' that is not a reason to keep deliberating. That is permission to move.
The real fix
The Head of Product whose planning session fell apart? We did not run the session again with a better agenda. We invited the company's President to open the next session and present the business goals directly, so the engineering directors heard the context from the source, and my client was positioned as the person closest to leadership rather than a messenger defending goals she had not set.
That single structural move resolved the defeatism problem. The skills gap, how to move from business goals to problem identification to testable work, got addressed in two follow-up sessions.
Find the actual source of the stall. Then fix that specific thing.