The tools are remarkable. Secondary research that used to take two weeks now takes two days. Competitive landscapes, market sizing, segmentation frameworks — all of it compresses when you know how to use what's available. Anyone who tells you otherwise hasn't tried.
But after running engagements with these tools built into the work, here's what I can tell you: the bottleneck in most consulting engagements was never the research. It was always the decision-making and the buy-in. AI hasn't touched that.
The thud-factor trap
It has never been easier to produce an impressive-looking deliverable. A 90-slide deck with a full competitive landscape, a detailed market model, and a polished executive summary — that used to take weeks. Now it takes days.
That's useful. It raises the floor. There's less excuse for thin analysis or loose structure.
But it also makes it easier to confuse output with outcome. Two things actually matter in any engagement: did you make the right call, and does the team that has to implement it believe in it enough to move? A massive handover document answers neither question. It can work against you. Organizations that receive a comprehensive strategy deck without having been part of building it tend to comply with it rather than own it. Compliance and ownership produce very different results over a 12-month implementation.
The engagements I've seen create real value have always required disproportionate time on the alignment work — helping leadership teams pressure-test assumptions together, surface the disagreements nobody's saying out loud, and get to a decision the people in the room are genuinely prepared to act on. No tool compresses that.
Speed is the real shift
AI changes the pace at which organizations can test and iterate. That's the part that matters most to me. Running a pricing experiment, validating a channel hypothesis, stress-testing a market entry assumption — all of it is faster now. The organizations that have internalized this are moving at a different pace than those that haven't.
The uncomfortable implication: if testing is cheaper and faster, the cost of not testing has gone up. Spending six months building conviction through analysis before running a single experiment isn't the cautious approach anymore. It's the slow one. The edge goes to the organizations that can form a hypothesis, run a cheap test, learn something, and go again.
This changes how I scope engagements. Less time on comprehensive market documentation. More time on identifying the two or three assumptions the strategy actually depends on — and figuring out the fastest way to test each one.
What this means in practice
The best use of AI in this work is to compress the parts that were always a means to an end — data gathering, initial structuring, first-draft synthesis — so there's more time for the parts that were always the actual point. Building a clear view of what to do. Creating the conditions for the organization to do it.
The tool is not the work. The work is helping organizations make better decisions faster and build the internal conviction to execute. That hasn't changed. If anything, compressing the research phase makes it clearer than ever where the value actually gets created.
Joe Griffith is the founder of Atalaya Insights, an independent strategy advisory practice focused on commercial due diligence, growth strategy, and value creation for private equity.