The Compounding Problem: How AI Leaders Are Pulling Away While Everyone Else Runs Pilots
EY's survey shows 96% of companies investing in AI reporting gains. The uncomfortable part is what they're doing with them.
The AI flywheel is real, and it’s widening the gap
You can feel it in leadership meetings right now.
Some teams talk about AI like it’s a set of handy tools. A faster way to draft, search, summarize, and move on.
Other teams talk about AI like it’s a new operating system for the company. Something that changes how work moves, how decisions get made, and what the business can ship in a quarter.
EY’s latest AI Pulse Survey lands right on that split. And the most important takeaway isn’t a model name or a trendy use case. It’s the compounding effect. The flywheel.
Companies that are already seeing real gains are taking those gains and feeding them back into more capability, more skills, and more resilience. Which creates more gains. Which funds the next round. Meanwhile, the organizations that are still dabbling aren’t just moving slower. They’re falling further behind because the leaders are speeding up every cycle.
That’s the story. And it’s a little uncomfortable, because it means the “we’ll catch up later” plan is getting more expensive every month.
What the survey is really saying in plain English
A lot of AI coverage gets trapped in tool chatter. This survey is more useful because it’s about behavior. What senior leaders say is happening inside the business. Where the wins are showing up. And what they’re doing with those wins.
The headline is simple: almost everyone investing in AI says they’re seeing productivity gains. Not marginal gains, either. In the survey, 96% report AI-driven productivity gains over the past year, and 57% describe those gains as significant.
But the more interesting detail is what separates the winners. It’s not “we bought more tools.” It’s “we integrated AI into the way work actually gets done.”
That’s a big difference.
You don’t get compounding results from isolated pilots. You get compounding results when AI is embedded in workflows, connected to real data, trained into the habits of teams, and governed well enough that people trust it.
That’s when speed becomes structural.
The flywheel: gains don’t get banked, they get reinvested
Here’s the flywheel in one sentence: productivity gains become fuel.
What surprised me in the survey is where leaders say that fuel goes. The most common destination for AI-driven gains isn't layoffs or short-term cost cuts. It's reinvestment.
Among organizations seeing gains, leaders say they’re putting the benefits back into expanding existing AI capabilities, building new AI capabilities, strengthening cybersecurity, and pushing more into R&D. Upskilling existing employees shows up strongly too, along with hiring external talent.
And only a minority say the gains translated into reduced headcount.
That pattern matters because it explains why the gap widens. Leaders aren’t stopping at “nice, we saved time.” They’re turning saved time into new capacity. New capacity into better service and faster delivery. Better delivery into more growth. And then growth into more investment.
It’s the compounding loop you see in other big shifts: cloud, data platforms, automation, even CRM adoption back in the day. Early movers don’t win because they were early. They win because they keep reinvesting while others hesitate.
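The compounding loop is easy to see with simple arithmetic. Here's a minimal sketch contrasting a team that banks a fixed gain each quarter with one that reinvests a share of each quarter's gain into more capacity. All numbers are hypothetical, chosen only to illustrate the shape of the curve, not taken from the survey.

```python
# Hypothetical illustration of the reinvestment flywheel described above.
# Two organizations start with the same quarterly gain; one pockets it,
# the other reinvests a share, which raises the next quarter's gain.

def banked(gain_per_quarter: float, quarters: int) -> float:
    """Total gain when each quarter's gain is pocketed, never reinvested."""
    return gain_per_quarter * quarters

def reinvested(initial_gain: float, reinvest_rate: float, quarters: int) -> float:
    """Total gain when a share of each quarter's gain compounds into new capacity."""
    gain, total = initial_gain, 0.0
    for _ in range(quarters):
        total += gain
        gain *= 1 + reinvest_rate  # reinvested capacity lifts the next cycle
    return total

# Assumed numbers: 5 units of gain per quarter, 20% reinvested, two years.
print(round(banked(5, 8), 1))             # 40.0
print(round(reinvested(5, 0.20, 8), 1))   # 82.5
```

The exact rates don't matter; the point is that the two lines diverge every cycle, which is why "we'll catch up later" gets more expensive each quarter.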
Scale matters, and it creates second-order advantage
Another finding that should change planning conversations: the scale gap.
Organizations investing $10M+ in AI are far more likely to report significant productivity gains than those investing under $10M. That’s not a moral judgment about spending. It’s a signal about what it takes to move from experiments to enterprise change.
Below a certain level, most of what you can do is learn. You can run pilots, build excitement, test a few processes, and get some quick wins.
But once you try to make AI real across the business, costs show up fast. Not just model and tool costs. The unglamorous parts:
Clean data and access controls. Security reviews. Legal review. Process redesign. Training. Change management. Measurement. Integration work. Support. Governance. Documentation.
That’s why scale matters. Not because bigger budgets are inherently smarter, but because enterprise AI is not a weekend hackathon. It’s operational change.
And once you do it, you unlock second-order advantages that are hard to copy quickly. Faster cycle times. Lower rework. Better consistency. Better knowledge transfer. More time for senior people to focus on the hard judgment calls.
Those gains then fund more change. Back to the flywheel.
The quiet problem: most companies can’t measure the value cleanly
This is the part I’d underline for any CEO, COO, or CFO.
A huge share of leaders say AI-driven gains are a key metric they’re evaluated on. But many also admit they struggle to tie specific productivity gains directly to AI adoption. And most say they need more training on how to report AI-driven gains in a way that shows real value.
That should set off alarms.
Because if you can’t measure impact, you can’t manage it. And if you can’t manage it, you can’t scale it. The flywheel stalls out, not because AI stopped working, but because the company can’t prove where it’s working.
I’ve seen this play out in real life. Teams feel the difference day-to-day. Fewer stalls. Faster drafts. Better first passes. Shorter cycles. And yet the dashboard still looks the same because no one agreed on what to track.
So the business keeps funding pilots instead of scaling what’s already working.
The fix is not complicated, but it takes discipline. Pick a small set of metrics that connect to how work flows, not vanity stats.
Cycle time. Throughput. First-pass quality. Rework rate. Time-to-decision. Customer response time. Close rate. Claim handling time. Matter turnaround time. Choose what fits your business.
Then run simple before-and-after measurement on a few workflows that matter. Make it boring. Make it repeatable. Let it compound.
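To make the "boring, repeatable" part concrete, here's a minimal sketch of a before-and-after comparison for one workflow metric. The workflow, sample values, and sampling cadence are hypothetical stand-ins; the only point is that the measurement is a few lines of arithmetic once you agree on a baseline.

```python
# A minimal before-and-after measurement sketch for one critical-path
# workflow. Values are hypothetical; weekly samples of cycle time in days.
from statistics import mean

def pct_change(baseline: list[float], current: list[float]) -> float:
    """Percent change in the average metric from baseline to current period."""
    return (mean(current) - mean(baseline)) / mean(baseline) * 100

baseline_cycle_days = [9.0, 8.5, 9.5, 10.0]  # before AI was embedded
current_cycle_days = [7.0, 6.5, 7.5, 7.0]    # after

change = pct_change(baseline_cycle_days, current_cycle_days)
print(f"cycle time change: {change:.1f}%")  # negative means faster
```

Run the same calculation weekly, per workflow, and the attribution problem described above shrinks: you have a baseline, a trend, and a number the CFO can audit.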
The expectation gap is a warning sign for 2026 plans
The survey also shows a planning-to-reality gap that’s worth calling out.
A year ago, leaders expected higher levels of AI investment than many say they actually made. And looking forward, they expect AI to take a far larger share of total budget next year than it does today.
That gap usually comes from one of three places.
First, readiness got overestimated. Data isn’t ready. Security isn’t ready. Ownership isn’t clear. The organization wants results without changing how it works.
Second, AI got treated like IT spend instead of business change. So the dollars got stuck in procurement cycles, approval chains, and pilot purgatory.
Third, the company got wins but didn’t build the story and measurement to justify scaling. That loops back to the attribution problem.
If you’re planning 2026 right now, this matters. Ambition is not execution. If you want a meaningful jump, you need a plan that includes process change, training, governance, and measurement, not just tool spend.
Build vs buy is shifting, and it changes your talent strategy
One more signal that I think executives should take seriously: the balance between building custom solutions in-house and buying solutions is shifting.
Leaders report allocating a smaller share to building custom AI in-house than they previously expected, and that share has dropped again in the most recent period.
That’s a practical choice. Many companies are deciding they don’t want to become software companies. They want outcomes, and they want them fast.
But that choice comes with a trade: the core skill shifts from “build models” to “run AI well.”
Vendor choices matter more. Integration matters more. Governance matters more. Training matters more. And the ability to set clear rules and standards matters more.
Because buying tools doesn’t create advantage by itself. How you embed them into the business does.
Responsible AI is no longer optional if you want speed
The survey points to a steady rise in Responsible AI training and a growing focus on ethical use and transparency with customers.
This is not a box-check exercise. It’s part of scaling.
The faster you move, the more one incident can freeze adoption. A data leak. A biased decision. A customer surprise. A compliance miss. A bad output that slips through without review.
The companies that move quickly over time aren’t the ones taking the biggest risks. They’re the ones building enough trust to move quickly. People adopt faster when they know what’s allowed, what’s not, and what good looks like.
Rules and guardrails aren’t the enemy of speed. They’re what makes speed sustainable.
What I’d do Monday morning if I were in your seat
If you take the flywheel seriously, it should change what you do in the next 90 days. Not later. Now.

1. Pick 2–3 workflows that sit on your critical path
Choose work that directly drives revenue, service delivery, or cost. Not side projects. Critical path.

2. Define “good” with simple metrics before you scale
Track cycle time, throughput, quality, and rework. Get a baseline. Then measure weekly.

3. Embed AI end-to-end, not as a bolt-on
Connect it to real data, put human checkpoints in the right places, and redesign the workflow so AI is part of the work, not extra work.

4. Train the people who run the business, not just a specialist team
Make it role-based. Show what to do, what not to do, and how to review outputs.

5. Decide how you’ll reinvest the gains
Don’t let the “saved time” vanish. Make an explicit choice: more capability, more security, more training, more R&D. That’s the flywheel.
The line that sticks
The gap isn’t widening because some companies have better prompts.
It’s widening because some companies are building a compounding system, and others are still running experiments.
The flywheel doesn’t wait.
I write these pieces for one reason. Most leaders do not need another summary of an AI survey; they need someone who will sit next to them, look at how gains are actually flowing through the business, and say, “Here is where your flywheel is working, here is where it’s stalling, and here is what’s worth reinvesting in next quarter.”
If you want help sorting that out for your company, reply to this or email me at steve@intelligencebyintent.com. Tell me what you’re selling, where you’re seeing early AI wins, and where the measurement or scale-up is stuck. I’ll tell you which workflows I’d instrument first, how I’d structure the reinvestment conversation with your CFO, and whether it even makes sense for us to do anything beyond that first assessment.