Your AI Licenses Aren't the Problem. Your Culture Is.
A million businesses have access to the same AI. Most of them are still stuck at the pilot stage.
What OpenAI’s New Enterprise AI Report Really Means For Your Business
TL;DR:
AI at work is no longer a side experiment; usage is exploding and starting to look like core infrastructure.
The heaviest users are doing more advanced work across more tasks, and they are already seeing real impact on revenue, cost, and speed.
Laggards are not blocked by access to tools; they are blocked by behavior and culture, and that gap is starting to matter.
If you have been wondering whether the AI hype is cooling off or turning into something real inside companies, OpenAI just put out a pretty clear signal.
They released a brand-new State of Enterprise AI report, based on usage across their paid business products. It is their own customer base, so it skews toward early adopters, but it is fresh and detailed, and the shape of the data is hard to ignore. The leaders have moved from “trying AI” to “building on AI,” and the usage curves now look a lot more like ERP or CRM adoption than a fun side project.
Let me walk through what stood out and why it matters if you are running a team or a firm.
AI has moved from pilots to core infrastructure
The scale is the first thing that hits you.
OpenAI reports more than 1M businesses using their tools and roughly 7M paid workplace seats. That is not just startups and Big Tech anymore.
Inside that base, usage has gone vertical. ChatGPT Enterprise message volume is up roughly 8x year over year. Reasoning workloads, the heavier thinking jobs like complex analysis and planning, are up around 320x per org.
Seats on the enterprise products are up about 9x. On the API side you see the same story: more than 9,000 organizations have pushed over 10B tokens through the API, and close to 200 of them have crossed the 1T token mark. Those are “this runs in production” numbers.
You can also see the shift in how teams package the tech. Usage of Custom GPTs and Projects, which is how people turn raw models into reusable assistants around their own data and workflows, is up 19x. Roughly 20% of enterprise messages now run through those custom tools instead of straight into the base model.
If your organization is still stuck at “we spun up a pilot and wrote a memo,” a chunk of your peers is already two or three steps ahead of that.
Workers are quietly changing how they work
The worker side of the data is where most executives are still squinting. Is this actually changing how people work, or is it just another tab open next to Slack?
Across almost 100 enterprises and roughly 9,000 workers, about 75% say AI has improved the speed or the quality of their work. On average, people report saving 40–60 minutes per active day. In roles like data science, engineering, and communications, that climbs closer to 60–80 minutes.
That is the obvious part. The more interesting part is what they are doing with that extra time.
Around 75% of workers say they can now complete tasks they simply could not do before. Things like:
Writing and debugging code, even with light or no formal engineering background
Building and fixing spreadsheets that used to sit in the “ask IT when they have time” queue
Drafting SOPs, presentations, policies, and client communications in hours instead of weeks
Coding usage is climbing across the board, not just in engineering. Outside of engineering, IT, and research, coding-related work grew around 36% in six months. Once you have a model that can write, explain, and fix code, “I am not technical” becomes less of a hard stop.
In plain English, non-technical teams are starting to absorb little slices of engineering and analytics work, because the model does the heavy lifting and they guide it.
What real enterprise use actually looks like
The case studies make this feel real.
Lowe’s built an assistant called Mylow for store associates and shoppers. It now answers close to 1M questions every month. When customers use it, online conversion more than doubles. In stores, associates get faster answers and customer satisfaction scores tick up.
Intercom used OpenAI’s Realtime API to power voice support. They cut latency for their AI voice agent by about 48%, and that agent now fully resolves roughly 53% of all calls. Those are calls that used to always need a human.
Moderna talks about a core analysis step in their Target Product Profiles, a key part of drug development, that used to take weeks and now takes hours. That is not just “folks make slides faster.” That is timeline risk coming down in a business where every month of delay matters.
When you zoom out, the pattern is consistent. We have moved beyond “write me a better email.” The gains are showing up where P&L owners actually care: revenue lift, cost reduction, shorter cycle times, and better quality.
Leaders are starting to pull away
Now the uncomfortable part.
OpenAI slices usage into “frontier workers,” roughly the 95th percentile of users by adoption, and everyone else. Those frontier workers send about 6x more messages than the median employee. For coding work, the gap jumps to 17x.
At the company level you see the same thing. The most advanced firms send roughly 2x more messages per seat than the median firm, and about 7x more messages to Custom GPTs. They are not just “using AI more,” they are building more internal assistants and using those heavily.
Even inside active deployments there are pockets of underuse. Among people who already count as monthly active on ChatGPT Enterprise:
About 19% have never used the data analysis tools
Around 14% have never tried the stronger reasoning models
Roughly 12% have never used search
Daily heavy users look completely different. Once people live in these tools, they start stacking features, and the gap widens.
It is the same picture across sectors and countries. Every industry is moving, but tech and a few other verticals are moving faster. Some countries are growing business usage faster than the global average, even if their total numbers are smaller.
This is not really about who can get access anymore. Most large firms can turn on enterprise access in a week. The split now is about behavior, skills, incentives, and governance.
What this means if you run a company
If I had to boil this report down for a CEO, CFO, or managing partner, it would be this: the tools are here, the productivity gains are real enough to matter, and your main risk now is cultural, not technical.
Here is what I would do next.
Treat AI like infrastructure, not a side project.
Stop thinking in terms of “the AI initiative.” Start asking where models should sit inside core processes: sales, support, legal review, finance, HR. Anywhere you have structured information and repeatable logic, there is probably a place to plug this in.
Measure depth, not just access.
“We rolled out licenses” is the starting line, not the finish line. Track how many people are active, how often they use data analysis, reasoning, and Custom GPTs, and how many key workflows now depend on these tools. That is where the 40–60 minutes a day shows up.
Turn one-off wins into shared tools.
When someone builds a prompt that saves them 2 hours a week, do not let it die in their notebook. Turn it into a simple internal assistant around your own data and share it. You do not need a giant internal app store. You need a small shelf of tools that everyone actually uses.
Invest in skills and change, not just licenses.
The laggards in this report are not short on models. They are short on training, expectations, and psychological safety. People need concrete examples of what “good” AI-assisted work looks like in their role, and they need explicit permission to change how they do their jobs.
Hunt for pockets of non-use inside your own walls.
Look at where usage is low, by team and by role, and ask why. Sometimes there is a valid constraint, like regulatory risk. Often it is incentives, skeptical managers, or people afraid of looking dumb in a new tool.
One caveat. This is OpenAI’s view of their own customers, which means it is the front of the wave, not the whole ocean.
But if you want a preview of where serious companies are heading over the next 12–24 months, this is a pretty solid early snapshot.
The leaders are not waiting for perfect answers. They are learning in public, stacking small wins, and the compounding has already started.
I write these pieces for one reason. Most leaders do not need another summary of OpenAI’s latest report; they need someone who will sit next to them, look at where AI usage is actually stuck inside their company, and say, “Here is why adoption is lagging in these teams, here is where behavior change will unlock the biggest gains, and here is how we build internal tools that people actually use.”
If you want help sorting that out for your company, reply to this or email me at steve@intelligencebyintent.com. Tell me what you sell, how many seats you have deployed, and where usage is falling short of what you expected. I will tell you what I would measure first, which teams I would focus on, and whether it even makes sense for us to do anything beyond that first diagnostic.