The Great AI Split: Humans Want Advice at 10 PM, Systems Want Automation at Scale
Fresh data from OpenAI and Anthropic reveals writing beats code, context beats price, and your best ROI might be helping people edit emails faster
Today, OpenAI and Anthropic published fresh usage research on ChatGPT and Claude. It is rare to get two large, credible windows into real behavior on the same day, one from a consumer vantage point and one from enterprise rails. For executives, this is not another model scorecard. It is a look at what people actually do with these systems and where the value shows up first.
The short version: AI use is splitting. At home and around the desk, people ask for guidance and help with writing, and a growing share of that activity happens outside work hours. Inside companies, models are wired into workflows as quiet automation, often through APIs. Price plays a role, but the ceiling you hit is context, data quality, and fit to the task. If that sounds familiar, it should. I see it in finance closes, support queues, and proposal edits every week. What follows translates today’s findings into a simple plan you can run this quarter.
People Ask, Systems Act
Three out of four consumer messages are now off the clock, and despite the hype around code, the bulk of them ask for guidance and help with writing. That line from today’s research hit me like a cold splash of water. It fits what I see at work and at home. My friend’s teenager plans a science fair project with ChatGPT. A sales VP I know quietly rewrites tough emails at 10:42 p.m. on a Sunday. This is not fringe behavior anymore. It is habit.
Inside the enterprise, the picture shifts. Claude’s enterprise usage skews toward API calls that get work done behind the scenes. Less chat, more jobs. Fewer questions, more runs. That is a second habit, and it is forming just as quickly.
If you run a P&L, this matters. Two AI economies are emerging. Consumer behavior is an exoskeleton for thinking, and enterprise behavior is invisible automation inside systems.
What People Actually Do: Advice and Editing Beat Code
Let me say the quiet part out loud. Coding is not the killer app at consumer scale. Writing is. On the ChatGPT side, the most common patterns are drafting, revising, and practical guidance. Think outlines, tone shifts, executive summaries, first drafts of memos, then more revisions. In workplace chat, the biggest lift is not green-field generation, it is editing your own words faster and with less stress.
Satisfaction also tilts toward advice. When people ask for judgment, tradeoffs, or explanations, they rate the results higher than when they ask the model to directly do something end-to-end. That makes sense. Advice is legible. It feels collaborative. You can accept it, tweak it, or ignore it.
Another signal: non-work messages now make up the majority of consumer traffic, and they are growing faster than work messages. Do not shrug this off. Home behavior spills into office behavior. If your team learns to outline college essays and vacation plans with an assistant today, tomorrow they will draft sales one-pagers and board pre-reads the same way. Procurement follows habit.
Two more shifts I keep seeing in the data. Usage is moving toward gender parity, and the share of adult messages from users under 26 is substantial, approaching half. Translation: enablement must speak to a broad base, and you should give your Gen Z super-users real responsibility as internal coaches.
Inside the Company: Automation Scales Until Context Fails
On enterprise rails, the story flips. Claude’s API traffic is dominated by automation-style work. Summarize a thousand tickets into themes. Convert PDFs to structured rows. Classify product feedback. Generate baseline code for a specific function. Trigger a workflow in a data pipeline. The user is not a person with a cursor, it is a system with a queue.
But here is the rub I keep seeing in the data and in practice. Success is capped by context, not cost. Teams push longer prompts and bigger inputs, then hit diminishing returns. Without the right scaffolding and high-quality context, the model works hard and still misses the point. Add clean product catalogs, mappings, and identity data, and accuracy jumps. Add retrieval that actually finds the right snippet, and you stop arguing about token prices.
There is also a power-law distribution inside firms. A small set of tasks accounts for a large share of total usage. This is not a bug. It is how value concentrates. If you spread effort across thirty small pilots, you get lots of demos and little impact. Pick three to five blockbusters and wire them in front to back.
One more myth to retire. Price is not the main lever, at least not yet. Capability and task fit explain more of the variance. Commodity runs matter for scale, of course, but the business case is won or lost on use-case quality, data plumbing, and reliability. Stop pitching the CFO on cheap tokens, start pitching on fewer rework hours and faster cycle time on decisions.
The Quarter You Can Win: A Short, Practical Plan
Here is the short version I am using with executive teams.
1) Choose your blockbusters.
List the top 20 recurring tasks by frequency and pain. Circle the five where three things line up: the task is well understood, the needed data exists or can be created, and there is a clear owner who will live with the outcome. Examples I am seeing succeed: first-pass proposal edits, customer email triage with suggested replies, ticket summarization into themes, meeting note cleanup with action extraction, month-end close checklists with variance explanations.
2) Build a decision fabric before chasing full automation.
Most work is not a single click. It is a loop: pull information, interpret it, write it down, decide, then push an action. Productize that loop. Create reusable prompts, checklists, and templates for review and recommend steps. Store them like code. Give people a big blue button that says “Review and advise,” not just “Do it.” The soft stuff, judgment, drives the hard ROI because it reduces rework and escalations.
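"Store them like code" can be taken literally. A minimal sketch of a versioned "Review and advise" template, with hypothetical template text and field names standing in for whatever your workflow actually sends:

```python
# Illustrative sketch: a review-and-advise prompt stored as a versioned
# constant, like code. Template wording and fields are hypothetical.
REVIEW_TEMPLATE_V2 = """\
You are reviewing a {artifact_type} for {audience}.
1. List the three biggest risks or gaps.
2. Explain the tradeoffs of each suggested change.
3. Rewrite only the first paragraph as an improved draft.
Do not take any action; advise only.
"""

def build_review_prompt(artifact_type: str, audience: str, text: str) -> str:
    """Assemble the prompt a 'Review and advise' button would send."""
    header = REVIEW_TEMPLATE_V2.format(artifact_type=artifact_type,
                                       audience=audience)
    return header + "\n---\n" + text

prompt = build_review_prompt("customer proposal", "a CFO",
                             "Our proposal covers three workstreams...")
print(prompt.splitlines()[0])
```

Because the template is a named, versioned artifact, a change to the review checklist is a reviewable diff rather than a tribal-knowledge edit in someone's chat history.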
3) Invest in context pipelines.
Context is the new capex. Treat data enrichment and retrieval as a capital project, not an afterthought. Build or buy a service that assembles each task’s memory: the right documents, IDs, taxonomies, and exceptions. Keep score. Measure answer changes when you update the knowledge base. If accuracy does not move, you are feeding the wrong context.
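"Keep score" can be as simple as a fixed question set run before and after each knowledge-base update. A sketch under stated assumptions: `answer_fn` and the gold answers are hypothetical stand-ins for your retrieval pipeline and eval data, and containment matching is a deliberately crude scoring rule:

```python
# Illustrative sketch: score a fixed eval set against gold phrases before and
# after a knowledge-base update. Pipelines and answers are hypothetical.
from typing import Callable

def accuracy(answer_fn: Callable[[str], str], gold: dict[str, str]) -> float:
    """Fraction of eval questions whose answer contains the gold phrase."""
    hits = sum(1 for q, expected in gold.items()
               if expected.lower() in answer_fn(q).lower())
    return hits / len(gold)

gold = {
    "What is the return window?": "30 days",
    "Which plan includes SSO?": "enterprise",
}

# Stand-ins for the pipeline before and after enriching the knowledge base.
def before_update(q: str) -> str:
    return "I could not find that in the docs."

def after_update(q: str) -> str:
    answers = {
        "What is the return window?": "Returns are accepted within 30 days.",
        "Which plan includes SSO?": "SSO ships with the Enterprise plan.",
    }
    return answers[q]

delta = accuracy(after_update, gold) - accuracy(before_update, gold)
print(f"accuracy moved by {delta:+.0%}")
```

If the delta stays flat after an enrichment push, that is the signal from the paragraph above: you are feeding the wrong context, not too little of it.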
4) Measure what people feel, not only what the system does.
Track cycle time and defect rates, yes. Also track satisfaction on advice outputs. People are more forgiving of a model that gives clear reasons and a decent draft than of a model that tries to do everything and occasionally makes a quiet error.
5) Design for adoption inequality.
Usage is unequal today, and adoption is broadening fast. Create two paths. A quick-wins path for teams ready to go, and a coaching path for teams that need examples, scripts, and office hours. Make the internal champions visible, and give them simple rules of thumb. My favorite: when in doubt, ask the assistant to explain the tradeoffs and write the first paragraph you wish you had.
6) Do not overfit to code.
Yes, coding help has real value. For broad workforce impact, writing is bigger. Most white-collar hours are words. If you can make your company ten percent better at email, one-pager clarity, and meeting notes, you just moved the needle on the whole org.
Say This In Your Leadership Meeting
AI splits in two: a copilot for people, a robot for systems. Consumer behavior is telling you where the comfort zone is: advice and editing. Enterprise usage is telling you what scales: automation where context is cheap. Your job is to bridge them.
So here is my closing ask. This quarter, pick five blockbusters. Fund the context work. Productize decision support with visible buttons and guardrails. Track satisfaction next to throughput. And when someone asks whether the plan should be chat or API, say yes. People at home are learning a new way to think with machines, and your systems at work are learning a new way to get work done. If you align those habits, the value shows up fast. If you do not, you will be left with pilots that demo well and never move the numbers.
Moving Forward with Confidence
The path to responsible AI adoption doesn't have to be complicated. After presenting to nearly 1,000 firms on AI, I've seen that success comes down to having the right framework, choosing the right tools, and ensuring your team knows how to use them effectively.
The landscape is changing quickly: new capabilities emerge monthly, and the gap between firms that have mastered AI and those still hesitating continues to widen. But with proper policies, the right technology stack, and effective training, firms are discovering that AI can be both safe and transformative for their practice.
Resources to help you get started:
In addition to regularly publishing AI thought leadership, I work directly with firms to identify the best AI tools for their specific needs, develop customized implementation strategies, and, critically, train their teams to extract maximum value from these technologies. It's not enough to have the tools; your people need to know how to leverage them effectively.
For ongoing insights on AI best practices, real-world use cases, and emerging capabilities across industries, consider subscribing to my newsletter. While I often focus on legal applications, the broader AI landscape offers lessons that benefit everyone. And if you'd like to discuss your firm's specific situation, I'm always happy to connect.
Contact: steve@intelligencebyintent.com
Share this article with colleagues who are navigating these same questions.