AI Workslop isn’t an AI problem. It’s a management problem.
Your team's polished PowerPoints are actually expensive paperweights, and ChatGPT isn't the one to blame
NOTE: I just got back from a fabulous 9-day vacation in Europe, ending with 4 days of celebrating at Oktoberfest. For the most part, I tried to ignore the AI news, but a few things really jumped out at me. This topic was one of them. Enjoy.
I’ll go first. I have shipped workslop. A polished deck, tidy sentences, confident tone, and yet the person on the other end had to do the actual thinking. On my side, it felt like progress. On theirs, it felt like grit in the gears. If you lead teams, you have probably seen the same pattern in your inbox. That is why the topic is everywhere right now.
Here is how I define it in plain language. AI workslop is content that looks finished but does not move the task forward. It arrives with formatting and flourish, sometimes even citations, but it lacks context, a point of view, or a clear ask. The burden shifts to the receiver, who now has to reconstruct the problem, verify claims, and figure out next steps. Multiply that by a few incidents a week, and you end up with real costs in both time and trust. Survey snapshots over the last two weeks keep landing in the same range. Roughly four in ten desk workers say they received this kind of work recently. Teams report that each incident takes close to two hours to unwind. For a 10,000-person company, that rolls up to millions a year in quiet drag.
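To see where "millions" comes from, run the back-of-envelope with me. The rates here are my own illustrative assumptions, not figures from the surveys, so swap in your numbers: if 4,000 of those 10,000 people each absorb just two incidents a month at two hours apiece, that is 16,000 lost hours a month. Price those at a blended $50 an hour and you get $800,000 a month, nearly $10 million a year, before you count the trust tax.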
Why the surge? First, volume jumped. Many companies green-lit broad AI use this summer, which meant more drafts, more slides, more memos. Second, the scoreboard did not change. We still reward visible output: page count, ticket count, meeting count. AI makes it easy to produce more of everything, so people do, and then point to the pile. Third, we trained for prompts and skipped the part that matters: editing and verification. So first drafts, which are impressively clean now, go out dressed up as finals.
This is fixable, and you do not need another license to fix it. You need clearer standards, stronger editing, and different incentives. Said differently, you need management.
Recommendation 1: publish a one-page “definition of done” for AI-assisted work
If you only do one thing, do this. Write down what "done" means in your org and make everyone use it. Mine reads like this when I roll it out with clients. Before you hit send:

- State the job to be done and who will act on it.
- List the evidence you used and what is still unknown.
- Name your key assumptions and constraints.
- Say what decision you recommend and what you want the reader to do.
- Add two or three lines on what you changed from the first AI draft and why.
- Finish with a simple disclosure: "AI-assisted, human-edited by [your name]."
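For a feel of how little space this takes, here is an illustrative header, with the client and numbers invented for the example: "Job: decide whether to renew the Acme contract; decision owner: VP Ops. Evidence: the last two quarterly invoices and the usage export; churn risk is still unverified. Assumption: headcount stays flat through Q1. Recommendation: renew at the lower tier; please confirm by Friday. Changes from the AI draft: cut the market overview, corrected the pricing table against the invoices. AI-assisted, human-edited by [name]."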
Now set the norm that if any of that is missing, the receiver bounces it back without drama. No shame, just a clear, polite return to sender with a note to fill the gaps. In my experience, that one habit change flushes out most of the empty calories within two weeks. People think before they generate. They put their fingerprints on the output. Hand-offs get cleaner. Your best people, the ones who always end up doing last-mile context work, get hours back that month.
Recommendation 2: make editing and verification the core skill
Most AI training programs spend ninety minutes on prompting tips and nine minutes on editing. Flip it. Teach managers how to turn a decent AI draft into a decision-ready artifact from your world. Take a real example and walk through the surgery in front of the team. Cut filler. Keep strong verbs. Trace any number to a source you would stake your name on. Add the policy quirk that changes the answer in your company. Rewrite the opening so a director can decide in three minutes.
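To make the surgery concrete, here is an invented before-and-after at sentence scale, not from a real client. Before: "Leveraging advanced AI capabilities, the organization is well positioned to potentially realize meaningful efficiencies across key workflows." After: "Claims can close files two days faster if we automate intake summaries; the catch is our OCR vendor contract, which locks the current format until March." The second version names the actor, a number you could verify, and the company quirk that changes the answer.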
Give people a short mental checklist they can carry into every task. Ask if the facts have been checked rather than assumed. Ask if the limits are clear, what is not known, and what risks sit at the edge. Ask if the ask is explicit, not implied. Ask whether a typical peer could act on it quickly. Ask where the human added judgment. Then pair juniors with an “AI editor” for two sprints so they get reps. The change you will notice first is confidence. The change you will notice second is speed, because fewer drafts bounce back and forth.
Recommendation 3: measure value delivered, not volume sent
People do what you count. If you count artifacts, you will get artifacts. If you count clarity and action, you will get outcomes. I like two simple team metrics. Time to clarity: the minutes it takes a typical peer to understand and act on your work. Rewrite ratio: the share of your outputs that receivers had to substantially redo. Lower is better on both. Talk about these in standups and one-on-ones. Recognize the person who sends a one-pager that gets a yes in five minutes. Surface the places where work keeps getting rewritten and either fix the process or delete the task. You will discover that some of what you shipped last quarter did not need to exist at all.
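To show how the scoring works, take an invented sprint, not client data: the team ships 12 artifacts, receivers substantially redo 3 of them, so the rewrite ratio is 25 percent, and spot-checks show a typical peer needs about 20 minutes to act on the average memo. Next sprint's targets can be just as plain: rewrite ratio under 10 percent, time to clarity under 10 minutes. Crude numbers, but they give the standup something concrete to move.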
A quick story from my own desk makes the point. I once sent a client a 13-slide deck that read smoothly and matched their brand. The reply was a single line: "What do you want us to do in the next fourteen days?" That was the tell. I rewrote the whole thing as a two-page brief. One paragraph on the cost of the status quo by Q4, three decisions with cost ranges and owners, and a short table of risks we would watch. Same research, same models, different product. The meeting ran 18 minutes and ended with real commitments. I should have started there.
Leaders, you have to model this. Bring your own before-and-after to the next team meeting: the raw AI draft on one side, your edited version on the other. Narrate what you cut, what you checked, and where you added company context. Say out loud that speed comes from smooth hand-offs, not from flooding the channel. Then set a two-minute pause rule. Before anyone sends, they glance at the definition of done, fill the gaps, and only then ship. Culture follows what leaders show and what they repeat.
If you want a simple personal playbook to anchor this shift, try starting every AI task by writing the ask first, one line at the top of the doc. Generate a few options rather than one, then keep the best pieces and delete the rest. Spend more time on the edit pass than the first prompt.
Never label something done if you cannot explain the audience, the evidence, the limits, and the next action in a short paragraph. Reward people who make decisions easy for others.
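If it helps to see that first habit, here is the kind of line I might type before touching the model, invented for illustration: "Ask: give the CFO three options to cut vendor spend 10 percent by Q2, each with an owner and a risk." Everything the model generates afterward gets judged against that one line.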
Objections come up quickly. "This will slow us down." For a week, maybe two, yes. Then it speeds everything up because the silent rework goes away. "My team is already busy." Exactly why you need this. Busy teams drown in slop first. "We need better tools." Tools help, I build them for clients, but tool swaps without new norms tend to make the pile taller.
The headline here is not “AI is failing.” The headline is “management sets the conditions for quality.” Treat AI like a strong junior. Juniors draft fast. Seniors edit hard. Teams ship. When you set clear standards, teach the edit, and count what matters, the flood recedes and the real work shows up again. And if you want a one-page version of the definition of done for your wiki, I am happy to share the template I use.
Moving Forward with Confidence
The path to responsible AI adoption doesn’t have to be complicated. After presenting to nearly 1,000 firms on AI, I’ve seen that success comes down to having the right framework, choosing the right tools, and ensuring your team knows how to use them effectively.
The landscape is changing quickly. New capabilities emerge monthly, and the gap between firms that have mastered AI and those still hesitating continues to widen. But with proper policies, the right technology stack, and effective training, firms are discovering that AI can be both safe and transformative for their practice.
Resources to help you get started:
In addition to publishing AI thought leadership on a regular basis, I also work directly with firms to identify the best AI tools for their specific needs, develop customized implementation strategies, and, critically, train their teams to extract maximum value from these technologies. It’s not enough to have the tools; your people need to know how to leverage them effectively.
For ongoing insights on AI best practices, real-world use cases, and emerging capabilities across industries, consider subscribing to my newsletter. While I often focus on legal applications, the broader AI landscape offers lessons that benefit everyone. And if you’d like to discuss your firm’s specific situation, I’m always happy to connect.
Contact: steve@intelligencebyintent.com
Share this article with colleagues who are navigating these same questions.