Your Kid's Degree Won't Teach What Employers Actually Need
AI is absorbing the grunt work that trained every white-collar professional. No one's figured out what replaces it.
I’ve been listening to the conversations about AI in Davos, and I keep coming back to a moment that doesn’t look like much.
Late night. Kitchen mostly dark. My daughter Charlotte’s laptop is open on the counter, that little pool of light on the tile. A half-finished application essay in one tab. A job board in another. And a question nobody says out loud, but everyone in my house feels.
Is the path we’re on still the path?
I have a senior in college right now. A junior. And a high schooler watching the whole thing like he’s standing on a shoreline, trying to figure out what kind of ocean he’s about to step into.
If you take the “AGI in 12 to 24 months” timeline seriously (and maybe Dario is wrong and it’s really 5 years), the first thing that breaks isn’t some abstract labor market. It’s the apprenticeship ladder that built every white-collar career I know.
The hidden deal that’s about to collapse
Here’s how prestigious careers actually work. You start at the bottom doing the repetitive stuff. It’s not glamorous. Sometimes it’s mind-numbing. But it trains you. You watch how the next level thinks. You get corrected. You earn your way up.
Law, consulting, banking, marketing, and even psychology. Same structure everywhere.
And most of that bottom-rung work? It’s made of tasks AI is already absorbing.
I know because I’ve watched it happen. Last month, I ran a live demo for a 100-plus-person law firm. We took a real case and had Claude work through the initial research, draft the first memo, and pull the relevant precedents. Stuff that would’ve taken a first-year associate most of a day. The AI did a credible first pass in about eight minutes.
The partner in the room got very quiet. Then she said something I haven’t stopped thinking about: “So what do we hire the new people to do?”
That’s the question.
The training was hidden inside the grunt work
Here’s what most people miss. The entry-level tasks weren’t just labor. They were stealth training.
When a young lawyer grinds through document review, they’re not just checking boxes. They’re learning to spot the thing that doesn’t fit. When a junior consultant builds the same competitive analysis for the twentieth time, they start to see patterns the template can’t capture. When a first-year banker rebuilds a model at 2 a.m. because a partner changed one assumption, they’re learning how small changes ripple through a deal.
If AI absorbs that work, we don’t just lose the labor. We lose the training.
And nobody’s figured out what replaces it.
What actually becomes valuable (it’s not what you’d guess)
I’ve spent the last year testing AI tools on real client workflows. Law firms. Marketing teams. Financial analysis. And I keep noticing the same thing.
AI is shockingly good at the middle of the work. The drafting. The summarizing. The “here are the relevant data points organized in a table.” It’s gotten good enough that a competent professional can complete routine tasks three or four times faster.
But AI is bad at the edges. The beginning, where you have to figure out what question you’re actually trying to answer. And the end, where you have to decide if the answer is right, what’s missing, and what to do about it.
That’s judgment. And judgment is what you develop by doing a lot of work, making mistakes, and getting corrected by people who know better.
So the irony is brutal. The tools that eliminate entry-level work are the same tools that make entry-level judgment more valuable than ever. We just don’t have a good way to build it anymore.
College is teaching for a world that’s already gone
Most colleges are still stuck in the “is this cheating?” phase.
Meanwhile, the job market has already moved to “this is the baseline.” I talked to a marketing director last week who told me she assumes every applicant knows how to use AI. What she’s hiring for is the person who knows when the AI is wrong.
That’s a different skill. And almost no curriculum is designed to build it.
College is still shaped by the assumption that content is scarce. Lectures deliver information. Papers prove you absorbed it. Exams test recall.
But content isn’t scarce anymore. Neither is drafting. Neither is basic analysis.
What’s scarce is the ability to frame a problem worth solving, direct tools toward it, verify the output, and make a decision when the answer isn’t clean.
I don’t see that in most syllabi.
What I actually tell my kids (not the LinkedIn version)
When I talk to Alex, Charlotte, and Max, I skip the inspirational stuff. Here’s what I actually say.
To Alex, the senior: Your first job matters more than your first salary. Find a place where you’ll build judgment fast. A team with real responsibility, where someone will tell you when you’re wrong, and where the work touches real stakes. The brand name means nothing if you spend two years doing work an AI could do.
To Charlotte, the junior: This is your year to break things. Take the class that scares you. Do the internship that has no clear outcome. Build something you can point to. Not a resume bullet. An actual thing. A project with your decisions visible in it.
To Max, the high schooler: Learn to sit with hard problems longer than feels comfortable. The temptation to let AI do your thinking will be everywhere. Use it like a tutor, not like a replacement for your brain. And read books. Real ones. The kind where you have to hold the whole argument in your head for 300 pages. That’s training for the kind of thinking that’s about to get very valuable.
And to all three of them, I say the same thing.
The question isn’t whether you can produce output. AI can produce output. The question is whether you can tell when the output is wrong, fix it, and explain why it matters.
That’s the job now.
The gap between institutions and reality is going to hurt
Universities move slowly for reasons that aren’t stupid. Governance. Accreditation. The real fear that students will replace learning with shortcuts.
But AI isn’t waiting for committee approval.
So we’re going to live in the gap. Institutions will lag. Students will have to figure it out themselves. And the ones who learn to use AI responsibly, who build the verification habit, who develop judgment even without the old apprenticeship path, will pull ahead in ways that won’t show up on transcripts.
The students who avoid AI out of fear, or use it only to cut corners, will graduate looking identical on paper. Then they’ll hit the job market and discover the paper doesn’t match the work.
The uncomfortable truth
I don’t know exactly how this plays out. Nobody does. The 12-to-24-month timeline might be wrong. The tools might plateau. Regulation might slow things down.
But I’ve seen enough to know the direction. And I’d rather prepare my kids for a future that might arrive early than a past that’s already fading.
The old deal was: work hard, follow the path, and the system will take care of you.
The new deal is: build the skills the system can’t automate, because the system is being rebuilt around you.
That’s not a tragedy. It might even be an opportunity. But only if we stop pretending the old path still works.
Stephen, this hit home. I've been fully involved in a daily relationship with pro versions of Gemini, ChatGPT, and Google's NotebookLM since their release, while dabbling in a few other commercial generative models. I've got 3 daughters in high school (Audrey is a Junior and Maddie & Kennedy are Sophomores), I'm active on a steering committee for Audrey's performing arts charter school, and this topic is very much under review and scrutiny there, with lots of emotions circling all around it. I love how you landed the article around the new deal we should be clear-eyed about: "The new deal is: build the skills the system can’t automate, because the system is being rebuilt around you." Even in a creative arts industry, where the fears of AI replacement run highest, this is sage advice. Preach, my brother!