Humans Were the Horses. Ken Griffin Isn't Hedging.
The work you thought was safe isn't. The kids you thought would be fine aren't. And the moats you thought protected you just got filled in.
TL;DR: Ken Griffin sat down at Stanford in April and gave the most honest CEO answer on AI I’ve heard yet. Work Citadel used to assign to people with master’s degrees and PhDs in finance, work that took weeks, is now being done by generic AI agents in hours. He went home one Friday depressed by it. Then he layered the K-12 data on top, and the picture got worse. This is a quick take on why I think his segment matters, and what it should force every leader, parent, and citizen to sit with.
I rarely get genuinely unsettled by a CEO interview.
But I watched one this weekend that I can’t shake. Ken Griffin, founder and CEO of Citadel, sat down with Amit Seru at the Stanford Graduate School of Business. The whole conversation is worth your time. About 19 minutes in, Amit asks him about AI. What Griffin says next is the most honest answer I’ve heard a sitting top-tier CEO give on this topic.
The Ferguson Line
Griffin opens by recounting a conversation he’d had the day before with the historian Niall Ferguson. Ferguson walked through the usual story of technology waves. Horse and buggy gets replaced by the car. You know how the narrative goes.
Then Ferguson lands the punch. “The issue with AI is that in the world of AI, humans were the horses.”
Griffin’s response on stage was “that’s a really depressing way to start the day.”
No hedging. No pivot to a feel-good line about co-pilots.
What He’s Seeing Inside Citadel
Then he told the audience what’s actually happening inside his firm.
Work that Citadel used to assign to people with master’s degrees and PhDs in finance, work that took weeks or months, is now being done by AI agents in hours or days. His phrase: “automated by a generic AI.”
These are not the mid-tier white-collar jobs people have been writing about for two years. These are the high-skill research roles. The ones that pay extraordinarily well. The ones we tell our kids to aim for.
Griffin said he went home one Friday actually depressed by it. That’s his word. Depressed.
When the guy running one of the most profitable hedge funds on earth tells a room at Stanford he went home depressed because of what AI did inside his own four walls, that’s not hype. That’s a data point worth taking seriously.
The Race
Griffin didn’t stop there, and neither should we.
He framed the next decade as a race. Jobs will get destroyed at some clip. New jobs will get created, hopefully fast enough to keep up. Griffin is an optimist on the creation side, and part of his case is that the moats that used to protect incumbents are getting filled in by the same tools doing the destroying.
His example was perfect. A friend handed a pet insurance business to his 25-year-old son. The kid built a workflow that scrapes social media for puppy photos, identifies the breed with image recognition, and fires off a custom note. “Congratulations on your new golden retriever. Buy Spot pet insurance.”
They sold the business a few weeks ago.
A billion dollars. With a B. Run by a 25-year-old. Using tools any of us can buy.
One caveat. I went looking for reporting to corroborate the billion-dollar number and couldn’t find any. Spot Pet Insurance was acquired by Independence Pet Holdings in 2024, but financial terms were never publicly disclosed. Griffin told the story on stage at Stanford. I’m passing it along as he told it, with the honest note that I can’t independently verify the price.
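Griffin didn’t spell out the implementation, but the pipeline he described has three plain steps: scrape posts, classify the breed, fire off a note. Here’s a minimal sketch of that shape. Everything in it is hypothetical: the function names, the tag-matching stand-in for a real image-recognition model, and the stub posts standing in for an actual social-media scrape.

```python
# Illustrative sketch of the scrape -> classify -> message pipeline Griffin
# described. All names and data here are invented for illustration; the real
# workflow would call a social-media API and a vision model.

from dataclasses import dataclass


@dataclass
class PetPost:
    owner: str
    image_tags: list  # tags a real image-recognition model might return


def classify_breed(tags):
    """Stand-in for an image-recognition call: match a tag against known breeds."""
    known_breeds = {"golden retriever", "labrador", "poodle", "beagle"}
    for tag in tags:
        if tag.lower() in known_breeds:
            return tag.lower()
    return None  # no recognizable breed in this post


def draft_note(post):
    """Turn a scraped post into a custom outreach note, or None if no breed found."""
    breed = classify_breed(post.image_tags)
    if breed is None:
        return None
    return (f"Congratulations on your new {breed}, {post.owner}! "
            "Buy Spot pet insurance.")


# Stubbed "scrape" results standing in for a real social-media feed.
posts = [
    PetPost("Alice", ["outdoor", "Golden Retriever", "puppy"]),
    PetPost("Bob", ["latte", "cafe"]),  # no pet here; gets filtered out
]
notes = [n for n in (draft_note(p) for p in posts) if n]
```

The point of the sketch isn’t the code; it’s how little of it there is. Each stage is now a commodity API call, which is exactly why a 25-year-old could assemble it.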
And Then He Brought Up Eighth Grade Math
Then Griffin pivoted to K-12 education, and the warning got sharper.
Roughly a quarter of American high school graduates are proficient in math. About a third are proficient in reading. In Illinois a couple of years ago, there were 53 public schools without a single student at grade level in math.
Read that again. Fifty-three schools. Zero students. At grade level. In math.
Now layer AI on top of that. We’re about to ask a workforce that already struggles with basic math and reading to compete with the best minds in China, India, and Europe, in a market where the old moats are gone and the pace of change is faster than any of us have seen.
This is the part of Griffin’s segment that should keep parents and policymakers up at night. The race he described between destruction and creation isn’t even the right framing for a large slice of the population. They can’t even hear the starting gun.
So What Do We Actually Do With This
I’m not going to pretend Monday morning has a tidy three-step answer. It doesn’t.
But here’s what Griffin’s segment forces all of us to sit with.
The highest-skilled work in your organization is no longer safe. Not the partner-level work. Not the senior analyst pool. Not the research function. None of it. I’ve watched managing partners at law firms this year realize the same thing Griffin realized in his hedge fund: research that used to take an associate two weeks is now coming back in an afternoon. And almost none of them have redesigned how junior talent actually learns judgment. They will eventually. The question is whether it happens on their terms or someone else’s.
If you’ve got kids, the floor on what they need just moved. And it didn’t move toward coding. It moved toward learning. The ability to keep learning, switch fields, and stay curious. That’s the real skill now. Griffin tells his new hires they haven’t finished learning. They’ve just started.
The small-versus-big dynamic also flipped. Twenty-five-year-olds with AI agents are real threats to incumbents who got comfortable behind their old moats. Pet insurance, a sleepy category, just produced a billion-dollar exit (again, a figure I couldn’t corroborate, but it’s the one Griffin gave, so I’m leaving it in). Pick your sleepy category. Someone’s coming.
And then there’s the K-12 number. I’ll be blunt. We can’t have a serious national conversation about AI policy if half our kids can’t read or do math at grade level. That’s not a partisan point. It’s arithmetic. We need to fix this, and the business community has every reason to be loud about it.
The Last Line
Griffin closed his answer with a question, not a victory lap. He said he doesn’t know where we’ll be in 20 years on AI. He just knows the people in that Stanford room get to help write what comes next.
I keep coming back to Ferguson’s line.
Humans were the horses.
It’s an uncomfortable framing. But the discomfort is the point. The horse didn’t get a vote on the car. We do.
That closing question is the right posture for the rest of us. The people running firms, raising kids, and writing policy are all going to be asked the same thing in five years: what did you actually do with what you saw coming? If you’re a firm leader trying to redesign your pipeline, your training, or your client work for what’s actually here, I’d be glad to compare notes. steve@intelligencebyintent.com.


