A Company That Ignored 163,000 Notices Just Shaped AI Copyright Law
Cox won by being a pipe. Most AI companies aren't pipes. That's the problem.
Image created by Nano Banana 2
The Supreme Court Just Gave AI Companies Their Best Copyright Defense. I Initially Thought It Was a Slam Dunk. It’s Not.
TL;DR: The Supreme Court ruled 9-0 in Cox v. Sony that knowing your users infringe copyrights isn’t enough to hold you liable. AI companies are celebrating. They should slow down and read the whole opinion. This protects “pipes,” not platforms that wink at infringement. And the biggest AI copyright fights aren’t even about outputs. They’re about training data, where nothing has been decided.
I spent most of Wednesday thinking this case was a clean win for AI companies.
A 9-0 Supreme Court ruling? A billion-dollar verdict tossed? A bright-line test that basically says “knowledge isn’t liability”? I mean, come on. If you’re running an AI company, what’s not to love?
Then I kept reading. The analysis pieces started landing. The copyright lawyers started weighing in. And I realized I’d been looking at one side of the coin.
So let me take you through this, because if you run a law firm, advise tech companies, or are trying to figure out where AI copyright risk actually lives right now, this case matters a lot. Just not in the simple way the headlines suggest.
The Backstory Is Almost Comically Bad for Cox
Quick setup. Cox Communications is one of the biggest ISPs in the country. Sony and a bunch of music labels sent Cox over 163,000 infringement notices across about two years. These were notices saying “hey, your subscriber at this IP address is pirating our music.”
Cox’s response? They terminated 32 accounts for infringement during that entire period. Thirty-two. Meanwhile, they cut off hundreds of thousands of people for not paying their bills. Priorities.
Oh, and there’s an internal Cox email where an employee literally wrote “F the DMCA!!!” Not a great exhibit.
A jury in 2019 hit Cox with a $1 billion verdict. The Fourth Circuit kept part of it alive. Cox appealed to the Supreme Court.
And Cox won. All nine justices.
What the Court Actually Said
Justice Thomas wrote the majority opinion (joined by Roberts, Alito, Kagan, Gorsuch, Kavanaugh, and Barrett) and set up a pretty clean two-part test. You’re only liable for contributory copyright infringement if you either (1) induced the infringement, or (2) provided a service tailored to infringement.
That’s it. Two doors. If you don’t walk through one of them, you’re not liable.
Just knowing your users are infringing? Doesn’t count. Knowing it’s happening over and over, for years, on a massive scale? Still doesn’t count. The Court drew a line between knowledge and intent, and that line is going to matter for a long time.
Cox won because internet access is obviously a general-purpose service. Cox didn’t market itself as a piracy tool. It formally told subscribers not to infringe, even if its actual enforcement was, let’s say, relaxed.
The Court also made an important move on the DMCA. Sony had argued that failing to qualify for the DMCA safe harbor should weigh against Cox. The Court basically said no, that’s not how this works. Not qualifying for a safe harbor doesn’t mean you’re automatically liable. Those are different questions.
Why AI Vendors Are Popping Champagne
Here’s the obvious read. If an ISP that ignored 163,000 infringement notices can’t be held liable, what about AI companies that actively build guardrails, publish terms of service prohibiting infringement, and train their models to refuse copyrighted outputs?
The Re:Create Coalition put it plainly right after the ruling: this protects lawful technologies from liability for third-party misuse. X argued in an amicus brief that ruling against Cox would create havoc for AI companies specifically, because it would let copyright holders sue platforms whenever users violated copyright law with their tools.
And you can see why the argument works. Claude, ChatGPT, Gemini... these are general-purpose tools. People use them for writing emails, debugging code, analyzing data, planning strategy, a thousand things that have nothing to do with copyright infringement. Under the Cox test, that profile looks pretty strong. No inducement. No tailoring. Massive lawful use.
For general-purpose text models, I think that’s probably right. If you’re Anthropic or OpenAI providing an LLM that people mostly use for business work, you look a lot like Cox providing internet access. Same basic shape.
Here’s Where I Changed My Mind
I want to be honest about my process here, because I think the thinking-out-loud part matters.
My first reaction was: great news, case closed, AI companies can breathe easy. Then I started reading the copyright lawyers. And three problems jumped out that I hadn’t fully thought through.
The first one is the biggest, and it’s kind of obvious once someone says it out loud. Cox was a pipe. Bits went through its network. It didn’t create anything. But when you type a prompt into an AI model and it produces an image of Spider-Man, who’s doing the copying? The user typed words. The platform’s servers generated the image. That’s not secondary liability. That might be direct infringement. The platform isn’t the pipe, it’s the copier. And Cox has nothing to say about direct infringement claims. Nothing. That theory doesn’t even need secondary liability.
The second problem: Cox is about outputs, what users do with the tool. It says zero about training. Whether scraping copyrighted books and articles to build the model in the first place is infringement is a totally different legal question. It’s a central question in the consolidated OpenAI MDL, which includes the NYT v. OpenAI suit and 15 other cases, where the court recently ordered OpenAI to produce tens of millions of ChatGPT conversation logs in discovery. It’s what drove the Bartz v. Anthropic case that settled for $1.5 billion. Cox doesn’t move any of those needles.
And third, not every AI company looks like Cox. There’s a real spectrum here. Text-based LLMs sit on one end, looking very pipe-like. Image generators land somewhere in the middle. And then you’ve got cases like Midjourney, where the plaintiffs allege the company showcased infringing character outputs as marketing, deliberately weakened guardrails, and built a business model around the appeal of generating copyrighted content. That’s not the Cox pattern. That’s the Grokster pattern, the 2005 case where the Supreme Court found liability because the company actively courted infringers. There’s a reason the Grokster precedent survived Cox.
Don’t Sleep on Sotomayor’s Concurrence
Two justices, Sotomayor and Jackson, agreed Cox should win but wrote separately to say the majority went too far. Sotomayor argued the Court didn’t need to box secondary liability into just two categories. She said the common law leaves room for aiding-and-abetting theories, and she warned that the majority’s approach could gut the entire DMCA incentive structure. Her logic: if knowledge plus continued service can never create liability risk, what ISP would ever bother responding to a takedown notice?
Here’s where it gets interesting for AI specifically. Under Sotomayor’s approach, intent to assist infringement can be inferred from knowledge when that knowledge is specific enough. Cox only knew that some IP address had triggered an automated notice. AI platforms see the actual prompt in real time. A user types “generate Spider-Man fighting Batman.” The platform reads that, processes it, and produces the output. That’s a fundamentally different kind of knowledge.
Two justices isn’t a majority. I know that. But concurrences have a way of aging into law. If you’re advising AI companies, you should be thinking about Sotomayor’s opinion now, not waiting until it shows up in a future majority.
What to Do Monday Morning
If you’re counseling clients on AI risk, or thinking about your own exposure, here’s what actually changes after Cox.
Your guardrails just became your legal armor. Content policies, copyright filters, refusals to generate copyrighted characters, all of it is now evidence of non-inducement. Every investment your AI vendor made in compliance infrastructure is now their best exhibit in court.
The enforcement target shifts to end users. AI vendors have a strong new argument that the person who typed the infringing prompt is the right defendant, not the platform. That matters for how copyright holders allocate their legal budgets.
The real fights are still ahead, and they’re about training data. Cox gives some breathing room on outputs. But whether ingesting millions of copyrighted works to build a model counts as fair use? Wide open. The OpenAI MDL is deep in discovery. Those cases will define the actual contours of AI copyright law far more than Cox will.
Watch for the “Grokster test” in AI cases. Courts are going to start sorting AI companies into categories. General-purpose tools with active guardrails? Strong position. Companies that looked the other way on infringing outputs, or worse, used them as marketing? They’re going to have a bad time. The question isn’t “are you an AI company?” It’s “do you look more like Cox or more like Grokster?”
And keep watching licensing deals. Disney and OpenAI signed a deal late last year to let Sora generate Disney characters, with Disney investing $1 billion. That deal collapsed this week when OpenAI killed Sora. But the impulse behind it, content owners licensing IP for AI use rather than suing over it, isn’t going away. The next deal is already being negotiated somewhere.
The Irony Writes Itself
I keep coming back to this. A company whose employee emailed “F the DMCA!!!” just handed AI companies their strongest copyright defense in two decades. An ISP that terminated 32 accounts in the face of 163,000 infringement notices is now the precedent for why knowledge doesn’t equal liability.
The lesson isn’t that you can be sloppy about copyright. Cox won despite that culture, not because of it. The lesson is that the legal test is about product design and intent, not internal emails. If your product is general-purpose and you formally discourage infringement, you’re in a strong position on secondary liability.
But only on secondary liability. The training cases are coming. The direct infringement arguments are coming. And some AI companies don’t look like innocent pipes at all.
This is the most important copyright case in 20 years. It’s a real tailwind for responsible AI companies. It’s not a silver bullet. And if you’re advising clients, the question just changed from “does the AI vendor know infringement could happen?” to “did the vendor intend it?” That’s a much harder thing to prove. The responsible players should be fine.
The irresponsible ones? Cox won’t save them.
If you read this far, you’re probably the person in your firm who gets the call when a client asks “what does this mean for us?” and expects a real answer, not a qualified shrug. That’s the tension: you need to be right about what this ruling changes and honest about what it doesn’t.
That’s the conversation I have every day with firm leaders and legal advisors working through exactly this kind of question. If you’re sorting out what Cox means for your AI strategy, your client advice, or your risk posture, I’d like to hear what you’re working through. Reach me at steve@intelligencebyintent.com. I’ll tell you what I think is solid ground and where I think we’re all still guessing.