Perplexity Has Citations. That's Not the Same as Judgment.
Your clients don't pay for links. They pay for judgment.
Perplexity is a great research tool. It’s not the best AI tool for attorneys.
I was at a seminar recently and heard a lawyer say Perplexity is “the best AI tool for attorneys.”
I winced.
Not because Perplexity is bad. It isn’t. I use it. A lot of smart people use it.
But because that statement blurs two very different jobs. And in law, confusing the job is how you create risk while feeling confident you’re being “careful.”
Perplexity is an excellent open-web research assistant. It’s fast. It’s tidy. It’s often well-sourced.
It is not, in most cases, the best tool for legal thinking, legal drafting, legal strategy, or anything that depends on sustained context. And those are the parts attorneys actually get paid for.
Let me put it in plain English.
Perplexity is built like an answer engine, not a matter workspace
Perplexity is search-first.
You ask a question. It goes out to the web. It gathers sources. Then it synthesizes what it found into a clean response, usually with citations.
That’s the point of the product.
So if your job is “get me current public information quickly,” Perplexity can feel like a superpower. It’s an AI-powered research runner that brings you links and a summary in one shot.
But legal work is rarely “bring me links.”
Legal work is “help me think.”
What matters here? What’s missing? What’s the best argument and the strongest counter? What’s the risk tradeoff? How do I explain this to a client who wants certainty when reality won’t give it to them?
Search tools find. Lawyers decide.
Perplexity sits much closer to “find” than “decide.”
The seminar trap: citations feel like safety
This is where good intentions go sideways.
Citations make people relax.
You see links and think, “Great, this is grounded.” And sometimes it is. But a cited answer is not the same thing as a defensible answer.
A link does not tell you whether the source is authoritative. It does not tell you whether it applies to your jurisdiction. It does not tell you whether it’s current in the way that matters. It definitely does not tell you whether it survives an opposing counsel who is motivated, well-funded, and looking for weakness.
Perplexity can cite the wrong thing with a straight face. Every tool can. The difference is that Perplexity’s interface nudges you to treat “has sources” as “is correct.”
That’s a dangerous mental shortcut in a profession that runs on standards, authority, and precision.
Law isn’t trivia. It’s judgment under constraints.
Most legal value lives in the messy middle.
A real matter is a pile of imperfect inputs: emails, call notes, drafts, timelines that keep changing, and clients who remember the key detail on Friday at 6:30 pm.
The best AI experiences for attorneys today are the ones that can hold that mess steady while you work. They let you iterate. They keep the thread. They remember what you already decided. They can draft, revise, and argue both sides without losing the plot.
Perplexity, by design, behaves more like a series of Q&A hits.
It’s great at giving you a crisp summary of what the internet says. It’s not great at staying with you through multi-step reasoning, long drafting cycles, or complex document-driven analysis.
And that shows up quickly when the work stops being a single question and turns into an actual matter.
Context is the whole game in legal AI
Here’s a simple test.
What happens when the input gets big?
A deposition transcript. A 90-page agreement with schedules. A set of financials. An employment matter with a 40-email chain where the key admission is buried in the middle. A one-off comment that slips by in a one-hour video deposition.
In legal work, the hard part is rarely writing the final paragraph. The hard part is tracking the details long enough to make the paragraph right.
This is where the tools built for sustained work tend to win. They’re designed for long context, sustained conversations, and iterative work. You can keep a running thread and build toward a work product over hours, not minutes. You can refine a draft without re-explaining the matter every time. You can ask, “What did we decide earlier?” and get continuity instead of a reset.
Perplexity can be pushed into longer work, but you feel the seams. The conversation can become fragmented. Earlier facts fall out of view. The tool drifts back to what it does best: summarize sources.
That’s useful. It’s just not the same as collaborating on legal analysis.
Consistency matters in law more than people admit
Another quiet issue is consistency.
In professional settings, you want repeatable outputs. You want to know what produced what. You want to be able to re-run a prompt next week and not feel like you’re rolling dice.
Perplexity is an intermediary. It’s a layer that can change behavior depending on mode, query type, and how it’s routing or packaging the response. That’s convenient for consumers. It’s a problem for professionals who need predictable work product.
If you’re drafting a motion or building a case strategy, you don’t want a tool that sometimes feels like a sharp analyst and sometimes feels like a search summary. You want one that behaves like a steady collaborator.
The real risk is how it changes attorney behavior
Here’s the part that worries me most.
Perplexity encourages “just paste a little context.”
A snippet of the agreement. A paragraph from a demand letter. A quick fact pattern so the search results get better.
That’s exactly how privileged and confidential material ends up in places it shouldn’t.
Now, can Perplexity be deployed responsibly? Sure. Many tools can, especially with enterprise controls, policies, and training.
But the seminar version of this advice is never “use it inside a governed environment with defined rules.”
The seminar version is “this is the best tool for attorneys.”
And that’s how you get casual use in non-casual situations.
Where Perplexity actually shines for lawyers
Perplexity belongs in the research lane.
It’s excellent for public-source orientation, fast scanning, and building a reading list. It’s great when you want a quick “what’s the current public conversation” view on a topic that is not confidential. It’s also useful as a first pass before you go deeper into proper legal research tools and your internal knowledge base.
Used that way, it’s a strong addition to a legal workflow.
But it should not be your primary tool for analysis, drafting, strategy, or anything that depends on sustained matter context.
What I’d tell a managing partner, GC, or practice leader
Stop asking, “What’s the best AI tool?”
Ask, “What’s the best tool for this job?”
Perplexity is your open-web research runner.
ChatGPT, Claude, and Gemini are your thinking and drafting partners when you need long context, iterative work, and real back-and-forth. They’re the tools you use when you’re trying to produce something you can stand behind, not just something you can link to.
And then you put a simple policy around it.
Make it easy for attorneys to do the right thing. Make the lanes obvious. Teach the difference between “cited” and “authoritative.” Train people to treat AI output like a first-year associate draft: helpful, fast, and never final without review.
Because the goal is not to sound modern in a seminar.
The goal is to produce work you can defend.
Here’s the line I want you to remember: Perplexity is great at finding information. Lawyers get paid to turn information into judgment. Don’t confuse the two.
Why I write these articles:
I write these pieces because senior leaders don’t need another AI tool ranking. They need someone who can look at how work actually moves through their organization and say: here’s where AI belongs, here’s where your team and current tools should still lead, and here’s how to keep all of it safe and compliant.
In this article, we looked at why “best AI tool” recommendations create risk when they blur the line between research and judgment. The market is noisy, but the path forward is usually simpler than the hype suggests: match the tool to the job, make the lanes obvious, and train your people to treat AI output the way they’d treat a first-year associate draft.
If you want help sorting this out:
Reply to this or email me at steve@intelligencebyintent.com. Tell me where your team is using AI today and where you’re unsure whether the tool fits the task. I’ll tell you what I’d test first, which part of the ChatGPT/Claude/Gemini stack makes sense for your workflow, and whether it makes sense for us to go further than that first conversation.
Not ready to talk yet?
Subscribe to my daily newsletter at smithstephen.com. I publish short, practical takes on AI for business leaders who need signal, not noise.