In this Field Note episode, Ron Drescher shares a candid story about relying on hallucinated legal citations in a client memo — and explains why “just verify it” is not enough. He offers a more practical framework for lawyers who want to use AI safely, effectively, and without compromising their work product.
In this Field Note episode, I share a candid story from my own early use of ChatGPT — including the moment I nearly relied on hallucinated legal citations in a client memo.
This episode is not just a confession. It’s a practical warning for lawyers who are tempted to use consumer AI tools for legal research, drafting, and court filings without fully understanding the risks.
I explain why “just verify it” is not enough, why AI citation attestations may create a false sense of safety, and why lawyers need a better framework than fear-based “don’t use AI” advice.
The answer is what I call Yes, If.
Using my green light / yellow light / red light governance framework, I walk through where AI can genuinely help lawyers right now — and where it can absolutely get them into trouble.
In the Practice Signal segment, I break down a lawyer’s question about getting back into FCRA work and show how AI could help rebuild a niche practice area from Flintstones → Simpsons → Jetsons.
My near miss with hallucinated case law
Why lawyers get fooled by AI-generated legal output
Why hallucinations are dangerous precisely because they don’t happen every time
Why verification is only the floor, not the ceiling
Why AI citation attestations may not solve the problem
A better “bright-line rule” for using AI in legal drafting
The difference between consumer AI tools and legal research platforms
How AI can help lawyers improve workflow without compromising their practice
Practice Signal: using AI to explore or rebuild an FCRA practice
Flintstones / Simpsons / Jetsons level-up tips for lawyers at every stage
Do your legal research first in a trusted legal database. Then use AI to help you think, organize, draft, and improve.
AI can absolutely elevate legal work product — but only when it is constrained by verified authority and governed by sound workflow.
ChatGPT
Claude
Gemini
Westlaw
Lexis
Bloomberg Law
Fastcase / vLex / Vincent AI
Harvey
Legora
00:00 – Intro / Field Note setup
00:34 – Ron’s confession: the fake case memo
03:08 – Why lawyers get fooled by AI legal output
04:15 – The real problem: hallucinations don’t happen every time
05:21 – Why “verification” is not enough
07:08 – AI citation attestations and why they may fail
09:16 – Sanctions, contaminated opinions, and court risk
10:53 – “Why the hell would I use AI then?”
11:18 – The Yes, If framework
11:36 – Green light uses for lawyers
13:36 – Yellow light uses for lawyers
14:03 – Red light uses for lawyers
15:25 – Consumer AI vs. legal research tools
16:26 – Bright-line rule: use the established tool first
19:20 – Practice Signal: getting back into FCRA work
22:36 – Flintstones / Simpsons / Jetsons level-up tips
24:56 – Closing thoughts and call to share