Lawyers are getting into real trouble using AI — from hallucinated cases to ethical violations. In this episode, Professor Nancy Rapoport explains how to use AI responsibly, close the trust gap, and avoid becoming the next cautionary tale.
In this episode, Ron Drescher and Heather Gardner are joined by Professor Nancy Rapoport, co-author of A Short & Happy Guide to Artificial Intelligence and Legal Ethics, to explore how lawyers can use AI responsibly — and why so many are getting into trouble doing it wrong.
From hallucinated cases to ethical missteps, the conversation dives into the growing “trust gap” between large firms with AI infrastructure and solo/small firm lawyers navigating these tools on their own.
What You’ll Learn
Why lawyers are being sanctioned for AI misuse — and how to avoid it
The concept of the “trust gap” in legal AI adoption
How ABA Model Rules 1.1 (competence), 1.4 (communication), 5.1, and 5.3 (supervision) apply to AI usage in practice
Why AI is powerful — but not “thinking”
Practical ways to safely incorporate AI into legal workflows
How AI impacts billing models, efficiency, and access to justice
What lawyers should include in engagement letters regarding AI
Why client use of AI can create serious discoverability risks
How to think about AI across the Flintstones → Simpsons → Jetsons spectrum
Professor Nancy Rapoport is a leading legal ethics expert, author, and speaker focused on AI, professional responsibility, and helping lawyers avoid risk in modern practice.
Practice Signal: Lawyer Departure Ethics (Featured Segment)
A junior lawyer asks:
“My partners don’t want me to notify clients that I’m leaving the firm. Am I wrong for wanting to follow the ethics rules?”
Nancy's answer, and why:
Clients — not firms — control representation decisions
Transparency is not optional
Ethical obligations override internal firm pressure
Key Takeaways
AI is a tool, not a substitute for judgment
Guardrails and verification are essential
Lawyers must understand both how AI works and how it fails
The goal isn’t speed — it’s better lawyering with less risk
Moving up the AI adoption curve requires intentional, gradual learning
Notable Quote
“Using AI is like giving a chainsaw to a toddler — it could go right, but it probably won’t without guardrails.”
Resources
A Short & Happy Guide to Artificial Intelligence and Legal Ethics – Nancy Rapoport & Joe Tiano
Follow the podcast for practical, real-world AI guidance for lawyers
00:00 — Introduction
AI risks in legal practice and introduction of Professor Nancy Rapoport
02:00 — The Book & Legal Education
Why lawyers and students must learn to use AI responsibly
05:00 — The Trust Gap
Big firms have safeguards — smaller firms often don’t
08:00 — Guardrails & Ethics
Why AI without safeguards is risky (“chainsaw for a toddler”)
11:00 — Are Lawyers Using AI?
Heather shares her own hesitation and the slow pace of adoption in practice
14:00 — ZettaJet Example
Using AI to eliminate repetitive billing tasks
17:00 — Does AI Save Time?
Efficiency vs. strategy vs. quality
21:00 — Limits of AI
Why hallucinations require constant verification
24:00 — Practice Signal: Departure Ethics
Should lawyers notify clients when leaving a firm?
28:00 — Clients Using AI
Discoverability risks from client AI use
31:00 — Engagement Letters & Fees
Explaining AI use and rethinking billing
34:00 — Flintstones → Simpsons → Jetsons
AI workflows at each adoption level
37:00 — Closing Thoughts
Key takeaways for using AI responsibly