Sixth Circuit Removes Attorney for “Inexcusable” AI Transgressions: What the Decision Actually Says
The U.S. Court of Appeals for the Sixth Circuit has taken one of the most consequential steps to date in addressing attorney misuse of generative AI in legal practice. In United States v. John C. Farris (Apr. 3, 2026), the court removed appointed appellate counsel, denied compensation, and referred the matter for disciplinary review after determining that AI-assisted briefing contained materially false representations of legal authority.
Case Overview
Court: U.S. Court of Appeals for the Sixth Circuit
Date: April 3, 2026
Matter: Criminal appeal involving court-appointed counsel
Technology Used: Westlaw CoCounsel (AI-assisted legal research/drafting)
What the Attorney Did
According to the court’s findings:
Counsel relied on AI tools to assist in drafting appellate briefs
The submitted filings included:
Incorrect quotations attributed to real cases
Misstatements of legal holdings
In several instances, the cited cases existed, but the quoted language and described holdings were inaccurate.
This distinction is critical. The issue was not limited to fabricated (“hallucinated”) cases—it extended to misrepresentation of authentic legal authority.
The Court’s Analysis
The Sixth Circuit framed the issue as a violation of core professional duties, not a failure of technology.
Key Principles Reinforced
1. Nondelegable Duty of Verification
The court emphasized that attorneys may not delegate responsibility for accuracy to any tool—AI or otherwise.
The obligation to verify citations and quotations remains with counsel at all times.
2. Candor to the Tribunal
Submitting inaccurate quotations—even if derived from AI—was treated as a breach of the duty of candor.
3. Competence
Reliance on AI without adequate validation procedures was viewed as falling below the standard of professional competence.
Sanctions Imposed
The court imposed multiple, compounding consequences:
Removal of counsel from the case
Denial of Criminal Justice Act (CJA) compensation
Order for re-briefing by new counsel
Referral for potential disciplinary proceedings
Notification to relevant oversight authorities
This is notable for both severity and scope—the court did not limit its response to monetary sanctions.
Why This Decision Is Different
Prior AI-related sanctions have often focused on fabricated citations. This decision expands the scope of risk in two important ways:
1. Misquotation Is Treated as Seriously as Fabrication
Even when cases are real, incorrect attribution or characterization can trigger sanctions.
2. Tool Legitimacy Is Not a Defense
The AI tool used was a commercial legal research product, not a public chatbot. The court made clear that:
The type of tool does not reduce responsibility
The accuracy of output must be independently verified
Context: A Developing Pattern
The Sixth Circuit’s decision aligns with a broader trend in federal courts:
Increasing scrutiny of AI-assisted filings
Escalating sanctions, including fee awards and disciplinary referrals
Explicit judicial statements that AI does not alter ethical obligations
Courts are moving beyond warning language toward enforcement actions with professional consequences.
Practical Implications for Legal Practice
The decision does not prohibit AI use. Instead, it clarifies the conditions under which AI may be used without violating professional duties.
Minimum Expectations Emerging from the Case
Attorneys using AI in legal work should assume the need for:
Independent verification of all citations and quotations
Review of underlying source material (not summaries alone)
Supervisory oversight of AI-assisted work
Clear internal protocols governing AI use
Failure to implement these controls introduces exposure not only to sanctions, but also to:
Disciplinary action
Malpractice claims
Reputational harm
A Structural Observation
One of the more important takeaways from Farris is structural rather than technological:
The risk is not the use of AI—it is the absence of a repeatable, documented verification process.
Courts are increasingly evaluating conduct through that lens.
For firms attempting to formalize their approach, a structured baseline evaluation—such as an AI governance assessment—can help identify whether current practices meet emerging judicial expectations. A reference framework is available at:
AI Governance Phase 0 Assessment.
Conclusion
The Sixth Circuit’s decision in United States v. Farris represents a clear statement:
AI-assisted drafting is permissible
Unverified AI-assisted content is not
The duties of competence, candor, and verification remain fully intact, and courts are now enforcing that principle with meaningful consequences.
For practitioners, the question is no longer whether AI can be used, but whether its use can be defended under scrutiny.
FAQ
What did the Sixth Circuit decide about AI use by attorneys?
The court held that attorneys have a nondelegable duty to verify all legal citations and quotations and cannot rely on AI-generated content without independent validation.
Can lawyers use AI in legal filings?
Yes, but they must independently verify all outputs. Failure to do so may result in sanctions, removal from cases, or disciplinary action.
What is the risk of using AI in legal work?
Risks include inaccurate citations, ethical violations, sanctions, malpractice exposure, and reputational harm if outputs are not properly reviewed.