AI Risks & Sanctions
Can AI use lead to sanctions against attorneys?
Yes. Courts have already sanctioned attorneys for improper use of artificial intelligence, particularly where AI-generated content was submitted without adequate verification. The most visible example is Mata v. Avianca, Inc. (S.D.N.Y. 2023), where counsel filed a brief containing citations to non-existent cases generated by ChatGPT. The court imposed Rule 11 sanctions after determining that the attorneys failed to verify the accuracy of the material before submission.
This outcome reinforces a central principle: the use of AI does not reduce or transfer responsibility. Attorneys remain accountable for all representations made to a tribunal, regardless of how the work product was created. Submitting inaccurate or fabricated information, whether produced by a human or a system, can implicate the duties of competence and candor (ABA Model Rules 1.1 and 3.3).
Sanctions risk is not limited to fabricated citations. It can arise in any context where AI contributes to inaccurate filings, misstatements of fact, or unsupported legal arguments. As AI becomes more integrated into legal workflows, courts are increasingly attentive to whether attorneys have exercised appropriate oversight.
For law firms, the implication is clear: AI-assisted work must be verified to the same standard as traditionally produced work. Governance structures should explicitly require independent validation of outputs before they are relied upon or filed.
Read more: AI Sanctions and Misuse Cases
Related: What ABA Rules Apply to AI Use?
What are real examples of AI misuse in legal practice?
Real-world misuse generally falls into recurring patterns rather than isolated incidents. Understanding these patterns is more useful than focusing on any single case.
Unverified AI-generated research and citations.
Attorneys may rely on AI-generated summaries or case references without independently confirming their accuracy. This can result in fabricated or mischaracterized authority being presented in filings or client work product.
Improper handling of confidential information.
Client-sensitive data may be entered into AI systems without a clear understanding of how that data is processed, stored, or shared. This creates potential exposure under confidentiality obligations, particularly if the system is not designed for secure legal use.
Over-reliance on AI-generated drafting.
AI tools can produce persuasive language that appears complete but may contain subtle errors, omissions, or unsupported assumptions. Without careful review, these issues can be incorporated into legal documents.
Lack of supervisory control.
Firms may allow attorneys or staff to use AI tools without defined policies or oversight. This creates inconsistency in how AI is used and increases the likelihood of non-compliant practices.
Misalignment with client expectations.
Clients may not know, or may misunderstand, how AI is being used in their matter. Failure to align expectations can erode communication and trust, particularly if outcomes are affected.
These patterns demonstrate that AI misuse is typically not the result of a single failure, but of systemic gaps in governance. Addressing these gaps requires more than training or tool selection. It requires a structured approach to how AI is evaluated, approved, and supervised.
Read more: AI Risks in Law Firms
Related: AI Governance Phase 0™ Assessment Explained
What risks do law firms overlook when using AI?
The most significant risks are often not the ones that firms initially focus on. While attention is frequently given to accuracy, broader governance risks are commonly overlooked.
Informal or “shadow” AI use.
AI adoption often begins outside formal channels. Individual attorneys or staff may use tools independently, without disclosure or oversight. This creates blind spots, because the firm cannot assess risk in usage it cannot see.
Assumptions about vendor safeguards.
Firms may assume that a tool labeled as “secure” or “enterprise” automatically satisfies confidentiality and compliance requirements. In reality, the firm must understand how data is handled and whether those practices align with its obligations.
Inconsistent verification standards.
Different individuals may apply different levels of scrutiny to AI-generated output. Without defined standards, verification becomes uneven, increasing the likelihood of error.
Lack of documentation.
Even where reasonable practices exist, firms often fail to document them. This becomes a problem when the firm must explain its processes to a client, regulator, or insurer. Undocumented controls are difficult to demonstrate.
Failure to align governance with actual workflows.
Policies may be drafted without a clear understanding of how AI is actually used across the firm. This results in controls that do not address real exposure points.
Overconfidence in efficiency gains.
Firms may prioritize productivity benefits without fully accounting for the review and oversight required to maintain professional standards, and so underestimate the effort needed to use AI responsibly.
These overlooked risks share a common theme: they arise from lack of visibility and structure, not from the technology itself. Addressing them requires a disciplined approach to identifying how AI is used and where controls are needed.
Read more: Why Law Firms Need AI Governance
Related: Defensible AI Use in a Law Firm
Understanding risk is necessary, but not sufficient. Law firms must also be able to demonstrate that their use of AI is controlled, supervised, and aligned with professional obligations.
→ Continue to: Defensible AI Use