What Impact Will AI Have on Small Law Firms Over the Next Five Years?

As an advisor to practicing attorneys in firms that typically have fewer than ten lawyers, and as someone who studies AI governance in legal practice with the same seriousness that an attorney studies the Rules of Professional Conduct, I believe it is critical for small law firms to see AI for what it is, what it is not, and where and how it is already being used. This is the first in a series of three blog posts on the impact of AI use by small law firms. Here is why:

Artificial intelligence is already inside your law practice — whether you have governed it or not.
The next five years will determine whether AI becomes your competitive advantage or your malpractice exposure.
— Peter J. Keane

Artificial intelligence is no longer experimental. It is embedded in research platforms, drafting tools, litigation analytics, document review systems, client intake automation, and increasingly, courtroom strategy.

The question is not whether AI will impact small firms. It already has.

The real question is this:

Will small firms treat AI as an operational tool — or as a professional responsibility event?

Over the next five years, the firms that understand that distinction will gain disproportionate advantage. The firms that ignore it will assume disproportionate malpractice risk.

Let’s examine what is coming — and what the ABA Model Rules already require of us.

I. Immediate Operational Impact

1. Legal Research

AI-assisted research tools now produce case summaries, doctrinal overviews, and citation trees in seconds. They reduce time spent locating authorities but introduce a new duty:

  • Verification of accuracy

  • Identification of hallucinated citations

  • Independent confirmation of quoted language

ABA Model Rule 1.1 (Competence) requires not only substantive legal knowledge, but technological competence. Comment 8 makes clear that lawyers must understand the benefits and risks associated with relevant technology.

Blind reliance on AI research outputs is not competence. It is delegation without supervision.

Which brings us to:

2. Drafting and Document Production

AI can draft:

  • Motions

  • Contracts

  • Discovery requests

  • Client letters

  • Demand packages

This reduces drafting time dramatically. But if AI generates unsupported assertions, fabricated authority, or inaccurate factual framing, Rule 3.3 (Candor Toward the Tribunal) becomes immediately implicated.

Courts have already sanctioned attorneys for submitting AI-generated filings without verification. The five-year trajectory suggests stricter judicial scrutiny, not leniency.

3. Client Communications

AI tools increasingly draft client emails and strategy memos. This introduces:

  • Confidentiality concerns (Rule 1.6)

  • Vendor exposure

  • Data retention ambiguity

  • Cross-border processing risks

If client data enters an AI platform without proper contractual protections, we may be disclosing confidential information to a third party without informed consent.

ABA Formal Opinion 512 makes clear that lawyers must evaluate confidentiality risks before using generative AI tools. That includes understanding:

  • How data is stored

  • Whether data is used for training

  • Data residency location

  • Retention policies

  • Security controls

For small firms, vendor due diligence is no longer optional.

II. Billing Model Pressure

AI compresses time.

If research takes 30 minutes instead of three hours, what happens to the hourly model?

Clients will increasingly ask:

“If AI makes this faster, why is the bill the same?”

This pressure will accelerate movement toward:

  • Flat fees

  • Value-based billing

  • Hybrid subscription models

Firms that fail to adapt may face client attrition — especially when competing against AI-optimized competitors.

The ethical issue is not whether to use AI.

The ethical issue is how to bill ethically in an AI-enabled workflow.

Rule 1.5 (Fees) will increasingly intersect with technological competence.

III. Emerging Malpractice Exposure

Over the next five years, malpractice exposure will increase in three ways:

  1. Failure to verify AI output

  2. Improper disclosure of confidential information

  3. Failure to supervise AI as nonlawyer assistance

Model Rule 5.3 governs responsibilities regarding nonlawyer assistance. AI tools, functionally, operate as nonlawyer assistants.

If we rely on them, we must supervise them.

Failure to supervise equals failure of professional responsibility.

IV. Vendor Due Diligence Obligations

Every AI platform is a vendor.

Small firms must evaluate:

  • SOC 2 certification

  • Data encryption standards

  • Data residency (U.S.-only vs global servers)

  • Retention periods

  • Subprocessors

  • Breach notification procedures

Rule 1.6 requires reasonable efforts to prevent unauthorized disclosure.

Reasonable efforts in 2026 include contractual review.

V. Data Residency and U.S.-Only Processing

Small firms increasingly serve clients with:

  • HIPAA exposure

  • Trade secrets

  • Sensitive corporate information

  • Cross-border compliance risks

Data processed outside the United States may introduce:

  • GDPR implications

  • Foreign surveillance risk

  • Export control concerns

Clients are beginning to ask:

“Where does my data go?”

In five years, this question will be routine.

VI. Early Regulatory Signals

State bars are already signaling:

  • Mandatory AI CLE discussions

  • Ethics advisory opinions on generative AI

  • Judicial education on AI misuse

  • Possible disclosure requirements

ABA Formal Opinion 512 is only the beginning.

The regulatory trajectory is tightening, not loosening.

VII. Competitive Advantage for Small Firms

Here is the optimism:

Small firms can move faster than large firms.

Without layers of bureaucracy, we can:

  • Implement structured governance quickly

  • Train staff intentionally

  • Choose secure vendors carefully

  • Adjust billing models nimbly

Governed AI use reduces cost and increases speed.

That is a competitive advantage, but only if it is structured.

VIII. Why Unstructured AI Adoption Violates Rules 1.1 and 5.3

If a firm:

  • Allows staff to use public AI tools without policy

  • Fails to vet vendors

  • Does not train attorneys on hallucination risks

  • Does not document supervision

  • Does not create verification protocols

then that firm is not exercising competence (Rule 1.1) and is not supervising nonlawyer assistance (Rule 5.3).

Ad hoc adoption is indefensible.

IX. The Only Defensible Model: Assess → Design → Deploy

Small firms need a structured framework.

Assess

  • Inventory AI usage

  • Identify data flows

  • Map vendor exposure

  • Evaluate regulatory risk

  • Identify CLE gaps

Design

  • Draft governance policy

  • Establish verification protocols

  • Create vendor review standards

  • Define supervision obligations

  • Adjust billing disclosures

Deploy

  • Train attorneys and staff

  • Implement documented workflows

  • Monitor compliance

  • Maintain audit documentation

  • Review annually

Anything less is improvisation.

Improvisation in ethics becomes liability.

Predictions for the Next Five Years

  1. AI competence CLE will become common, possibly mandatory in some states.

  2. Courts will impose sanctions for AI misuse.

  3. Malpractice carriers will request AI governance disclosures.

  4. Clients will require AI transparency.

  5. Firms without governance structures will experience claim exposure.

Phase 0 AI Governance Assessment

Before expanding AI usage, every small firm should conduct a Phase 0 AI Governance Assessment.

Not as marketing. Not as compliance theater. As risk infrastructure.

AI is not optional. But neither is governance.


Next: If The AI Fails, Who Pays?