Defensible AI™ Use in Law Firms

What is Defensible AI use in a law firm?

Defensible AI use in a law firm is the ability to demonstrate—under scrutiny—that the firm’s use of artificial intelligence is controlled, supervised, and aligned with professional responsibility obligations.

It is not defined by whether AI is used, but by whether that use can be explained, justified, and supported by documented processes.

A defensible approach requires that a firm understand how AI is used across its operations, including drafting, research, client communications, and administrative workflows. It also requires that the firm establish clear expectations for how those tools are applied, what data may be used, and how outputs are reviewed before they are relied upon.

In practice, defensibility is evaluated in hindsight. When a question is raised by a client, a court, a regulator, or an insurer, the firm must be able to explain what controls were in place, how decisions were made, and how oversight was exercised. Informal or inconsistent practices are difficult to defend because they cannot be reliably demonstrated.

A defensible posture does not eliminate risk. Instead, it provides a structured way to identify, manage, and explain that risk.

This distinction is critical. The objective is not to claim that AI use is “safe” or “compliant,” but to ensure that it is reasonable, controlled, and supportable under professional standards.

Read more: AI Governance for Law Firms
Related: AI Risks & Sanctions

What is a defensibility score in AI governance?

A defensibility score is a structured metric used to evaluate how well a law firm can justify its use of AI if that use is questioned. It reflects the firm’s current state across both governance controls and professional responsibility alignment, providing a consolidated view of risk exposure.

The score is typically derived from multiple categories, such as how AI use is identified across the firm, how data is handled, how outputs are verified, and how supervision is exercised. Each category is evaluated based on the presence, consistency, and maturity of controls. These evaluations are then combined into a weighted score that represents the firm’s overall defensibility posture.

Importantly, a defensibility score is not a certification and does not indicate compliance with any specific legal standard. It is an internal assessment tool designed to support decision-making. A lower score indicates areas where governance is informal, inconsistent, or absent, while a higher score reflects more structured and documented controls.

The practical value of the score lies in its ability to prioritize action. Rather than treating AI risk as a general concern, the firm can identify specific gaps and address them systematically. Over time, improvements in the score should correspond to a more controlled and explainable approach to AI use.
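The weighted-score idea described above can be sketched in code. The category names, maturity levels, and weights below are illustrative assumptions for demonstration only, not a prescribed rubric or any firm's actual scoring model.

```python
# Illustrative sketch of a weighted defensibility score.
# Categories, maturity levels, and weights are hypothetical examples.

# Each control area is rated by maturity: absent, informal, consistent, or mature.
MATURITY_SCALE = {"absent": 0, "informal": 1, "consistent": 2, "mature": 3}

# Hypothetical category weights, summing to 1.0.
WEIGHTS = {
    "usage_visibility": 0.25,     # how AI use is identified across the firm
    "data_handling": 0.25,        # how confidential data is controlled
    "output_verification": 0.30,  # how outputs are reviewed before reliance
    "supervision": 0.20,          # how attorney oversight is exercised
}

def defensibility_score(ratings: dict[str, str]) -> float:
    """Combine per-category maturity ratings into a 0-100 weighted score."""
    max_level = max(MATURITY_SCALE.values())
    total = 0.0
    for category, weight in WEIGHTS.items():
        level = MATURITY_SCALE[ratings[category]]
        total += weight * (level / max_level)
    return round(total * 100, 1)

example = {
    "usage_visibility": "informal",
    "data_handling": "consistent",
    "output_verification": "mature",
    "supervision": "informal",
}
print(defensibility_score(example))  # → 61.7
```

A score built this way makes gaps visible category by category: in the example, informal supervision and usage visibility pull the total down even though output verification is mature, which is exactly the prioritization signal the paragraph above describes.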

Read more: AI Governance Phase 0™ Assessment Explained
Related: How Law Firms Should Start Using AI Responsibly

How do law firms demonstrate defensible AI use?

Law firms demonstrate defensible AI use through a combination of visibility, control, documentation, and supervision. Each of these elements contributes to the firm’s ability to explain its practices in a credible and consistent manner.

Visibility.
The firm must understand where AI is used and by whom. This includes both formally approved tools and informal usage by attorneys or staff. Without visibility, risk cannot be accurately assessed or managed.

Control.
The firm must define acceptable use and implement controls that align with professional obligations. This may include restrictions on the use of certain tools, guidelines for handling confidential information, and requirements for verifying AI-generated output.

Documentation.
Policies, procedures, and decisions must be documented in a way that can be reviewed and explained. Documentation provides evidence that the firm has taken reasonable steps to manage risk, even if issues arise.

Supervision.
Attorneys must maintain oversight of how AI is used within their matters and within the firm more broadly. This includes reviewing outputs, ensuring compliance with firm policies, and addressing deviations when they occur.

In addition to these core elements, defensibility is strengthened by consistency. Practices should not vary significantly between individuals or matters without justification. Consistent application of governance controls makes it easier to demonstrate that the firm’s approach is deliberate rather than ad hoc.

Most firms are not able to demonstrate these elements without first establishing a baseline. This is why the process begins with identifying current usage and evaluating existing controls.

Read more: Why Law Firms Need AI Governance
Related: Ethical & ABA Rules

Defensibility is achieved through structure, not assumption. The next step for most law firms is to establish a clear baseline of how AI is currently used and where gaps exist.

→ Continue to: Getting Started (Phase 0)