Ethical & ABA Rules for Lawyers Using AI
What ABA rules apply?
The use of artificial intelligence in legal practice is governed by existing professional responsibility rules rather than a separate, AI-specific framework. Several provisions of the Model Rules issued by the American Bar Association are directly implicated when attorneys use AI in client matters or internal legal workflows.
Rule 1.1 (Competence) requires lawyers to provide competent representation, which includes maintaining a sufficient understanding of the benefits and risks associated with relevant technology. When AI is used in drafting, research, or analysis, competence includes the ability to evaluate the reliability of outputs, recognize limitations, and apply independent professional judgment.
Rule 1.6 (Confidentiality) is central whenever client or matter-related information is input into an AI system. Lawyers must take reasonable steps to prevent the unauthorized disclosure of information relating to the representation of a client. This includes understanding how AI tools handle data, whether information is stored or used for training, and whether appropriate safeguards are in place.
Rule 1.4 (Communication) may be implicated when AI affects how legal advice is generated or conveyed. Attorneys must ensure that client communications remain accurate, understandable, and sufficient to allow informed decision-making. If AI materially affects how work is produced or delivered, those implications may need to be communicated to the client.
Rule 5.1 (Responsibilities of Partners, Managers, and Supervisory Lawyers) requires firm leadership to make reasonable efforts to ensure that the firm has measures in place giving reasonable assurance that all lawyers conform to the Rules of Professional Conduct. This includes establishing policies and oversight mechanisms for AI use across the firm.
Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) extends supervisory responsibilities to nonlawyer personnel and, by extension, to tools and services that perform functions traditionally carried out by nonlawyers. Attorneys must ensure that their use of AI systems is compatible with their professional obligations and that appropriate oversight is maintained.
Additional considerations may arise under Rule 1.5 (Fees) if billing practices are affected by the use of AI, and under Rule 3.3 (Candor Toward the Tribunal) if AI-generated content is submitted to a court without proper verification.
In Formal Opinion 512 (2024), the American Bar Association confirmed that lawyers may use generative AI tools, provided they do so in a manner consistent with their professional responsibilities. The opinion emphasizes that lawyers remain responsible for all work product, must understand the technology well enough to supervise its use, and must safeguard client information.
Taken together, these rules establish a clear principle: the use of AI does not reduce or transfer professional responsibility. It increases the need for structured oversight and informed judgment.
Read more: ABA Rules and AI Use in Law Firms (link to full article)
Related: AI Governance for Law Firms (link to governance page)
Can lawyers ethically use AI?
Yes. Lawyers may use artificial intelligence in their practice, but ethical use depends on how the technology is applied, supervised, and controlled. There is no general prohibition on AI use. The determining factor is whether the attorney’s conduct remains consistent with professional responsibility obligations.
Ethical use requires that AI be treated as an assistive tool rather than a substitute for legal judgment. Attorneys must independently evaluate the accuracy and appropriateness of AI-generated outputs. This includes verifying legal citations, confirming factual assertions, and ensuring that conclusions are supported by reliable sources. Reliance without verification introduces risk and may fall short of the duty of competence.
Supervision is equally important. AI systems often perform functions analogous to those of nonlawyer assistants, such as drafting or summarizing. As a result, their use must be subject to oversight consistent with the standards applied to human personnel. This includes defining acceptable use, setting verification expectations, and ensuring that outputs are reviewed before being relied upon or communicated externally.
Confidentiality must also be preserved. Ethical use requires an understanding of how an AI system processes data and whether entering client information is appropriate. In some cases, this may require restricting use of certain tools or modifying how information is provided to them.
Transparency may be required depending on the circumstances. If the use of AI materially affects the representation or could influence client decisions, the lawyer should consider whether disclosure is necessary to satisfy communication obligations.
The practical reality is that ethical AI use is not achieved through informal or ad hoc practices. It requires a structured approach that defines how AI is used, what controls are in place, and how compliance is maintained. Without that structure, even well-intentioned use can create risk.
Read more: Defensible AI Use in a Law Firm (link to article)
Related: How Law Firms Should Start Using AI Responsibly (link to article)
Do lawyers need client consent?
Whether client consent is required for the use of artificial intelligence depends on the nature of the use, the sensitivity of the information involved, and whether the use is consistent with the client’s reasonable expectations.
In many routine situations, lawyers may use AI tools as they would other technologies, without obtaining explicit client consent, provided the use does not disclose confidential information in a manner inconsistent with Rule 1.6. When AI use involves entering client-specific or sensitive information into systems where confidentiality cannot be assured, however, additional considerations arise.
If the use of AI could result in the exposure of confidential information to third parties, or if the terms of the AI tool allow data to be stored, reviewed, or used beyond the immediate interaction, the lawyer must determine whether informed consent is required. Informed consent involves explaining the material risks and reasonably available alternatives so the client can make an informed decision.
Consent may also be appropriate when AI materially affects the nature of the representation. For example, if a firm relies heavily on AI to produce substantive legal work or to communicate with clients in a way that differs from traditional practice, disclosure may be necessary to satisfy communication obligations under Rule 1.4.
Even where consent is not strictly required, transparency can reduce risk. Clearly communicating how technology is used, particularly in matters involving sensitive data or complex legal work, can help align client expectations and avoid misunderstandings.
The key point is that the need for consent is context-dependent. Firms should not assume that consent is never required, nor should they assume it is always required. Instead, they should evaluate AI use within a structured governance framework that considers confidentiality, client expectations, and the potential impact on the representation.
Read more: AI Confidentiality Risks for Law Firms (link to article)
Related: AI Governance Phase 0™ Assessment Explained (link to product page)
While ethical obligations define the boundaries of acceptable AI use, the practical risks faced by law firms often emerge through real-world failures. Courts have already responded to improper use of AI, and those outcomes provide important guidance.
→ Continue to: AI Risks & Sanctions