The Alaska Bar Association recently issued an ethics opinion addressing the use of artificial intelligence tools in law practice. Alaska Bar Ethics Opinion 2025-1 surveys the issues surrounding AI tools generally while focusing on competence, confidentiality, and billing. In scope and content, it is broadly similar to ABA Formal Opinion 512, issued last year.
For Washington lawyers, a nuance in the Alaska opinion’s treatment of confidentiality warrants note. When lawyers use AI tools for research, their search prompts can trigger confidentiality issues because AI tools often harvest inputs to “learn” information for future analysis. This can run headlong into a lawyer’s duty of confidentiality under RPC 1.6 if the prompts are specific enough to reveal client confidential information. In the notorious New York case involving a lawyer who used a “free” version of a consumer AI tool to write a brief that included citations the tool simply made up, the lawyer also used an increasingly specific series of prompts that may have revealed confidential information. Although the lawyer in Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), was sanctioned for the fake citations, the confidentiality issue is an important risk management teaching point as well.
The Alaska opinion notes that confidentiality is often addressed by using an AI tool tailored to law practice with contractual assurances of confidentiality consistent with our own duty in that regard—typically by employing a “closed” system in which a lawyer’s prompts are not used to train the AI tool and instead effectively remain within the lawyer’s account. In that sense, the approach is similar to storing law firm files on third-party remote servers, and the Alaska opinion cites its cloud-computing predecessor (Alaska Ethics Op. 2014-3) in this regard.
The Alaska opinion counsels that when lawyers use an AI tool that is not a closed system (such as general consumer products or services), they should “anonymize” their prompts. The Alaska opinion, however, doesn’t drill down further on the risks of even “anonymized” prompts. Albeit in the context of providing statistical data to legal aid funders, a recent Washington opinion—WSBA Advisory Op. 202402 (2024)—parses the confidentiality risks of anonymized data in an era when sophisticated programs can in some instances pair anonymous information with publicly available data to reconstruct, for example, client identity. In doing so, WSBA Advisory Opinion 202402 notes that Comment 4 to RPC 1.6 cautions that our duty of confidentiality extends to “disclosures … that do not in themselves reveal protected information but could reasonably lead to the discovery of such information by a third person.” This risk seems especially pronounced when entering data into a powerful AI tool. That’s not to say that anonymizing inputs in a “non-closed” system is a bad idea, but in some circumstances it may not be a foolproof solution. For inputs that could reasonably—either directly or indirectly—reveal client confidential information, the safer course is to use only a closed system with appropriate contractual assurances of confidentiality.

