ETHICAL AND PRIVACY RULES STILL APPLY EVEN WHEN DELEGATING TASKS TO AI

By Senior Partner Jane G. Kearl, Irvine, CA, Partner Amanda L. Marutzky, Irvine, CA, and Kate Given, Columbia University

Introduction

AI has become one of the defining technological advancements of the 21st century. In recent years, legal professionals have begun capitalizing on popular AI-based platforms to maximize efficiency and productivity. But despite the undeniable utility of AI, its use raises questions about the potential impacts on a lawyer's duty of confidentiality. A thorough examination of current legislation and the evolving technology is necessary to properly integrate AI into the legal workplace. Lawyers must also consider how and whether their professional obligations impose additional limits on AI use.

On July 29, 2024, the American Bar Association published Formal Opinion 512, "Generative Artificial Intelligence Tools," which sets forth an ethical framework for lawyers' use of GenAI in their practices. The Opinion specifically analyzes a lawyer's ongoing obligations under the ABA Model Rules of Professional Conduct with respect to GenAI tools, citing the duties of Competence (Model Rule 1.1), Confidentiality (Model Rules 1.6, 1.9(c), 1.18(b)), Communication (Model Rule 1.4), and Conduct before the Court (Model Rules 3.1, 3.3, and 8.4(c)), as well as Supervisory Responsibilities (Model Rules 5.1 and 5.3) and Fees (Model Rule 1.5). A thorough review of the Model Rules and their implications is necessary before a lawyer uses a GenAI tool to perform any legal task.

Protecting Privacy when Using AI in the Legal Workplace

Legal professionals working in California must consider not only the rules of professional conduct but also the California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) when using AI. The California Rules of Professional Conduct prohibit lawyers from inputting confidential client information into public GenAI tools where it could be shared with third parties (CA Rule 1.6). Similarly, use of AI must be disclosed and adequately supervised to avoid hallucinations or inaccurate information (CA Rules 5.1, 5.3), and an attorney's duty of competence requires an understanding of the technology behind any tools being used (CA Rule 1.1).

Additionally, the California State Bar noted in its "State Bar Practical Guidance on Generative AI" that legal professionals must avoid providing GenAI models with access to confidential information without sufficient security and privacy protections. (State Bar of California Standing Committee on Professional Responsibility and Conduct: Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law, November 16, 2023). Legal professionals must remain conscious of these legal requirements as they continue to evolve alongside the growing use of GenAI.

Similar to the duty of confidentiality under the Model and Professional Rules, the CCPA, as amended by the CPRA, mandates that businesses implement reasonable security procedures to protect the personal information of California residents from unauthorized access. (See Civil Code § 1798.100(d)(2-5).)

A lawyer must verify that any service provider to which clients' personal information is disclosed is secure and in compliance with all applicable security and privacy obligations. In the context of GenAI, this means comparing the relevant requirements against the platform's privacy policy and terms of service. The CCPA regulations state that "reasonable and appropriate steps may include ongoing manual reviews and automated scans of the service provider's system and regular internal or third-party assessments, audits, or other technical and operational testing at least once every 12 months." (CA Code of Regulations, Title 11, Section 7051(a)(7).) Platforms that fail these assessments or do not meet the relevant security or privacy requirements should not be given access to clients' personal information. Additionally, the CCPA final regulations on Automated Decision-Making Technology (ADMT), a category that encompasses GenAI, take effect January 1, 2026, with compliance required by January 1, 2027. These regulations require businesses to provide transparent and specific descriptions of their ADMT usage and to grant consumers the opportunity to opt out of the business's use of this technology. Legal professionals should therefore thoroughly document their GenAI usage to track their compliance with future regulations.

Future Implications and Recommendations

As developments surrounding this kind of technology continue to unfold, legal professionals should be wary of encroaching on client confidentiality by providing personal information to any LLM or GenAI-based platform. Current regulations and ethical guidelines require that a lawyer obtain prior client consent for the use of any GenAI tools. Rather than merely obtaining form consent in an initial retainer agreement, client consent to AI use should be ongoing, with regular client communication and approval as the technology use expands or changes. Thorough research, verification, and testing are also recommended to confirm a system's compliance with security and privacy regulations. If used legally and ethically, LLMs and GenAI-based platforms have the potential to provide significant advantages and boost workplace productivity.

AI technology shifts the landscape day by day. Given the accessibility of these unfamiliar yet readily available tools, it is imperative that legal professionals approach AI with caution and deliberation to ensure client protection.