By Attorney Nils Peter Johnson
Johnson & Johnson Law Firm

CANFIELD, Ohio – A troubling thing happened recently. I had occasion to draft a commercial lease featuring a rather complex sublease/assignment section, under which the tenant was authorized – in limited circumstances – to sublease a portion of the rented premises to a third party.

So I did as any lawyer would: pillage the firm’s existing archives of lease forms for a clause that spoke to the situation at hand. I found one that got about 80 percent of the job done, massaged the prose (ahem, legalese) to account for the client’s unique situation and inserted it into the lease form. Next, I dispatched the draft to the client for their feedback and questions.  

The trouble: The client’s feedback – which I received about three hours later – was output from ChatGPT that appeared to summarize the lease form and suggest improvements.

Now, I admit to using AI in my practice – most commonly to proofread draft documents to spot pronoun problems that arise when replacing names in standardized document forms, and to perform similar clerical tasks. (ChatGPT’s premium offering includes a “temporary chat” function in which questions posed are not recalled and do not serve to train the underlying LLM.) I will also use AI as a learning tool to quickly explain new topics and steer me generally in the right direction toward more verifiable information. I will ask ChatGPT to locate provisions of the Ohio Revised Code that might speak to a specific topic, for example.

In the course of learning how AI could improve my practice, I had not had occasion to consider how my clients’ use of AI would also affect it. Here, it was clear to me that the client had not personally read the commercial lease I had prepared or the substance of the email to which the draft was attached. And though the ChatGPT feedback featured green check-marks throughout (suggesting I had done a good job), I was more concerned with whether my client had taken the time to personally read the lease, so that when the occasion to discuss it inevitably arose – in the event of the tenant’s default, or a disastrous assignment to a third party, for example – the subject would be accessible on a conversational basis at the moment of crisis.

This concern is not merely practical; it reaches into the attorney’s ethical duties under Rule 1.4 of the Ohio Rules of Professional Conduct, which governs communication and requires a lawyer to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished,” to “keep the client reasonably informed” and, crucially, to “explain a matter to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.” If a client relies on an AI summary instead of reading the document itself, they may mistakenly believe they have understood the operative terms while in reality missing nuance, structure or risk allocation that AI tools may oversimplify, omit or mischaracterize.

Similarly, Rule 1.2 emphasizes that the client sets the objectives, while the lawyer advises and develops strategy. For a client to meaningfully exercise that authority, they must grasp – personally and directly – the terms of the transaction they are entering. A client who delegates their own understanding to an algorithm may unintentionally erode their ability to participate in the decision-making process, and the lawyer, unaware of the substitution, may mistakenly assume comprehension that does not exist.

AI tools, impressive though they are, do not absolve attorneys of their duty to ensure that clients understand their legal rights and obligations. If anything, they increase the burden to confirm understanding. As lawyers, we may soon find ourselves not only educating clients about the documents we draft but also about the limitations, hallucinations and overconfidence of the AI tools they use to interpret those documents. The Rules of Professional Conduct do not prohibit client use of AI – but they require us to bridge any resulting gaps in understanding so that decisions remain informed.

The irony is that as lawyers explore how AI can streamline and strengthen our own workflows, we must also become vigilant in recognizing the new blind spots it creates for the people we advise. The practice of law has always been a dialogue between attorney and client; AI is now an uninvited participant in that dialogue. When it stands in for the client’s own reading and reflection, it risks compromising the informed-consent framework at the heart of our ethical duties. The task before us is not to discourage technology, but to ensure that it supports – rather than supplants – the human understanding on which good legal judgment depends.