Do commercial AI tools always have accurate legal information?
The short answer to this is no. AI tools are known to “hallucinate,” or make up plausible-seeming responses that are not grounded in any real source. This is especially common in the legal field, where AI tools will invent court cases in response to searches for legal authority on certain points, or will attribute fictional quotes and holdings to real cases. In the last couple of years, many lawyers have faced sanctions for submitting documents to courts that contain hallucinated cases. Just this month, a company sued OpenAI, arguing that OpenAI engaged in the unauthorized practice of law when ChatGPT encouraged a woman to try to re-open a settled lawsuit and then generated the pleadings she used in court. Some states are taking steps to prevent AI tools from providing legal advice. A bill in New York would ban chatbots from giving legal or medical advice and give people a right to sue companies whose chatbots violate the law. As with any information generated by AI tools, it is important to check AI output about laws or legal cases against actual legal databases to verify that the information is correct.
If I use AI to develop a summary of my case for an attorney, is that summary privileged?
When it comes to protecting documents from disclosure to the opposing side in a legal case, there are two important protections to understand: the attorney-client privilege and the work product protection. Many people know about the attorney-client privilege, which protects private communications between an attorney and someone seeking legal advice (even if they are not officially a client yet). This privilege generally applies only to confidential communications between the attorney and client; in most cases, if a third party joins the conversation, the privilege is waived. The work product protection covers documents that are prepared in anticipation of litigation. Just like the attorney-client privilege, this protection can be waived if the materials are shown to a third party in a way that makes it likely the materials could end up in the hands of the legal adversary.
What does this mean for people using AI? On February 10, two federal judges came to opposite conclusions about whether a litigant’s AI prompts and responses had to be disclosed to the other side in litigation. In New York, a judge in the Southern District addressed whether a defendant’s communications with AI, seized by the government pursuant to a search warrant, could be protected from inspection (U.S. v. Heppner). The judge held that neither the attorney-client privilege nor the work product protection applied to those materials. In his written opinion memorializing the decision, the judge reasoned that (1) there was no attorney-client privilege because the defendant’s communications with Claude were not with an attorney; (2) there was no expectation of confidentiality in the communications with Claude because Claude’s terms of use make clear that the provider reserves the right to disclose data to third parties and to use people’s inputs for training; and (3) because the lawyer did not direct the defendant to use Claude, the defendant was not using it for the purpose of obtaining legal advice. The judge read the scope of the work product protection narrowly and found it did not apply because the AI documents were not prepared at the behest of counsel and did not reflect the lawyer’s case strategy.
That same day, in a civil case in the Eastern District of Michigan, a judge granted a self-represented plaintiff’s motion to protect information about her use of AI tools in connection with the litigation. The judge in that case (Warner v. Gilbarco, Inc.) found that the work product doctrine protected the plaintiff’s AI searches made in conjunction with her litigation. The judge held that the work product doctrine protects parties’ ability to use AI tools just as they use more traditional tools to prepare for litigation. The judge also noted that AI programs are “tools, not persons,” and that the protection over work product is waived only when the work product is disclosed in a way that is likely to get it into the hands of a litigant’s adversary. He found that entering work product into an LLM was not likely to result in disclosure of the work product to an adversary.
Two district court decisions cannot tell us much about how this law will develop, nor whether it will develop uniformly across the country. In Massachusetts, for example, the work product protection does not depend on documents being created “at the behest” of counsel, as the New York court held was necessary for the protection to attach. Instead, where litigants create documents “because of” existing or anticipated litigation, those documents are protected by the work product doctrine. Factors such as whether an AI tool saves, trains on, and might disclose inputs may be relevant to determinations about how these protections apply, as may the specific factual circumstances surrounding why and how a litigant used the tool.
What are the takeaways from these cases?
Technology develops more quickly than the law, and courts are just starting to decide how to handle generative AI in litigation. The most important lesson from this rapidly developing landscape is that you should always speak to a lawyer before putting information about your case into an AI tool, and review legal information provided by AI tools through a critical lens.
If you are looking to speak to a lawyer about your criminal, employment, or academic legal issues, please contact us.