LEGAL QUESTION – SHOULD YOU ASK THE AI?
PITFALLS ASSOCIATED WITH USING AI TOOLS FOR "LEGAL SERVICES"
Download a PDF copy of this Blog here.
The use of Large Language Model tools (commonly referred to as “AI”), such as ChatGPT and Claude, is becoming quite common in Canada. But what are the ramifications of using such tools for legal issues, such as legal research, legal advice, and the drafting of legal documents such as contracts, pleadings, or legal submissions?
In this report, we review some of the risks associated with relying on AI tools as a replacement for legal professionals.
The Privilege Problem
The recent case of United States v. Heppner, S.D.N.Y. February 17, 2026 [Heppner] held that attorney-client privilege does not attach to communications with AI tools. In Heppner, the defendant was charged with criminal offences and used Anthropic’s ‘Claude’ AI tool to generate reports (the “AI Documents”). The AI Documents outlined defence strategies and other information concerning potential arguments and charges.
The Court ultimately held that the AI Documents were not protected by attorney-client privilege for a number of reasons, including:
- The AI Documents were not communications between the defendant and his attorney as ‘Claude’ was not an attorney.
- The defendant’s communications with ‘Claude’ were not confidential because it was a third-party AI platform and Anthropic’s written privacy policy reserved the right to disclose such data to third parties such as government authorities.
While the Heppner case is not binding in Canada, we suspect that a Canadian court will soon grapple with the same issue and rule on whether privilege applies to communications with an AI tool.
Hallucinations / False Information
Anyone using AI tools for any length of time will have come across “hallucinated” information – an answer that either contains false information, or contains information that is not supported by the citations the AI tool provides. In the legal context, this might take the form of references to cases that do not exist, references to real cases that do not contain the quote or proposition cited, or the application of the law of the wrong jurisdiction.
Researchers at OpenAI have reportedly confirmed that such incorrect answers are mathematically inevitable – even with perfect training data. Accordingly, everything generated by an AI tool will always need to be double-checked.
The legal consequences of such hallucinations can also be significant, particularly in the context of a legal dispute, as courts across Canada have awarded costs against parties relying on hallucinated references (e.g., Reddy v. Saroya, 2025 ABCA 322; Lloyd's Register Canada Ltd. v. Choi, 2025 FC 1233). The Ontario Superior Court has even ordered a contempt hearing in Ko v. Li (2025 ONSC 2965)!
Inability to Access Non-Public Information
An AI tool is also only as good as its training data, and in the legal context there are frequently resources available to legal professionals that are not publicly available, and therefore not part of an AI tool’s dataset. As a result, AI tools may be missing information that is easily accessible to legal professionals practicing in the area.
AI tools also have no way to tap into the years of personal experience of legal professionals including knowledge of matters settled out of court.
Takeaways
While it may be tempting for a business to use AI tools as a way to save on its legal fees and get “fast answers”, reliance on AI tools brings with it significant business risk. Most businesses would be better served by consulting with a legal professional when legal issues arise.
For help with direct selling matters, please click here.