Using AI to Gather Information About Your Personal Injury Claim: The Risks and What You Should Know



April 20, 2026

Artificial Intelligence (AI) tools like ChatGPT, Gemini, Perplexity, and Claude are becoming increasingly common for quick answers to everyday questions and tasks. However, their use for more complex tasks can come with risks.

Relying solely on AI to answer questions about your personal injury claim or to seek legal advice can lead to incorrect or unreliable information that can hurt your claim. This article discusses the risks of AI and explains why personalized support from an experienced personal injury lawyer is a better option for ensuring your rights are properly protected.

What are the downsides of AI?

AI does not “think” or “analyze” information like a human does. Instead, it guesses what the correct answer to your question might be, based on patterns in information it takes from the internet. It does not truly understand your question and cannot critically analyze whether the answers it gives are accurate or logical. This is why AI is not a reliable source of legal information and cannot be trusted to accurately assess a legal claim.

The Alberta Courts have issued a Notice to the Public and Legal Profession cautioning litigants about the risks of using AI tools in preparing legal materials. The Notice, entitled Ensuring the Integrity of Court Submissions when Using Large Language Models, contains recommendations that apply to both lawyers and self-represented litigants.

The risks associated with AI-generated materials include:

i.      Hallucinations

AI is known to “hallucinate” information. This is when it answers your questions by creating fake but convincing-sounding information, or by presenting true information in an inaccurate or misleading way.

AI tools are known to do this when asked legal questions. Two recent Alberta cases highlight this risk:

In DJ v SN, 2025 ABCA 383, a self-represented litigant submitted materials to the court referencing cases that did not exist. She admitted that the fake citations had been generated through her use of AI tools. In addition to dismissing her appeal, the court ordered her to pay an additional $500 in costs. The court advised that AI should not be trusted to perform legal research and warned that self-represented litigants presenting hallucinated case law to the courts “can expect more substantial penalties to be imposed in future cases”.

In a second case, HDO v MDF, 2026 ABCA 45, a self-represented litigant also submitted materials containing hallucinated cases and irrelevant cases that did not stand for the propositions they were said to support. The Court again warned that “serious penalties can be imposed for misleading the court”.

ii.    The “Garbage In, Garbage Out” phenomenon

AI answers questions with information from the internet, which can be misleading, biased or simply wrong. This is known as the “Garbage In, Garbage Out” phenomenon. Because AI does not actually understand the information it gives, it does not perform any analysis or verification of the accuracy of the content it reports back to you.

Similarly, you may not know the right questions to ask, which means you could miss important information that you need to know. Large language models (LLMs) like ChatGPT respond only to the prompts you provide. Beyond concerns about accuracy, if you do not know which prompts to use, or which facts matter to your claim, you will not get complete information.

iii.   Lack of transparency

There is little to no transparency in how AI reaches its conclusions or what information it uses to do so. This is known as the “Black Box” phenomenon. This means that you cannot verify any of its work to make sure that it is in fact sound and accurate. Sometimes, you will see source citations (e.g., AI overviews) but again, you need to check the reliability of the underlying source.

iv.   Lack of consistency

AI has also been shown to give inconsistent answers to questions. When asked the same question on different days or in different ways, it can give different responses, making it highly unreliable.

v.    Overly agreeable with users

Researchers at Stanford University have found that AI tools are more likely to be “overly agreeable” than a human being when users are discussing personal problems. The result is that “AI systems might tell you what you want to hear, but perhaps not what you need to hear”.

Legal claims are often complicated and pursuing them involves risk and uncertainty. Receiving overly optimistic advice, which ignores or downplays weaknesses and problems in your case, could lead to false expectations or harm your chances of success.

Risks of using AI for your personal injury claim

Personal injury law is complex and highly detail oriented. Each claim requires careful analysis of the facts, because no two claims are the same. Many of the analyses needed, such as assessing witness credibility or interpreting evidence, require critical thinking, sound judgment that comes with expertise, and an understanding of the surrounding context. This kind of careful, nuanced analysis and judgment cannot be achieved by AI.

In addition, personal injury law is a constantly evolving field, and the rules that may apply are highly dependent on jurisdiction. Timelines for filing claims, insurance schemes and court procedures can vary between provinces and countries. AI may not take these considerations into account and may cite outdated information or rely on law and procedure from other jurisdictions that do not apply in Alberta.

Personal injury claims also involve personal, sensitive details about your health, day-to-day life, and financial well-being. Understanding how an injury affects your life requires human empathy and understanding, which AI simply cannot provide.

Other practical risks to your case from AI use

i.      AI does not keep your information confidential

Your lawyer is legally required to keep your communications confidential. This protected relationship is important because it allows you to share all details about your case so your lawyer can provide accurate and effective advice. It also allows you to control who has access to private information about you.

If you input sensitive information about your case into AI, such as medical or financial records, or details about communications with your lawyer, you risk losing the privacy of your information. In addition, sharing the information outside of the lawyer-client relationship may mean it is no longer considered confidential. This could impact the protected relationship with your lawyer.

There have been serious concerns raised about the privacy of AI tools. In most cases, there is no requirement for these tools to keep your information confidential, and we do not know where the information is stored or how it could be further used. This makes inputting sensitive details about your case into AI even more risky.

While the full impact of AI in this context is not yet known in Canada, a recent case from a New York federal court highlights the risk. In United States v. Heppner, the court ruled that written exchanges between a defendant and the LLM Claude were not protected by attorney-client privilege or the work product doctrine. This was, in part, because the legal concept of privilege depends on the confidential relationship between a client and a licensed lawyer. In this case, the “conversation” was between a defendant and an LLM. In addition, that conversation involved a third-party platform governed by a privacy policy that alerted users that data could be collected and disclosed.

Many people assume that their use of AI tools to organize information and prepare notes before talking to a lawyer takes place in private. This decision shows that assumption is potentially risky. The Heppner case indicates that privilege depends on lawyer involvement and confidentiality at the time the material is created. Further, the case clearly states that showing the material to a lawyer later does not make it privileged. The full fallout from this case is not yet known, but at a minimum it highlights the importance of being cautious in your use of AI tools in the context of legal matters.

For more information on this case, read: AI Chats Lose Privilege Protection in US Court Ruling.

ii.    No accountability for false information

Your lawyer has ethical and legal obligations to provide you with honest, candid, competent advice and services. They are also professionally regulated and insured. AI has none of these obligations.

If AI provides incorrect or incomplete information or advice, there could be damaging consequences to your case, with no way to hold anyone accountable.

iii.   Investigating AI-generated information or advice may lead to an increase in legal fees

Aside from the confidentiality/privilege risks noted above, bringing inaccurate or incomplete information generated by AI to your lawyer could increase the time they must spend on your case, in turn raising the overall cost of their services. If reliance on AI harms your case, your lawyer may also need to spend additional time working to try to mitigate any further negative impacts.

While AI can be a useful tool in certain contexts, it is never a substitute for legal advice from an experienced professional. This is particularly relevant when it comes to personal injury matters, where claims are complex and fact-specific.

When it comes to your health, your financial recovery, and your future, it is therefore important to seek personalized legal advice from an experienced personal injury lawyer. This will help to ensure that your rights are protected and will give you the best possible opportunity to achieve a fair and successful outcome for your claim.

Questions about an injury claim? We have answers. CONTACT the lawyers at CAM LLP for a free legal consultation.