
Unchartered Territories: Canadian Courts and Law Societies Grapple with the Use of Generative Artificial Intelligence Tools

May 23, 2024


Written By Stephen Burns, Sebastien Gittens, Scott Bower and Ahmed Elmallah

Canadian courts and law societies alike face an ever-evolving challenge: recognizing the potential benefits of emergent generative artificial intelligence (GenAI) while managing the risks associated with using this technology. Chief among these risks is the tendency of GenAI to "hallucinate" (i.e., generate incorrect information).

This article canvasses developments in respect of: (1) practice directions issued by different Canadian courts on the use of GenAI in court-filed materials; (2) notices on the courts’ own use of GenAI; and (3) guidelines issued by various Canadian law societies on the use of GenAI in the legal profession.

Various practice directions and guidelines have been issued over the last several months. Following from our previous blog, New Practice Directions Consider Artificial Intelligence in Court Submissions, common themes of these reference materials are: (1) the importance of a “human in the loop” to verify the outputs of any GenAI tool; and (2) in certain jurisdictions, an additional disclosure requirement where GenAI is used to prepare court-filed materials.

1. Notices or Practice Directions Issued by Canadian Courts on Use of GenAI

The following Canadian courts have issued notices and/or practice directions relating to the use of GenAI in court-filed materials:

Alberta

On October 6, 2023, a joint committee of the Chief Justices of the Alberta Court of Justice, Court of King's Bench of Alberta, and Court of Appeal of Alberta collectively issued a “Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions when Using Large Language Models”.1

The notice acknowledges concerns surrounding the potential fabrication of legal authorities through large language models (LLMs). LLMs are broadly defined in the notice as referring to any type of AI system capable of processing and generating human-like text based on vast amounts of training data.

The notice outlines the following three points:

  1. litigants are urged to exercise caution when referencing legal authorities or analysis derived from LLMs in their submissions;
  2. it is essential to rely exclusively on authoritative sources (e.g., official court websites or well-established public services such as CanLII) when referring to case law, statutes, or commentary in making representations to the courts; and
  3. a "human in the loop" is required: any AI-generated submissions must be verified with meaningful human control. Verification can be achieved by cross-referencing with reliable legal databases and ensuring that the citations and their content hold up to scrutiny.

Accordingly, the notice attempts to balance the use of emerging technologies with the need to preserve the integrity of cited legal references.

Federal

On December 20, 2023, the Federal Court issued a comprehensive notice titled “The Use of Artificial Intelligence in Court Proceedings”.2

This notice requires counsel, parties, and interveners in legal proceedings at the Federal Court to make a declaration for artificial intelligence (AI)-generated content, and to consider certain principles when using AI to prepare documentation filed with the Court.

  1. Declaration

    To ensure that the Court understands how AI has been used, the notice requires that any document prepared for the purpose of litigation, and submitted to the Court by or on behalf of a party or intervener that contains content created or generated by AI, must include a declaration. The declaration shall be made in the first paragraph of the document in question, for instance, the first paragraph of a Memorandum of Fact and Law or Written Representations.
  2. Principles on the Use of AI

    Before using AI in a proceeding, the Court encourages counsel to consider providing traditional, human services to clients if there is reason to believe a client may not be familiar with, or may not wish to use, AI.

    Further, when referring to jurisprudence, statutes, policies, or commentaries in documents submitted to the Court, it is crucial to use only well-recognized and reliable sources.

    As well, to ensure accuracy and trustworthiness, it is essential for there to be a “human in the loop” to check documents and material generated by AI. The Court urges verification of any AI-created content in these documents. This kind of verification aligns with the standards generally required within the legal profession.

The Court notes that the notice only applies to certain forms of AI, defined as a computer system capable of generating new content and independently creating or generating information or documents, usually based on prompts or information provided to the system. To this end, the notice does not apply to AI that lacks the creative ability to generate new content.

On May 7, 2024, the Federal Court issued a further “Notice to the Parties and the Profession – The Use of Artificial Intelligence in Court Proceedings”.3

In this notice, the Court reiterates its expectation that parties to proceedings before the Court inform it, and each other, if documents they submit to the Court that have been prepared for the purposes of litigation include content created or generated by AI. This is to be done by a declaration in the first paragraph stating that AI was used in preparing the document, either in its entirety or only for specifically identified paragraphs (the “Declaration”). For greater certainty, the Declaration is intended only to notify the Court and the parties so that they can govern themselves accordingly.

This updated notice applies to all materials that are: (1) submitted to the Court; and (2) prepared for the purpose of litigation. For greater certainty, the notice does not apply to: (1) Certified Tribunal Records submitted by tribunals or other third-party decision-makers; or (2) expert reports.

The Court confirms that the inclusion of a Declaration, in and of itself, will not attract an adverse inference by the Court.

Newfoundland & Labrador

On October 12, 2023, the Supreme Court of Newfoundland and Labrador issued a notice to the profession and the general public, titled “Ensuring the Integrity of Court Submissions when Using Large Language Models”.4

In this notice, the Court outlines requirements for: (1) caution by practitioners and litigants when referencing legal authorities derived from large language models (LLMs); (2) reliance exclusively on authoritative sources (e.g., official court websites, commonly referenced commercial publishers and well-established public services such as CanLII); and (3) a “human in the loop” to verify any AI-generated submissions.

Nova Scotia

On October 27, 2023, the Provincial Court of Nova Scotia issued guidance titled “Use of Artificial Intelligence (AI) and Protecting the Integrity of Court Submissions in Provincial Court”.5

The Court acknowledges that, while there are tremendous benefits to the application of AI, there are also competing consequences.

The Court encourages counsel and litigants to exercise caution when relying on reasoning derived from artificial intelligence applications. The Court further confirms its expectation that all written and oral submissions referencing case law, statutes or commentary will be limited to accredited and established legal databases.

Finally, the Court notes that any party wishing to rely on materials that were generated with the use of artificial intelligence must articulate how the artificial intelligence was used.

Manitoba

On June 23, 2023, the Court of King's Bench of Manitoba issued a practice direction, titled “Use of Artificial Intelligence in Court Submissions”.6 A summary of this practice direction is also found in our previous blog.

In the practice direction, the Court acknowledges that AI is rapidly developing, and that it is impossible to “completely and accurately predict how [AI] may develop or how to exactly define the responsible use of [AI] in court cases". Further, the Court confirms that there are concerns about the reliability and accuracy of information generated from the use of AI.

To that end, the Court now requires that any materials filed with the Court that were prepared with the use of AI indicate how the AI was used.

Quebec

On October 24, 2023, the Superior Court of Quebec issued a notice virtually identical to Alberta's, titled “Integrity of Court Submissions When Using Large Language Models”.7

Quebec’s notice issues the same caution, emphasizes reliance on authoritative sources, and imposes the same "human in the loop" requirement as Alberta's.

Yukon

On June 26, 2023, the Supreme Court of Yukon likewise issued a practice direction, titled “Use of Artificial Intelligence Tools”.8 A summary is also provided in our previous blog, New Practice Directions Consider Artificial Intelligence in Court Submissions.

Similar to notices by other Courts, this practice direction also acknowledges the legitimate concerns about the reliability and accuracy of information generated from the use of AI. Accordingly, if any form of AI is used for legal research or submissions, in any matter and in any form, before the Court, the party must advise the Court of what tool was used and for what purpose.

As an example of a tool requiring disclosure, the Court made specific reference to ChatGPT, a well-known AI tool.


2. Guidance on Court’s Own Use of GenAI

The Federal Court has also issued guidance on its own use of GenAI:

Federal

On December 20, 2023, the Federal Court issued a notice titled “Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence”.9

This notice is unique in that it outlines how the Court itself intends to use GenAI, rather than how parties, self-represented litigants and interveners appearing before the Court should use these tools.

The notice outlines the following three guidelines:

  1. the Court will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultation. For greater certainty, this includes the Court’s determination of the issues raised by the parties, as reflected in its Reasons for Judgment and its Reasons for Order, or any other decision made by the Court in a proceeding;
  2. the Court will embrace the following principles in any internal use of AI: (1) accountability, (2) respect of fundamental rights, (3) non-discrimination, (4) accuracy, (5) transparency, (6) cybersecurity and (7) “human in the loop”; and
  3. if a specific use of AI by the Court may have an impact on the profession or public, the Court will consult the relevant stakeholders before implementing that specific use.

Accordingly, the notice provides a unique viewpoint on how GenAI can be integrated by courts into the process of judicial decision-making.


3. Law Society Guidelines on Use of GenAI

In addition to the notices and practice directions issued by various courts, the following law societies have also issued guidance on the use of GenAI in the legal profession:

Alberta

In late 2023, the Law Society of Alberta issued professional conduct guidance titled “The Generative AI Playbook: How Lawyers Can Safely Take Advantage of the Opportunities Offered by Generative AI”.10

Among other points, the guidance reminds lawyers of the duty of technological competence and provides the following best practices for lawyers using GenAI:

  • determining how the GenAI tool was trained;
  • keeping current on the status of Canada’s proposed Artificial Intelligence and Data Act (AIDA); and
  • reviewing the Terms of Service provided by GenAI providers. For example, OpenAI’s Terms of Service expressly note that OpenAI’s models are not fine-tuned to provide legal advice.

The guidance also highlights a number of risks associated with the use of GenAI, including: confidentiality and security, fraud and cybercrime, knowledge cutoff, hallucinations, bias and copyright infringement.

The guidance then provides a number of recommendations including: (1) being careful when using GenAI; (2) informing clients of the use of GenAI; (3) knowing the rules (e.g., regulations) associated with using GenAI; (4) protecting client confidentiality; (5) protecting client privacy; (6) using GenAI to supplement, not replace, legal judgment; (7) verifying the output of GenAI tools; (8) supervising juniors using GenAI; (9) understanding the technology; (10) not adopting a false sense of comfort; (11) ensuring time and budget pressures don’t incentivize overreliance on GenAI; and (12) experimenting with innovative uses of GenAI.

British Columbia

In July 2023, the Law Society of BC initially released guidance on the use of ChatGPT, which stated:11

With the rapid deployment and use of technologies like ChatGPT, there has been a growing level of AI-generated materials being used in court proceedings. Counsel are reminded that the ethical obligation to ensure the accuracy of materials submitted to court remains with you. Where materials are generated using technologies such as ChatGPT, it would be prudent to advise the court accordingly. The Law Society is currently examining this issue in more detail and we expect that further guidance to the profession will be offered in the coming weeks.

On November 20, 2023, the Law Society of BC followed up on this guidance with a practice resource, titled “Guidance on Professional Responsibility and Generative AI”.12

The guidance sets out a number of professional responsibility considerations for lawyers contemplating the use of GenAI, including: competence; confidentiality; honesty and candor; responsibility; information security; requirements of courts or other decision-makers; reasonable fees and disbursements; plagiarism and copyright; fraud and deep fakes; and bias.

Manitoba

In April 2024, the Law Society of Manitoba released “Generative Artificial Intelligence: Guidelines for Use in the Practice of Law”.13

These guidelines are intended to assist lawyers in the learning process, as well as in using generative AI in a manner consistent with the professional obligations set out in the Law Society of Manitoba Code of Professional Conduct.

Various guidelines are provided, including: (1) being technologically competent; (2) maintaining confidentiality; (3) guarding against discrimination, harassment and bias; (4) supervising work; (5) treating tribunals with candor and respect; and (6) complying with applicable laws and rules.

Newfoundland & Labrador

The Law Society of Newfoundland and Labrador issued a general notice, titled “Artificial Intelligence in Your Practice”.14

This notice reminds lawyers that their professional responsibilities do not change when using AI. In particular, lawyers remain bound by the duties of competence and confidentiality and all other obligations under the Code of Professional Conduct and the Law Society Rules.

The notice also advises lawyers, as a matter of best practice, to ensure they understand the limitations of AI software before using it and to carefully review any work created by AI to confirm it is complete, accurate, and relevant.

Nova Scotia

The Nova Scotia Barristers' Society provides a general article titled “Artificial Intelligence in the Practice of Law: What is AI and can I or should I use it in my practice”.15

The article reminds lawyers to be vigilant when using generative AI and canvasses some of the risks of using online generative AI tools, such as ChatGPT.

Ontario

In April 2024, the Law Society of Ontario issued a comprehensive white paper titled “Licensee use of generative artificial intelligence”.16

The white paper is provided to help licensees as they navigate the use of generative AI tools and stresses the relevance of rules of professional conduct including competence, confidentiality, supervision, licensee-client relationships, fees and disbursements, and discrimination and harassment.

The law society also provides a helpful quick-start checklist to guide practitioners before, during and after using AI.17

The law society also offers practice tips18 for using generative AI, including: knowing your obligations; understanding how the technology works; prioritizing confidentiality and privacy; learning to create effective prompts; confirming and verifying AI-generated outputs; avoiding dependency and overreliance; establishing AI use policies for employees; and staying informed on AI developments.

Saskatchewan

In February 2024, the Law Society of Saskatchewan published guidelines titled “Guidelines for the Use of Generative Artificial Intelligence in the Practice of Law”.19

The guidance was prepared with the goals of: (1) helping lawyers use generative AI in a manner consistent with their professional obligations; and (2) assisting legal workplaces in developing appropriate internal policies on generative AI.

To this end, the guidance highlights the importance of various duties under the Code of Professional Conduct, including the duties of competence and diligence, confidentiality, compliance with the law, supervision and delegation, communication, charging for work, and candor to the tribunal, as well as the prohibition on discrimination and harassment and the need to guard against bias.


While not all law societies have issued practice guidelines on the use of GenAI, the codes and rules of professional conduct adopted by most law societies incorporate a technological competence requirement. This requirement necessitates that lawyers understand the capabilities and limitations of AI (and GenAI) tools prior to using them in their practice. The requirement for technological competence was initially adopted in 2019 by the Federation of Law Societies of Canada in the Model Code of Professional Conduct,20 and is now incorporated in the codes and rules of professional conduct of Alberta,21 Ontario,22 Manitoba,23 Saskatchewan,24 Yukon,25 the Northwest Territories,26 and Nova Scotia.27

4. AIDA

The federal government has tabled Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.28 One component of this legislation is the Artificial Intelligence and Data Act (AIDA), which seeks to regulate "high impact AI systems". As the companion document released by Innovation, Science and Economic Development Canada indicates, the government will consider a list of factors in determining which AI systems would be considered "high impact" under AIDA:29

  1. evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
  2. the severity of potential harms;
  3. the scale of use;
  4. the nature of harms or adverse impacts that have already taken place;
  5. the extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the system;
  6. imbalances of economic or social circumstances, or age of impacted persons; and
  7. the degree to which the risks are adequately regulated under another law.

In this light, it is possible that certain AI tools used in the preparation of court submissions could be deemed "high impact AI systems," and thus subject to AIDA. If so, organizations that manage the use of these AI systems may be subject to certain obligations, including ensuring adequate monitoring and human oversight of the system's use and output, and intervening as needed.

5. Additional Developments

Following a number of similar cases coming out of US courts, the British Columbia Supreme Court recently addressed a child custody case in which a lawyer filed court submissions that included fictitious cases suggested by ChatGPT.30 The court held:31

Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court.  Unchecked, it can lead to a miscarriage of justice.

Because "additional effort and expense were incurred" as a result of the insertion of these fictitious cases, the court held the lawyer personally responsible for those expenses. The lawyer was also ordered to: (1) review all of her files that are before the court within 30 days; and (2) advise the opposing parties and the court of any filed materials containing case citations or summaries that were obtained through a GenAI tool.

6. Conclusion

As highlighted in our previous blog, New Practice Directions Consider Artificial Intelligence in Court Submissions, these guidelines and practice directions demonstrate a continuing trend by Canadian courts and law societies to regulate the use of GenAI by the legal profession. As the court held in Zhang v. Chen:32

As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical.  The integrity of the justice system requires no less. 

It will be important for practitioners to monitor these and any new developments regarding GenAI, given the significant implications of failing to do so.

The Bennett Jones Privacy and Data Protection group is available to discuss how your organization can responsibly and effectively manage the use of information and artificial intelligence in its operations.

For more AI insights, please visit our Artificial Intelligence Insights Hub.


20 Federation of Law Societies of Canada, Model Code of Professional Conduct, s 3.1-2 commentary [4A][4B].

21 February 27, 2020 Law Society of Alberta Code of Conduct Changes. See also Law Society of Alberta Code of Conduct, s 3.1-2 commentary [5],[6].

22 Law Society of Ontario, Rules of Professional Conduct, s 3.1-2 commentary [4A][4B].

23 Law Society of Manitoba, Code of Professional Conduct, s 3.1-2 commentary [4A][4B].

24 Law Society of Saskatchewan, Code of Professional Conduct 2023, s 3.1-2 commentary [4A][4B].

25 Law Society of Yukon, Code of Professional Conduct, July 24, 2023, s 3.1-2 commentary [4A][4B].

26 Law Society of the Northwest Territories, Code of Professional Conduct, March 16, 2021, s 3.1-2 commentary [4A][4B].

27 Nova Scotia Barristers' Society, Code of Professional Conduct, s 3.1-2 commentary [4A][4B].

28 Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021 (second reading completed by the House of Commons on 24 April 2023).

29 Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document (last modified 13 March 2023).

31 Ibid at para. 42.

32 Ibid at para. 46.
