Generative Artificial Intelligence (AI): Canadian Government Continues to Clarify Use of Generative AI Systems

September 20, 2023

Written By Stephen D. Burns, J. Sébastien A. Gittens, Ahmed Elmallah and David Wainer

The Canadian government continues to take note of, and react to, the widespread use of generative artificial intelligence (AI). Generative AI is a type of AI that generates output (which can include text, images or other materials) based on the material and information that the user inputs (e.g., ChatGPT, DALL-E 2 and Midjourney). In recent developments, the Canadian government has: (1) opened consultation on a proposed Code of Practice (the Code) and provided a proposed framework for the Code;1 and (2) published a Guide on the use of Generative AI for federal institutions on September 6, 2023 (the Guide).2

More generally, as discussed below, Canadian companies adopting generative AI solutions may wish to take note of the initial framework set out for the Code, as well as the information in the Guide, in order to minimize risk and ensure compliance with future AI legislation. A summary of the key points of the proposed Code and the Guide is provided below.

1. Consultation on the Code

The Code is intended to help developers, deployers and operators of generative AI systems avoid harmful impacts from their systems, and to prepare them for, and allow a smooth transition into, future compliance with the Artificial Intelligence and Data Act (AIDA),3 should that legislation receive royal assent.

In particular, the government has stated that it is committed to developing a code of practice, which would be implemented on a voluntary basis by Canadian firms ahead of the coming into force of AIDA.4 For a detailed look at what future AI regulation in Canada may entail, please refer to our blog, Artificial Intelligence—A Companion Document Offers a New Roadmap for Future AI Regulation in Canada.

In developing the Code, the Canadian government has set out a proposed framework and has now opened consultation on it. To that end, the government is requesting comments on the following elements of the proposed framework:

  • Safety: Safety considerations should involve a holistic analysis of a generative AI system's lifecycle, and should contemplate the "wide range of uses of many generative AI systems".5

    In the proposed framework for the Code, developers and deployers would be asked to identify ways that their system may attract malicious use (e.g., to impersonate real individuals) and take steps to prevent such use from occurring.

    Additionally, developers, deployers and operators would be asked to identify ways that their system may attract harmful inappropriate use (e.g., use of a large language model for medical or legal advice) and, again, take steps to prevent such inappropriate use from occurring.
  • Fairness and Equity: Developers of generative AI systems should carefully curate the datasets used in their systems to mitigate the risk of bias, and should implement measures to mitigate the risk of biased output.

    To this end, the Code would suggest that developers assess and curate datasets to avoid low-quality data and non-representative datasets that could introduce bias. Further, developers, deployers and operators would be advised to implement measures to assess and mitigate the risk of biased output (e.g., fine-tuning).
  • Transparency: Individuals should be able to recognize when they are interacting with an AI system or with AI-generated content.

    Accordingly, a future Code would recommend that developers and deployers provide a reliable and freely available method to detect content generated by the AI system (e.g., watermarking), as well as provide a meaningful explanation of the process used to develop the system (e.g., provenance of training data, as well as measures taken to identify and address risks).

    Additionally, operators would be asked to ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.
  • Human Oversight and Monitoring: It is noted that human oversight and monitoring "are critical to ensure that these systems are developed, deployed and used safely."6

    A future Code would potentially recommend that deployers and operators of generative AI systems provide human oversight in the deployment and operation of their systems. Further, developers, deployers and operators would be asked to implement mechanisms to allow adverse impacts to be identified and reported after the system is made available.
  • Validity and Robustness: Emphasis is placed on the need for AI systems to build trust by working as intended across the range of contexts in which they may be used, and developers are encouraged to use a variety of testing methods.

    In this vein, a future Code would recommend that developers use a wide variety of testing methods across a spectrum of tasks and contexts (e.g., adversarial testing) to measure performance and identify vulnerabilities. As well, developers, deployers and operators would be asked to employ appropriate cybersecurity measures to prevent or identify adversarial attacks on the system (e.g., data poisoning).
  • Accountability: Given the powerful nature of generative AI systems, it is further noted that "particular care needs to be taken with generative AI systems to ensure that a comprehensive and multifaceted risk management process is followed."7

    Developers, deployers and operators of generative AI systems would therefore be asked to ensure that multiple lines of defence are in place to secure the safety of their system, such as undertaking both internal and external (independent) audits of the system before and after it is put into operation. They would also be asked to develop policies, procedures and training to ensure that roles and responsibilities are clearly defined, and that staff are familiar with their duties and the organization's risk management practices.

Accordingly, as the framework for the Code evolves through the consultative process, it is expected to ultimately provide a helpful guide for Canadian companies involved in the development, deployment and operation of generative AI systems as they prepare for the coming into force of AIDA.

2. The Guide on the use of Generative AI

The Guide is another example of the Canadian government responding to the use of generative AI. It provides guidance to federal institutions and their employees on their use of generative AI tools, including by identifying challenges and concerns relating to such use, putting forward principles for using these tools responsibly, and offering policy considerations and best practices.

While the Guide is intended for federal institutions, the issues it addresses may apply more broadly to the use of generative AI systems. Accordingly, organizations may consider referring to the Guide as a template when developing their own internal policies on the use of generative AI.

In more detail, the Guide identifies challenges and concerns with the use of generative AI, including the generation of inaccurate or incorrect content (known as "hallucinations") and/or the amplification of biases. More generally, the government notes that generative AI may pose "risks to the integrity and security of federal institutions."8

To mitigate these challenges and risks, the Guide recommends that federal institutions adopt the "FASTER" approach:

  • Fair: Ensure that content generated by AI does not amplify biases and that it complies with procedural and substantive fairness obligations;
  • Accountable: Hold federal institutions accountable for their use of generative AI;
  • Secure: Ensure that the federal institution accounts for the security classification of the information;
  • Transparent: Identify content that has been produced using generative AI;
  • Educated: Ensure that institutions and employees learn about the responsible use of generative AI; and
  • Relevant: Ensure that the use of generative AI "contributes to improved outcomes for Canadians."9

Organizations may take heed of the FASTER approach as a potential guiding framework for the development of their own policies on the use of generative AI.

Other highlights noted in the Guide include the following:

  • Administrative Decision-Making: Generative AI, at its current stage of development, may not be suited for use in administrative decision-making; for example, it may not be appropriate to use a generative AI system to "summarize a client's data or to determine whether they are eligible for a service."10 The Guide notes that the use of generative AI in this context may not align with the FASTER principles.
  • Privacy Considerations: Personal information should not be entered into a generative AI tool or service unless a contract is in place with the supplier that covers how the information will be used and protected. Before using a generative AI tool, federal institutions must make sure that the collection and use of personal information, including information used to train the tool, meet their privacy obligations. Institutions are reminded that, in each instance where the deployment of generative AI is being considered, they must determine whether a Privacy Impact Assessment is required.
  • Potential Issues and Best Practices: The Guide provides a brief overview of several areas of risk and sets out best practices for the responsible use of generative AI in federal institutions, including the following:
    • Protection of Information: The Guide recommends that sensitive or personal information not be used with any generative AI tool.
    • Bias: To mitigate the potential for generative AI to amplify bias, the Guide recommends that federal institutions review all generated content to "ensure that it aligns with Government of Canada commitments."11
    • Quality: To account for inaccurate generated content, the Guide recommends that institutions indicate where and when generative AI has been used and carefully review such content.
    • Public Service Autonomy: Overreliance on generative AI may interfere with public servants' autonomy and judgment, and the Guide recommends that generative AI be used only when required. Organizations would be well served to incorporate these recommendations into their AI policies.
    • Legal Risks: The Guide identifies various legal risks posed by the use of generative AI, including (but not limited to) risks relating to human rights, privacy obligations, intellectual property protection and procedural fairness. It notes that federal institutions should comply with all federal directives regarding AI, check whether generated content is similar to copyright-protected material, and consult legal services about the legal risks of generative AI.
    • Distinguishing Humans from Machines: Federal institutions should clearly communicate to the public when and how generative AI is being used, so that clients can easily distinguish between human and machine.
    • Environmental Impacts: The development and use of generative AI systems can be a significant source of greenhouse gas emissions. The Guide recommends using generative AI tools hosted in zero-emission data centres.

In view of the foregoing, Canadian companies exploring the use of generative AI may take note of the FASTER principles set out in the Guide, as well as the various best practices proposed.

Conclusion

Taken together, the Code and the Guide provide helpful guidance for organizations that wish to be proactive as they develop their AI policies and prepare for compliance with AIDA, should it receive royal assent.

The Bennett Jones Privacy and Data Protection group is available to discuss how your organization can responsibly integrate AI into its operations and any challenges you may encounter.


1 Government of Canada, Canadian Guardrails for Generative AI – Code of Practice, last modified 16 August 2023 ["Consultation Announcement"].

2 Government of Canada, Guide on the use of Generative AI, last modified 6 September 2023 ["The Guide"].

3 Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021 (second reading completed by the House of Commons on 24 April 2023).

4 Consultation Announcement.

5 Consultation Announcement.

6 Consultation Announcement.

7 Consultation Announcement.

8 The Guide.

9 The Guide.

10 The Guide.

11 The Guide.

Authors

Stephen D. Burns
403.298.3050
burnss@bennettjones.com

J. Sébastien A. Gittens
403.298.3409
gittenss@bennettjones.com

Ahmed Elmallah
780.917.4265
elmallaha@bennettjones.com

David Wainer
403.298.3264
wainerd@bennettjones.com



Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors to explore how we can help you navigate your legal needs.

For permission to republish this or any other publication, contact Amrita Kochhar at kochhara@bennettjones.com.