Generative AI Development: Canada Releases Voluntary Code of Conduct

October 04, 2023

Written By Ruth Promislow, Oscar Crawford-Ritchie and Julia Mogus

The federal government has recently released its voluntary Code of Practice (the Code) relating to advanced generative artificial intelligence (AI) systems. The Code identifies measures that organizations are encouraged to adopt when developing generative AI systems, aligned with the following six core principles:

  • Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.
  • Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety.
  • Fairness and equity: Organizations will assess and test systems for biases.
  • Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.
  • Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.
  • Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.

Organizations seeking to use, develop, and manage such systems are encouraged to integrate the principles of the Code into their operations and, by doing so, to ensure that the risks associated with the use of and reliance on AI are appropriately identified and mitigated.

The federal government's release of the Code came shortly after it published a Guide on the use of Generative AI for government institutions and opened a consultation on a proposed Code of Practice for generative AI systems. Bennett Jones has previously blogged about both: Generative Artificial Intelligence (AI): Canadian Government Continues to Clarify Use of Generative AI Systems and Artificial Intelligence—A Companion Document Offers a New Roadmap for Future AI Regulation in Canada.

Against the backdrop of these developments, Bill C-27, which includes draft AI legislation (the Artificial Intelligence and Data Act), is expected to be passed into law relatively soon, although it is worth noting that the bill has been under consideration since June 2022. This draft legislation includes substantial compliance obligations in connection with the design, development and deployment of AI systems in the private sector, and corresponding exposure to penalties for non-compliance. The focus of the draft legislation is on addressing potential harm (physical, psychological, damage to property, or economic loss) arising from the use of AI systems. In its current form, the draft legislation lacks clarity as to which activities involving the use of AI will be defined as "high risk" (a relevant standard for the imposition of obligations and penalties). At present, pending Bill C-27 being passed into law, regulation of AI in the private sector is governed by federal privacy legislation, the Personal Information Protection and Electronic Documents Act.

While the Code is voluntary, its underlying principles will likely serve as a framework for assessing regulatory compliance, and therefore provide a loose roadmap of how AI will be regulated. However, how those principles are interpreted will be critical to defining more precisely what compliance looks like. Likewise, how the concept of "high risk" activities is defined will be significant in understanding the relevant compliance standards.

In short, at present, there is no clearly defined roadmap from the federal government to guide organizations in the design, development and deployment of AI. Absent this roadmap, organizations seeking to deploy AI in their business operations may inadvertently expose the business to regulatory scrutiny and penalties. Careful navigation is required to reap the benefits of AI while effectively managing exposure.

The Bennett Jones Privacy and Data Protection group is available to discuss how your organization can effectively develop its AI compliance program.

Authors

Ruth E. Promislow
416.777.4688
promislowr@bennettjones.com

Oscar Crawford-Ritchie
416.777.7822
crawfordritchieo@bennettjones.com

Julia Mogus
416.777.7876
mogusj@bennettjones.com

Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors to explore how we can help you navigate your legal needs.

For permission to republish this or any other publication, contact Amrita Kochhar at kochhara@bennettjones.com.