Written By Ruth Promislow, Matthew Flynn and Sakina Babwani
ChatGPT—the artificial intelligence-powered chatbot—has been making headlines since its initial release on November 30, 2022. Most recently, it has drawn attention in connection with privacy regulation.
The Office of the Privacy Commissioner of Canada (OPC) recently announced that it has launched an investigation into OpenAI, the company behind ChatGPT. The investigation was launched in response to a complaint alleging that OpenAI collects, uses and discloses personal information without consent. The OPC has not yet provided details of the investigation. In announcing the investigation, Philippe Dufresne, the Privacy Commissioner of Canada, stated that "AI technology and its effects on privacy is a priority for my Office".
The regulation of artificial intelligence is also the subject matter of Bill C-27, which includes the draft privacy regime to replace the existing federal legislation as well as the Artificial Intelligence and Data Act (AIDA). The AIDA includes provisions that impact organizations involved in the design, development and deployment of AI systems, as well as substantial penalties—up to 5 percent of gross global revenue—for contravention of the provisions.
While rumours are circulating that the AIDA may be carved out of Bill C-27 to permit further consideration of the draft legislation, AI may still be regulated under existing laws, including the federal privacy legislation under which the OPC has initiated its current investigation into OpenAI.
The OPC is the latest government entity in the Western world to investigate OpenAI. Earlier this month, the Italian Data Protection Authority (Garante per la protezione dei dati personali) (the Garante) imposed a temporary limitation on the processing of Italian users' data by OpenAI. It also launched an investigation into OpenAI, giving OpenAI 20 days to develop measures to comply with the Garante's order, failing which OpenAI could be fined up to €20 million, or 4 percent of its total worldwide annual turnover.1
The Garante's investigation followed a data breach that exposed ChatGPT users' conversations and subscribers' payment information. In ordering the temporary suspension of ChatGPT, the Garante asserted that OpenAI provided no information to users and data subjects whose data it collects through the operation of ChatGPT, and that there may be no legal basis for the massive collection and processing of personal data used to "train" the algorithms on which the platform relies.2
The Garante also asserted that the information provided by ChatGPT does not always match factual circumstances, resulting in the processing of inaccurate personal data. Lastly, the Garante expressed concern over ChatGPT's lack of an age verification mechanism.3
The Garante has now reportedly said that it will lift the temporary ban on ChatGPT if OpenAI complies with privacy requirements, including data protection rights for users and non-users, a legal basis for data processing, a public awareness campaign, and the implementation of age verification to prevent access by minors.4
France and Ireland have reportedly reached out to the Garante to learn more about the basis of the temporary suspension, although neither jurisdiction has launched its own investigation or imposed any measures against ChatGPT.5 France's Commission Nationale de l'Informatique et des Libertés has, however, received complaints relating to data protection.6 Germany is also reportedly in touch with the Garante and has publicly stated that, in principle, ChatGPT could also be banned in Germany.7 Separately, Spain has asked the European Data Protection Board to assess privacy issues with ChatGPT so that harmonized actions within the European Union (EU) may be implemented as part of the application of the GDPR.8
OpenAI, a U.S.-based company, is also facing a complaint made to the U.S. Federal Trade Commission (FTC), urging the Commission to open an investigation into OpenAI and to suspend further deployment of GPT commercial products until OpenAI complies with the Commission's guidance for artificial intelligence products. The complaint was filed by the U.S.-based civil society group the Center for AI and Digital Policy; the FTC has not yet announced whether it will undertake an investigation into OpenAI.9 Amid concerns over the safety of artificial intelligence products, the National Telecommunications and Information Administration, which advises the White House, is reportedly preparing a report on efforts to ensure that AI systems work as intended and without causing harm.10
Takeaway
As organizations incorporate artificial intelligence into their operations (either on their own or through third-party providers), they will need to identify and manage potential risks involving, among other things, (1) how artificial intelligence may be used to make predictions, recommendations or decisions that impact individuals, and (2) how artificial intelligence is used in the collection and use of personal information. Managing compliance upfront is critical to avoiding costly regulatory proceedings and potential fines. Bennett Jones' Privacy and Data Protection group can help organizations navigate this rapidly changing landscape.
4 Italy will lift ChatGPT ban if OpenAI complies with data rights, age verification, public information requirements, by Frank Hersey (MLex)
Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors to explore how we can help you navigate your legal needs.
For permission to republish this or any other publication, contact Amrita Kochhar at kochhara@bennettjones.com.