ANALYSIS: Ottawa Says It’s Leading in AI Regulation, While Critics Raise Questions

The fast-moving pace of AI has prompted Ottawa to create a framework and rules for its use, but critics question whether regulation can be effective.
Minister of Innovation, Science and Industry François-Philippe Champagne rises during question period on Parliament Hill in Ottawa on Oct. 5, 2023. (The Canadian Press/Spencer Colby)
Noé Chartier
11/16/2023

Canada participated in the first-ever international summit on the safety of artificial intelligence (AI) in early November in the UK, where newfound opportunities were being balanced with warnings about the dire consequences of leaving the technology unchecked.

The Canadian government says it’s an important player in AI governance on the world stage. At the same time, its bill to regulate the field has been met with criticism for being either too vague or already outdated.
“Canada continues to play a leading role on the global governance and responsible use of AI,” said Innovation Minister François-Philippe Champagne after the summit in Bletchley Park, Buckinghamshire.
“Canada was the first country to adopt a national AI strategy, we recently launched a voluntary AI code of conduct for advanced AI systems, and we are moving ahead with one of the first AI laws in the world.”

Proposed Legislation

Mr. Champagne tabled Bill C-27 to regulate AI and revamp consumer privacy protection in June 2022. The bill would enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act.

However, the fast pace of developments in AI means tabled legislation can quickly become irrelevant, and Bill C-27 has moved slowly in the House of Commons.

Ottawa recognizes the fast-moving pace of the technology and has sought to create more open-ended and nimble legislation, but this approach has also raised questions about whether the resulting regulation can be effective.

“Particularly when it comes to artificial intelligence, we see, at the speed of change, that the only way forward is to have a framework as opposed to being prescriptive,” Mr. Champagne told the House Standing Committee on Industry and Technology on Sept. 26.

Some of the concerns raised have been heard and the government will be tabling amendments to C-27, the minister said.

The bill, which does not apply to the government, focuses on the commercial usage of “high-impact AI systems,” a term not currently defined in the legislation. Mr. Champagne said an amendment will introduce a definition. He has described high-impact AI systems as those that could be involved in deciding whether or not an individual can get a loan or an insurance policy, for example.

“We have to make sure that the algorithm does not generate biased results that would lead in the wrong direction,” the minister said.

Privacy Commissioner Philippe Dufresne has made a number of recommendations regarding C-27, one of which relates more specifically to artificial intelligence. He told the House industry and technology committee on Sept. 28 that Canadians should be “given the right to request an explanation when an AI system makes a prediction, recommendation, decision or profiling about them.”

Another amendment mentioned by Mr. Champagne aims to cover AI systems like ChatGPT. The tool, launched in November last year by OpenAI, lets users converse with the system, which answers questions and performs tasks. Other tech companies have launched competitors, such as Google’s Bard and X’s Grok.

“Reading the bill today, four months since OpenAI unleashed ChatGPT on the world, is akin to reading a bill designed to regulate scribes and calligraphers four months after the advent of the printing press,” Conservative MP Michelle Rempel Garner remarked during debate on Bill C-27 in April.
The Calgary MP noted how the technology is now available to a broad audience, which she said makes the bill’s approach “obsolete.”

‘Scrapped Completely’

Bill C-27 has also come under criticism from various stakeholders. The Canadian Labour Congress (CLC) has decried the lack of public debate and consultation surrounding the Artificial Intelligence and Data Act (AIDA) portion of the legislation.
CLC Vice-President Siobhán Vipond told the industry and technology committee on Oct. 31 that it’s a “major deficiency” that AIDA exempts the government and Crown corporations.

The lack of “robust” consultation has also been called out by business interests, with Canadian Chamber of Commerce Vice-President Catherine Fortin LeFaivre saying it’s “required to properly address AI regulation needs in Canada.”

“It’s critical that our AI regulations are precise enough to provide important guardrails for safety, while allowing for our businesses to harness AI’s full potential responsibly,” she testified at the Oct. 31 committee meeting.

Jim Balsillie, former BlackBerry chair and co-CEO and founder of the non-profit Centre for Digital Rights, believes AIDA needs to be “scrapped completely.”

While testifying at the Oct. 31 committee meeting, he went further than the privacy commissioner, saying that people should be able to contest decisions made by AI systems, such as those involving insurance, school admissions, and credit scoring.

“There has been much gaslighting from industry lobbyists and self-interested parties whose profits depend on mass surveillance, arguing that meaningful AI privacy regulations limit innovation,” he said. “Privacy and AI regulations are not impediments to innovation.”

Ms. Vipond also questioned the relevance of the government’s voluntary AI code of conduct for businesses.

“I think that any time we go into voluntary we get into trouble, just as a fundamental approach to the work, so we do have major concerns there,” she said.

“I think industry self-regulation in such an important area is precisely not what we need at the moment,” added her CLC colleague Chris Roberts. “We need clear statutory and regulatory rules around industry and clear expectations from industry.”

Outside Canada

Other Western democracies have come forward with various plans to regulate AI, each claiming a leading role in the management of the frontier science.
The United States has moved more quickly, with the Biden administration not waiting on legislation and instead issuing an Executive Order on the “Safe, Secure, and Trustworthy Development and Use” of AI on Oct. 30.

U.S. President Joe Biden said he places the “highest urgency” on the safe and responsible development and use of AI. “The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society,” he wrote.

The European Union’s legislation on AI, which it calls the “world’s first comprehensive AI law,” is currently being studied and could be adopted by year’s end.

The common understanding is that AI is full of both promise and pitfalls.

The Bletchley Declaration made at the AI Safety Summit hosted by the UK in early November, signed by participating countries including Canada and the United States, speaks of “enormous global opportunities” brought on by AI, while also warning of the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

Mr. Champagne said the world previously put a stop to technology it judged harmful and that AI should be assessed the same way.

“Cloning is the best example of humanity’s deciding that we won’t go there,” he told MPs on Sept. 26. “AI is kind of the same thing. It’s not just something that should float and go wherever it may go.”

He said boundaries should be set in which we can “have creative and responsible innovation to help people in so many ways, but that there is going to be a line that should not be crossed because that would be detrimental to people.”

In March, hundreds of stakeholders in the field of AI signed an open letter calling for a moratorium on the development of systems more powerful than OpenAI’s Generative Pre-trained Transformer 4 (GPT-4).

The letter says decisions about letting AI systems replace humans in a number of tasks should not be “delegated to unelected tech leaders.”

“Should we risk loss of control of our civilization?” asks the letter. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”