Artificial Intelligence Vastly Entrenched in Daily Life as Ottawa Seeks to Regulate

AI is quickly entering medicine, academia, banks, literature, social media, and more.
People walk past an AI sign at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. (The Canadian Press/Ryan Remiorz)
Tara MacIsaac
1/5/2024

Many Canadians encounter artificial intelligence (AI) in their daily lives without even realizing it, and the use of AI is quickly expanding. Ottawa is trying to curb the risks through regulation, though limits on the rapidly developing technology are likely to remain voluntary for years to come.

About a quarter of Canadian organizations surveyed by Deloitte and Modus Research in March 2023 said they had launched one or more AI implementations, and 42 percent said they had an AI pilot in place.
Meanwhile, 57 percent of Generation Z (aged 18 to 26) are experimenting with AI to generate content for work, school, or personal use, although only 6 percent of Boomers are doing so, according to a Canadian Journalism Foundation poll published in October 2023. That poll also found that 58 percent of all respondents had encountered false or misleading AI-generated information online or on social media in the past six months.
AI is ubiquitous in daily life—Google Maps uses it to change routes based on traffic, online advertisers use it to target user preferences, and banks use it to suggest services based on a customer’s spending habits.
Canadians may find it monitoring their behaviour in various ways. The Ottawa Hospital announced in the fall that it had installed AI machines on hospital room ceilings to monitor hand-washing practices.
The machines, which look like overhead projectors, scrutinize not only whether doctors, nurses, or other visitors to a patient’s room have washed their hands, but how well they washed. If someone who hasn’t washed well enough approaches the patient, the machine beeps and records the infraction, though it doesn’t identify individuals.
AI has been used by law enforcement as well. In 2021, Canada’s privacy commissioner flagged the RCMP’s use of Clearview AI facial recognition technology because the company had scraped more than 3 billion images from the internet without permission. Privacy issues are among the concerns around AI. It can collect massive amounts of data and filter through it relatively quickly to sift out whatever information its user wants, a power that could be dangerous in the wrong hands.
Legislation to regulate AI use, called the Artificial Intelligence and Data Act, is slowly working its way through the House of Commons, and isn’t expected to take effect until 2025 if passed. The federal government has published voluntary guidelines in the meantime.

Other concerns with AI include fairness and accuracy. For example, some have questioned how AI is used to filter job applicants, and whether the process is fair to rejected applicants.

Canadian courts, such as the Supreme Court of Yukon, have expressed concern about the use of AI for legal research and court submissions, and about the accuracy of AI-generated information. AI can “hallucinate,” presenting false information as fact, a flaw developers are trying to address.

The AI industry is a homegrown one, with some of its key pioneers based in Canada, and some say it’s a national opportunity. Canada has seen more than $8.6 billion in total AI investments, according to the Deloitte report, and only the United States and Britain had more venture capital investment in AI during the period from April 1, 2022, to March 31, 2023.

But the vast majority of business organizations surveyed for the report, 86 percent, cited concerns about AI’s ethical risks. Fifty-one percent worried about AI bias and poor results. Some also worried about data privacy, cybersecurity, and job losses.

AI is quickly entering medicine, academia, banks, literature, social media, and more. We’ll take a look at some examples.

In Medical Practice

The experience of a Saskatchewan woman named Ann Johnson is an example of the benefits AI may offer. She suffered a rare brainstem stroke in 2005, at the age of 30, and was left largely unable to move her body. She remains unable to talk on her own, but AI talks for her.
American doctors implanted a device in her head that picks up signals from her brain; AI analyzes those signals and translates them into speech, spoken for her by an avatar. An article about Ms. Johnson’s treatment was published in the journal Nature in August 2023.

Another medical use for AI is in prosthetics. Toronto entrepreneurs have created the SmartARM, a prosthetic that uses AI to analyze visual data from a camera and respond appropriately. For example, as the prosthetic hand nears a cup handle, it curls its fingers around the handle and takes hold.

AI chatbots, virtual assistants, and customer service avatars are increasingly replacing human associates, and nurses may similarly be replaced for some tasks. In November, Global News previewed an AI nurse created through a partnership between Deloitte and The Ottawa Hospital.

The nurse helps follow up on patient recovery after discharge. It’s one of many ways Deloitte is looking to bring AI into hospitals.

ChatGPT, created by OpenAI, is the most popular AI tool across many fields and for personal use. It gained over 100 million users within two months of launching in November 2022; by contrast, it took Instagram nearly two years to gain that many users after it was launched in 2010. The ChatGPT website was drawing more than a billion visits a month by February 2023.

In Schools, Literature

AI has helped many with daily tasks, such as translating, getting personalized movie recommendations, and coming up with ideas for anything from a birthday party theme to a marketing pitch. But it has become notorious for helping students cheat.

ChatGPT can write essays or virtually any other kind of text on demand.

“This undermines the very purpose of higher education, which is to challenge and educate students, and could ultimately lead to a devaluation of degrees,” says a paper published in March 2023 by three UK researchers in the journal Innovations in Education and Teaching International.
Authors have also been displeased with ChatGPT’s writing capabilities. American author and long-time publisher Jane Friedman detailed on her website in August 2023 the trouble she had after someone used AI to write books mimicking her writing style and sold them under her name. It took considerable effort to have the titles removed from Amazon and other platforms.
Famous Canadian authors including Margaret Atwood and Alice Munro were among those whose works were fed to AI, without their permission, in a large dataset used to teach AI how to write. The Atlantic broke the news about the dataset, called Books3, in September 2023, and CBC took a closer look at the authors included in it.
Authors in the United States have filed a class-action lawsuit against OpenAI over the dataset, arguing they never gave permission for their books to be used.

The Future

A new workforce is in the making to create all this AI. Dozens of Canadian academic institutions are now offering AI-related degrees and certificates.
The creation of AI jobs helps mitigate the loss of some jobs AI will make obsolete, OpenAI CEO Sam Altman said during a U.S. Senate hearing on May 16, 2023. The hearing looked at potential effects ChatGPT could have on the economy. Mr. Altman said ChatGPT is generally “good at doing tasks, not jobs,” so it will help people do their jobs efficiently but not replace them. 
Goldman Sachs Group said in a March 2023 report, however, that generative AI could eliminate up to 300 million full-time jobs globally, particularly white-collar jobs.

But the deep-seated fear of AI comes not necessarily from job loss, copyright infringement, or even loss of privacy. Conversations about AI often come back to the science-fiction-inspired ideas of machines that outsmart, and overpower, humans.

Deloitte Canada published a panel discussion on its report on the country’s AI ecosystem in October 2023, and the idea of “rogue AI agents” was brought up by Marie-Paule Jeansonne of Mila, a Montreal-based AI research and development institute.

“Even if there’s the slightest possibility that it might happen, we think it’s important to start researching countermeasures right away,” she said.

The panellists noted that AI pioneers such as Geoffrey Hinton and Yoshua Bengio recently suggested governments and companies spend at least a third of their AI development funding on ensuring safety and ethical use.

Right now, the reality is not a third of the investment but more like a fiftieth, said Audrey Ancion of Deloitte’s AI Institute in Canada.