
AI is already permeating politics — and it’s time to put rules in place, experts say

OTTAWA — A woman in a grey-brown shirt sits next to a man, looking like she could be listening intently to someone out of frame. She has her arms crossed on a table — but also a third arm, clothed in plaid, propping up her chin.

Voters did a double take as they looked through a Toronto mayoral candidate’s platform last summer and saw an image of the mysterious three-armed woman. 

It was an obvious tell that Anthony Furey’s team had used artificial intelligence — and amid much public snickering, they confirmed it. 

The snafu was a high-profile example of how AI is coming into play in Canadian politics. 

But it can also be leveraged in much more subtle ways. Without any rules in place, we won’t know the full extent to which it’s being put to use, says the author of a new report. 

“We are still in the early days, and we’re in this weird period where there aren’t rules about disclosure about AI, and there also aren’t norms yet about disclosure around uses of generative AI,” says University of Ottawa professor Elizabeth Dubois. 

“We don’t necessarily know everything that’s happening.”

In a report released Wednesday, Dubois outlines ways in which AI is being employed in both Canada and the U.S. — for polling, predicting election results, helping prepare lobbying strategies and detecting abusive social-media posts during campaigns. 
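
The abuse-detection use case, at least, is easy to prototype with off-the-shelf tools. The sketch below is a minimal illustration in Python, assuming the open-source Hugging Face transformers library and one publicly available toxicity classifier; the model name, threshold and sample posts are illustrative choices, not anything drawn from the report.

```python
# Minimal sketch of automated abusive-post screening with an off-the-shelf
# classifier. The model choice and the 0.9 threshold are illustrative
# assumptions; a real campaign tool would tune both and keep a human in
# the loop for final decisions.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

posts = [
    "Great town hall tonight, thanks to everyone who came out!",
    "Drop out of the race now or you'll regret it.",
]

for post in posts:
    pred = classifier(post)[0]   # e.g. {'label': 'toxic', 'score': 0.97}
    if pred["label"] == "toxic" and pred["score"] > 0.9:
        print(f"flag for human review ({pred['score']:.2f}): {post!r}")
    else:
        print(f"looks fine ({pred['label']}): {post!r}")
```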

Generative AI, or technology that can create text, images and videos, hit the mainstream with the launch of OpenAI’s ChatGPT in late 2022. 
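
For a concrete sense of what "generative" means for text, the sketch below uses the Hugging Face transformers library to have GPT-2, a small, freely downloadable model, continue a prompt. GPT-2 is only a convenient stand-in for the far larger models behind tools like ChatGPT, and the prompt and sampling settings are arbitrary.

```python
# A minimal text-generation sketch using the Hugging Face transformers
# library. The model writes a continuation of whatever prompt it is given.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The candidate's platform promises",
    max_new_tokens=40,   # length of the continuation
    do_sample=True,      # sample rather than always pick the likeliest token
    temperature=0.8,     # moderate randomness
)
print(result[0]["generated_text"])
```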

Many Canadians are already using the technology in their everyday lives, and it is also being used to create political content, such as campaign materials. In the United States last year, the Republican Party released its first AI-generated attack ad. 

Sometimes it’s obvious that AI has been used, like with the three-armed woman. 

When the Alberta Party shared a video of a man's endorsement online in January 2023, people on social media quickly pointed out he wasn't a real person, the report says. The video was deleted.

But when the content looks real, Dubois says, its AI origins can be hard to trace.

The lack of established rules and norms on AI use and disclosure is a “real problem,” she says.

“If we don’t know what’s happening, then we can’t make sure it’s happening in a way that supports fair elections and strong democracies, right?”

Nestor Maslej, a research manager at Stanford University’s Institute for Human-Centered Artificial Intelligence, agrees that’s a “completely valid concern.”

One way AI could do real harm in elections is through deepfake videos.

Deepfakes, or fake videos that make it look like a celebrity or public figure is saying something they’re not, have been around for years. 

Maslej cites high-profile examples of fake videos of former U.S. president Barack Obama saying disparaging things, and a false video of Ukrainian President Volodymyr Zelenskyy surrendering to Russia. 

Those examples “occurred in the past when the technology wasn’t as good and wasn’t as capable, but the technology is only going to continue getting better,” he says.

Maslej says the technology is progressing quickly, and newer versions of generative AI are making it more difficult to tell whether images or videos are fake. 

There’s also evidence that humans struggle to identify synthetically generated audio, he notes. 

“Deepfake voices (are) something that has tended to trip up a lot of humans before.”

That's not to suggest AI can't be used responsibly, Maslej points out, but it can also be put to malicious use in election campaigns.

Research shows “it’s relatively easy and not that expensive to set up AI disinformation pipelines,” he says.

Not all voters are equally vulnerable. 

Individuals who aren’t as familiar with these types of technologies are likely to be “much more susceptible to being confused and taking something that is in fact false as being real,” Maslej notes. 

One way to put some guardrails in place is with watermarking technology, he says. It automatically marks AI-generated content as such so people don't mistake it for the real thing.
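
The mechanics vary by medium. For text, one widely studied approach is a statistical watermark: the generator subtly biases which words it picks, and a detector holding the same secret key can measure that bias later. Below is a toy Python sketch of that idea, loosely modeled on published "green list" schemes for language-model watermarking; the vocabulary, key and generator here are stand-ins, not any real model or library API.

```python
import hashlib
import random

SECRET_KEY = "demo-key"                    # shared by generator and detector
VOCAB = [f"tok{i}" for i in range(1000)]   # stand-in vocabulary

def green_list(prev_token: str) -> set:
    """Pseudorandomly pick half the vocabulary, seeded by key + context."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def green_rate(tokens: list) -> float:
    """Fraction of tokens that fall in their context's green list."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

rng = random.Random(42)

# A watermarking generator favours green tokens (here, exaggeratedly,
# it always picks one).
marked = ["tok0"]
for _ in range(200):
    marked.append(rng.choice(sorted(green_list(marked[-1]))))

# Ordinary text picks tokens with no regard for the green list.
plain = ["tok0"] + [rng.choice(VOCAB) for _ in range(200)]

print(f"watermarked green rate: {green_rate(marked):.2f}")  # close to 1.00
print(f"unmarked green rate:    {green_rate(plain):.2f}")   # close to 0.50
```

A detector that sees a green rate far above 50 per cent can flag the text as machine-generated, which is the property watermarking proposals rely on.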

No matter the solution that policymakers decide upon, “the time to act is now,” Maslej says. 

“It does seem to me that the mood in Canadian politics has sometimes become a bit more tricky and adversarial, but you would hope there is still an understanding among all participants that something like AI disinformation can create a worse atmosphere for everybody.”

It's time for government agencies to identify the risks and harms that AI can pose to the electoral process, he says.

“If we wait for there to be, I don’t know, some kind of deepfake of (Prime Minister Justin) Trudeau that causes the Liberals to lose the election, then I think we’re going to open a very nasty can of political worms.”

Some malicious uses of AI are already covered by existing law in Canada, Dubois says. 

The use of deepfake videos to impersonate a candidate, for instance, is already illegal under elections law. 

“On the other hand, there are potential uses that are novel or that will not clearly fit within the bounds of existing rules,” she says. 

“And so there, it really has to be a kind of case-by-case basis, initially, until we figure out the bounds of how these tools might get used.”

This report by The Canadian Press was first published Feb. 1, 2024.

Anja Karadeglija, The Canadian Press