
Information manipulation poses the single biggest threat to Canadian democracy, concluded commissioner Marie-Josée Hogue in her final report on foreign interference in federal elections, released earlier this year.
It probably came as no surprise to the commissioner that emerging technologies amplified falsehoods during the recent general election.
Generative AI has emerged as a new player in the disinformation game, enabling malign actors to create huge quantities of misleading content.
Cyabra, a company that monitors disinformation online, published a two-part analysis on the use of fake profiles by co-ordinated networks on X (formerly Twitter), Facebook and Instagram to target the Liberal campaign and party leader Mark Carney.
Cyabra sampled 2,451 profiles that mentioned Carney between Feb. 19 and March 21, which together generated 3,418 posts and comments.
Negative sentiment dominated around one-third of those posts. Of the negative posts, 76 per cent referenced a connection with disgraced socialite Ghislaine Maxwell or implied that Carney had visited an island owned by her former partner, the late American financier and sex offender Jeffrey Epstein.
The claims were often accompanied by fabricated material suggesting Carney is a “child-molesting pervert” and were shared as if they were authentic.
A crossover from the online world to the campaign took place at a Liberal rally in Kitchener, Ont., when a heckler was heard shouting: “How many kids did you molest with Jeffrey Epstein?”
Cyabra’s detection systems identified nearly one-quarter of the X accounts as fake (the software looks for signs such as synchronized posting, copy-paste campaigns, fake engagement loops and other bot-like behaviour, including accounts with no personal bio that use default avatars).
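Cyabra’s actual detection method is proprietary, but the kinds of heuristics listed above can be illustrated with a simple scoring sketch. The field names, weights and thresholds below are invented for the example; they are not the company’s criteria.

```python
# Illustrative heuristic bot-scorer based on the signals described in the
# text: default avatars, missing bios, copy-paste posting and synchronized
# timing. All weights and field names are assumptions for demonstration.

def bot_score(profile: dict) -> float:
    """Return a score from 0.0 to 1.0; higher means more bot-like."""
    score = 0.0
    if profile.get("default_avatar"):          # still using the stock avatar
        score += 0.3
    if not profile.get("bio"):                 # no personal bio filled in
        score += 0.2
    posts = profile.get("posts", [])
    # Copy-paste campaign: most posts are duplicates of each other.
    if posts and len(set(posts)) < len(posts) / 2:
        score += 0.3
    stamps = profile.get("post_minutes", [])
    # Synchronized posting: many posts land in the same minute bucket.
    if stamps and len(set(stamps)) < len(stamps) / 2:
        score += 0.2
    return min(score, 1.0)

suspect = {
    "default_avatar": True,
    "bio": "",
    "posts": ["Carney must go", "Carney must go", "Carney must go", "Carney must go"],
    "post_minutes": [5, 5, 5, 5],
}
print(bot_score(suspect))  # → 1.0
```

Real systems combine many more signals (network graphs, engagement loops, account age) and weight them statistically, but the principle is the same: no single signal proves an account is fake; the accumulation does.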
The other Cyabra analysis looked at activity on X between April 14 and 21 and found a surge of inauthentic activity aimed at creating negative perceptions of the Liberal party and Carney.
The company analyzed 2,011 profiles that generated 2,947 posts and comments.
It found that 28 per cent of the profiles were fake and pushed negative sentiment, such as labelling Carney as an elitist who was trying to manipulate the political process. The profiles portrayed the political system as corrupt and urged people not to vote Liberal.
“These are not abstract data points,” said Jill Burkes, Cyabra’s communications lead. “They show in real time how democratic discourse is being hijacked by actors who know exactly what buttons to push and when.”
The Liberals were not the only victims. Scammers used sensationalist headlines and facsimiles of legitimate news coverage featuring Conservative Leader Pierre Poilievre to lure people into a cryptocurrency Ponzi scheme.
But the Cyabra analysis is a particularly good example of how malicious actors can use social media to spread false narratives across the country before the truth can get its boots on.
AI is helping to undermine trust in a system that is already regarded with suspicion — and it is getting better at doing it.
“AI keeps developing in leaps and bounds,” said Marcus Kolga, founder and director of DisinfoWatch, a Canadian disinformation monitor. “There will come a day when these fake images and videos will be almost undetectable. We need to be prepared for that moment, and I’m not sure we are.”
Thanks to the foreign interference controversies, Canadians are familiar with the ways hostile powers have tried (and are trying) to shape democracy in this country.
The 2025 election proved no different from its immediate predecessor. The Russian state propaganda channel RT (formerly Russia Today) actively tried to exploit divisions between Canada and the U.S.
American conspiracy theorists Tucker Carlson and Alex Jones embraced People’s Party Leader Maxime Bernier.
The government’s Security and Intelligence Threats to Elections task force intervened mid-election to warn about a co-ordinated online influence campaign by the Chinese government on the WeChat social media platform, including a disinformation operation aimed at Conservative candidate Joe Tay, a Hong Kong democracy advocate, who lost in the riding of Don Valley North. Tay was subject to an arrest warrant in Hong Kong, and a post on Facebook asked: “Is Canada about to become a fugitive’s paradise?”
Nagging worries remain about China’s ability to manipulate the information environment through its effective control of TikTok. Critics argue that Beijing can use the platform to advance its political narratives and can configure its algorithm to suit its needs. I’ve seen no evidence to suggest the platform intervened on this occasion, but the potential is clearly there.
The most worrying aspect of these shifts is that the Wild West of social media is precisely where younger people source their news. A slide by Relay Strategies during a recent presentation suggested that women between 18 and 34 received only about one-quarter of their campaign information from television, compared to nearly three-quarters for women over 65.
That reflects the generational divide found by pollster Pollara last year, which reported that 57 per cent of Generation Z got their news from social media, compared to just 18 per cent of boomers.
I would be the first to admit that legacy media has to do a better job convincing Canadians that it is worthy of their confidence. But the Pollara poll suggests there are still high levels of trust in outlets like CBC, CTV, Global, Radio-Canada and TVA in broadcast, and the National Post, the Globe and Mail, the Toronto Star and La Presse in print.
For all the grousing about the mainstream media, you would have to possess the naiveté of Forrest Gump to rank X, Instagram or TikTok as more trustworthy than not.
The technology exists to use AI to fight malicious AI. For example, Cyabra’s software can identify mis- and disinformation. But, crucially, it doesn’t have the power to remove it. That responsibility lies with the platforms — and they are not minded to moderate their content. Meta, which owns Facebook and Instagram, has ended its fact-checking program, and X has pulled out of the European Union’s voluntary code on disinformation.
The EU still has the Digital Services Act, which obliges platforms to address the spread of disinformation. Canada, however, has no enforcement mechanism beyond the Canada Elections Act’s prohibition on spreading false information about candidates and the Criminal Code’s hate-speech provisions.
DisinfoWatch’s Kolga points to Finland’s efforts to counter Russian disinformation as a non-legislative response that Canada should consider.
The Finnish government launched anti-fake-news initiatives in 2014 and has since incorporated media literacy and critical thinking into school curriculums. Kindergarten teachers are seen as the first line of defence, and students are encouraged to become digital detectives in the search for disinformation. The Nordic country was recently ranked first among 35 European countries for resilience to disinformation.
“Finland has taken a long-term approach, inoculating its children to disinformation from an early age. That is something we can, and should, look at,” said Kolga.
National Post