
Spy watchdog reviewing Canadian security agencies’ use of artificial intelligence

OTTAWA — Canada’s spy watchdog is examining the use and governance of artificial intelligence in national security activities.

The National Security and Intelligence Review Agency has informed key federal ministers and organizations of the study, which will look at how the security community defines, uses and oversees aspects of AI technologies.

Canadian security agencies have used AI for tasks ranging from translation of documents to detection of malware threats.

In a letter to ministers and heads of organizations with a national security role, review agency chair Marie Deschamps said the study’s findings will provide insights into the use of new and emerging tools, help guide future reviews and highlight “potential gaps or risks” that might require attention.

The review agency has a statutory right to see all information held by departments and agencies under examination — including classified and privileged material, with the exception of cabinet confidences.

The letter, posted on the review agency’s website, says requests for information may involve documents, written explanations, briefings, interviews, surveys and system access.

“This review may also include independent inspections of some technical systems,” Deschamps added.

The letter was sent to multiple cabinet members, including Prime Minister Mark Carney, Artificial Intelligence and Digital Innovation Minister Evan Solomon, Public Safety Minister Gary Anandasangaree, Defence Minister David McGuinty, Foreign Affairs Minister Anita Anand and Industry Minister Mélanie Joly.

It also went to the heads of agencies with major security roles, including the Canadian Security Intelligence Service, the RCMP and the Communications Security Establishment, Canada’s cyberspy service.

The letter was also sent to the heads of agencies that may not come immediately to mind in the security context, such as the Canadian Food Inspection Agency, the Canadian Nuclear Safety Commission and the Public Health Agency of Canada.

The RCMP said in response to a question about the review that it embraces independent examination of national security and intelligence activities.

“The RCMP believes that establishing transparent and accountable external review processes is critical to maintaining public confidence and trust,” the RCMP said in a media statement.

In 2024, a report from a federal advisory body called on Canada’s security agencies to publish detailed descriptions of their current and intended uses of artificial intelligence systems and software applications.

The National Security Transparency Advisory Group predicted increasing reliance on the technology to analyze large volumes of text and images, recognize patterns and interpret trends and behaviour.

At the time, CSIS and the CSE acknowledged the importance of transparency about AI but added there were limitations on what could be disclosed publicly, given their security mandates.

The federal government’s principles for the use of AI include promoting openness about how, why and when it is employed, and assessing and managing any risks AI poses to legal rights and democratic norms at an early stage.

The principles also advocate training for public officials developing or using AI so that they understand legal, ethical and operational issues, including privacy and security.

In its most recent annual report, CSIS said it was implementing AI pilot programs across the agency in a manner consistent with the federal government’s guiding principles.

The RCMP says on its website there are several factors involved in ensuring that AI is used legally, ethically and responsibly.

These elements include careful system design to avoid bias and discrimination, respect for privacy during information analysis, transparency about how an AI system makes decisions and accountability measures to ensure proper functioning, the Mounties say.

In its artificial intelligence strategy, the CSE says it is committed to developing new capabilities to solve critical problems through innovative use of AI and machine learning technologies, championing responsible and secure AI and countering threats posed by AI-enabled adversaries.

The CSE’s strategy says that, if deployed safely, securely and effectively, these capabilities will improve its ability to analyze larger amounts of data faster and with more precision, improving the quality and speed of decision-making.

“We will always be thoughtful and rule-bound in our adoption of AI, keeping responsibility and accountability at the core of how we will achieve our goals,” CSE chief Caroline Xavier says in a message included in the strategy.

“Recognizing that these technologies are fallible, we will experiment and scale incrementally, with a focus on rigorous testing and evaluation, keeping our highly trained and expert humans in the loop.”

This report by The Canadian Press was first published Jan. 1, 2026.

Jim Bronskill, The Canadian Press