In the summer of 2020, military reservist and small business owner Corey Hurren posted COVID-19 conspiracy content before arming himself and ramming the gates of Rideau Hall with the intention of arresting Prime Minister Justin Trudeau. 

Before this attempt, and after the start of the pandemic, Hurren found refuge in conspiracy theories like “Event 201,” which he referenced in a note left in his car. The theory claims that elites are behind the pandemic, and it is built around a real exercise (called Event 201) held a few months before the COVID-19 outbreak, which simulated a policy response to a hypothetical pandemic scenario.

While Hurren’s actions did not end up physically hurting anyone, they highlight the dangerous implications of fake news in Canada. The conspiracy theories Hurren fell into were debunked by fact-checking organizations months before his attack on Rideau Hall, but that information either never entered his digital world, or he didn’t believe it if it did. His actions demonstrate the real violence and consequences we are increasingly seeing in Canada and around the world as misinformation spreads online.

According to Statistics Canada, 96% of Canadians who used the internet to search for COVID-19 information came across content they suspected was false or misleading, and two in five said they believed such misinformation before realizing it was false. As the pandemic fuels discussion of fake news, and as studies document evidence of foreign disinformation campaigns in previous Canadian elections, bringing that discussion into the mainstream is becoming more essential.

Although the federal government took steps to contain misinformation during elections by modernizing laws against false statements about candidates, its efforts to combat disinformation during the pandemic are far from impressive. In response to the rise of online misinformation, in some instances fueled by governments themselves, dedicated fact-checking organizations have emerged. But fact-checking is not a simple solution, nor a silver bullet for the spread of misinformation.

Fact-checking, which is the process journalists, editors or dedicated fact-checkers use to verify news before publication, is an integral part of journalism. The new age of fact-checking, however, includes organizations that investigate published news to judge its truthfulness. The first dedicated fact-checking organization was founded in the U.S. in 1994, and there are now about 300 such organizations worldwide. The fact-checking industry has been held up as one solution to fake news dissemination, especially after political interference in the 2016 U.S. presidential election. Political fact-checking in particular is seen as an effective way to refute false statements made by politicians.

The fact-checking landscape in Canada is relatively small, particularly compared to the U.S. Only a few active fact-checking organizations and initiatives verify news in Canada. These include the French-language Agence Science-Presse and Décrypteurs, along with DisinfoWatch and Facebook partner AFP Fact Check Canada.

AFP Fact Check Canada is part of the global news agency Agence France-Presse, while Décrypteurs belongs to Radio-Canada and focuses on the spread of false information on social media. Agence Science-Presse is an independent media non-profit with a fact-checking section called “The Rumor Detector.” DisinfoWatch was launched in 2020 as part of the Macdonald-Laurier Institute to monitor and debunk COVID-19 and foreign disinformation. Both Décrypteurs and AFP are part of Poynter’s International Fact-Checking Network, which aims to ensure accountability in journalism by providing a code of principles and verifying the nonpartisan organizations that follow it. But as more fact-checking organizations join media companies, the independence of fact-checking itself is becoming questionable. Facebook’s work with nonpartisan fact-checkers to detect fake news in Canada and other countries is an important case study in how fact-checking can be weaponized.

Fact-checking methodologies vary between organizations, but the process generally involves three steps. First, a claim is chosen for verification. It is then investigated by examining publicly available evidence and contacting relevant sources. Finally, fact-checkers judge the truthfulness of the claim and publish a verdict using a rating system.

Despite arguments about its overall effectiveness, research has found that the process of checking political claims and documenting facts benefits the political landscape, and that the spread of fact checks can increase awareness of misinformation. However, it is essential to remember that fact-checking is not a silver bullet. Every day there are thousands of claims to check, and fact-checking is a lengthy process, which forces fact-checkers to select only a few claims to investigate. By the time a claim is verified, it may already have reached thousands of people.

On top of all this, organizational and personal biases may affect the initial step of choosing claims to verify. Furthermore, choosing evidence to verify a claim is an issue of its own. Fact-checkers rely on available official public data and information from non-partisan parties and governments to help them reach a judgment. But how can a verdict be reached when the evidence comes from undemocratic governments? Or, for a Canadian example, how can we properly fact-check Indigenous history when the government destroyed 200,000 Indian Affairs files between 1936 and 1944?

After the massive fake news campaigns during the 2016 U.S. presidential election, Facebook initiated its third-party fact-checking program, inviting organizations worldwide to verify posts on Facebook and Instagram. The process was supposed to be simple: Facebook and fact-checking organizations identify suspicious viral posts, the organizations examine the accuracy of the claims and provide Facebook with a judgment, and the platform then applies warning labels. If a post is judged misleading, users can face penalties such as reduced reach and the loss of advertising and monetization privileges.

Nevertheless, problems emerged regarding the transparency and effects of a program funded by a tiny portion of Facebook’s revenue. Organizations and initiatives like Snopes and ABC News stopped working with Facebook, citing numerous concerns. As a conglomerate with political and business goals, Facebook still controls the spread of disinformation even when working with fact-checking organizations, and it interferes with fact-checking decisions by pressuring fact-checkers to downgrade labels.

Since Facebook does not fact-check opinions, it can use this loophole to ask fact-checking organizations to adjust their verdicts by deeming certain claims “opinion pieces.” This means that, in addition to relying on a tech company to decide what is true and what is fake, we have to trust how it manages its fact-checking process, which could be extremely dangerous on significant issues like pandemics or the climate crisis. In one example, a post by Prager University, an active Facebook advertiser, contained misleading climate content and was labelled false by a fact-checking organization. After Facebook discussed changing the label, the verdict was downgraded, enabling Prager University to continue publishing misleading ads.

Beyond Facebook’s program, it is also worrying how unknown groups and users impersonate fact-checkers, publishing fake verifications to undermine legitimate ones. These groups also attack fact-checkers with misinformation campaigns to damage their credibility, and in some cases threaten them directly. After suspicions arose regarding Saudi Arabia’s role in the murder of journalist Jamal Khashoggi, a fake Saudi fact-checking organization labelled news blaming the Saudi government as fake. Misinformation campaigns have also promoted fake fact-checking news claiming that the fact-checking project FactCheck.org exposed another fact-checker, Snopes, as a “liberal propaganda site.”

Just as Donald Trump exploited the term “fake news” to avoid criticism until the term lost its meaning, politicians can do the same with “fact-checking.” Politicians now expect to be fact-checked, and they are ready to use the term to sweep away criticism and call their opponents liars. Republican and Democratic leaders in the U.S. accuse fact-checkers of being backed by certain agendas, while other political candidates counter claims against their campaigns with their own fact-checking groups.

Despite these depressing developments, fact-checking remains an essential part of discussions around fake news. A strong fact-checking industry can stop the normalization of lying and advocate for policy changes. But the weaponization of fact-checking can cause irreversible harm. Deciding what is true is already becoming an impasse, and weaponizing truth verification blurs the already thin line separating truth from falsehood even further.

What happened after 2016 shows that significant solutions, and significant consequences, may come from big tech companies, which can halt or spread disinformation. Although Canada’s fact-checking industry is still in its infancy, Canadian fact-checkers could easily find themselves facing weaponization incidents similar to those seen elsewhere in the world. As a first step, Canada should refrain from outsourcing fact-checking to tech companies with the power to alter verdicts.

Canada and the rest of the world need to establish a public discussion to explore transparency issues around platforms like Facebook, Instagram, and Twitter and how they handle misinformation. In the meantime, platforms can amend their policies on topics where misinformation is especially persuasive (for example, Facebook’s ban on anti-vaccination ads).

Advertisements containing political, environmental and health misinformation can affect millions of people. A first and major step the government can take is to ensure that ads on social media platforms comply with federal and provincial laws against misleading advertising. Fact-checkers can support this process by verifying ads before publication. Though it will never be an easy job, it will be a lot harder without independent fact-checking organizations.