Below the Fold: Trust abhors a vacuum

If governments are serious about preventing the growth of disinformation, it is up to them to stop providing such fertile soil.
September 3, 2019

Using the internet to become informed about politics today may resemble less a cruise down the information highway than a careen down Mario Kart’s Rainbow Road. Staying on course in the popular Nintendo racing game’s frictionless technicolour track takes sustained vigilance, lest carelessness, a fake Item Box or a Bob-omb pitched your way send you hurtling off into the void.

As the finish line of voting day looms, how do we ensure we get an accurate picture of the 2019 federal election when our online feeds throw us so many fake items and distracting turns? This is a question that academics, policy experts and governments have struggled to address in recent years.

Much of the worry—expressed in an April report of the Communications Security Establishment (CSE) on foreign interference in elections; the work of the new SITE Task Force involving CSE, the RCMP, CSIS and Global Affairs Canada; and in government exhortations to social media companies—has focused on the possibility of co-ordinated disinformation campaigns disseminated through social media platforms. On Twitter, for example, troll and bot accounts have exacerbated anti-immigration sentiment, promoted conspiracy theories and artificially amplified hashtags such as #TrudeauMustGo. A false story this February about federal NDP leader Jagmeet Singh was shared thousands of times on Facebook.

While it is important to think about how online platforms enable the spread of disinformation, governments and politicians too often seem to forget their own role in creating and perpetuating demand for “alternative facts.” After all, their own public statements are often quite divorced from reality.

Homegrown examples of political disinformation include Prime Minister Justin Trudeau’s disingenuous handling of the SNC-Lavalin affair; Conservative leader Andrew Scheer tolerating public promotion of the discredited “Pizzagate” conspiracy at a town hall (until receiving backlash); Alberta Premier Jason Kenney downplaying the climate change crisis; and Ontario Premier Doug Ford banning provincial ministries from mentioning climate change on social media. And who doesn’t recall the Conservative party’s voter suppression robocalls in the 2011 federal election? Automated disinformation before it was cool.

Canada’s fourth estate also regularly fails to uphold its informational responsibility and integrity. Think of how the mainstream media derailed the national discussion on the final report of the inquiry into missing and murdered Indigenous women and girls. Or the Toronto Sun publishing an untrue and incendiary story stoking anti-immigrant hatred. Or Postmedia lobbying to work with Premier Kenney’s explicitly anti-environmentalist “war room.” Or Quebecor’s constant airing and publishing of Islamophobic content. Or long-standing double-standards in how the media routinely treats violence from white men (domestic terrorism, sexual assault) versus their victims and racialized offenders.

Placing increasingly stringent regulations on social media platforms will not suffice to solve the long-term flaw in our democracy that the rise of disinformation exposes. What is being regulated as a technology issue is at its core political: the willingness of political leaders and legacy media outlets to consistently mislead, lie to and obscure the truth from constituents. Until that stops, bad actors will continue finding ways to exploit the lost and roving trust of disaffected voters, no matter how many platforms set up ad registries or how many times Mark Zuckerberg is summoned before a parliamentary committee.

Moreover, political discourse has increasingly migrated from open forums to closed groups and private chats over apps such as WhatsApp, WeChat and Telegram. Disinformation campaigns spread via WhatsApp played a prominent role in recent elections in India, Brazil, Mexico, Nigeria and Spain. WeChat was involved in a municipal vote-buying controversy in Vancouver. However, there is a twofold flaw in attempting to regulate away disinformation in these channels.

First, end-to-end encryption and ephemeral messaging (messages that are automatically deleted after a certain time or after being read) may make effective legal responses impossible in practice if they depend on being able to see the contents of messages. Second, chasing disinformation into increasingly private spheres of citizens’ digital communications risks ever more invasive measures. We should not have to trade away our digital privacy rights so that politicians, third-party election advertisers and governments can continue to lie to or mislead the electorate, then rely on after-the-fact attempts to clean up the information environment they themselves polluted.

In January, the federal government established a “critical election incident public protocol” to monitor and identify threats to electoral integrity. Tellingly, as reported by CBC News, “officials say there is no plan to police the usual political spin on the campaign trail.”

The “usual political spin” is how we got here in the first place. If the government—and those who wish to form government—are serious about preventing the growth of disinformation in Canadian political discourse, then it is up to them to stop providing such fertile soil.

Cynthia Khoo is a digital rights lawyer at Tekhnos Law and a master of laws candidate (concentration in law and technology) at the University of Ottawa. Her column, Below the Fold, appears regularly in the Monitor.
