Where is the consent of the algorithmically policed?

The criminal justice system is exactly the wrong place for the recklessness and techno-solutionism glorified by Silicon Valley.
Cynthia Khoo
September 1, 2020

In her 2012 book, Consent of the Networked, Rebecca MacKinnon noted that the companies and governments “that build, operate, and govern cyberspace are not being held sufficiently accountable for their exercise of power over the lives and identities of people who use digital networks.” MacKinnon’s observation, that both public and private sector actors are “sovereigns operating without the consent of the networked,” is even more apparent today, not least in the context of policing and law enforcement in the criminal justice system. 

Law enforcement agencies across Canada have been deploying various algorithmic policing technologies on the public without any advance notice, let alone prior informed consent. This lack of due process and democratic engagement is disturbing given the high risk that these technologies may result in a range of constitutional and human rights violations, as new research (in which I was involved) by the Citizen Lab and the International Human Rights Program at the University of Toronto details.

For example, police services in Calgary, Edmonton, Toronto, Peel, Halton, Ottawa, Durham, Niagara and Hamilton, as well as the RCMP, all admitted in early 2020 to having used or tested a controversial facial recognition tool built by Clearview AI. But they did so only in response to media inquiries following a New York Times feature on the company, which mentioned that its technology was being used by Canadian law enforcement authorities. Similarly, we learned that the Toronto Police Service had been using another facial recognition technology for more than a year only when the Toronto Star reported the fact in May 2019.

These are high-stakes matters that should not be left up to the discretion of individual police forces. Facial recognition technology poses a significant threat to the right to privacy, potentially putting an end to the ability to maintain anonymity in public. It also allows police to repurpose data previously collected in a different context (such as mugshot databases) without any built-in mechanism to ensure that constitutional safeguards against unreasonable search and seizure are appropriately calibrated to account for algorithmically enhanced police capabilities.

On another front, the RCMP has repeatedly engaged in social media surveillance targeting sociopolitical movements for Indigenous rights and racial justice, including Idle No More and Black Lives Matter. Again, the public tends not to hear about it until years later, and only through news media revelations. In March 2019, for instance, The Tyee exposed that the RCMP had been engaging in never-reported “proactive” and “ongoing wide-scale monitoring” of individuals’ Facebook, Twitter, Instagram and other social media activity “for at least two years.” The initiative, known as Project Wide Awake, used software from a Washington, D.C. contractor that also works with U.S. intelligence and defence agencies.

This April, the RCMP issued a public tender seeking expansive and intricate algorithmic social media surveillance capabilities, exacerbating pre-existing concerns about police surveillance chilling freedom of expression. Studies have shown that people who know or merely suspect that their online activities are being monitored by the government are prone to self-censorship. Further, the right to equality is violated when historically marginalized groups who face systemic discrimination are targeted for disproportionate and particularly invasive scrutiny by law enforcement, especially if they are targeted for surveillance because of the very act of advocating for their equality and civil rights.

The criminal justice system is exactly the wrong place for the kind of entrepreneurial recklessness, techno-solutionism and “ask for forgiveness, not permission” attitude that Silicon Valley encourages. Yet relying on the coerced “forgiveness” of a surveilled population is exactly what law enforcement agencies do every time they roll out another new technology for use on the public without any notice, public dialogue, consultation, or a meaningful and consequential way for the networked, the governed, and the policed to simply say “No.” 

The Canadian public, including its most disproportionately policed members, has not consented and does not consent to the use of secretive facial recognition technologies, or to indiscriminate social media surveillance of social movements, or to algorithm-boosted police stops, or (by definition) to any advanced policing technologies we have not been informed of. Relevant questions must be asked before use, not after the fact. To that end, the public is owed immediate disclosure of all algorithmic policing technologies in use, under development, or under consideration by law enforcement agencies across Canada. It does not take an algorithm to know that this is the right thing to do.


Cynthia Khoo is a technology and human rights lawyer, and a research fellow at the Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto. Her column, Below the Fold, appears regularly in the Monitor.
