Asserting a place for labour within the technosocial gestalt

According to a popular apocryphal tale, the United States and Soviet Union both realized during the space race that a standard pen would not work in orbit. NASA spent millions of dollars to develop an “anti-gravity” pen that would. The Soviets used a pencil.

The story is not true. As an analogy, however, it helpfully illustrates a critical point at the heart of law- and policy-making aimed at preparing society for another transitional era of technology-driven labour upheaval. NASA faced a simple problem obfuscated by rapidly advancing technology, so the counterfactual space agency assumed the solution would also need to be on the cutting edge. In reality, both the problem (how to write) and the solution (with a pencil) were manageable using well-worn ways of thinking and long-established ideas, which needed only to be taken up in earnest.

The point here is that if we focus too much on the new technology itself, we risk overlooking the more central questions, such as making sure our astronauts are fairly compensated and won’t run out of oxygen while scribing away on humanity’s behalf. We risk missing the belt for the asteroids, so to speak. Or, in today’s terms, we are missing the labour rights and income inequality issues for the job-automating, artificially intelligent robots.

***

It will not be enough, to meet the challenge that automation presents, to sit back and hope that the industrial robot revolution eventually generates enough commercializable innovation and new jobs to accommodate displaced humans, let alone to ensure that they thrive in the future workplace. Rather, we should be preparing a comprehensive package of new laws and social insurance reforms (e.g., to EI and other systems) that can guarantee maximum sociopolitical and economic equity in response to both conventional, eroding employment models and the emerging, often more precarious models that platform capitalism (popularly known as the “gig economy”) facilitates, and that automation and artificial intelligence appear set to accelerate. 

To return to our analogy, AI did not cause today’s yawning wealth gap, which we can lay at the feet of all-too-human capitalist structures, modes of thinking and governance. However, attacking inequality first, with political and economic tools already at hand, would make dealing with a potentially AI-overrun future much less daunting.

In 2016, Sunil Johal, policy director of the Mowat Centre, and Mowat policy associate Jordann Thirgood released a report called Working Without a Net: Rethinking Canada’s Social Policy in the New Age of Work. The report analyzes how Canadian labour will be affected by both job automation and platform capitalism, and concludes with a slate of astutely broad-minded policy recommendations for how governments might futureproof certain sectors. Call it a just transition for the digital age. Among Johal and Thirgood’s recommendations are a universal child care program paid for through progressive taxation, more investment in affordable housing, and the expansion of health care to include a universal pharmacare program.

What is striking here is how similar these digital transition policies are to the measures frequently assigned to the just transition for carbon economy workers. Those grappling with two of the most significant questions confronting our collective vision for the future have drawn the same overall conclusions: remedying socioeconomic inequality, strengthening safety nets for the vulnerable, and divesting from certain belief systems around labour and value are all integral to the decarbonization of some industries and what might, in the far distance, come to be seen as the re-humanization of work.

Lost in these separate conversations is how reforms such as national child care or affordable housing would create a more democratic and equitable society for all of us, as people, not only as “workers.” They are valuable and necessary reforms in their own right, independent of what else the future holds.

Indeed, were such laws and systems already a reality—better employment standards, for example, and fairer wealth distribution—AI-driven automation might not be such cause for alarm in the first place. Technological change on the scale we appear to be expecting today would arrive genuinely as a boon that all members of society could equally benefit from, as the welfare-state economist John Maynard Keynes predicted. Instead, governments must now play catch-up and fearlessly course-correct at scale.

Inquiring into the chasm between Keynes’s vision and today’s reality, anthropologist David Graeber asserts in his well-known essay, On the Phenomenon of Bullshit Jobs, “The answer clearly isn’t economic.” Equally, the answer clearly isn’t technological. It is, as Graeber goes on to state, “moral and political.”

When it comes to ensuring a just transition to a sustainable and democratic society in which all individuals may thrive, neither carbon nor Silicon will do. Rather, those obligated to act in the public interest must exercise ironclad political will and ethical courage to ensure no one is left behind in the wake of technological innovation and progress.

***

All of the above is not to say that digital rights are separate from a just transition for labour in the digital age. On the contrary, many central digital rights issues play out in the context of employment relationships, such as the right to privacy online, freedom of expression, online content moderation, and algorithmic accountability. In this sense of protecting people’s rights as exercised through and made meaningful on the internet, in the context of their rights as workers, digital rights are labour rights.

Three broad principles emerged from a panel I attended at the RightsCon 2018 conference in Toronto this May. The panel, titled “Robots and Rights: Exploring the Impacts of Automation on the Future of Work,” raised some of the same issues as Johal (who was on the panel) and Thirgood in their report. At the heart of these principles is recognizing the reality that fundamental sociopolitical and economic inequities are deeply entrenched in Canada today and predate the emergence of artificial intelligence.

Illustration by Alisha Davidson

First, meaningfully addressing the growing wealth gap requires intrepid action. Though average wages have stagnated, the top 1% of Canada’s population by income had accumulated 15% of the country’s total household wealth by 2011. A just transition involves establishing socioeconomic structures and systems that distribute wealth, profits and economic value more equitably across income brackets.

Speakers on the RightsCon panel I attended pointed out that when it comes to digital innovation, tax policy favours capital at the expense of labour, incentivizing “innovation” through tax credits while discouraging support for and investment in workers through tax liability. Additionally, the surplus profits that companies accrue through automation, and which owners pocket, flow neither back to workers themselves nor into improving society as a whole, particularly after taking into account systematic and aggressive tax avoidance practices.

It is no coincidence that recent years have seen researchers and media give more serious consideration to the idea of a universal basic income (UBI). Regardless of one’s opinion of the UBI itself, that a primary response to the AI revolution might be a socioeconomic policy that dates back, in Canadian political discourse, to the 1930s—and was piloted in Manitoba in the 1970s—again demonstrates how technological advancements can be the catalyst for reform, but are likely not the lens through which the most effective solutions will materialize.

Second, all stakeholders in our shared society must learn to acknowledge and validate the many kinds of unpaid, invisible, undervalued, discounted and often gendered or racialized labour on which society relies. That, in turn, requires explicitly recognizing and fairly compensating such work on par with the kinds of jobs that Canadian society currently rewards.

One speaker on the RightsCon panel suggested the answer to automation may lie not in more jobs, but “different jobs.” This could mean replacing displaced knowledge and manual work with “empathy work,” examples of which include senior care, health care, childrearing, community organizing or relationship management. These high-effort interactions do not often count as formal “work,” even though they demand a high degree of emotional labour, time and energy to carry out, particularly if they occur in the workplace on top of one’s formal job description. They are tasks most frequently carried out by women, visible minorities and other marginalized individuals, often without recognition, acknowledgement or pay.

Johal and Thirgood recommend investments that could drive new generations of workers toward careers in empathy and caring fields, similar to the current attention paid to increasing enrolment in STEM (science, technology, engineering and mathematics). Critically, valuing the caring economy on par with stereotypical ideas of innovation would help break down gender stereotypes, instill greater respect for such work, and ensure fair and higher pay and benefits across the workforce.

In the context of digital policy, content moderation on online platforms usually arises as a freedom of expression issue. It is also a good example of devalued labour. Content moderators spend their days clicking through the worst and most violent, abusive, graphic and traumatizing material that humanity has to offer. They work for little pay, with no benefits or security (as many are contractors), receive less recognition and esteem than their coder colleagues, see thankless outcomes, and bear a high human cost to their psychological well-being.

Automating content moderation, despite what Mark Zuckerberg tells the U.S. Congress, is unlikely in the near future due to the difficulty of training AI to assess borderline or nuanced cases correctly. Late 2017 saw Facebook and Google announcing plans to hire 10,000 more moderators each. In Bullshit Jobs, Graeber observed, “[I]n our society, there seems a general rule that, the more obviously one’s work benefits other people, the less one is likely to be paid for it.” This rule applies as much to those who make the internet at all bearable as to their offline counterparts who care for and maintain the infrastructure and functioning state of our physical and relational worlds. Canada should seize the opportunity that the automation upheaval gives us to flip that formula on its head.

Third and last, the Canadian government should respond to mass automation by strengthening protection for workers’ rights across the board, to insure against increased moral hazard and exploitation on the part of AI- and automation-equipped businesses. Digital rights issues abound in this context.

For example, the European Union’s General Data Protection Regulation (GDPR), implemented in May to great fanfare and trepidation after a two-year grace period, specifically addresses workers’ data. This matters because employment is often a battleground for privacy rights. Activities such as monitoring employees’ internet use, keylogging and ubiquitous tracking of detailed metrics regarding physical location, health, communications and work performance likely cross the line from legitimate business interests to invasion of personal privacy. They also risk infringing upon workers’ freedom of expression or freedom of association—rights particularly fundamental in a labour justice context.

The GDPR strengthens protections for EU workers in at least three notable ways. First, the regulation applies not only to full-time employees but also to other individuals whose data enters a company’s orbit, such as job candidates and part-time workers. Second, according to an Article 29 Working Party opinion, employees generally cannot give valid consent under the GDPR. This recognizes the inherent power imbalance between employers and employees, and in many cases will force employers who want to collect and use workers’ data to find other legal grounds to do so.

Third, the GDPR generally prohibits relying solely on automated decisions that result in legal or “similarly significant” consequences for an individual, such as hiring or dismissal. In her book Weapons of Math Destruction, about algorithmic discrimination and non-accountability, Cathy O’Neil relates that in the United States, commercial truck drivers work with installed devices logging every element of their driving while a camera stays trained on their faces. Companies use such mandatory surveillance data to assess routes, driver performance and risk scores, among other variables, which are then fed into systems notorious for errors, harming real people by making decisions based on their distorted or mistaken data shadows.

It remains to be seen how the impact of the GDPR will play out on the ground, but as far as privacy and data protection rights go, Canadian law-makers concerned with a just digital transition may want to start casting their eyes across the pond for inspiration.

***

Despite severe and widespread concerns about the lack of algorithmic transparency and the opacity of AI-driven automation, their arrival has made one thing quite clear: our present-day society is not ready, and that has nothing to do with the new technologies themselves.

Increasing income inequality, undervalued and unrecognized labour, a growing wealth gap, the erosion of workers’ rights, widespread work precarity, policies that systemically favour capital at the expense of labour, and an outdated social safety net are the natural result of capitalist logic and of the institutions set up to govern Canadian society. They are old problems, albeit now with a robotic twist.

If we do not address these problems directly for what they are, independent of the technology that brought them into sharper relief, we risk sorely misdirecting the future of work in Canada. But more than that, we risk squandering the current political momentum, popular interest and ethical self-awareness to bring about the free, equitable and democratic society we should already inhabit.

A just transition does not mean transforming just enough to preserve a broken status quo. Done right, a just transition could be transcendent. And there would be nothing artificial or automated about it.

Cynthia Khoo is a Toronto-based lawyer who focuses on internet policy and digital rights. You can reach her at www.cynthiakhoo.ca. Her column, Below the Fold, appears regularly in the Monitor.