As a secondary educator in Ontario’s public school system, I am currently staring generative artificial intelligence (genAI) in the face, and it’s not pretty.  

I realize not everyone sees it that way, but I suspect I'm not the only one, at least judging from what I've heard from colleagues, many of whom seem to be treading water pedagogically.  Many of us are frantically turning back to pens and paper to try to ensure we're actually evaluating a real, human student's thinking and capabilities (hard to pull off in eLearning, though).

Many proponents of AI may brand me a Luddite or a techno-alarmist for questioning its appropriateness in education.  In fact, I was an early adopter of online learning platforms as a teacher at both the university and secondary levels.  I'm all for assistive technology for students who need it, even though it's clear that our Ministry of Education and school boards have cut far too many exclusive deals and licenses with for-profit education technology providers.

But should a machine do the work that I am going to grade?  Absolutely not.  Marking work completed by a robot is my worst nightmare—and so is pretending that a student is learning when they avoid work by submitting something spit out by genAI.   

When I look at genAI, I see the hollowing out of education.  I'm seeing emerging evidence of harm to students' critical thinking skills, creativity, and cognitive abilities when they rely on genAI to complete work for them.  I'm also seeing many folks gush sycophantically and uncritically about a promised land of education made infinitely more "productive" and "efficient" through genAI use by students, teachers, and everyone, everywhere.

This particular brand of techno-evangelism offers students the opportunity simply to avoid learning, especially when challenged with any kind of writing task that involves putting real, human thoughts together in a coherent way and writing them out from a real, human brain, through real, human hands and fingers.  If you stop to think about it, that's a large part of what education consists of.

Some would argue that the way education is currently organized makes it horrifically vulnerable to this particular disruption.  If all we're after is good grades and well-written "products" from students, we're working in a hollowed-out system from which genAI can remove the actual learning altogether.  Education researcher Stefan Popenici wryly observes that the dystopian scenario of teachers using genAI to evaluate student work created with genAI is entirely conceivable.  Such a system would amount to what could fairly be described as fake teaching and fake learning.

Imagine someone who needs to do physio or rehab exercises to heal and become stronger.  If that person presses a few buttons and then watches as a machine does the exercises for them, do they get stronger or healthier?  Is that the promised land of technotopia: education unleashed and totally detached from learning?

If we embrace genAI's offer to let students avoid learning, we're losing the plot.  Recent meta-analyses and mapping studies have identified harms to students' cognitive and critical thinking abilities associated with reliance on genAI.  That finding is unsurprising, and terrifying, at least if you care about students learning to think and write independently.

I've had students submit work citing fake sources, or containing fake quotes from texts, errors known endearingly as "hallucinations."  I only very rarely see "vintage" cases of plagiarism, that is, pre-AI plagiarism; I find myself nostalgic for those days.  A discerning educator will notice the "perplexities" of real human writing, or the disjuncture between clean, technically perfect sentences and paragraphs and the messy, human ones beside them.

At the moment, my colleagues and I (at least those of us who aren't in love with genAI) are in a bit of a chess match with the technology.  Without any guardrails around its use, we have to plan our teaching moves carefully.  The tasks we assign, especially in the humanities and social sciences, where so much thinking and writing are required, demand an extra order of planning to head off the ever-present temptation for students to substitute a robot's work for their own.

AI detectors have been scrutinized for their unreliability, and an educator's judgement is certainly less fallible.  But what of the situation where an educator suspects AI use but cannot prove it?  Are we in a "post-plagiarism" age, as University of Alberta professor Sarah Eaton proclaims, where people "co-create" with machines?  Would you call a student submitting an assignment generated by a robot an act of "co-creation"?  Or a student tweaking genAI output to make it look slightly worse and more human, then submitting it?

What would education look like if it were decoupled from opportunities for students to substitute robot work?  Taking a cue from the slow food movement, advocates of "slow education" suggest a focus on messier, more exploratory learning opportunities, ones not necessarily subject to the relentless ranking and sorting that constrain our systems.

Consider this true story.  A student completes and submits an assignment with a genAI tool, which obligingly spits out a paragraph-style submission in response to a quick question prompt.  The teacher catches the student's use of genAI and admonishes them, but says that had they cited the genAI output, it would have been okay.

That's right: this real-world teacher would have accepted, and graded, work completed by a robot, if only the student had cited it as AI-generated.  Everyone's happy because the result is a good grade.  More credits, higher graduation rates.  But who is actually learning here?  No one.  This is genAI making us dumber, not smarter.

Consider another true story. A parent is considering encouraging their child to use genAI when preparing materials for a university application.  After all, it’s competitive out there. Shouldn’t the parent embrace every ‘competitive edge’ their child can get?  

Some proponents even advocate for ultra-permissive policies that might prevent a teacher from banning students' use of genAI.  Might teachers be forced to accept genAI work and grade it?  Can we really create an educational system that encourages the replacement of student learning with robot output?  The prospect is frightening, and its implications more so.

Beyond basic literacy, such a system doesn't require anyone to think.  Is that the kind of world I want?  No, in fact, it is not.  How about you?