Editor’s Letter: Inside the AI crisis on campus

In a world where anything can be faked, how does anyone know what’s real?

Photo by Daniel Ehrenworth

Until recently, AI was known as that useful, benign thingy that optimizes Google searches and finishes text messages. Then, late last year, the San Francisco company OpenAI launched a public version of ChatGPT, and everything changed. It was faster and smarter than anything we’d seen. These days, with a few keystrokes, we can create a believable anything. Deepfake videos. Audio recordings. A photo of that time Albert Einstein played harmonica at Glastonbury. (Wait, did that happen? Exactly.) The implications are potentially catastrophic. In a world where anything can be faked, how does anyone know what’s real?

This question landed thunderously in the academic world, where the subject of AI became urgent. With the oceanic depths of the internet at its disposal, ChatGPT could crank out a decent paper on any topic in seconds. Students, especially those in the humanities, hurriedly availed themselves of this new tech. As Simon Lewsen, an instructor at the University of Toronto, reports in this month’s cover story, one in five post-secondary students in a recent survey said they used AI to complete schoolwork. As the technology improves and word spreads, that usage rate is destined to climb.

Some students instruct ChatGPT to create essays whole-hog; others use it to weave chunks of original thinking into a coherent whole; and still others rely on it to overcome writer’s block. Which of the above is cheating? It depends on whom you ask. When I attended U of T, all students in the humanities had to run their papers through a program called Turnitin, which cross-referenced every sentence against a database of existing works. If a sentence wasn’t original, you were in trouble, no exceptions.

A student uses ChatGPT to generate essay ideas. Is he cheating? It depends on whom you ask.
Photo by Luis Mora

The days of such clarity are over. Because ChatGPT creates semi-original works, the finished products aren’t strictly plagiarism, and the resulting uncertainty has sent the entire system into shock. At Toronto Metropolitan University, one academic-integrity specialist was inundated by panicked instructors who were receiving papers with bizarre syntax and peculiar annotations. Something seemed…off. The instructors couldn’t prove anything, and it wasn’t feasible to interrogate every suspected cheater to determine whether they knew Chaucer as well as they seemed to on the page.

Faced with this existential threat, some academics suggested that, instead of fighting an unwinnable fight, they should allow AI to flourish everywhere. Treat it like the modern equivalent of the calculator—a tool that makes tedious tasks more efficient. AI has already wormed its way into our daily lives, the thinking goes, lurking behind every social media algorithm, customer support hotline, EV and smart thermostat. Banning it feels like banning the internet.

Capitulation isn’t without drawbacks, though. University is less about producing the right answer than it is about the intellectual calisthenics of getting there: the weighing of ideas, the evolution of thought, the vigour of debate. Lose that, and we sacrifice something fundamental, at least according to one academic Lewsen interviewed. “People teach because they believe in the ennobling of the human soul through encounters with the great minds of the past,” he said. Professors, he added, “are heartbroken to see that enterprise subverted at its very core.” He’s got a point. Then again, university is a thousand-year-old concept. Perhaps it’s due for renewal.


Malcolm Johnston is the editor of Toronto Life. He can be reached via email at editor@torontolife.com.