AI has made it easy for post-secondary students to fake their way to a degree. Students argue that ChatGPT is just another study tool. Schools say it spells the end of university education as we know it. Maybe that’s not a bad thing
At the high school Abhinash attended in India, calculators were forbidden. For tests, including the statewide university entrance exams, students wrote out their equations in longhand. They were being evaluated not only for their understanding of math but also for their ability to trudge through the steps of each equation, work that no scientist or engineer in the 21st century would ever need to perform.
In 2019, Abhinash moved to Toronto to pursue a degree in earth and environmental sciences. (Abhinash is a pseudonym. Like other students interviewed for this story, he asked me to withhold his real name because he has done things his professors may consider cheating.) In Toronto, he enrolled in a linear-algebra class, where, to his surprise, calculators were not merely permitted but required. The first time he brought one to an exam, it felt wrong, like showing up to a black-tie gala in jeans and a T-shirt. He placed the device on his desk and willed himself to touch it, instinctively feeling that doing so might violate a sacred rule. He quickly got over this fear. Soon, the very notion of a prohibition on calculators seemed ridiculous.
Abhinash was in the fourth year of his degree when a far more powerful tool hit the market. On November 30, 2022, OpenAI, a Microsoft-funded research lab in San Francisco, made its chatbot, ChatGPT, publicly available for free. In December, Abhinash was hanging out in the common room of his building with friends when one of them introduced the group to the program.
The guys were enthralled. They crowded around their buddy’s laptop and began issuing commands to the bot, instructing it to write poems and song lyrics. Later that evening, two of the friends got into an argument over a group assignment, and one stormed out of the room. When he returned, he learned that his buddies had prompted ChatGPT to write an apology on his behalf—and to generate alternative versions in the style of a rapper, a pirate and a Shakespearean actor. Abhinash was fascinated by the program, although he couldn’t fully grasp its purpose. It seemed more interesting than useful.
He soon learned that he was wrong. In February of 2023, he went with his class on a field trip to High Park. Afterward, the professor gave students a soil sample from the mucky bottom of Grenadier Pond and instructed them to write a paper on the sediment, linking it to events in Toronto history. Abhinash was stumped: the sedimentary record didn’t seem to line up with the historical one. Roughly 25 centimetres from the top of the sample, he saw what seemed to be a layer of black tar, which he dated to the early 1970s. But what on earth could have caused it?
He scoured the university databases in the hopes of uncovering a regional event—a fire, say, or a major construction project—that might explain the sedimentary change, but nothing came up. In desperation, he wrote up a description of the soil sample and prompted ChatGPT to interpret it. The bot responded in seconds, linking the tar in the sample to the construction of the Queensway thoroughfare in the 1950s. At first, the solution seemed absurd to Abhinash—the timing made no sense—but he soon realized it was correct. In his original analysis, he’d misdated the soil sample, attributing sediment from the postwar construction boom to events 20 years later. ChatGPT had corrected the mistake.
It had saved him time, too. Soon, Abhinash was using it to produce abstracts for his scientific papers, to craft transition sentences and to break him out of writer’s block. When he hit a wall intellectually, he’d paste his half-done work into the bot and instruct it to finish the job. He never tried to pass off AI-generated text as his own. ChatGPT simply came up with ideas; if he liked them, he rewrote them. He was still thinking for himself, but he was enlisting the bot as secretary, sounding board and copy editor.
Was he cheating? Professors everywhere were saying that students using ChatGPT in their schoolwork were guilty of grievous academic misconduct. The logic of their arguments was simple enough. Ever since Yale University popularized the academic grading system in the early 19th century, grades have been the currency around which universities operate. Like any currency, grades have exchange value: they buy scholarships, reference letters from instructors, and placements in competitive graduate schools or professional programs.
By this logic, students who don’t do all the work they submit are basically scam artists, amassing unearned capital and using it to secure benefits they don’t deserve. Universities exist not only to prepare students for the professional world but also to protect the professions themselves by ensuring, or at least trying to ensure, that the most critical jobs go to the most qualified candidates. If they stop being meritocratic—if they stop selecting for the most talented or hard-working students and instead elevate those who are most willing to game the system—society’s bedrock fields, like law, medicine and engineering, could become rife with fraudsters.
When ChatGPT first appeared, instructors and administrators saw the potential for academic grift on a massive scale, an existential threat to the norms of their institutions—and perhaps to us all. But Abhinash wasn’t convinced. His teachers had once said similar things about calculators, and anyway, people always freak out when a new technology hits the market. I teach courses on long-form journalism at the University of Toronto, and over the past year, I’ve witnessed the ChatGPT dilemma up close. Every instructor knows that the technology is a big deal. But should universities fight it with everything they’ve got, or can they somehow live with it? Is it a game ender—or just a game changer?
Pundits sometimes compare the arrival of ChatGPT to the invention of the automobile, another seismic technological breakthrough. But this analogy understates the extraordinary pace of change. On November 29, few people were thinking about ChatGPT; on November 30, the program was freely available everywhere. It’s as if everyone went to bed in the horse-and-buggy era and awoke to a world of car dealerships, highways and gas stations.
Of course, AI has been with us since at least the 1950s, but mainly as a niche application. To program the AIs of the past, you needed a computer science degree or at least a solid grasp of coding. GPT, however, is part of an emerging cohort of AI tools—including Meta’s LLaMA, Google’s PaLM and Amazon’s AlexaTM—called large language models (or LLMs), which speak the same way you do. For the first time in history, any layperson can use AI by issuing commands not in Python or JavaScript but in English, French, Mandarin or Arabic.
Using deep learning, LLMs predict contextually appropriate responses to a query. Everybody knows that the phrase “please make it” frequently precedes the word “stop” and that “I love” often comes before “you.” GPT knows this too, and because it has studied a massive corpus of written sources, including most of the internet, it can make all kinds of similar predictions about all kinds of other sentences. It is a machine, basically, for guessing the next word in a sequence. And so it can respond to prompts in ways that are occasionally sophisticated, occasionally ridiculous and often somewhere in between.
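To make the guessing-game idea concrete, here is a deliberately crude sketch in Python. It counts which word most often follows each word in a toy corpus and then “autocompletes” a prompt. This is not how GPT is actually built (an LLM uses a transformer neural network trained on billions of documents, not a frequency table), but the underlying task, predicting the likeliest next token given what came before, is the same.

```python
# A toy illustration of next-word prediction: count which word most often
# follows each word in a tiny corpus, then "autocomplete" a prompt.
# Real LLMs use transformer neural networks trained on enormous corpora,
# but the core task, guessing the next token, is the same.
from collections import Counter, defaultdict

corpus = ("please make it stop . i love you . i love you . "
          "i love coffee . please make it rain").split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("love"))  # -> "you"
print(predict_next("make"))  # -> "it"
```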
It is this in-between domain that poses a problem for educators. If you ask ChatGPT to explain the causes of the First World War, interpret the metaphors in T. S. Eliot’s “The Waste Land” or come up with a novel legal argument based on the Criminal Code of Canada, it’ll instantly do as it’s told. The writing won’t be great, but in university, not-great papers frequently get B-level grades, and they almost never fail. In a single afternoon on ChatGPT, a student can generate enough mediocre prose to complete an entire undergraduate degree.
And the technology is quickly improving. When ChatGPT first went public last year, it took the Uniform Bar Exam, the standardized test used to license lawyers in much of the US, and scored only around the 10th percentile. When, in March of this year, OpenAI released GPT-4, a stronger model available to paying subscribers, its bar exam score jumped from roughly the 10th percentile to the 90th. If and when GPT-5 comes out, those metrics will be better still.
Students are catching on. In a recent US survey, one in five post-secondary students said they used AI to complete schoolwork. And, in the first quarter of this year, the education-technology company Chegg—which sells written assignments or answers to test questions—saw an almost 50 per cent drop in its share price, suggesting that it now has a formidable new competitor.
“Usually, my office is not the most popular place on campus,” says Allyson Miller, an academic integrity specialist at Toronto Metropolitan University, whose job is to help professors craft and enforce policies on cheating. After the arrival of ChatGPT, foot traffic to her lonely corner of the university increased dramatically. “We had a massive influx of panicked instructors,” says Miller, “all of them asking, What do we do? What do we do? What do we do?”
For the most anguished instructors, ChatGPT isn’t just a nuisance; it’s an assault on intellectualism itself. Daniel Adleman is an assistant professor of writing and rhetoric at U of T who is researching ChatGPT and academic integrity. “Go to the website of any liberal-arts university,” he says, “and you’ll see marketing copy on critical thought, eloquence and the intellectual imagination. Arguably, the cultivation of these virtues can now be short-circuited by our new AI platforms.” Adleman is agnostic about the benefits and drawbacks of ChatGPT, but he understands why many of his colleagues are in acute distress. How can the storied humanistic tradition survive in a world where students routinely outsource their thinking to robots?
“People who teach don’t do it just for the paycheque,” says Joseph Keegin, a PhD candidate in philosophy at Tulane University, in New Orleans, who has argued in op-eds that schools should crack down hard on ChatGPT. “They do it because they believe in the ennobling of the human soul through encounters with the great minds of the past.” If those encounters can now be mediated by chatbots—and if the work of grappling with complex ideas can be automated—what’s left for liberal-arts professors to do?
It isn’t just the liberal arts that are affected. Vocational or skills-based programs also value intellectual autonomy. When a computer science student implements a linked list—a basic coding exercise—or when a medical student correctly diagnoses a case of Bell’s palsy instead of confusing it with a stroke or a migraine, they’re not doing anything new. The goal here isn’t to innovate; it’s to demonstrate competence. We trust that credentialed professionals will have a baseline level of proficiency in their fields, and we trust universities to vouch for that proficiency. In a world where every student can generate passable—if not exactly stellar—assignments via ChatGPT, university instructors might rightfully wonder what it is they should be teaching students to do, or how they should go about separating the hard workers from the slackers.
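For a sense of scale, the linked list mentioned above is the kind of exercise at stake: a few lines that any competent student, or any chatbot, can produce on demand. A minimal Python version might look like the sketch below; the class and method names are illustrative rather than taken from any particular course.

```python
# A minimal singly linked list, the sort of introductory exercise a
# computer science student might be asked to reproduce from memory.
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, value):
        """Insert a value at the front of the list."""
        self.head = Node(value, self.head)

    def to_list(self):
        """Walk the chain and return the values as a Python list."""
        values, node = [], self.head
        while node:
            values.append(node.value)
            node = node.next
        return values

lst = LinkedList()
for v in (3, 2, 1):
    lst.prepend(v)
print(lst.to_list())  # -> [1, 2, 3]
```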
It’s unclear how university administrations should respond to the ChatGPT threat. A few schools have taken a hard-line stance. Sciences Po, the top-rated Parisian academy, has announced that any student caught using uncited ChatGPT content for class assignments could be expelled from school and, in extreme cases, banished from the French higher-education system. RV University, in Bangalore, India, has adopted a kind of stop-and-frisk policy for LLMs: students suspected of turning in AI-generated work can be asked to reproduce it on the spot—presumably to see if they can remember what they wrote.
On North American campuses, administrators have struck task forces and hosted staff workshops on the technology, but they haven’t banned it outright. On its website, TMU warns that “instructors could consider AI use to be cheating.” The word choice is revealing: ChatGPT usage could be an offence, but it also could not be. While York University offers a refreshingly forthright statement—“using text-generating tools (such as ChatGPT) would be considered to be cheating”—it goes on to assert that students “may encounter variation regarding the acceptability of these tools, which can cause confusion.”
By crafting such flexible policies, administrators are giving professors the freedom to make up their own ChatGPT rules. But with freedom comes chaos: instructors are effectively being asked to regulate a technology they know almost nothing about. Some have replaced take-home essays with in-class tests, an approach that enables staff to monitor students for cheating. Others have issued contradictory directives. An assignment from a U of T undergraduate course tells students that, while they are not allowed to use AI devices that generate original content, they’re permitted to use simple editing tools like Grammarly. This distinction is puzzling. Although Grammarly is perhaps less powerful than ChatGPT, it too uses AI, and the grammatical tweaks it suggests are surely a form of content.
Enforcement is just as muddled. To catch cheaters, many instructors have turned to AI-detection software programs, like GPT Radar or GPTZero, which scan a text and render a probabilistic verdict as to whether a human or a robot wrote it. But these applications are notoriously unreliable. Recently, I ran the Canadian Charter of Rights and Freedoms through GPTZero, which opined that Sections 11 and 12 were likely written by a machine.
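Tools in this category generally work by scoring how statistically predictable a passage is to a language model, on the theory that machine-generated prose is smoother and more probable than human writing; GPTZero, for instance, has publicly described perplexity as one of its signals. The sketch below shows the general idea only. It assumes the open-source transformers and torch libraries and a small GPT-2 model, and the threshold is invented for illustration; it is not how GPTZero or GPT Radar actually reach their verdicts.

```python
# A rough sketch of perplexity-based AI-text detection: score how predictable
# a passage is under a small language model and flag very "smooth" text.
# Assumes the `transformers` and `torch` libraries; the threshold is made up
# for illustration and is NOT how GPTZero or GPT Radar actually decide.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

def looks_machine_written(text: str, threshold: float = 40.0) -> bool:
    # Lower perplexity = more predictable = (crudely) more "machine-like".
    return perplexity(text) < threshold

print(looks_machine_written("The quick brown fox jumps over the lazy dog."))
```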
Robots, in other words, are bad at sussing out other robots. But human intuition is limited too. In a recent pilot study, Rahul Kumar, an assistant professor of education at Brock University, asked more than 100 participants to read samples of both AI- and human-generated texts and to assess which was which. His subjects mistook AI writing for human writing 76 per cent of the time. “Any attempt to ban AI on campus is like throwing money down the drain,” says Kumar. “We cannot detect it with the tools in our possession. We cannot even reliably detect it ourselves.”
Professors do occasionally catch AI prose through intuition alone, but Kumar’s research suggests that these catches make up a tiny proportion of the overall number of offences. Still, in extreme cases—those in which a student has copied an assignment prompt into ChatGPT and submitted whatever the bot churned out—the hallmarks of AI are too clear to miss. The writing feels uncanny. The ideas are meandering and repetitive. Deeply researched facts sit side by side with falsehoods, sophistication with cluelessness.
Michael Reid, a PhD student in literature at the University of Toronto, was working this past winter as a teaching assistant for an undergraduate course on poetry when a student submitted a bizarre essay. “The sentences didn’t follow from one another,” he says. “There didn’t seem to be a human intellect behind it.” In some respects, the work went beyond basic requirements—it cited literary texts that hadn’t come up in the course—but the content consisted mainly of quotations and the most banal kind of summary. The student—the paper’s supposed author—had barely shown up for class the entire semester.
Reid brought the essay to the course instructor, who agreed that the paper seemed fishy and asked Reid to take on the role of lead investigator. The instructor and Reid called the student into a meeting, where she flatly denied the allegations against her. Yet she knew almost nothing about the work she’d submitted. When the professor asked her whether Samuel Taylor Coleridge’s “Kubla Khan”—a piece cited extensively in the paper—was a poem or an essay, she said she wasn’t sure.
Reid contacted the university’s integrity office to ask how he might establish, definitively, that the paper was fraudulent. They told him to look out for invented citations, since ChatGPT sometimes makes up (or “hallucinates”) its textual sources. Reid could’ve found such advice in seconds on Google, and it didn’t help much with the case at hand. The citations in the paper were clean, perhaps because the student had been savvy enough to tidy them up before turning in the work.
Reid wrote up a dossier outlining the case against the student, which he submitted to the Office for Academic Integrity. He is now awaiting a decision. As he waits, he finds himself doubting other student essays that come across his desk. “I often wonder, ‘Is this one written by an AI?’ ” he says. “The thing is, I can’t prove any of it.” In the ChatGPT era, you often can’t know for sure that an assignment was AI generated—and you can’t know for sure that it wasn’t.
Students are also in crisis. They must figure out how to get by in a world where seemingly everyone else is cheating. In an article for The Chronicle of Higher Education, writer Beth McMurtrie compares the current conundrum to the steroids era in baseball. It’s tempting to blame the problem on greed, McMurtrie suggests, but the real issue was perverse incentives. In a hyper-competitive athletic environment, any player who declined to cheat would likely fall behind his less-scrupulous colleagues. So athletes had to choose between conflicting risks—getting caught versus getting bested.
Students, like athletes, can always take the path of honesty, but it’s naïve to expect them to do so. Imagine that you are a university student in 2023. Like many of your peers, you developed subpar time-management skills in high school, which you finished during the pandemic—the Zoom era, when academic standards were in freefall. At university, you soon fall behind and find yourself with one night to generate a term paper, the kind that may determine whether you pass or fail. If you use ChatGPT, you probably won’t get caught. You won’t get an A either, but a B is less ruinous than an F.
Not all students are quite so brazen. Many are leveraging the technology in subtle ways, according to personalized codes of conduct. During the past semester, a philosophy and economics student at Western whom I’ll call Phil observed his roommates using old tests to see if ChatGPT could have scored them a better grade. They had their online test open in one browser tab and ChatGPT open in the other. But Phil promised himself he’d never use AI that way during an actual test. He wanted ChatGPT to support his learning process, not to supplant it.
He began using the bot, instead, as a brainstorming device. For one assignment, he was instructed to select a character from a popular TV show and to suggest an intervention that could improve that person’s financial prospects. So he asked ChatGPT. The bot recommended an after-school program. Phil then did the necessary intellectual legwork—a deep dive into the relevant sociology and economics literature—to turn this half-formed idea into a paper. He doesn’t see this behaviour as cheating. “All ChatGPT gave me was an initial idea,” he says. “In the past, I would workshop my essay arguments by talking with other people. Now, I talk with ChatGPT. It’s like leveraging a really smart friend.”
Another student I spoke to, who just finished an undergraduate degree at the University of Toronto, took a similarly nuanced approach. When deciding whether to use ChatGPT, Norman would ask himself one question: Is this project worth my time? If a task seemed enriching, he’d complete it independently; if it felt like busywork, he wouldn’t. For a recent course on architecture, he was asked to choose an old building in Toronto, visit it in person, learn about its archival history and reassess its heritage status. The professor even arranged for students to present their arguments to members of the Department of Canadian Heritage. Were he to cheat, Norman reasoned, he’d be denying himself an opportunity for intellectual growth. So he committed fully to the endeavour.
That same semester, however, for a group assignment in a commerce class, Norman was instructed to source basic information about a publicly traded company—share prices, market capitalization and a summary of ESG practices. This seemed, to him, like research a robot could do. So, ultimately, a robot did it. Norman is perennially busy—last semester, he had a part-time job and extracurricular commitments—but he says that ChatGPT helped him optimize his schedule. “I didn’t use it to free up space for video games,” he explains. “I used it when there was a task I could be doing that’s better for my learning than the one in front of me.”
When asked why he thought he was qualified to make these determinations (didn’t his professors know best?), Norman acknowledges that his preferences were subjective. Then again, this degree was his degree, and the time it consumed was his time. Shouldn’t his opinions count? “Commerce was tedious, and I didn’t care much for it,” he says. “My mom wanted me to take it, though, and she’s paying for my education. Art and architectural history are my real passions.”
Other students are less intentional about their ChatGPT practices. They make decisions on the fly, based on time pressures and exhaustion levels. When she was in high school, Sarah would devise ingenious ways to game the system. Sometimes, she and her friends would share answers on multiple-choice quizzes using a series of elaborate hand gestures. Or she’d write up note cards with physics equations on them and stash the contraband, like Michael Corleone in The Godfather, behind the bathroom toilet, where she could easily retrieve it during a test. “If somebody says they’ve never cheated in high school,” she says, “I think they’re full of shit.”
When Sarah started university, she figured her cheating days were behind her. Secret codes and cheat sheets had helped her in high school, but they couldn’t get her through a university paper or an essay-based exam. During her second year of law school in Toronto, however, she found herself in a bind. In the late fall of 2022, she had a family emergency. Even at the best of times, she’d never been great at managing her schedule, and these weren’t the best of times. After falling behind in her schoolwork, she convinced one of her professors to extend the deadline for a paper. Her professor told her that a further extension couldn’t be granted.
But, when she returned to her parents’ home for the holidays, she still hadn’t started the assignment. Sarah comes from a family of techies, and over Christmas dinner the conversation inevitably turned to ChatGPT. It was the first time Sarah had heard of the program. “Could it help somebody write an essay?” she asked, hoping to sound more curious than desperate. Everybody around the table agreed that it could.
That night, Sarah searched for relevant academic articles, which she copied into ChatGPT. Once the program had summarized the texts, she ran the summaries through QuillBot—an AI that rephrases sentences—and, after that, Grammarly. Having laundered her sources through three applications, she stitched the results together into a Frankenstein essay. It wasn’t her best work. Arguably, it wasn’t even her work at all. But she figured (in the end, correctly) that it was sufficient for a middling grade.
Feeling guilty about the deception, Sarah swore to herself that she would never cheat again. At the end of the next semester, however, she got an internship and was forced to pack up her Toronto apartment on short notice. Her final paper was still unfinished—and ChatGPT was only a few clicks away—so she reneged on her promise. She doubts that this relapse was the final one. “Even when I originally told myself I wouldn’t cheat,” she admits, “I knew, on some level, that I’d do it again.”
If Sarah was a gleeful schemer in high school, she’s a rueful one today. “Every time I use ChatGPT, I feel dissatisfied,” she says. “It’s like I’m taking the easy way out.” She fears that she’s cheating herself of opportunities for intellectual growth and that her attentiveness and time-management skills will continue to atrophy. She also worries she’ll become so accustomed to AI that she won’t be able to work on her own.
In a competitive field, though, these risks may be a small price to pay for an unblemished academic transcript. Plus, when she’s in a time crunch, she doesn’t fixate on big-picture anxieties. “If it wasn’t for ChatGPT, I probably wouldn’t have finished those two law school essays,” she says. “The alternative to cheating was failure.”
There may be as many ways of using ChatGPT as there are ChatGPT users. To the question of whether the app is a learning tool or a homework machine, the obvious answer is both—it all depends on what one does with it. But the complexity of the technology makes it nearly impossible to regulate. There’s no obvious way for schools to permit some uses while proscribing others. Benjamin Alarie, a professor and former associate dean at the University of Toronto Faculty of Law, has a simple solution: allow everything. Let students figure out for themselves how best to use the chatbot. And let them misuse it too.
Alarie has skin in the AI game. In 2015, he co-founded the start-up Blue J Legal, which uses AI and machine learning to predict the outcome of tax-law cases. He believes that AI can enable lawyers to do better, faster work, and he wants his students to experiment with the technology. In classes this past semester, he permitted pupils to use ChatGPT in any way they saw fit.
He hasn’t surveyed his students on what, exactly, they did with the program, although he suspects that many used it as a research assistant (it can comb through existing case law to find relevant precedents) or as an intellectual sparring partner (it can read through and critique an essay draft). The potential use cases are manifold, though, and for Alarie, the downsides are overstated. If ChatGPT enables students to do high-quality work faster than before, he argues, so much the better. Why should legal research take more time than necessary? And if lazy students uncritically accept every “fact” ChatGPT spews out, or if they instruct the bot to write their papers for them, they’ll end up with subpar work—and the low grade they deserve.
Mark Daley, a groundbreaking AI researcher and computer science professor at Western University, has a similar take. He argues that instructors today should worry less about catching so-called cheaters and more about incentivizing students to go above and beyond what AI can do. In computer science classes, this means allowing students to generate imperfect code via ChatGPT but then expecting them to debug and upgrade that code—and ultimately to produce a superior product.
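In practice, that might look something like the following hypothetical exchange: a chatbot drafts a routine with a subtle flaw, and the student is graded on spotting and fixing it. Both functions here are invented for illustration and don’t come from any actual course or ChatGPT session.

```python
# Hypothetical illustration of the "debug and upgrade" approach: a chatbot
# drafts a routine, and the student finds and fixes the flaw. Both versions
# are invented for this example.

def average_drafted(scores):
    # Chatbot's draft: crashes with ZeroDivisionError on an empty list.
    return sum(scores) / len(scores)

def average_fixed(scores):
    """Student's revision: handle the empty-list edge case the draft missed."""
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

print(average_fixed([80, 90, 100]))  # -> 90.0
print(average_fixed([]))             # -> 0.0, where the draft would crash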
That ethos, Daley argues, could prevail in the social sciences and humanities. AI may be impressive, he says, but it still has profound limitations. It can generate lists of ideas, but only humans can exercise the judgment necessary to separate the good from the bad. It can do basic research, but only humans can vet that research for accuracy. And it can write competent sentences, but only humans can arrange those sentences into coherent—and, better yet, original—papers. If a given paper exhibits such originality, he argues, there’s no need for instructors to worry about whether a robot completed the lower-order work. And if creativity is sorely lacking? Well, nobody is promised a passing grade.
What’s called for, Alarie and Daley contend, is not a ban on AI but rather a reassessment of what university students are expected to do. Telling students, in the ChatGPT era, to write every sentence themselves or to independently generate every idea they use may be as antiquated as asking mathematicians to do their arithmetic by hand. Instructors who look at a paper and wonder, Did this student cheat? may be asking the wrong question. A better question is, Does this paper demonstrate a level of mastery or sophistication that clearly surpasses what a robot can do? Thanks to AI, the baseline for what a lazy student can achieve has never been higher. But students can still be rewarded for the extent to which they exceed that baseline.
Joshua Gans, a professor at the Rotman School of Management, believes that instructors today should abandon the generic essay prompts they’re so fond of—“Explain why the Treaty of Versailles failed”—in favour of more innovative assessments. ChatGPT can easily generate a summary of John Locke’s theory of equality under the law, for instance, so there’s no point in asking a philosophy student to do this work. But that student might be asked, instead, to apply Locke’s theory to the recent spate of indictments against Donald Trump—an undertaking that, to do well, requires timely research and novel analysis. “If your students can complete your assignment adequately with ChatGPT,” says Gans, “that’s not a problem with the student. It’s a problem with your assignment.”
Perhaps the strongest argument against a blanket ban on ChatGPT is that it’s impossible to enforce. Schools have never been good at detecting cheaters: a 2021 study from Australian researchers estimates that five per cent of such infractions get caught, and research in Canada suggests that more than half of all professors refrain from reporting cheating, partly because it takes time and energy to make a case against a student. To make up for this lack of enforcement, the system selectively administers outsized punishment. The majority of cheaters get away scot-free; the unlucky few are penalized harshly.
Data from the International Center for Academic Integrity, a research institute at Rutgers University, shows that over 60 per cent of students cheat in some way during their university years. Of the small subset who get caught, most receive failing grades, and an unlucky few—often repeat offenders—find themselves suspended, sanctioned and unable to work in their chosen profession.
In light of the new cheating epidemic, it’s easy to imagine a future in which such sanctions become even more severe. Students may face expulsions; international students could lose their visas. Using ChatGPT could become a permanent mark on their record. But, given the scale of ChatGPT usage, anybody who gets punished so severely would have good reason to feel individually scapegoated for systemic misconduct.
To properly regulate ChatGPT, universities need, at a minimum, to come up with consistent rules, even-handed penalties and a foolproof way of catching cheaters. If this goal proves impossible, as it almost certainly will, they will instead need a policy akin to decriminalization, whereby they allow students to experiment with ChatGPT while discouraging—but not outright forbidding—them from using it in the most egregious ways.
A permissive approach to the technology—allowing ChatGPT usage but demanding a higher calibre of work—may actually better prepare students for the professional world into which they’ll graduate. In the field of software development, ChatGPT usage is already common: most programmers aren’t expected to write code from scratch. Alarie believes that the legal industry will soon go the same way. If, by delegating lower-order tasks—like preliminary research and proofreading—to machines, lawyers can work faster than before, thereby saving their clients money, they’ll be ethically obligated to do so. AI is likely coming for the rest of the professions too. Any job that requires its workers to quickly churn out serviceable first drafts (the kind that can be edited and improved later) is one that could benefit from LLM technology. By inviting students to work with this technology, smartly and tactically, universities may be nurturing critical professional skills.
What they won’t be nurturing is the sense of mastery that comes from doing a job autonomously, and this is a genuine loss. In the past, students embarking on a post-secondary education were expected to read widely, think slowly and submit essays in which every detail was carefully considered. The same is true for professionals. Architects who still design with cardboard and glue rather than with modelling software, doctors who nurture rock-solid diagnostic instincts, writers who mull every comma—all of them can achieve a level of expertise that would elude them were they to collaborate with robots.
Today, instructors who see value in this kind of work will have to make their case to skeptical students, many of whom will nevertheless decide that there’s a faster, better way of doing things. Those students won’t be wrong, either: in most corners of our professional world, competence and efficiency really do matter more than rigorously independent thought. Abhinash, the student who grew up in India, has made peace with his reliance on ChatGPT. If asked to confidentially advise an incoming student on AI, he says he would counsel them to use the technology judiciously. “I would tell them not to ever copy and paste an entire assignment from ChatGPT,” he explains. “Other than that, I would tell them to use the program as much as they can.”
When radical new technologies appear, they render older ethical debates obsolete. There’s little point, after the Manhattan Project, in arguing about the pros and cons of the atomic bomb; the real challenge is figuring out how to get by in a nuclear world. Questions about the ethics of ChatGPT no longer strike Abhinash as particularly relevant: the era we live in is the era we live in. “The university system needs to get with the times,” he argues. You don’t have to like what he’s saying to know he’s right.
This story appears in the September 2023 issue of Toronto Life magazine.