Why Geoffrey Hinton is sounding the alarm about AI

Rage Against the Machine

Geoffrey Hinton spent half a century developing artificial intelligence. Now, he worries that his life’s work could spell the end of humanity. Inside his mission to warn the world

By Luc Rinaldi | Portrait by Markian Lozowchuk | November 16, 2023

In 2023, artificial intelligence finally caught up to the hype. Last November, American research lab OpenAI released the now-ubiquitous chatbot ChatGPT. It could summarize novels in seconds. It could write computer code. Its potential to generate scripts helped push Hollywood’s writers to strike. Within two months, it had 100 million users, making it the fastest-growing app of all time, and Microsoft threw $10 billion at OpenAI to keep the party going. After decades of false starts, AI was finally off to the races.

There was, however, one guy who wasn’t popping champagne: Geoffrey Hinton, the University of Toronto computer science professor better known as the godfather of AI. On paper, ChatGPT should have thrilled Hinton—he’d spent his entire career trying to perfect neural networks, the architecture that undergirds GPT, and now they worked better than ever. When he fed the chatbot jokes, it could explain why they were funny. If he gave it brain teasers, the chatbot could solve them. “That’s way more reasoning than we thought these things could do a few years ago,” he says. It seemed to him that, for the first time, machines were passing the Turing test, the benchmark at which computers demonstrate intelligence indistinguishable from a human’s. It wouldn’t take long—maybe five to 20 years, he thought—for AI to become smarter than humans altogether.

This prediction comes with some terrifying implications. Humans have dominated the earth for millennia precisely because we are the most intelligent species. What would it mean for a superior form of intelligence to emerge? Yes, AI might cure diseases, mitigate climate change and improve life on earth in other ways we can’t yet envision—if we can control it. If we can’t? Hinton fears the worst: machines taking the reins from humanity. “I don’t think there’s any chance of us maintaining control if they want control,” says Hinton. “It will be hopeless.”

Hinton wondered what to do. Having decided that AI could very well be pushing humanity to the brink, he couldn’t just carry on with his work. So, on May 2, he appeared on the front page of the New York Times announcing that he was stepping down from his job at Google and warning the world about the existential threat of AI.


Hinton wasn’t the first person to prophesy an AI apocalypse. Elon Musk, for one, has spent years harping on about the impending singularity, the point at which humans irrevocably lose control of AI—but Musk says a lot of nutty stuff. Among AI experts, few took seriously the idea that machines would become extremely harmful any time soon.

Hinton changed that. After all, there is no greater authority on AI. A Brit by birth and a Canadian by choice, he has been directly—or, through the work of his students and colleagues, indirectly—involved in nearly every major deep-learning breakthrough, including the development of generative AI tools like DALL-E. When he spoke up, the world listened. Jeff Clune, an associate professor at the University of British Columbia and senior research adviser to Google’s AI research lab, DeepMind, told me that Hinton’s warning was a “thunderclap” that opened the eyes of scientists, regulators and the public. “There are people on both sides of this debate. What you rarely see is someone changing sides, so that causes people to take notice,” he says. “When that person is the most influential person in the field and, in many ways, the father of it, it is impossible to ignore.”

Hinton hoped that, by ringing the alarm, he would inspire policy makers to fast-track efforts to prevent an AI Armageddon, but not even he predicted the tsunami of attention his announcement would attract. Justin Trudeau invited him to dinner at Richmond Station in Toronto to discuss what Canada ought to do. They ended up talking for two and a half hours. Informed in part by that meeting, Trudeau’s government has rolled out a code of conduct for tech companies, which implores (but so far doesn’t force) its signatories—which include BlackBerry, Telus and the Vector Institute—to implement robust risk management strategies, openly publish information about their AI systems and maintain human oversight. The feds hope to make those rules mandatory by passing Bill C-27, which contains the Artificial Intelligence and Data Act, in 2024.

In early May, Hinton fielded a call from Margrethe Vestager, an executive vice-president of the European Commission, which has since folded a number of AI safeguards into its draft Artificial Intelligence Act. The UK government summoned Hinton to Downing Street, where, under a towering portrait of Margaret Thatcher, he told a dozen of prime minister Rishi Sunak’s advisers that, with all the jobs AI would wipe out, they should consider instituting a universal basic income. “Just don’t tell Sunak it’s called socialism,” he said. Around the same time, Hinton talked with Bernie Sanders about potential punishments for creating AI-fuelled disinformation. He then spoke with senator Jon Ossoff, the office of Chuck Schumer and, in July, the White House, which tipped him off—“before Congress knew,” he notes—that several stateside heavyweights, including Google, Amazon, Meta, Microsoft and OpenAI, had signed on to another slate of voluntary AI safety commitments. But he declined an invitation to appear before a House of Representatives committee chaired by Freedom Caucus co-founder Jim Jordan. “The Republicans want fake news,” he says. “They live on it.”



Beyond the bureaucrats, Hinton sent shockwaves through the worlds of media, business and pop culture. The New York Times editorial board invited him to help design its policies around AI-generated copy and imagery—guidelines that newsrooms around the world will no doubt emulate. He also got a call from Musk, who breathlessly agreed that machines would soon seize control from humanity. “His view was that they’ll keep us around out of curiosity, and my view was that’s a pretty thin thread for existence to hang on,” says Hinton. “He just kept prattling on. In the end, I had to tell him I had another meeting.”

Hinton is now something of a doomsaying celebrity, echoing his worries in a never-ending stream of interviews, podcasts, conferences and panel appearances. He’s been on CNN, PBS, CBC, BBC and 60 Minutes. A New Yorker reporter accompanied him to his cottage this summer. Snoop Dogg, speaking at a conference in Beverly Hills, reported, “I heard the old dude that created AI saying, ‘This is not safe, ’cause the AI’s got their own minds, and these motherfuckers gonna start doing their own shit.’ ” To which Hinton later responded, with dry British wit, “They probably didn’t have mothers.”

Clearly, Hinton is having fun. But some of his friends and colleagues—particularly those who don’t share his apocalyptic concerns—worry that he’s tainting his legacy. Before Hinton went public, Aaron Brindle, his long-time media handler at Google, gently advised him that, by casting his lot with the end-is-nigh crowd, he risked overshadowing everything else he’d achieved. “I think his role in AI is so much more than that,” says Brindle, who’s now a partner at AI-focused VC firm Radical Ventures.

Hinton was undeterred. “I don’t really care about my legacy,” he says. “The best thing you can do with a good reputation is squander it, because you can’t take it with you when you’re dead.”

 

Over the past several months, Hinton has received more than a thousand interview requests. When he agreed to speak to Toronto Life, he asked that the writer have a degree in STEM so that he could properly delve into the technical underpinnings of AI. When I arrive at Hinton’s charming brick house in the Annex this past September, I sheepishly admit my lack of credentials. For what it’s worth, I tell him, I was a mathlete in high school. “So you know what a polynomial is,” he concludes—incorrectly. For the first time in my life, I regret dropping Grade 12 calculus.

The home is quiet. Hinton’s two adult children, who live with him, are out. He adopted them from Latin America in the 1990s with his first wife, Ros, who died of ovarian cancer when the kids were preschoolers. His second wife, Jackie, died of pancreatic cancer five years ago. Hinton has a new partner, U of T criminology professor Rosemary Gartner. But, on the day of my visit, his two cats are the only other creatures present.

We sit down at a long wooden table in the dining room—or, more accurately, I sit down. Hinton injured his back at 19 moving a space heater for his mother and stopped sitting altogether in 2005 because it had become too painful. Lately, he’s been able to sit in 15-minute spurts, so, throughout our two-hour conversation, he alternates between pacing around the table and perching on a shoebox on a chair.


Before my visit, I spoke with several of Hinton’s colleagues, who variously described him as a “deep thinker,” “otherworldly” and “maybe an alien.” Juna Kollmeier, an astronomy professor at U of T, told me, “If there is intelligent life in the universe, it’s him.” Brilliance is in his blood. His family tree includes a staggering number of absurdly influential scientists, among them the creator of Boolean logic, the inventor of the jungle gym and the namesake of Mount Everest. By nature, nurture or both, Hinton has a scientist’s instinct for understanding how the world works—a polymathic curiosity that applies as much to computers as it does to carpentry. Within half an hour of my arrival, he’s riffed on pediatric psychology, pyruvic acid, dentistry, Watergate, the fallibility of human memory and, of course, artificial intelligence.

Hinton as a young boy

I ask Hinton how he came to believe that AI poses an existential threat to humanity. It began, he says, when he set out to solve one of AI’s prickliest problems: the gobsmacking amount of energy it consumes. By one estimate, ChatGPT—which, like most machine-learning models, relies on reams of power-hungry computer servers—consumes a gigawatt hour of electricity daily, enough to power more than 30,000 homes. By contrast, the human brain runs on about 12 watts, less than the average lightbulb. The reason for this discrepancy is that machines learn differently than we do. Most modern AIs use an algorithm called backpropagation, which involves repeatedly passing data—the pixels of an image, for instance—through a network of artificial neurons and adjusting the connections between those neurons until the machine can recognize features like shapes and colours and can say, for example, “That’s a cat.” Those computations activate billions of tiny silicon transistors, which generate enormous amounts of heat and, in turn, require energy-intensive cooling systems.
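For readers who want to see the mechanics, here is a minimal sketch of backpropagation in Python, using only NumPy. It illustrates the shape of the algorithm, not anything resembling how ChatGPT is actually built: a toy network learns a simple pattern by passing data forward through artificial neurons, measuring its error and adjusting the connections between the neurons to shrink it.

```python
# A toy illustration of backpropagation (not how ChatGPT is built):
# a tiny two-layer network learns XOR by repeatedly passing data
# forward, measuring its error and adjusting its connection weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate: how hard each connection gets nudged
for step in range(10_000):
    # Forward pass: push the data through the artificial neurons.
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # the network's current guesses

    # Backward pass: propagate the error back through the layers to
    # work out how much each connection contributed to it.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Adjust every connection slightly to shrink the error.
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # converges toward [0, 1, 1, 0]
```

Every multiplication in that loop lands on silicon transistors, and the arithmetic scales brutally: a gigawatt hour a day is a million kilowatt hours, and at a typical household’s 30-odd kilowatt hours a day, that works out to roughly 33,000 homes.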

Hinton and his son, Thomas, whom he adopted in the 1990s with his first wife, Ros

Hinton tried to create a version of AI that didn’t consume so much energy—one that more closely resembled the way the human brain works. But, as his AI became more human, it wasn’t getting better; it was getting worse. Yes, it consumed less power, but it was also less powerful. Large AI models can consume vast amounts of data—including books, web pages, audio clips and videos—much faster than humans, and they can transmit what they learn across tens of thousands of different computers, “a kind of hive mind,” as Hinton puts it. AI that used Hinton’s algorithm, meanwhile, transmitted what it knew slowly, the same way a flesh-and-bone teacher might instruct a student.

This led Hinton to a startling realization: artificial intelligence worked far better and faster than biological intelligence. For him, this was a paradigm shift. He’d always thought that, for the foreseeable future, the human brain would be superior to AI. “That’s what I believed last year, and the year before that, and the 48 years before that,” he says. “I suddenly decided I was wrong.”

Still, it wasn’t clear to me how the AI we know today—all those chatbots producing B-minus college essays—could evolve into species-slaying superpowers. “You might think we will somehow make them so they never want to take over,” Hinton says. But humans will necessarily give AI goals. And to allow them to achieve those goals with any efficiency, he adds, we’ll need to grant them the ability to make decisions without human input. In fact, we already do: some tech companies let AI acquire new server space without human sign-off. And it’s likely that AIs will one day be able to alter their own code, giving them even more agency. “It’s like soldiers,” Hinton says. “A general doesn’t have to say, ‘Point your gun here, and when I say so, pull the trigger.’ The general just says, ‘Kill the other side,’ and the soldiers do it.” Similarly, he argues, AI machines will autonomously develop their own subgoals. “The problem is that a natural subgoal for any goal is to get more control. If we ask them to be effective at doing things, they’re going to want more control. And that’s the beginning of a slippery slope.”

I wondered what the bottom of that slope would look like. HAL 9000 killing off astronauts to achieve its mission? Skynet cyborgs slaughtering helpless humans? For more specifics on how our AI overlords might seize power, I spoke to Hinton’s colleague David Duvenaud, an associate professor at U of T who specializes in AI safety. Duvenaud told me that an AI takeover doesn’t necessarily mean malevolent machines dead set on enslaving humans. Instead, he believes, we will gradually and imperceptibly marginalize ourselves. We already rely on machines to help us decide which applicant should get a job or which stocks we should invest in. As machine learning advances, countries and companies will have to either embrace AI or risk falling behind their competitors. Soon, we’ll turn to machines to decide how a business should be run or how a war should be waged. “It will often make more sense to replace a human with a machine,” says Duvenaud. And as we cede more and more decision-making to AI, “we’ll gradually get squeezed out of the important, controlling parts of our civilization.” That doesn’t exactly guarantee extinction. But, he adds, “hanging around in a civilization where you’re not providing value to almost anyone is a recipe for losing power or influence in the long run.”


“The problem is that a natural subgoal for any goal is to get more control. If we ask AI to be effective at doing things, it’s going to want more control. And that’s the beginning of a slippery slope”

Okay, but if humanity doesn’t like where AI is headed, won’t we be able to turn it off? OpenAI CEO Sam Altman supposedly carries around a kill switch in a little blue backpack to disable ChatGPT if things get dicey. We’re talking about computers, after all. I put this to Hinton, but he doesn’t seem reassured. Superintelligent AIs, he says, will be able to outsmart us and trick us into doing their bidding. They might even pretend to be dumber than they are. “They’ll be able to convince the guy with the kill switch not to pull it,” he says. “Imagine you lived in a world of two-year-olds, and the two-year-olds were in power. You’d figure out ways to manipulate them. ‘Put me in power. There’s free candy for everybody!’ And that’d be it.” Cutting the tension, he adds, “Do you think I should get one of those hats Oppenheimer had?”

Hinton rises to make us some tea. As he puts the kettle on, he explains that, over the summer, a number of speaker agencies reached out offering to book him for paid speeches. “I thought I’d try one just to see what it’s like,” he says. He picked the offer with the biggest dollar figure, which happened to be in Las Vegas. I chuckle at the image of Hinton, the buttoned-up academic, living it up in Sin City. I ask if he’s been there before. Only once, he answers, on an 11,000-mile Greyhound odyssey across America when he was 17. “I put a quarter in a slot machine and I won a dollar,” he says. “And then I stopped.”

 


Two weeks later, I flew to Las Vegas to see Hinton speak at Info-Tech Live. The three-day conference was held at the Cosmopolitan hotel and casino and presented by Info-Tech Research Group, a company in London, Ontario, that offers proprietary research and consulting services to IT workers and CIOs. Lately, the firm has been focused on helping businesses implement machine-learning strategies, and accordingly, nearly half of the conference’s breakout lectures and panels were about AI. One presenter described how an AI model had designed the floor plan of an office building in Toronto. Another spoke about using AI to detect deep-fake phone calls. In the day’s first keynote, futurist Ray Kurzweil gushed to the crowd of 1,500 about all the ways he thinks AI will benefit humanity: our lives will be longer, our democracies stronger, our cars safer, our workdays shorter and our incomes higher. “It will progress exponentially and solve medical problems literally 1,000 times the speed of conventional techniques,” he said. By the 2030s, he predicted, humans will be able to hook their brains up to the cloud, allowing us to harness the power of AI without so much as a keystroke.

Whereas Kurzweil got the crowd drunk on AI hype, Hinton’s afternoon talk obliterated their buzz—and any shred of optimism. He appeared onstage looking like the grim reaper (black sweater, black slacks, black sneakers) and stared dourly into the distance while the MC introduced him. After a few questions about Hinton’s career, the interviewer asked about the difference between biological and digital intelligence. “This will be bad news for Ray Kurzweil,” Hinton answered. You can easily transplant a machine-learning model onto a new computer that works the same way, he explained, and that makes digital knowledge immortal. But you can’t do that with the human brain. “And so everything Ray knows is going to disappear when he dies—and he will die.”

Hinton and The Atlantic CEO Nick Thompson at the 2023 Collision conference in Toronto. Photograph by Ramsey Cardy/Getty Images

After running through a checklist of the dangers of AI, Hinton added his usual caveats. “We’re entering a time of huge uncertainty. There are many very depressing dystopian possibilities, but we don’t actually know anything for sure,” he said. “We—particularly old white males—are used to thinking of ourselves as the boss and in control. We can’t get our head around the fact that these things might be much smarter than us and might decide they don’t need us. And we’ve got to prevent that from happening if we can.” The interviewer pressed Hinton: How exactly do we avoid that fate? Sounding defeated, he admitted that he didn’t know. He wasn’t even sure it was possible. It’s not like we can undo decades of technological progress and relegate AI back to the realm of science fiction. “I don’t have solutions to these problems,” he said. “I wish it was like climate change, where you can say, ‘Stop burning carbon.’ There isn’t a simple recipe like that for AI.”

Before the end of the hour, the interviewer asked Hinton what industries AI might disrupt. “There’s a fairly short answer, which is: all of them.” A few minutes later, he amended his response: plumbing is safe for now. “Particularly plumbing in an old house, because you need to be very inventive and agile and get your fingers into funny places, and they’re not good at that yet.” A throng of endangered IT guys laughed nervously.

That evening, I joined Hinton and half a dozen Info-Tech executives for dinner at a tapas restaurant. He stood at one end of our table, picking at paella and fielding questions. Someone asked what he’d been up to since quitting Google. “Plumbing,” he said without a hint of irony, before launching into a detailed 10-minute monologue, complete with photos, about how he’d fixed a leaky pipe in his upstairs washroom. A little later, trying to steer the conversation back to the topic du jour, someone asked Hinton what he considered the most exciting opportunity presented by AI. Smirking, he joked, “That it will kill us all.”

 

The big question, of course, is: What the hell do we do now? In late October, Hinton proposed a way forward. In an open letter, he and 23 other international experts called on the leading AI labs to commit one-third of their R&D budgets to making sure their systems were safe and ethical. They also advised governments to, among other things, create a registry of large AI systems, require companies to report instances of AI displaying dangerous behaviour and legally protect whistleblowers. It’s too soon to say whether AI labs and legislators will heed these recommendations. But Hinton, at 75 years old, has resigned himself to the fact that he won’t be leading the charge much longer. The dirty work of saving the world will fall to the next generation.


Someone asked Hinton what he considered the most exciting opportunity presented by AI. “That it will kill us all,” he joked

Humanity’s best hope may be Hinton’s former student Ilya Sutskever. This past July, he announced that he would be launching and co-leading a new team at OpenAI called Superalignment—a souped-up approach to alignment, the vein of research dedicated to preventing AI from going rogue and ensuring that it instead serves ethical, human-oriented goals. This is not some side-of-desk project. Sutskever, one of the world’s leading deep-learning experts, is OpenAI’s co-founder and chief scientist. He’s dedicating a fifth of the company’s computing power to solving this problem while his boss, Sam Altman, tours the world, speaking with presidents and prime ministers about the risks of AI.

During my video call with Sutskever in September, he seems, above all, busy. He speaks quickly and keeps looking over his shoulder, as if there are several world-saving coders who need his attention. “There are so many different technical challenges that need to be addressed,” he says. Owing to my aforementioned lack of a computer-science degree, I ask for a layperson’s summary of what he hopes the Superalignment team will achieve. “It’s like you want to imprint something onto the superintelligence with incredible strength, accuracy and longevity”—that something being a desire to serve humans rather than wrest control from them.


Speaking with Sutskever, I can practically hear the doomsday clock ticking. Humans have a tragic track record of chasing scientific and technological progress, no matter the risks those advances may pose (see: the atomic bomb). The Future of Life Institute proposed a six-month moratorium on advanced AI research earlier this year; thousands of people signed the institute’s open letter, but no one halted their work. No corporation or world power is willing to press pause on such a lucrative technology. The company that perfects self-driving cars, for example, will eat Uber. And whichever world power harnesses AI fastest and most effectively will supercharge its economy and military. Take it from Putin, a guy who knows a thing or two about trying to take over, who in 2017 predicted that “the one who becomes the leader in AI will be the ruler of the world.”


Sutskever wants his Superalignment team to match, even exceed, the pace of AI progress, but in the name of safety. He hopes to crack the core technical problems of alignment within four years—before superintelligence arrives—so we can be ready when it does. He is not alone in this mission. There are plenty of other AI safety initiatives in the works, including Anthropic, a San Francisco–based company that employs Roger Grosse, a U of T professor who urged Hinton to speak up about his concerns. “The future is deeply unpredictable,” Sutskever tells me, echoing a familiar Hintonism. Then he says something that, in all my time with Hinton, I’ve never heard: “I feel like success is possible.”

 

Among AI pioneers, it is practically a rite of passage to become convinced that the machines will take over. Seventy years ago, the founding fathers of the field were already prophesying humanity’s downfall. In 1951, Alan Turing predicted that, once machines could think, it wouldn’t take them long to “outstrip our feeble powers.” At some point, he said, “we should have to expect the machines to take control.” Sixteen years later, influential MIT computer scientist Marvin Minsky said the moment was imminent. “Within a generation, I am convinced, few compartments of intellect will remain outside the machine’s realm,” he wrote. Minsky’s prediction didn’t pan out, but that didn’t stop Stephen Hawking from eventually agreeing that we are fast approaching our demise.

Curiously, the AI boom of the 2010s did not inspire much of an uptick in AI doomerism. If anything, it caused apocalyptic thinking to go out of fashion. As computer scientists started developing real-world AI products rather than ivory-tower theories, it became clear how difficult it would be to create an artificial general intelligence, or AGI: a machine able to achieve anything a human can. As a result, doom-and-gloom predictions referred to a time decades down the line. Anyone foolish enough to voice urgent concerns aloud would get laughed out of the lab. In early 2016, for example, a Washington, DC–based think tank called the Information Technology and Innovation Foundation cheekily presented its annual Luddite Award to “alarmists touting an artificial intelligence apocalypse.” A few months later, Google Brain co-founder Andrew Ng, then chief scientist at Chinese tech giant Baidu, compared anxiety over AI-induced extinction to “worrying about overpopulation on Mars.” Max Tegmark, the president of the Future of Life Institute, told me, “People would think you were insane if you started talking about this last year.”

That meant researchers who wanted to focus on alignment often did so at their own peril. When Duvenaud, the U of T prof, started specializing in AI safety two years ago, he feared it would hurt his students’ chances of getting hired. The leading AI companies wanted quick-coding whiz kids, not party-pooping sticklers. He fretted for the future of his own career too. “I was worried about opportunities drying up if I came across as some sort of end-of-the-world street-corner guy,” he says. Duvenaud was heartened to see the tide turn in early 2023, when thousands of AI researchers signed open letters like the one from the Future of Life Institute, which stated that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” But, he says, “they were dismissed pretty consistently, like, ‘These are a bunch of weird nerds and we shouldn’t trust them.’ ”

The AI community, in other words, remains deeply divided on the question of existential risk. An illustrative example of this schism is the difference of opinion between Yoshua Bengio and Yann LeCun, the two experts who won the Turing Award, the so-called Nobel Prize of computing, alongside Hinton in 2018. LeCun, now the chief AI scientist at Meta, has said that there’s no reason AI would develop self-preservation instincts in the way Hinton envisions. “AIs will not have these destructive ‘emotions’ unless we build these emotions into them,” he says. “I don’t see why we would want to do that.” Conversely, Bengio, the founder and director of Montreal AI lab Mila, has been as outspoken as Hinton about the dangers of AI, lobbying the Canadian and US governments for alignment research funding and stricter regulations, such as bans on AIs that pretend to be real people. I ask Hinton what an everyman like me should make of such an impasse. Here are three clearly brilliant men who don’t necessarily agree. Whom should I believe? Playfully, Hinton replies, “I would go for the supermajority.”

If Hinton is right, there are obvious repercussions to siding against him—namely, hastening our demise. But his detractors contend that there are also costs to agreeing with him if he’s wrong: unnecessary panic among policy makers, freezes on AI funding, delays to the life-saving innovations that AI can deliver. Nick Frosst, a co-founder of AI start-up Cohere, tells me that all the end-of-days talk is a dangerous distraction preventing mature discussion about the more immediate pitfalls of AI. “If you think there’s a legitimate risk of AI killing everybody in the next few years,” he says, “it’s really hard to talk about anything else.”


Yet there is plenty to talk about. Even in the best-case scenario, with human extinction off the table, hyper-realistic AI images and videos will almost certainly yield an explosion of disinformation, impeding people’s ability to tell fact from fiction and endangering democracy. Bad actors may use AI for nefarious purposes—fraud, cyberattacks and so on. Battle robots are already in development across the world; the US, for one, hopes to start using AI-enabled machines as soldiers by 2030. Then there are the millions, perhaps billions, of jobs AI will replace; the implausibility of turning all those displaced workers into data scientists and robotics engineers; and the widening of the world’s already immense wealth disparities. “Given this new technology, I would rather hear that people and governments are thinking about how the job market is going to change,” says Frosst. “ ‘What do we need to do to make sure that this is still working for the populace?’ rather than ‘How likely is this to wipe out all people?’ ”

Frosst was Hinton’s first employee at Google Brain Toronto. On most things, he says, the two of them agree. “I like it when smart, empathetic, caring people like Geoff are thinking about the future.” But he thinks that Hinton overestimates how fast AI will progress. When I ask Frosst how he ended up on a more optimistic path than Hinton, he points to his own work at Cohere, which helps businesses implement proprietary versions of large language models like GPT. At its core, he says, the technology is simple. It takes in a sequence of words, does a bunch of math and then outputs a corresponding sequence. It’s not thinking; it’s merely predicting the next word. He believes that today’s large language models will be better than humans at executing many tasks but will ultimately have significant limitations. “They don’t have the potential to go rogue,” he says.
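To make “merely predicting the next word” concrete, here is a deliberately crude sketch in Python, assuming nothing about Cohere’s or OpenAI’s actual systems: a bigram lookup table. Real large language models replace the table with billions of learned parameters, but the input/output contract is the same. Both take the words so far and emit a probability for each possible next word.

```python
# A deliberately crude next-word predictor: count which word follows
# which, then generate text by sampling from those counts. LLMs do the
# same prediction with learned parameters instead of a lookup table.
from collections import Counter, defaultdict
import random

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate . the dog ate .").split()

# Tally, for every word, the words that follow it (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed prev."""
    words, counts = zip(*following[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate word by word: the same loop an LLM runs, at a vastly
# larger scale, every time it answers a prompt.
text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

Whether what happens between input and output in a real model amounts to more than a scaled-up version of this loop is precisely what Hinton disputes.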


Battle robots are already in development around the world. The US hopes to start using AI-enabled machines instead of human soldiers by 2030

When I put that argument to Hinton—that large language models are a glorified form of autocomplete, parroting bits of text they’ve scavenged from across the internet—he shuts it down without pause. “That’s as stupid as saying you’re composed of bits of animals you eat,” he says. When we consume meat, he continues, we break it down into tiny molecules and synthesize its proteins, which then become a part of our body. “It’s not like I could ask, ‘Which bit of you is the cow?’ ” Similarly, he argues, when you train a large language model, it becomes more than the sum of the data it sees. When someone gives it a prompt, it doesn’t spit out text it had stored somewhere. It analyzes the prompt, invents features to know what those words mean and then creates something new in response—it’s thinking and understanding. Sometimes, he says, ChatGPT will tell you that it misunderstood your prompt. “So what is it doing when it doesn’t misunderstand?”

 

This past spring, Hinton received an email from a mother in Oxfordshire. Her 17-year-old daughter hadn’t slept for four days because she’d read everything Hinton had said and was now terrified that AI would end humanity. “Was this your intention—to strike fear into the hearts of teenagers to the extent that they’re unable to function?” the mom asked. “What would you say to her if she was in front of you?”

Hinton responded with as much optimism as he could muster. “The future is very uncertain,” he wrote. He pointed out that some of the world’s brightest researchers were studying the problem and may yet come up with a way to keep a superintelligent AI in check. In fact, he added, he’d spoken up to convince people that they should dedicate resources to avoiding catastrophe. “I think it would be irresponsible not to speak out, given what I believe. But I do understand that speaking out also has a lot of negative effects.”

During my visit to Hinton’s home, I ask him how he’s coping. He has dealt with depression throughout his life, and publicly and repeatedly foretelling Armageddon isn’t exactly a pick-me-up. Yet he seems almost chipper, unafraid to mine the gallows humour of his honest conviction that the end could be around the corner. “I don’t know what to do with it,” he tells me. “I haven’t absorbed it emotionally.” He reminds me of the protagonists of the movie Don’t Look Up, astronomers who set out to warn the world about a comet that will destroy the earth—only to be refuted by rivals and dismissed by a vapid president. Retired from both industry and academia, Hinton seems to derive a sense of purpose from his new task, however Sisyphean it may be.

In the absence of a grand species-saving solution, Hinton has taken to finding satisfaction in the problems he can solve. When we finish talking about AI, he guides me to his basement, where he has a workshop filled with chisels and clamps. Hinton once considered becoming a carpenter, and it’s easy to see why. He shows me a series of shelves he made, taking deep satisfaction in the way he used the wood—no gratuitous cuts made, no nails required, no inch of material left unused. Before I leave, I ask him if he has any plans for the rest of the day. “Yes,” he responds with unbridled glee. “I’m going to stain the deck.”


This story appears in the December 2023 issue of Toronto Life magazine.


