Geoffrey Hinton spent 30 years hammering away at an idea most other scientists dismissed as nonsense. Then, one day in 2012, he was proven right. Canada’s most influential thinker in the field of artificial intelligence is far too classy to say I told you so
For more than 30 years, Geoffrey Hinton hovered at the edges of artificial intelligence research, an outsider clinging to a simple proposition: that computers could think like humans do—using intuition rather than rules. The idea had taken root in Hinton as a teenager when a friend described how a hologram works: innumerable beams of light bouncing off an object are recorded, and then those many representations are scattered over a huge database. Hinton, who comes from a somewhat eccentric, generations-deep family of overachieving scientists, immediately understood that the human brain worked like that, too—information in our brains is spread across a vast network of cells, linked by an endless map of neurons, firing and connecting and transmitting along a billion paths. He wondered: could a computer behave the same way?
The answer, according to the academic mainstream, was a deafening no. Computers learned best by rules and logic, they said. And besides, Hinton’s notion, called neural networks—which later became the groundwork for “deep learning” or “machine learning”—had already been disproven. In the late ’50s, a Cornell scientist named Frank Rosenblatt had proposed the world’s first neural network machine. It was called the Perceptron, and it had a simple objective—to recognize images. The goal was to show it a picture of an apple, and it would, at least in theory, spit out “apple.” The Perceptron ran on an IBM mainframe, and it was ugly. A riot of criss-crossing silver wires, it looked like someone had glued the guts of a furnace filter to a fridge door. Still, the device sparked some serious sci-fi hyperbole. In 1958, the New York Times published a prediction that it would be the first device to think like the human brain. “[The Perceptron] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
The Perceptron didn’t end up walking or talking—it could barely tell left from right—and became a joke. In most academic circles, neural networks were written off as a fringe pursuit. Nevertheless, Hinton was undeterred. “The brain has got to work somehow and it sure as hell doesn’t work by someone writing programs and sticking them in there,” Hinton says. “We aren’t programmed. We have common sense.” The neural networks idea wasn’t faulty, he believed; the main problem was power. Computers back then couldn’t wade through the millions of images needed to make connections and find meaning. The sample size was just too small.
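For the technically curious: the learning rule behind Rosenblatt's machine is simple enough to sketch in a few lines of modern Python. This is a toy illustration of the perceptron algorithm, not the original hardware — the dataset (logical AND) and all variable names are invented for the example.

```python
# A minimal perceptron: a weighted sum pushed through a hard threshold,
# with weights nudged toward the right answer after every mistake.

def predict(weights, bias, x):
    # Weighted sum of inputs, then a yes/no threshold.
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def train(data, epochs=20, lr=1.0):
    weights = [0.0] * len(data[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)
            # If the guess was wrong, shift the weights toward the target.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy task: learn logical AND from four labelled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

A single perceptron can learn this kind of simple, linearly separable rule — and famously little else, which is why "it could barely tell left from right" became the standard knock against it.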
Hinton pursued a PhD at the University of Edinburgh in 1972, with neural networks as his focus. On a weekly basis, his advisor would tell him he was wasting his time. Hinton pressed forward anyway. Neural networks did have some minor success—they later proved useful in detecting credit fraud—and after graduation, he was able to land a job at Carnegie Mellon University in Pittsburgh. Hinton, a proud socialist, grew troubled by U.S. foreign policy under Reagan, especially interference in Central America. He and his wife, Ros, a molecular biologist and former professor at University College London, were planning to adopt a boy and a girl from South America, and they didn’t much like the idea of raising them in a country engaged in a bloody Latin American conflict. Plus, most AI research in the U.S. was funded by the Department of Defense, which didn’t sit well with Hinton either, and so he accepted an offer from the Canadian Institute for Advanced Research. CIFAR, which encourages collaboration around the kind of unorthodox scientific ideas that might not find backers elsewhere, offered Hinton academic freedom and a decent salary. In 1987, he and Ros moved north, and settled in the Annex. Hinton accepted a CIFAR-related position at the University of Toronto in computer science—although he’d never taken a computer science course—and started the Learning in Machines and Brains program at CIFAR. He set up a small office in the Sandford Fleming building at the St. George campus and quietly got to work. Over time, a handful of fellow deep learning believers gravitated to him. Ilya Sutskever—now a co-founder and director at OpenAI, Elon Musk’s $1-billion AI non-profit—remembers being part of Hinton’s lab in the early 2000s with the kind of nostalgic fondness usually reserved for summer camp. He describes 10 or so students researching during the “AI winter,” when jobs and funding in AI research were scarce, and offers from industry scarcer.
“We were outsiders, but we also felt like we had a rare insight, like we were special,” says Sutskever.
Around 2009, when computers finally had the strength to mine huge pools of data, super-powered neural networks began outperforming logic-based AI in speech and image recognition. Industry noticed, and the big tech companies—Microsoft, Facebook, Google—started investing. In 2012, Google X (now just X), the company’s top-secret lab, announced that it had set up a neural network of 16,000 computer processors and sicced it on YouTube. Engineers at Google Brain, the company’s deep learning AI branch, led by the division’s senior fellow, Jeff Dean, fed millions of random, unlabelled video frames from YouTube through the new supercomputer and programmed it to make sense of what it saw. YouTube being a repository of cat videos above all else, it recognized—among other things—cats. This was an exciting moment in AI. “We never told it during the training, ‘This is a cat,’ ” Dean said at the time. “It basically invented the concept of a cat.”
The breakthrough propelled Hinton and his acolytes to the head of the AI movement. Dean recruited Hinton to join Google part-time in 2013. “We were clearly outside the establishment, pushing to prove the conventional wisdom wrong. It’s funny: now we’ve become the establishment,” says Sutskever. Hinton, a former outcast, was suddenly the industry’s most important figure, thrust from obscurity to stardom. The gangly, septuagenarian Mr. Bean–ish Brit finds it all dryly amusing.
“The reason I have had a big influence was because I was one of the very few who believed in this approach, and all the students who spontaneously believed in that approach too came and worked with me. I got to pick from the very best people who had good judgment,” he says, smiling. “Good judgment means they agreed with me.”
In his U of T office overlooking the central artery of the downtown campus, Hinton is at once walking, eating a sandwich and scribbling on a whiteboard in an attempt to fill in my cavernous knowledge gaps about neural networks. If one had to apply a gender to dogs and another to cats, he says, pausing to sketch a cat (snowman shaped, small ears), one would, in our culture, likely construe dogs as male and cats as female. There’s no logic in that delineation (and lots of sexism), but, says Hinton, we understand through a thousand associations and analogies built over time that dogs are aggressive, hairy, lumpy; cats are wily, smarter, domestic. The former features are male, the latter female. None of that could be proved by logic, but it exists in representations tucked away in our brains. There’s something appealingly poetic in the idea that a machine can intuit these same representations: knowledge springs from lived life, filled with accrued meaning and experience, the mysterious substance of existence. Such is the beauty of the neural net. “It’s much closer to Freud, the idea that there’s this thin film of consciousness and deliberate reasoning and all this seething stuff underneath. The seething stuff underneath isn’t the conscious deliberate reasoning, it’s something else—something that works by analogies,” Hinton says.
He reiterated this fundamental concept this fall at the Google Go North tech conference in Toronto, during a photo op with Justin Trudeau; Navdeep Bains, the minister of innovation; Eric Schmidt, then–executive chairman of Alphabet Inc., Google’s parent company; and other notables. Everyone was seated at a table like eager students, except Hinton, who stood, looming over his high-powered audience. He never sits down, due to a bulging disc in his spine, dislodged during an attempt at age 19 to move a heavy heater for his mother, and a genetic deficiency when it comes to metabolizing calcium that portends osteoporosis. The problem became worse over time. Eventually, sitting became agony, and so, in 2005, he stopped sitting almost entirely—another problem solved. Of course, this solution is less than ideal for anyone, much less a renowned professor who’s asked to speak or appear at countless conferences around the globe every year. Hinton can tell you how to get from Toronto to, say, Helsinki without sitting down. It takes 11 days.
“You lie on the back seat of a bus to Buffalo. You get the Chicago to New York sleeper in Buffalo. You get the Queen Mary to Southampton. You stand up to London. You get the Eurostar to Paris. You stand up to Paris. You then get the night sleeper to Berlin where you can lie down. You then get a little old train to Rostock, which is on the coast and used to be in East Germany and you can tell. And then you get the ferry to Helsinki.” Hinton often speaks this way: chopping data into comprehensible bits, eyes focused in the distance, a small smile on his small lips.
At the Go North event, Hinton delivered one of his clipped, clear explanations of a breakthrough made with two Google engineers: capsule networks. Neural networks rely on huge pools of data to learn, and they take a long time to recognize that an object seen from a different angle is the same object. Capsules are artificial neurons organized into layers that track the relationship between various parts of an object—the little space from a person’s nose to their mouth is the example Hinton gives—and make recognition faster and more accurate.
Capsule networks have been greeted giddily in the tech world. One NYU professor who works on image recognition gushed in Wired magazine: “Everyone has been waiting for it and looking for the next great leap from Geoff.”
That this breakthrough happened here in Toronto, under Hinton’s watch, is a big deal for the city. AI specialists at every major tech company are scrambling to make the next transformative discovery in deep learning. Because Hinton’s approach to AI was so unpopular for so long, many of those experts were trained at Hinton’s side: it’s less “the student becomes the teacher” than “the teacher becomes the rival.” Dozens of Hinton’s former students have risen to prominence at Facebook, Google, Apple and Uber, and in academia, spreading the neural net gospel, forming their own kind of living, pinging network of Hinton disciples. They remember him as a popular prof, known for working alongside his students rather than farming out tasks, as well as for breaking the tension of late-night research by juggling grapes with his mouth—lean back, blow one into the air, then another, catch the first, repeat. Toronto has experienced a subsequent brain drain over the past decade, with local start-ups being swallowed by Silicon Valley, and U of T’s deep learning community has faced a retention problem. Typical AI specialists, even newbies and recent grads, who take Silicon Valley gigs can reportedly be paid from $300,000 to $500,000 (U.S.) a year; stock options can move the amount past the million mark. Toronto has to figure out how to leverage Hinton’s presence by enticing his elite army of deep learning experts to stay—or return to—where they started. Thus the creation of the Vector Institute, a multimillion-dollar lab that will bring together the leading minds in AI, lured by the promise of working with Hinton—he’s the chief scientific advisor. Hinton’s presence in concert with Vector’s shiny-newness sounds foolproof, but Canada has lost the lead before. After the painful disintegration of the once-mighty Canadian tech companies Nortel and BlackBerry, Vector offers the possibility of redemption.
Hinton has said that when he was growing up, his mother gave him two choices: “Be an academic or be a failure.” His family tree is branch-breakingly weighted with scientists. His great-great-grandfather was George Boole, founder of Boolean logic, familiar to anyone who has done a “Boolean search.” One of George Boole’s sons-in-law was Charles Howard Hinton, Geoffrey’s great-grandfather, a mathematician and sci-fi writer who coined the concept of a “tesseract” (a four-dimensional object we can see in the 3-D world as a cube—well known to all readers of the classic children’s novel A Wrinkle in Time), and who ended up in the U.S. after being run out of Victorian England for bigamy. His son, Geoffrey’s grandfather, settled in Mexico—so there is a Mexican Hinton branch. Geoffrey Hinton’s middle name is Everest—as in the geographer Everest, his great-great-grandmother’s uncle, namesake of the mountain—and his father’s cousin was Joan Hinton, a nuclear physicist who helped out on the Manhattan Project and lived in China during the Cultural Revolution. Her father invented the jungle gym.
Geoff Hinton was born in Wimbledon in 1947 to Howard Hinton, an entomologist, and a schoolteacher mother, Margaret Clark. The childhood Hinton describes is a mash-up of Lemony Snicket, Huckleberry Finn and The Royal Tenenbaums, with microscopes. He and his three siblings grew up in a large house in Bristol filled with animals. There was a mongoose—“it rather took up a lot of space”—and vipers kept in a pit in the garage. Young Geoff Hinton once waved a handkerchief over the pit to get them to strike it, but one came at his hand and missed by an inch, nearly killing him. He also took care of a dozen Chinese turtles that his father acquired on a lecture tour of China in 1961. Though China was essentially closed to tourists, Pierre Trudeau was visiting, too, and he and the senior Hinton shared a hotel, as well as a bathroom. According to family lore, the senior Hinton kept the turtles in the tub, at least once thwarting Trudeau’s plans for a bath.
Hinton recalls the moment his curiosity was born. He was four years old, travelling with his mother on a bus in the countryside. The bus had a seat that sloped backward, toward the frame. Geoff pulled a penny out of his pocket and put it on the seat, but instead of sliding toward the back, it slid toward the front, seemingly moving upward, against gravity. This incomprehensible penny prodded Hinton’s imagination for 10 years. When he was a teenager, he figured out that the penny’s movement had to do with the velvet seat cover and the vibrations of the bus against the slanted fibres—a hugely satisfying resolution. “Some people are quite capable of seeing things they don’t understand and being okay with it. I wasn’t okay that something had violated my model of the world. I really am not okay with things that do that,” says Hinton.
Hinton’s mother was loving, but his father was intimidating, both physically (he could do a chin-up with one hand, a feat that awed Geoffrey, who was a small, thin child) and intellectually. “He liked people thinking clearly, and if you said anything that was kind of rubbish, he would call it rubbish. He wasn’t a touchy-feely kind of thinker. He wasn’t abusive, but he was extremely tough.”
Hinton attended a private school called Clifton College—“not top rate,” he says—and he and his friend Inman Harvey, now a computer scientist and AI visiting research fellow at the University of Sussex, used to hitchhike, snickering, around to nearby villages like Piddlehinton. Hinton recalls the family talking socialism around the kitchen table and stuffing envelopes for the Labour party at election time.
“Geoff’s father was perfectly nice to me, but he was a pushy father, quite competitive,” says Harvey. “Geoff has inherited a bit of a competitive streak. His father was a fellow of the Royal Society, and then Geoff was made a fellow of the Royal Society. He probably felt the need to satisfy his father’s expectations.”
Hinton’s youth collided with the freewheeling ’60s and ’70s though, and he took a circuitous route to live up to the Hinton family birthright. In 1966, the summer before university, Hinton and Harvey backpacked through the U.S. and Mexico. The teens were so broke that they would sometimes take overnight buses to avoid paying for hotels. In a small fishing village in southern Mexico, they left a duffel bag on the beach while swimming in the tall waves, and their money and passports were stolen. Every afternoon, the pair walked the same seven kilometres to the closest village, past watchful vultures, to see if their replacement traveller’s cheques had arrived at the bank. They figured out how to survive for a week on three dollars and attempted to make banana juice by leaving banana skins in a can in the heat—a failed experiment.
In the ’70s, after completing a degree in experimental psychology, Hinton was doing odd jobs and carpentry. He embarked on a PhD in artificial intelligence in 1972, but was feeling depressed and ambivalent about his studies. One weekend, he attended a seminar, a sort of EST-y, self-actualization therapy session. He hated it. There were eight people, opening up and exploring their wants and needs, hour upon hour. On the last day, each attendee had to announce what they really, really wanted in life. People were saying they really wanted to be loved. “Primal and uninhibited things,” Hinton recalls. He was freezing up and didn’t know what to say. As they went around the group, shouting their secret desires, Hinton surprised himself: “What I really want is a PhD!” he bellowed. The declaration reignited his passion for neural networks research.
Asked how it felt growing up in the shadow of this remarkable family history, Hinton says: “Pressure. It felt like pressure.” He has struggled with depression his whole life, he says, and work is his way of loosening the valve. When deep learning panned out, the depression lifted slightly. “For a long time,” he says, “I felt I wasn’t—well, I finally made it, and it’s a huge relief.”
While toiling away in the face of academic indifference, Hinton hit a more serious, private hurdle in the early ’90s when he became a single father. Not long after he and his first wife adopted their babies, Ros died of ovarian cancer. Used to living in his head and at the lab, Hinton was thrown into the corporeal world of raising two small children. His son has ADHD and other learning difficulties, and even with a nanny, Hinton had to be home at 6 p.m., managing support for his son and rushing to sales at the Gap for socks.
“I cannot imagine how a woman with children can have an academic career. I’m used to being able to spend my time just thinking about ideas. Teaching is interesting but a bit of distraction, and the rest of life—I don’t have time for it,” Hinton says. “But with small kids, it’s just not on.” By “it” Hinton presumably means thinking—or life. Still, work provided safe harbour from the realities at home. “I sometimes think I use the things to do with numbers and math as a defence against the emotional side of me,” Hinton says. Parenting has forced a change. “It used to be when I went into the supermarket and the cashier couldn’t add up two numbers I’d think: ‘For god’s sake why can’t they hire a cashier who can do arithmetic?’ And now I think: ‘It’s really nice the supermarket would hire this person.’ ” He adds: “I didn’t want to be a better person, it just happened. It wasn’t one of my goals.”
In 1997, he remarried, to a British art historian, Jackie. Three years ago, she was diagnosed with pancreatic cancer, and now Hinton is, unfathomably, on the edge of losing a second wife.
In his life, Hinton has spent a lot of time in hospitals. He annoys staff by peppering them with questions. He knows first-hand the patient’s frustrations of waiting for results and receiving vague information. But unlike most people, he also knows that there will be, very soon, technology that can collapse a one-week wait for a test result to one day.
For a restrained Brit who usually leaves the AI proselytizing to others, Hinton is effusive about the potential of deep learning to revolutionize health care; the topic lights him up in a way that flying cars don’t. “I see a lot of the inefficiencies in how medical professionals use data. There’s more information in a patient’s history than gets used. I see the fact that doctors really can’t read CT scans very well. If you get two radiologists reading the same scan, they get two different readings.”
On three separate occasions, medical staff told his wife she had secondary tumours based on CT scan readings, and they were wrong each time. Hinton believes that AI will eventually put radiologists out of work—or at least eliminate the image-reading part of the job. Recognition is the heart of AI, and also of successful diagnosis and treatment. “Ultimately, AI engineers will figure out how to train your immune system to attack cancer cells,” Hinton says.
One of Vector’s first projects, initiated by Hinton, will be connecting neural networks to the huge pools of data available at Toronto hospitals. When Peter Munk recently donated $100 million to his eponymous cardiac-care centre, it was earmarked to turn the hospital into a world leader in digital cardiovascular health, and Vector will get some of those funds. By accessing the massive data sets—essentially, patient archives—of an institute like the Munk Centre, AI tech could be used for a multitude of breakthroughs, including remotely monitoring a patient’s heartbeat and helping doctors pinpoint the ideal moment for discharge. The Toronto start-up Deep Genomics, one of Vector’s partners, is developing AI that will be able to read DNA, which will help detect disease a generation early and determine the best treatment. Deep Genomics’ founder, Brendan Frey, was a student under Hinton.
After decades of sluggish pace, deep learning is moving fast, and Hinton seems to be caught in a Lorenzo’s Oil bind, pushing science forward urgently, attempting to outrun the clock ticking on the life of a loved one. But pancreatic cancer is brutal and hard to diagnose in its early stages. “It may be too late for her, I’m afraid,” says Hinton, in his measured way.
Yoshua Bengio is a fellow deep learning pioneer based at the University of Montreal, one member of what’s been tagged in tech circles as “the Canadian AI mafia,” along with Hinton and Facebook’s Yann LeCun. For decades, whenever Bengio has had work to do in Toronto, he has stayed at Hinton’s Annex house, taking long walks with him (Hinton walks everywhere, because his back doesn’t hurt when he’s upright, and vehicles require sitting). He’s been watching Hinton’s rise to tech celebrity status with some wariness for his friend. “He’s not a god. He’s fallible. He’s just a human doing his human thing,” says Bengio. “Sometimes he can see things with dark glasses. His personal life has not been easy for him. He has his darker times.”
In September, Hinton and his wife made it to their Muskoka cottage for a couple of days. It was beautiful at that time of year. “She’s both extremely brave and extremely sensible, so she just thinks she’s getting extra time, which she’s determined to make the best of,” he says. Then he asks if I’ll do him a favour. “I would really like it if you would include in the story the idea that I’ve been able to continue doing my work for the past two and a half years because my wife has had such a positive attitude about her cancer,” he says calmly. “Thank you very much.”
The Vector Institute, Toronto’s answer to the AI brain drain, has a new-car smell, a name befitting a supervillain’s lair and a first-day-of-school vibe. Canada’s newest research institute for artificial intelligence, located on the seventh floor in the MaRS complex at College and University, opened its doors late last fall. Its space-age glass walls face the Romanesque solemnity of Queen’s Park and the University of Toronto, both of which are Vector partners. With more than $100 million in combined provincial and federal funding, and $80 million from more than 30 private partners, including the big Canadian banks, Air Canada, Telus and Google, Vector is a public-private hybrid—mixing academia, public institutions and industry. The 20 scientists who have so far been hired are already pursuing technological answers to some of the world’s biggest problems: how can AI be used to diagnose cancer in children and detect dementia in speech? How can we build machines to help humans see as well as animals or compose beautiful music, or use quantum computing to speed up the analyzing of massive amounts of data humans are generating daily? Raquel Urtasun, one of Vector’s key hires, will divide her time between Vector and Uber, where she’s developing self-driving cars.
Today’s frenzy around AI isn’t just about money, but also about the rapid pace of AI integration into everyday life. The distance between a flip phone and an iPhone X with face recognition was less than 10 years, and many prominent scientists worry that the technology is sprinting ahead of our ability to manage it. Stephen Hawking, Elon Musk and Bill Gates have all warned against the dangers of unfettered AI. “I fear that AI may replace humans altogether,” Hawking said recently. Hinton is aware of the ethical implications: he signed a petition to the UN calling for a ban on lethal autonomous weapons—otherwise known as killer robots—and refused a position on a board connected to the Communications Security Establishment because of concerns about the potential security abuses of AI. He believes the government needs to step in and create regulations that prevent the military from exploiting the technology he’s spent his life perfecting—and specifically, he says, from developing robots that kill people.
For the most part, though, Hinton is sanguine about AI anxiety. “I think it’s going to make life a lot easier. The potential effects people talk about have nothing to do with the technology itself but have to do with how society is organized. Being a socialist, I feel that when the technology comes along that increases productivity, everyone should share in those gains.”
Last summer, Hinton and I had lunch in the Google cafeteria downtown. The space has the daycare aesthetic of most digital companies, with bright colours, amoeba couches and an array of healthy lunch options being eaten by a lot of people under 30. On the patios are a mini-putt course and a pollinator beehive. An espresso machine whirs loudly. It’s hard to imagine this is where the machine invasion might start, and yet….
“The apocalypse scenario where computers take over—that’s not something that could happen for a very long time,” says Hinton, standing and eating his quinoa and chicken. “We’re a long, long way away from anything like that. It’s fine for philosophers to think about, but I’m not particularly interested in that issue because it’s not something I’m going to have to deal with in my lifetime.” Ever deadpan, he makes it hard to tell whether he’s joking.
But what about the ways in which this dependence on machines changes us? I tell him that whenever my phone prompts me with a suggested response (“Sounds good!” “See you there!”), I feel like I’m losing agency. I become mechanized myself. Pop culture has been channelling this exact apprehension since 2001: A Space Odyssey. In entertainment, machine progress is braided to a personal loneliness, a loss. It’s almost as if, by the machine becoming more human, we become less human.
Hinton listens and looks at me not unkindly, but with a trace of incredulity. “Do you feel less human when you use a pocket calculator?” he asks. Around him, the Google millennials eat salad and drink their coffee, their key cards swinging from their hips. Almost all of them are on their phones, or holding their phones. “We’re machines,” says Hinton. “We’re just produced biologically. Most people doing AI don’t have doubt that we’re machines. We’re just extremely fancy machines. And I shouldn’t say just. We’re special, wonderful machines.”
This story originally appeared in Toronto Life magazine.