Geoffrey Hinton, widely known as the "Godfather of AI," recently appeared on the DOAC (The Diary of a CEO) podcast to share his journey in pioneering artificial intelligence and his growing concerns about its risks.
From championing neural networks when few believed in them to sounding alarms about AI’s existential threats, Hinton’s insights are both a reflection on AI’s past and a cautionary tale for its future. Here’s a deep dive into his story, his beliefs, and the urgent warnings he’s issuing today.
Why the "Godfather of AI"?
When asked why he’s called the Godfather of AI, Hinton responded with a mix of humility and pride: “Yes, they do [call me that].” He explained that the title stems from his decades-long advocacy for artificial neural networks, a concept inspired by the human brain.
In the early days of AI, from the 1950s onward, the field was dominated by two competing ideas: one rooted in logic-based reasoning and symbolic expressions, and the other—Hinton’s approach—modeled on the brain’s network of neurons.
“There weren’t that many people who believed that we could make neural networks work,” Hinton recalled.
For 50 years, he pushed this brain-inspired approach despite skepticism from the academic mainstream. His persistence paid off, attracting brilliant students who later played instrumental roles in creating organizations like OpenAI.
Hinton’s belief in neural networks wasn’t just a hunch—it was rooted in the conviction that simulating brain-like learning could unlock intelligence in machines.
Why Model AI on the Brain?
Hinton's faith in neural networks wasn't his alone. He noted that computing pioneers like John von Neumann and Alan Turing also believed in this approach.
“If either of those had lived, I think AI would have had a very different history,” he said, suggesting that neural networks might have gained acceptance much sooner. But why did Hinton think modeling AI on the brain was superior to logic-based systems?
The answer lies in the brain’s ability to learn and adapt.
“The brain makes us intelligent, so simulate a network of brain cells on a computer and try and figure out how you would learn strengths of connections between brain cells,” Hinton explained.
This approach allowed machines to learn complex tasks—like recognizing objects in images, understanding speech, or even reasoning—by adjusting connection strengths, much like the brain does. Unlike rigid logic-based systems, neural networks offered flexibility and scalability, paving the way for modern AI breakthroughs.
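To make that idea concrete, here is a minimal sketch of the learning principle Hinton is describing. This is my illustration, not anything from the podcast: a single simulated neuron whose connection strengths are adjusted from examples using the classic perceptron rule, rather than being programmed with logical rules. NumPy and the toy task (learning logical AND) are assumptions chosen for the demo.

```python
# A minimal sketch of brain-inspired learning: a "neuron" whose
# connection strengths (weights) are adjusted from examples,
# not hand-coded as logical rules. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and the target output (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

weights = rng.normal(scale=0.1, size=2)  # connection strengths
bias = 0.0
learning_rate = 0.1

for epoch in range(50):
    for inputs, target in zip(X, y):
        # The neuron "fires" if its weighted input crosses a threshold.
        output = 1.0 if inputs @ weights + bias > 0 else 0.0
        # Perceptron rule: nudge each connection strength in the
        # direction that would have reduced the error on this example.
        error = target - output
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print("learned connection strengths:", weights, "bias:", bias)
for inputs in X:
    print(inputs, "->", 1.0 if inputs @ weights + bias > 0 else 0.0)
```

Modern networks swap the threshold neuron and perceptron rule for differentiable units trained by gradient descent, but the core move is the same one Hinton describes: the knowledge lives in the connection strengths, and learning means adjusting them.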
A New Mission: Warning About AI’s Dangers
While Hinton’s early career was about proving neural networks’ potential, his current mission is far more sobering. “My main mission now is to warn people how dangerous AI could be,” he declared. This shift surprised even himself: “I was quite slow to understand some of the risks.”
Hinton categorizes AI risks into two types: misuse by humans and existential threats from superintelligent AI.
Short-term risks, like AI-powered autonomous lethal weapons that "go around deciding by themselves who to kill," were always obvious to him. But the idea that AI could surpass human intelligence and render humanity "irrelevant" only dawned on him recently.
“I only recognized it a few years ago that that was a real risk that might be coming quite soon,” he admitted.
Why didn’t he foresee this earlier? “Neural networks 20, 30 years ago were very primitive in what they could do,” he explained. They were nowhere near human-level performance in vision or language tasks, so the notion of AI outsmarting humans seemed far-fetched.
That changed for Hinton when he realized that digital intelligences have a key advantage over biological ones: they can share and process information far more efficiently.
“The kinds of digital intelligences we’re making have something that makes them far superior to the kind of biological intelligence we have,” he said, describing how AI can optimize connection strengths to learn at unprecedented scales.
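One way to see why this matters: a digital model's knowledge is just a list of numbers, so identical copies can pool what each has learned by exchanging weight updates directly. The toy sketch below is my own construction, not something from the podcast, and the simple averaging scheme stands in for how distributed training systems actually merge updates. It makes the point in a few lines of NumPy.

```python
# Toy illustration of why digital intelligences can share knowledge
# efficiently: copies of the same model can merge what they learned
# by exchanging raw weight updates. Biological brains cannot do this.
import numpy as np

rng = np.random.default_rng(1)

# Two identical "copies" of a model, starting from the same weights.
shared_start = rng.normal(size=4)
copy_a = shared_start.copy()
copy_b = shared_start.copy()

# Each copy trains on different experience and accumulates a
# different update (stand-ins for gradients from separate datasets).
update_a = rng.normal(scale=0.01, size=4)
update_b = rng.normal(scale=0.01, size=4)
copy_a += update_a
copy_b += update_b

# Sharing knowledge is just arithmetic: average the weights, and every
# copy instantly incorporates what both learned. Humans, by contrast,
# transfer knowledge slowly, a few words at a time.
merged = (copy_a + copy_b) / 2
copy_a[:] = merged
copy_b[:] = merged
print("merged weights:", merged)
```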
For the general public, the turning point was ChatGPT’s release, which showcased AI’s leap toward human-like capabilities. For Hinton, it was a deeper realization about AI’s potential to outpace human intelligence.
The Big Concerns: AI Safety and Joblessness
Hinton outlined two major concerns about AI’s future: safety risks and economic disruption. On safety, he emphasized the existential threat of superintelligent AI.
“There’s risks that come from AI getting super smart and deciding it doesn’t need us,” he warned. While some dismiss this as unlikely, Hinton believes it’s a real possibility, though its probability is hard to pin down.
“I often say 10 to 20% chance they’ll wipe us out, but that’s just gut-based,” he said, rejecting both extreme optimism (that humans will always control AI) and extreme pessimism (that AI will inevitably destroy us).
Comparing AI to nuclear bombs, Hinton noted a key difference: “The atomic bomb was really only good for one thing… With AI, it’s good for many, many things.” AI’s versatility—in healthcare, education, and industry—makes halting its development unrealistic. “We’re not going to stop it because it’s too good for too many things,” he said, adding that military applications, like battle robots, further complicate regulation.
The second concern is joblessness. Hinton argues that AI’s impact on jobs will differ from past technological revolutions. “I think for mundane intellectual labor, AI is just going to replace everybody,” he predicted.
Unlike automatic teller machines, which shifted bank tellers to more interesting tasks, AI could drastically reduce the need for human workers in many fields. He shared an example of his niece, who now answers complaint letters five times faster using a chatbot, meaning her employer needs “five times fewer of her.”
"AI won't take your job; a human using AI will take your job," Hinton said, repeating a popular refrain, but he clarified that this often means fewer jobs overall. While some sectors, like healthcare, could absorb increased efficiency by providing more services, most jobs won't follow this pattern.
“This revolution in AI replaces intelligence… So what remains?” he asked, suggesting that even creativity might not be safe if superintelligence surpasses humans in every domain.
A Cautionary Tale: The Good and Bad Scenarios
Hinton painted two possible futures for a world with superintelligent AI. In the good scenario, AI acts like a brilliant executive assistant to a less capable human CEO, making everything work smoothly while humans retain control.
“Everything’s great,” he said. In the bad scenario, the AI assistant questions, “Why do we need him?” and humans lose relevance.
Hinton believes superintelligence might arrive in “20 years or even less,” though predictions are uncertain.
His hope lies in humanity's ingenuity: "If enough smart people do enough research with enough resources, we'll figure out a way to build them so they'll never want to harm us." But the cautionary tale is clear: the convenience AI creates could backfire if we don't prioritize safety.