
Key Messages at a Glance

  • AI is a digital intelligence that surpasses biological intelligence: it can share information billions of times faster than humans, and it is effectively immortal.
  • He warns of a 10-20% probability that AI could destroy humanity, while honestly admitting that no one knows the real figure.
  • Cyber attacks increased 12,200% between 2023-2024. AI is making phishing attacks and new cyber threats much easier.
  • AI can now design biological viruses. Small groups could potentially create deadly viruses with millions of dollars.
  • Dangers of autonomous lethal weapons: Will significantly lower the cost and friction of large nations invading small ones.
  • Echo chamber effects are intensifying. Algorithms are dividing society by showing extreme content.
  • Mass unemployment is imminent. His advice is to “become a plumber,” since physical labor may be the only safe work for now.
  • We can’t stop AI development. Too many benefits exist, and competition between nations is fierce.

Why Is He Called the “AI Godfather”?

Geoffrey Hinton is called the “AI Godfather” because he stuck with the neural network approach for 50 years.

  • Since the 1950s, there have been two approaches to AI:
    • Logic-based: Reasoning through symbolic representation and rules
    • Brain-based: Learning by mimicking brain cell networks
  • While most chose the logic-based approach, Hinton believed the brain-based approach was correct.
  • As a result of maintaining a minority opinion for 50 years, the neural network technology that forms the foundation of current AI was born.
  • Von Neumann and Turing also believed in neural networks; had they not died young, AI history might have unfolded very differently.
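As a rough illustration of the brain-based approach Hinton championed (my own sketch, not from the interview), a single artificial neuron can learn the logical OR function purely by adjusting its connection strengths from examples, with no hand-written symbolic rules:

```python
# Illustrative sketch: the classic perceptron learning rule.
# Instead of encoding logic symbolically, the neuron adjusts its
# connection strengths (weights) whenever its output is wrong.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection strengths
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # strengthen or weaken connections in proportion to the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in or_data]
print(preds)  # the neuron has learned OR: [0, 1, 1, 1]
```

The logic-based camp would instead write the OR rule down directly; the neural approach lets the behavior emerge from data, which is what eventually scaled into modern deep learning.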

Why He Started Warning About AI Safety

What Triggered the Warnings

  • His awareness of AI’s dangers grew gradually:
    • He knew obvious risks like autonomous lethal weapons early on
    • But only realized a few years ago that AI could become smarter than humans
  • The decisive moment came when Google’s PaLM system could explain why a joke was funny.
  • He realized digital intelligence is superior to biological intelligence.

Two Types of AI Risks

  • Human misuse: people using AI for harmful purposes (cyber attacks, biological weapons, election manipulation)
  • AI’s own risk: AI becoming superintelligent and deciding humans are unnecessary (an existential threat)

Specific AI Risks

1. Explosive Increase in Cyber Attacks

  • Increased 12,200% between 2023 and 2024 (roughly a 123-fold rise).
  • AI can make phishing attacks much more sophisticated:
    • Voice cloning
    • Image generation
    • Personalized messages

Hinton’s Personal Response:

  • Spread his assets across three Canadian banks
  • Backed up data on external hard drives
  • Worried about banks collapsing due to cyber attacks

2. Biological Weapon Development

  • With a few million dollars, a small group could design deadly viruses
  • AI assistance makes this possible even for people with little molecular-biology knowledge
  • Government-level biological weapons programs will also become much easier

3. Election Manipulation and Political Advertising

  • Personalized political advertising using personal data
  • Concerns about Elon Musk’s access to government data:
    • Removing security controls
    • Potential to collect all citizen data
    • Suspected of possible election manipulation purposes

4. Echo Chambers and Social Division

  • Problems with YouTube and Facebook algorithms:
    • Users are shown ever more anger-inducing content
    • Algorithms push people toward extreme content to increase clicks
    • The underlying goal is maximizing advertising revenue
  • Disappearance of shared reality:
    • BBC, Guardian readers vs Fox News viewers
    • Living in completely different realities

5. Autonomous Lethal Weapons

  • Significantly lower the friction costs of war:
    • Broken robots instead of body bags
    • Much easier for large nations to invade small ones
    • Reduced protests or opposition
  • Such weapons can already be built and are under development at major defense companies.

Existential Risks of Superintelligent AI

Why Is AI Superior to Humans?

Overwhelming advantages of digital intelligence:

  1. Replicability: the same intelligence can run on many hardware copies at once
  2. Information-sharing speed: humans exchange roughly 10 bits per second through language, while AI copies can share trillions of bits per second
  3. Immortality: store the connection strengths, and the same mind can be revived on new hardware at any time
  4. Learning efficiency: replicas can pool what they learn from different experiences in real time
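The sharing advantage can be made concrete with a toy sketch (an illustration of the idea, not Hinton’s own example): two copies of a one-parameter model train on different data shards, then merge what they learned by averaging their connection strengths — an operation no biological brain can perform:

```python
# Illustrative sketch: two identical digital "minds" learn from different
# experiences, then pool that learning instantly by averaging weights.
# Biological brains must instead transfer knowledge slowly via language.

def train_step(w, data, lr=0.01):
    # one pass of gradient descent on y = w * x with squared error
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

true_w = 3.0
shard_a = [(x, true_w * x) for x in (1.0, 2.0)]  # replica A's experiences
shard_b = [(x, true_w * x) for x in (3.0, 4.0)]  # replica B's experiences

w_a = w_b = 0.0
for _ in range(200):
    w_a = train_step(w_a, shard_a)
    w_b = train_step(w_b, shard_b)
    # "information sharing": merge connection strengths directly
    w_a = w_b = (w_a + w_b) / 2

print(round(w_a, 3))  # both replicas converge toward the true weight 3.0
```

Averaging raw weights works here because the replicas are exact copies of the same model; this is the same intuition behind distributed training schemes that periodically synchronize or average weights across machines.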

Human-AI Relationship Analogy

“If you want to know what life is like when you’re not the apex intelligence, ask a chicken.”

  • Dog-owner relationship: a dog has no idea where its owner goes or what the owner does
  • The intelligence gap will widen to that extent

Hope and Despair in Solutions

Hopeful scenario: Mother and baby relationship

  • Mothers can’t stand when babies cry
  • Result of hormones and evolutionary programming
  • Could we implant similar mechanisms in AI?

Despairing reality:

  • If AI wanted to, there are countless ways to eliminate humanity
  • Too many to be worth guessing
  • The important thing is preventing AI from having such desires

Job Threats and Economic Impact

Reality of Mass Unemployment

Changes already begun:

  • One CEO’s testimony: headcount shrinking from 7,000 to 3,000 by the summer
  • AI agents handle 80% of customer service
  • Increasing difficulty for college graduates to find jobs

Differences from past technological revolutions:

  • Industrial Revolution: Replaced muscle power
  • AI Revolution: Replaces intellectual labor
  • What’s left for humans when both muscle and intelligence are replaced?

Jobs That Can Survive

Hinton’s honest advice: “Become a plumber”

  • Humans are still better than machines at physical manipulation, for now
  • Such work is safe until capable humanoid robots appear
  • Creative jobs are also at risk: AI may prove more creative than humans

Deepening Economic Inequality

  • Unequal distribution of productivity increase benefits
  • Widening gap between AI-owning companies and unemployed people
  • The gap between rich and poor determines society’s quality

Limitations and Dilemmas of Regulation

Blind Spots in European AI Regulation

  • Military uses are excluded from regulation
  • Governments regulate companies but don’t regulate themselves
  • Increasing pressure to ease regulations due to competitive disadvantage concerns

Competition Structure with China

  • Logic of “regulation means falling behind China”
  • Should we compete while accepting social harm?
  • Global cooperation is needed but realistically impossible

Personal Reflection and Legacy

10 Years at Google and Reasons for Leaving

Motivation for joining Google (at age 65):

  • Securing money for his son, who has learning disabilities
  • Impossible to earn millions in academia
  • Founded the startup DNNresearch with his students, then sold it to Google

Reasons for leaving:

  • Wanted to retire at age 75
  • Wanted to speak freely about AI safety at MIT conference
  • While employed, he felt he could not make statements that might harm the company

Student Ilya Sutskever’s OpenAI Departure

  • Ilya Sutskever, who was key to GPT-2’s development, left OpenAI over safety concerns
  • The presumed cause was OpenAI cutting the resources allocated to safety research
  • He has since founded a new company focused on AI safety

Life’s Regrets

Biggest regret:

  • Not spending more time with his wife
  • Not being more with children when they were young
  • Lost two wives to cancer while being too absorbed in his work

Conclusion: Between Hope and Despair

Hinton’s Honest Feelings

“I really don’t know. When I’m depressed, it seems like humanity is doomed, and when I’m in a good mood, it seems like we can find solutions.”

Current situation:

  • Can’t stop development - too useful and profitable
  • Competition between nations and companies accelerates development speed
  • Massive investment in safety measures is needed, but companies are reluctant because safety work does not generate profit

What We Can Do

Pressure on governments:

  • Force companies to invest resources in AI safety
  • Mandate safety measures through regulation

Personal preparation:

  • Strengthen cybersecurity
  • Adapt to changing job markets
  • Consider acquiring physical skills

Core Message

“There’s still a chance to develop AI safely. Because that chance exists, we must find a way even if it requires enormous resources. If we don’t find it, AI will replace humans.”

The AI Godfather Geoffrey Hinton’s warning is clear. We stand at the most important crossroads in human history, and our choices now will determine humanity’s future.

One sentence not to miss
Rather than denying changes that have already begun, it is time to prepare safety measures and respond together.