Global Economic Times

Nobel Laureate Hinton Warns of AI Dangers, Calls for Government Regulation

Global Economic Times Reporter / Updated : 2024-11-02 09:50:41
Geoffrey Hinton, a Nobel laureate and renowned AI expert, has sounded the alarm about the potential dangers of artificial intelligence. Comparing the weights of AI models to nuclear fission material, Hinton argued that governments must impose strict regulations on AI development and prioritize safety research.

During his keynote speech at the Global Talent Forum 2024 held in Seoul, Hinton expressed deep concern about the proliferation of AI-generated fake content. He cited the recent surge of deepfake videos during the US election as a prime example of the potential misuse of AI.

Hinton proposed using QR codes to verify the authenticity of digital content, for example by embedding them at the beginning of campaign videos. However, he acknowledged the difficulty of implementing such a system when some actors are intent on spreading misinformation.

Drawing a parallel between AI and nuclear weapons, Hinton explained that just as refined uranium or plutonium is the critical ingredient of a nuclear weapon, the weights are the critical component of an AI model. These weights, which determine how much influence each input has on the model's output, are numerous and complex. Training a large AI model requires enormous investment, but once its weights are made public, bad actors can cheaply fine-tune the model for harmful purposes.

Hinton, a pioneer in the field of neural networks and a recipient of the 2024 Nobel Prize in Physics, has been a vocal critic of AI's potential risks. He resigned from his position at Google last year to speak freely about the dangers of AI.

During his speech, Hinton praised Google for its initial cautious approach to AI development but noted that the company was forced to accelerate its efforts due to competition from OpenAI and Microsoft. He emphasized the need for government intervention to regulate the development and deployment of AI.

"Governments are the only entities that can impose requirements on large corporations," Hinton stated.

Hinton's warnings underscore the growing concerns about the potential negative impacts of AI. As AI technology continues to advance at a rapid pace, the need for robust regulations and ethical guidelines becomes increasingly urgent.

[Copyright (c) Global Economic Times. All Rights Reserved.]

