There Are Prominent AI Researchers Who Agree With Elon Musk’s AI Risk Assessment


Elon Musk recently told a gathering of U.S. governors that artificial intelligence is, in his opinion, the greatest “existential threat” humanity faces. “I think people should be really concerned about it,” Musk said.

These aren’t new comments from Musk; he has been sounding the alarm on AI since 2014. Bill Gates and Stephen Hawking have echoed similar concerns.

Whenever they publicly raise concerns about what they perceive as the potential existential risk of artificial intelligence, a torrent of media criticism usually follows, as it has in the days since Musk’s latest comments. One of the main criticisms raised against Musk and others is that “real AI scientists” disagree with them.

While some, such as Facebook’s head of AI, Yann LeCun, and Baidu’s former head of AI, Andrew Ng, dismiss all talk of existential risk from artificial intelligence as hype and fear-mongering, it is simply not the case that all major AI researchers disagree with Elon Musk or think he is blowing smoke. I think it’s worth responding with a few examples.

Shane Legg, AI researcher and co-founder of Google DeepMind (in which Elon Musk was an early investor), has stated that he believes artificial intelligence is the “… number 1 risk for this century, with an engineered biological pathogen coming a close second (though I know little about the latter).”

Legg puts the probability of human-level artificial intelligence at 50% by 2028 and 90% by 2050.

Prof. Stuart Russell of the University of California, Berkeley, co-author of Artificial Intelligence: A Modern Approach, is another prominent researcher with strong concerns about the existential risk of artificial intelligence if we fail to properly align it with our values.

“Yes, We Are Worried About the Existential Risk of Artificial Intelligence” – Allan Dafoe and Stuart Russell

Prof. Geoffrey Hinton, who has been called the “godfather” of deep learning for his pioneering work in the field, said in an interview last year: “Obviously, having other superintelligent beings who are more intelligent than us is something to be nervous about. It’s not going to happen for a long time, but it is something to be nervous about.”

It’s also worth pointing out that there is wide disagreement within the field of AI about how long it will take to develop human-level or greater artificial intelligence. Andrew Ng believes it will take hundreds of years, if ever, for humanity to develop human-level artificial intelligence, while Prof. Jürgen Schmidhuber, co-inventor of Long Short-Term Memory (LSTM), and reinforcement learning pioneer Prof. Richard S. Sutton have both said they believe human-level artificial intelligence is likely within the next two decades.

The media should stop giving the false impression that there is consensus within the field of AI research about either the timeline for developing human-level artificial intelligence or the potential for existential risk.

Elon Musk isn’t as lonely a voice as he appears.
