Google Brain founder calls AI threat to human survival a big tech conspiracy


Andrew Ng, a world leader in artificial intelligence development and the founder of Google Brain, says claims that AI poses an existential threat to humankind have been greatly exaggerated.

Ng, currently a professor at Stanford University, co-founded Google Brain, served as chief scientist at Baidu’s Artificial Intelligence Group, and co-founded DeepLearning.AI and Coursera.

While at Stanford, Ng taught machine learning to OpenAI co-founder Sam Altman. The two men hold opposing views on the danger AI poses to us.

In May this year, Altman, along with another 375 computer scientists, business leaders, and academics, signed an open statement declaring that “mitigating the risk of extinction from AI” should be a global priority. Altman’s views are shared by many industry leaders who co-signed the statement.

Ng’s view

Speaking to The Australian Financial Review, Ng said the doom narrative circulating in the tech industry, the belief that humanity is tangling with a powerful tool that could end civilization, is a “bad idea,” as is the push to “impose burdensome licensing requirements” on the technology.

He explained that when you combine those “two bad ideas,” you get the “massively, colossally dumb idea of policy proposals that try to require licensing of AI.” He contends that such a move would crush innovation and argues that the large tech companies most invested in developing AI would rather not have to compete with open-source AI, which is why “they’re creating fear of AI leading to human extinction.”

Ng says he supports thoughtful regulation, but not the kind that would hamper development. In that regard, he agrees with Marc Andreessen, the venture capitalist and one of the more outspoken advocates of the benefits AI could bring to society.


Elon Musk disagrees with Ng and has, in the past, whipped people into a frenzy with apocalyptic ‘warnings’ related to AI. Musk had something to say about Ng’s recent comments, stating that large supercomputer clusters that cost billions of dollars are the risk, “not some startup in a garage.”

Geoffrey Hinton, the so-called Godfather of AI, weighed in, saying that even though Andrew Ng claims the idea that AI could make humans extinct is a big-tech conspiracy, there is at least one “data point… [that] does not fit this conspiracy theory.”

Hinton says he left Google to “speak freely about the existential threat.”
