Elon Musk: AI is a ‘fundamental existential risk for human civilisation’ and creators must slow down
Tesla Motors CEO Elon Musk speaks during the National Governors Association Summer Meeting in Providence, Rhode Island, U.S., July 15, 2017 / REUTERS/Brian Snyder
Elon Musk has branded artificial intelligence “a fundamental existential risk for human civilisation”.
He says we mustn’t wait for a disaster to happen before deciding to regulate it, and that AI is, in his eyes, the scariest problem we now face.
He also wants the companies working on AI to slow down to ensure they don’t unintentionally build something unsafe.
The CEO of Tesla and SpaceX was speaking on stage at the National Governors Association meeting at the weekend.
“I have exposure to the most cutting-edge AI and I think people should be really concerned about it,” he said. “I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.
“I think we should be really concerned about AI and I think we should… AI’s a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.
“Normally the way regulations are set up is that a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators. It takes forever.
“That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation. AI is a fundamental risk to the existence of human civilisation, in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to society as a whole.
“AI is a fundamental existential risk for human civilisation, and I don’t think people fully appreciate that.”
However, he recognises that this will be easier said than done, since companies don’t like being regulated.
Also, any organisation working on AI will be “crushed” by competing companies if they don’t work as quickly as possible, he said. It would be up to a regulator to control all of them.
“When it’s cool and regulators are convinced that it’s safe to proceed, then you can go. But otherwise, slow down.”
He added: “I think we’d better get on [introducing regulation] with AI, pronto. There’ll certainly be a lot of job disruption because what’s going to happen is robots will be able to do everything better than us. I’m including all of us.”
Earlier this year, Mr Musk said that humans will have to merge with machines to avoid becoming irrelevant.
Ray Kurzweil, a futurist and Google’s director of engineering, believes that computers will have “human-level intelligence” by 2029.