Talk of artificial intelligence (AI) rising up to thwart humanity in a Terminator-style apocalypse is holding back research and development into the technology, and potentially harming society as a result, claims Chris Bishop, director of Microsoft Research. The potential benefits of AI, such as driverless cars and more efficient manufacturing, could revolutionize vast tracts of society, if only we could stop seeing the technology as something to fear.
Speaking ahead of a discussion on machine learning at the Royal Society in London, Bishop told the Guardian that humanity "may end up throwing the baby out with the bathwater" if too much of our attention is focused on the negative visions of AI depicted by "The Terminator" and Skynet. While he disagrees with the high-profile figures who have spoken out against AI, he says there are still some genuine risks, though not the humanity-ending kind so often portrayed. Instead, he argues, AI is more likely to develop potentially harmful biases. One only needs to look at Microsoft's Tay bot to see how that panned out.
Notably, Stephen Hawking and Elon Musk have both gone public about the risk they believe is posed by the development of artificial intelligence. "The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC last year, while Musk has said: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful." Since then, Bill Gates has also joined the chorus.
But Bishop says the conversation around AI, and the coverage of it, is not helped when such prominent public figures perpetuate these ideas. "I think Stephen is looking far into the future," Bishop told Cambridge News earlier this year. "I find it very difficult to predict where technology goes in 10 years, let alone 1,000 years! It's clear humanity faces many challenges, but I'm very optimistic about the potential of technology to give us a head start in tackling these problems."
Alongside the more speculative fears about AI, there are some legitimate concerns about current technology. This week, Human Rights Watch released a report calling on the UN to ban "killer robots" used in warfare before they become the norm. The group claims that military drones able not only to identify targets but also to kill them autonomously, without input from a human controller, are only a few years away, and that the UN should get ahead of the curve and ban them before they are rolled out.