The advancement of artificial intelligence (AI) going horribly wrong and our created companions turning on us is a common theme, cropping up again and again in books, film, and popular culture.
Just off the top of my head, I, Robot, the Terminator series, Ex Machina, Philip K. Dick’s Do Androids Dream of Electric Sheep? (and therefore Blade Runner), Black Mirror, and Westworld spring to mind. That fear offers large and fruitful scope for writers and directors to mess with us.
But should we actually be afraid of a potential robot uprising? Probably best to have some rules in place, just in case.
With that in mind, DeepMind, Google’s AI research lab, has announced the launch of its “ethics and society” research unit to study the impact of rapidly progressing technologies on society, in an attempt to allay fears that AI may spin wildly out of our control.
“We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards,” wrote the new unit’s co-leads, Verity Harding and Sean Legassick, in a blog post explaining the launch.
“Technology is not value-neutral, and technologists must take responsibility for the ethical and social impact of their work.”
The company is bringing in a host of external advisors, from the United Nations’ former climate change chief, Christiana Figueres, to professors of AI development, policy, and computer science, as well as philosophers and economists, as it says understanding AI’s beneficial applications for humanity requires “rigorous scientific inquiry”.
“This new unit will help us explore and understand the real-world impacts of AI,” Harding and Legassick continued. “It has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.”
UK-based DeepMind was acquired by Google in 2014. In 2016, it hit headlines around the world when its program became the first AI to beat a world champion at the complex, 2,000-year-old board game Go.
Scientists often use games, whether Go, chess, or Texas hold ’em poker, to test the progress and limits of AI. And DeepMind isn’t the first group to look into the implications of rapidly progressing technology for the future of mankind.
But I don’t think we’re at panic stations just yet: when AI fails, it tends to fail rather adorably, so for now we’re probably not in too much trouble.