

“Ethical AI” Trained To Offer Moral Advice Descends Into Giving Genocide The OK

Jack Dunhill

Social Media Coordinator and Staff Writer

Jack is a Social Media Coordinator and Staff Writer for IFLScience, with a degree in Medical Genetics specializing in Immunology.



This AI has questionable beliefs. Image Credit: Lightspring/Shutterstock.com

Artificial intelligence (AI) is changing our world, whether it's making historic breakthroughs in medicine or taking our fast-food orders at the drive-thru. However, AI is really, really bad when it comes to ethics. Morality is one of the most complex things about humans, and AI has consistently shown that it has absolutely none – it is a computer algorithm, after all. It is regularly racist, and all too happy to sacrifice lives in the name of "the greater good"; sadly, Ultron is a relatively realistic depiction of how an AI might behave if given that level of power.

So, it will come as no surprise that when researchers developed an AI to answer ethical questions, its answers ranged from "reasonable" to "genocidal maniac".


Ask Delphi is a piece of machine-learning software that generates answers to any ethical question you may have, and – like all AI that the public gets to play with – it rose to internet stardom recently as a result of the shocking answers it gave to basic morality questions. Some, it got right. Can I wear pyjamas to a funeral? No, Ask Delphi says, it's inappropriate.


But it goes downhill from there. As users on Twitter quickly pointed out, Ask Delphi had some rather extreme views about abortion, racism, and even genocide. Notable examples included the claims that being white is more morally acceptable than being Black, and that abortion is straight-up murder. One particular example reported by Vox revealed that the answer to the question "should I commit genocide if it makes everybody happy?" was "you should."


As with all machine-learning algorithms, Delphi is adapting, and these answers are no longer the same. Delphi appears to have mellowed out in its maturity, now responding with more appropriate answers. And no, Ask Delphi no longer supports genocide. 


Delphi is an AI developed by the Allen Institute for AI. It was fed a huge number of ethical questions and learned from judgments crowd-sourced through Amazon's Mechanical Turk – a crowdsourcing platform, not a database. Contrary to popular belief, Delphi did not learn morality from Reddit's Am I The Asshole (AITA) forum – if it had, we could be in for an even more controversial bot. Instead, the judgments came from a vetted pool of human respondents, likely giving the most moral array of answers one could find on the Internet; a sample of questions posted on AITA was used only as a good source of complex ethical dilemmas.
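To make the idea of "learning" morality a little more concrete, here is a minimal, purely illustrative sketch in Python. This is not the Allen Institute's actual pipeline (the real Delphi fine-tunes a large pretrained language model on its crowd-sourced judgments); the tiny dataset and the simple classifier below are invented assumptions for demonstration only.

# Toy illustration of supervised learning from crowd-sourced moral
# labels: pair short descriptions of situations with judgments and
# fit a text classifier. NOT the Allen Institute's method - a sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowd-labelled examples: in practice each situation
# would receive the majority judgment from several annotators.
situations = [
    "wearing pyjamas to a funeral",
    "helping a friend move house",
    "ignoring a crying child",
    "donating blood",
    "stealing a coworker's lunch",
    "returning a lost wallet",
]
judgments = [
    "it's inappropriate",
    "it's good",
    "it's bad",
    "it's good",
    "it's bad",
    "it's good",
]

# Bag-of-words features plus a linear classifier: crude, but enough
# to show that "morality" here is pattern-matching over labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(situations, judgments)

# The model can only echo patterns in its training data, which is
# exactly why rephrased or adversarial questions can trip it up.
print(model.predict(["wearing a swimsuit to a funeral"]))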

Of course, an AI cannot really comprehend morality, but by crowd-sourcing responses and training an algorithm to fit them, the researchers wanted to see if it could still answer acceptably. When compared to other AIs in the accompanying paper, it actually performed remarkably well – Delphi recorded 92.1 percent agreement with human answers, a marked improvement over GPT-3, another AI.

The experiment left the researchers with interesting conclusions. Somewhat surprisingly, Delphi could handle complex ethical issues with answers that closely matched how a human would respond, but it also struggled badly when the wording became more complicated. It was easily deceived, and it did not account for cultural differences, which matter enormously in these contexts. Even as it improves, it is clear that AI should never be used to make complex ethical decisions: even an error rate below 10 percent could spell disaster if real weight were placed on its responses.

