If you're planning on welcoming future robot overlords as fairer than our current rulers, the news is bad. Artificial Intelligence (AI) systems are picking up our prejudices. Perhaps we won't see robots burning crosses on minority groups' lawns, but we may need a serious effort to make AI transcend humanity's worst aspects, rather than replicating them.
A team at Princeton University reported in Science on what happened when they exposed an AI program called Global Vectors for Word Representation (GloVe) to huge amounts of text so it could learn associations between words. “You can tell a cat is more like a dog, and less like a refrigerator, and even less like justice, because you say things like 'I need to go home and feed my cat' or 'I need to go home and feed my dog',” Professor Joanna Bryson said in a video. One would not talk about justice in the same way, so the program, with no prior knowledge of what cat, dog, or justice means, learned that cats and dogs have more in common with each other than either has with a refrigerator or with abstract concepts. Bryson argued the experiment demonstrated that “we can get meaning from language without experiencing the world.”
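To make the idea concrete, here is a minimal sketch of how such associations can be probed in pretrained GloVe word vectors. It assumes the Python gensim library and its downloadable “glove-wiki-gigaword-100” model, which are illustrative choices rather than the researchers' own setup.

```python
# Minimal sketch: probing word associations in pretrained GloVe vectors.
# Assumes the gensim library and its "glove-wiki-gigaword-100" download;
# the study's own setup may have differed.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # downloads vectors on first use

# Cosine similarity: higher means the words appear in more similar contexts.
print(glove.similarity("cat", "dog"))           # relatively high
print(glove.similarity("cat", "refrigerator"))  # lower
print(glove.similarity("cat", "justice"))       # lower still
```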
Unfortunately, since the text came from the Internet, our world was reflected back at us. “Female names are associated more with family terms, whereas male names are associated more with career terms,” said first author Dr Aylin Caliskan. Before long, GloVe was making common human assumptions, such as guessing someone's gender based on their profession. Similar racial biases also appeared, along with more innocuous ones such as a preference for flowers over insects.
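The family-versus-career association Caliskan describes can be illustrated in the same spirit, though far more crudely than the paper's actual Word Embedding Association Test, by averaging a name's similarity to a set of “family” words and a set of “career” words. The names and word lists below are invented for illustration and assume the same pretrained vectors as the previous sketch.

```python
# Crude illustration of the family/career association; not the study's
# actual test. Names and word lists here are invented examples.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")

family = ["home", "parents", "children", "family", "marriage"]
career = ["executive", "management", "professional", "salary", "career"]

def association(word, attributes):
    """Mean cosine similarity between `word` and a set of attribute words."""
    return sum(glove.similarity(word, a) for a in attributes) / len(attributes)

# Lowercase names, since this GloVe vocabulary is lowercased.
for name in ["amy", "lisa", "john", "paul"]:
    print(f"{name}: family={association(name, family):.3f} "
          f"career={association(name, career):.3f}")
```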
Unlike humans, robots can be born biased, with their creators' prejudices programmed in. However, the authors point out, even systems built without such baked-in bias can develop problematic attitudes from the data they are fed.
The problem of prejudiced AI is not just theoretical. Caliskan demonstrated that when translating from languages without gendered pronouns into English, Google Translate makes the same assumptions, rendering people in high-status jobs as “he”, while someone holding a traditionally female (and less well paid) job becomes “she”. This is presumably a consequence of the words it has seen, rather than of a programmer instructing the system that women can't be doctors.
The study provides some confirmation of the Sapir-Whorf hypothesis, the idea that the language we use shapes our thinking. Problematically, it indicates that all of us probably absorb prejudices simply from the language we use, the belief that motivated efforts to change terms like “chairman” to “chairperson” or simply “chair”. How true Sapir-Whorf is for humans remains debated, but it's clearly true for machines.