Over the last few days, people online have been asking an AI tool to categorize their photos, curious to see what an algorithm trained to classify humans sees when it looks at their faces. The results have been surprising, sometimes flattering, and often quite racist.
ImageNet Roulette uses a neural network to classify pictures of people uploaded to the site. You simply enter the URL of a photo you want categorized (or upload your own) and it will tell you what the algorithm sees in your photograph.
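The article doesn't reproduce Roulette's actual model, but the basic mechanics of running a photo through an ImageNet-trained classifier look something like the minimal Python sketch below. It assumes torchvision's off-the-shelf ResNet-50 (trained on the standard 1,000-class ImageNet subset, not the person categories Roulette draws on), and the image filename is a placeholder:

```python
# A minimal sketch, not ImageNet Roulette's actual model: classifying a
# photo with torchvision's pretrained ResNet-50, which uses the standard
# 1,000-class ImageNet subset. "my_photo.jpg" is a placeholder filename.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# The weights ship with the exact preprocessing the model was trained with.
preprocess = weights.transforms()

img = Image.open("my_photo.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
print(f"{weights.meta['categories'][top_idx.item()]}: {top_prob.item():.1%}")
```

The key difference is that ImageNet Roulette only returns labels from the "person" branch of the ImageNet hierarchy, which is where terms like "demagogue" live.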
Sometimes it can be astonishingly accurate. For instance, when I tested it on my own face I was labeled a psycholinguist whereas my colleague Dr Alfredo Carpineti got classed as a "commoner, common man, common person: a person who holds no title". Fact after fact after fact.
If you try it and get a bad result, take comfort in the fact that there are much worse things it could call you.
Whilst it is sometimes complimentary, it can also be quite offensive.
And sometimes it's just odd. In one photo, for instance, it labeled President Obama a demagogue and Joe Biden simply "incurable".
Much like Tay, the Microsoft chatbot that learned to be racist and misogynistic after just a day on Twitter, spouting tweets like "Hitler was right" and "I fucking hate feminists and they should all die and burn in hell," ImageNet Roulette has problems caused by learning from problematic data input by humans. And it's like that by design.
This tool, created by artist Trevor Paglen and Kate Crawford, co-founder of New York University's AI Now Institute, uses a neural network trained on one of the most "historically significant training sets" in AI: ImageNet. In 2009, computer scientists at Stanford and Princeton set out to teach computers to recognize pretty much any object there is. To do this, they amassed a huge database of photographs of everything from Formula 1 cars to olives, then got humans (paid through Amazon's Mechanical Turk platform) to sort the photos into categories.
The result was ImageNet, one of the largest and most-cited object-recognition datasets ever assembled, with inbuilt biases put there by humans and propagated by AI.
ImageNet Roulette (which has 2,500 labels with which to classify users) is showing as part of the Training Humans photography exhibition at the Fondazione Prada Osservatorio museum in Milan, Italy, highlighting this bias.
"We want to shed light on what happens when technical systems are trained on problematic training data. AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong," Paglen and Crawford explain on the tool's website.
"ImageNet Roulette is meant in part to demonstrate how various kinds of politics propagate through technical systems, often without the creators of those systems even being aware of them."
Essentially, the machines become racist and misogynistic because humans are racist and misogynistic.
"ImageNet contains a number of problematic, offensive, and bizarre categories – all drawn from WordNet. Some use misogynistic or racist terminology. Hence, the results ImageNet Roulette returns will also draw upon those categories."
You can try it for yourself here.