
This Is Why Twitter's Algorithm Appears To Have A Race Problem

James Felton

Senior Staff Writer

James is a published author with four pop-history and science books to his name. He specializes in history, strange science, and anything out of the ordinary.

Image credit: US Government official portrait / IFLScience / Twitter

As we've learned (or apparently not) time and time again, AI and machine learning technology have a racism problem. From soap dispensers that don't register dark-skinned hands to self-driving car systems that are around 5 percent more likely to run you over if you are black because they are worse at recognizing darker skin tones, there are numerous examples of algorithms that don't function as they should because they weren't tested enough with non-white people in mind.

Over the weekend, one such algorithm with apparent bias drew attention after cryptographer and infrastructure engineer Tony Arcieri tried a simple experiment on Twitter. Arcieri took two photos: one of Barack Obama and one of Mitch McConnell. He then arranged them as below.

Barack Obama and Mitch McConnell. Image credit: Pete Souza, The Obama-Biden Transition Project via Wikimedia Commons (CC BY 3.0); US Government official portrait.

He then uploaded them to Twitter and hit send. At that point, Twitter's algorithm automatically crops the photos; the function is intended to select the most relevant part of the photograph to display to other users.
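The setup is easy to reproduce: stack the two portraits vertically with a tall blank gap between them, so the automatic preview crop only has room to show one face. Below is a minimal sketch of how such a test image could be composed with the Pillow library; the filenames are placeholders, not the exact images Arcieri used.

```python
# Minimal sketch: compose a test image of two portraits stacked vertically
# with a tall blank gap between them, so the preview crop must pick one face.
# The filenames here are placeholders, not Arcieri's actual images.
from PIL import Image

def stack_with_gap(top_path, bottom_path, gap=2000, background="white"):
    top = Image.open(top_path)
    bottom = Image.open(bottom_path)
    width = max(top.width, bottom.width)
    # Blank canvas tall enough for both portraits plus the gap between them.
    canvas = Image.new("RGB", (width, top.height + gap + bottom.height), background)
    canvas.paste(top, ((width - top.width) // 2, 0))
    canvas.paste(bottom, ((width - bottom.width) // 2, top.height + gap))
    return canvas

# One version with each politician on top, mirroring the original experiment.
stack_with_gap("obama.jpg", "mcconnell.jpg").save("test_image_1.jpg")
stack_with_gap("mcconnell.jpg", "obama.jpg").save("test_image_2.jpg")
```

Swapping which portrait sits on top is the point of the test: if the crop were simply picking the top or bottom of the image, the two versions would show different faces.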

Here's what the algorithm selected when given those two photographs.

[Embedded tweets: both preview crops show Mitch McConnell, regardless of which photo is on top]

As you can see, the algorithm selected Mitch McConnell in both instances. Arcieri and others tried variations to see if the same result happened, including changing the color of their ties and increasing the number of Obamas within the images.

[Embedded tweets: variations on the experiment, including different tie colors and multiple copies of Obama in the image]

However, using a different photo of Obama with a high-contrast smile did seem to reverse the situation.

[Embedded tweet: with a high-contrast photo of Obama smiling, the crop selects Obama instead]

So what caused the problem? Well, like other platforms, Twitter relies on a neural network to decide how to crop your photos. In 2018, the company announced they were trying a new way to crop your photos based on "salient" image regions.

"Academics have studied and measured saliency by using eye trackers, which record the pixels people fixated with their eyes," Twitter researchers Lucas Theis and Zehan Wang wrote at the time of the rollout.

"In general, people tend to pay more attention to faces, text, animals, but also other objects and regions of high contrast. This data can be used to train neural networks and other algorithms to predict what people might want to look at."

In essence, the algorithm may be biased because its training data was itself skewed toward high-contrast regions, which our eyes are naturally drawn to. After the site filled up with other examples of the cropping bias, the company confirmed they were investigating the situation.
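Conceptually, the pipeline is straightforward: score how "salient" each part of the image is, then centre the crop on the region with the highest score. Below is a minimal sketch of that idea in Python, using a crude local-contrast measure as a stand-in for Twitter's trained neural network, which learned its scores from eye-tracking data. The function names and filenames are illustrative assumptions, not Twitter's actual code.

```python
# Minimal sketch of saliency-based cropping. A simple local-contrast score
# stands in for the trained saliency model described by Twitter; it is only
# meant to show why a high-contrast face can win the crop.
import numpy as np
from PIL import Image

def crop_most_salient(path, crop_h=300, block=50):
    img = Image.open(path).convert("RGB")
    gray = np.asarray(img.convert("L"), dtype=np.float32)

    # Score each horizontal band of the image by the spread of its pixel
    # intensities, a crude proxy for "regions of high contrast".
    n_bands = gray.shape[0] // block
    scores = [gray[i * block:(i + 1) * block].std() for i in range(n_bands)]

    # Centre a fixed-height crop on the highest-scoring band.
    centre = int(np.argmax(scores)) * block + block // 2
    top = max(0, min(centre - crop_h // 2, gray.shape[0] - crop_h))
    return img.crop((0, top, img.width, top + crop_h))

crop_most_salient("test_image_1.jpg").save("preview_1.jpg")
```

Because any saliency score that rewards contrast favours whichever face stands out most against the background, a stand-in like this can reproduce the kind of lopsided crops users reported, even though nothing in the code refers to skin tone at all.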
 

"We tested for bias before shipping the model & didn't find evidence of racial or gender bias in our testing," Twitter responded. "But it’s clear that we’ve got more analysis to do. We'll continue to share what we learn, what actions we take, & will open source it so others can review and replicate."

