Technology

Self-Driving Cars Are More Likely To Run You Over If You Are Black, Study Suggests

James Felton

Senior Staff Writer

James is a published author with four pop-history and science books to his name. He specializes in history, strange science, and anything out of the ordinary.


Image credit: Shutterstock / Mopic

Congresswoman Alexandria Ocasio-Cortez took a lot of criticism in January for suggesting that algorithms could have biases.

“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” she said at the annual MLK Now event. “They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”


Of course, she is correct. There are numerous examples of this in consumer technologies, from facial recognition tech that doesn't recognize non-white skin tones, to cameras that tell Asian people to stop blinking, to racist soap dispensers that won't give you soap if you're black.


It becomes especially alarming when this tech is scaled up from soap dispensers and mobile phones, which brings us to the latest example: it appears that self-driving cars could have a racism problem too.

A new study from the Georgia Institute of Technology has found that self-driving vehicles may be more likely to run you over if you are black. The researchers found that, just like the soap dispenser, systems like those used by automated cars are worse at spotting darker skin tones.

According to the team's paper, which is available to read on arXiv, they were motivated by the "many recent examples of [machine learning] and vision systems displaying higher error rates for certain demographic groups than others." They point out that a "few autonomous vehicle systems already on the road have shown an inability to entirely mitigate risks of pedestrian fatalities," and recognizing pedestrians is key to avoiding deaths.


They collected a large set of photographs showing pedestrians of various skin tones (classified using the Fitzpatrick scale) under a variety of lighting conditions, and fed them into eight different image-recognition systems. The team then analyzed how often the machine-learning systems correctly identified the presence of people across all skin tones.
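Concretely, that analysis amounts to asking, for each skin-tone group, what fraction of the annotated pedestrians a model actually detects. The snippet below is a minimal illustrative sketch rather than the researchers' actual code: the detect_pedestrians function, the data layout, and the "light"/"dark" group labels are assumptions made for the example, loosely mirroring the paper's Fitzpatrick-based grouping.

```python
# Illustrative sketch only -- not the study's actual code.
# Assumes a hypothetical detector and a dataset in which each ground-truth
# pedestrian box carries a skin-tone group label ("light" or "dark").
from collections import defaultdict

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def detection_rate_by_group(images, detect_pedestrians, iou_threshold=0.5):
    """Fraction of annotated pedestrians the detector finds, per skin-tone group.

    `images` is a list of dicts: {"pixels": ..., "pedestrians": [(box, group), ...]}.
    `detect_pedestrians` is the (hypothetical) model under test; it returns boxes.
    """
    found = defaultdict(int)
    total = defaultdict(int)
    for image in images:
        predicted_boxes = detect_pedestrians(image["pixels"])
        for true_box, group in image["pedestrians"]:
            total[group] += 1
            if any(iou(true_box, p) >= iou_threshold for p in predicted_boxes):
                found[group] += 1
    return {g: found[g] / total[g] for g in total if total[g]}

# A gap such as rates["light"] - rates["dark"] > 0 would indicate the kind of
# disparity the study reports (roughly 5 percentage points on average).
```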

They found a bias within the systems, meaning an automated vehicle would be less likely to spot someone with darker skin tones and so would carry on driving into them. On average, the systems were 5 percent less accurate at detecting people with darker skin tones. This held true even when accounting for time of day and for pedestrians being partially obscured from view.


The study did have limits: it used models created by academics rather than by the car manufacturers themselves. Even so, it is useful in flagging the recurring problem to tech companies, a problem that could be addressed by including a wide and representative variety of people when rigorously testing new products.

After all, it's not just skin tones that algorithms can be biased against. Voice recognition systems seem to struggle more to recognize women's voices than men's, and women are 47 percent more likely to sustain an injury while wearing a seat belt because car safety is mostly designed with men in mind.


"We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models," the authors concluded in the study.

Fingers crossed Tesla and Google are feeding their machine-learning algorithms more data from people with varied skin tones than the academic models, otherwise we could soon face a situation where AI is physically able to kill you and is more likely to do so if you are not white.

