At the end of May 2020, Microsoft decided to sack dozens of journalists and replace them with artificial intelligence (AI).
The journalists and editors who were let go ran the MSN News website, the default homepage of the Microsoft Edge browser. The site doesn't write news itself; it draws stories from other sources and splits advertising revenue with the original publishers. Until recently, it was curated by humans, who selected stories that fit its editorial guidelines and edited articles, photos, and headlines wherever necessary.
Now, just a few weeks after those humans were replaced with software, the robots at MSN News have been accused of racism by Jade Thirlwall, a member of UK band Little Mix, after MSN posted a story about her opening up about the racism she experienced at school, accompanied by an image of her fellow Little Mix bandmate Leigh-Anne Pinnock.
Thirlwall was unaware that the news story had been placed on the MSN site by an algorithm, and put it down to lazy journalism.

“This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke,” Thirlwall wrote on Instagram.
“It’s lazy journalism. It’s ignorant. It’s rude. It offends me that you couldn’t differentiate the two women of colour out of four members of a group. There’s even images of me in the article followed by an image of Leigh as if they couldn’t tell we’re not the same mixed-race person?!?!
“DO BETTER.”
As reported by the Guardian, the images accompanying the article were selected by AI software. As we've learned (or apparently not) time and time again, AI and machine learning technology has a racism problem. From soap dispensers that fail to detect darker skin and so won't dispense soap, to self-driving cars that are more likely to run you over if you are black because their systems don't recognize darker skin tones, there are numerous examples of machine learning technology that doesn't function as it should because it wasn't tested enough (or at all) with non-white people in mind.
Image recognition is no different. Google, for instance, once had to apologize after its photo app's auto-tagging feature labeled two black people as gorillas. The software is only as good as the data it's trained on, and one classic dataset is ImageNet.
In 2009, computer scientists at Stanford and Princeton set out to teach computers to recognize pretty much any object there is. To do this, they amassed a huge database of photographs of everything from asparagus to watering cans. They then got people to sort the photos – including many, many photos of humans – into categories.
The result was ImageNet, one of the largest and most widely cited object-recognition datasets in existence, complete with built-in biases put there by humans and propagated by AI.
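To see how that propagation happens in practice, here is a minimal sketch, not MSN's actual system, of the standard approach: load a classifier whose weights were learned from ImageNet and ask it to label a photo. The choice of model (a ResNet-50 from torchvision) and the placeholder image path "photo.jpg" are illustrative assumptions; the point is that whatever biases sit in the training data are inherited by every prediction the model makes.

```python
# Minimal sketch, not MSN's pipeline: label an image with a classifier
# pre-trained on ImageNet. The model can only reflect the categories and
# biases present in the data it was trained on.
import torch
from torchvision import models
from PIL import Image

# ResNet-50 weights learned from ImageNet (illustrative choice of model)
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# The standard ImageNet preprocessing bundled with the weights
preprocess = weights.transforms()

# "photo.jpg" is a placeholder path for whatever image you want to label
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Print the model's five most confident labels
top5 = probs.topk(5)
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{weights.meta['categories'][idx]}: {score:.2%}")
```

If the training photos under-represent or mislabel some groups of people, the labels coming out of a model like this will reflect that gap, which is exactly the kind of failure the Google Photos incident exposed.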
The software used by MSN was unable to distinguish between Thirlwall and Pinnock, two mixed-race women of color, suggesting it may have similar issues. The problem was corrected by a human, but too late.
In a weird turn for the story, the Guardian reports that the remaining human MSN staff have been warned that the AI software may try to publish the Guardian's article about its own racist mix-up, and that they should delete the story if it does. The AI may then try to "overrule" the humans' deletion and republish the story to the website.
The remaining humans have reportedly already had to delete stories, selected for publication by the algorithm itself, that criticized the Little Mix mix-up, and that was before it had even been revealed that AI was behind the error in the first place. Maybe news outlets still need humans after all, and maybe it's systemic racism that needs to be tackled before we move into the future.
[H/T: The Guardian]