Facebook's 'Race Blind' Algorithm Found 90% Of Hate Speech Directed Toward White People And Men

Chris Menahan
InformationLiberation
Nov. 27, 2021

We now know why Facebook decided to change its "race-blind" hate speech detection algorithm last year to allow more anti-white hatred.

The Washington Post reported last week that an "April 2020 document said roughly 90 percent of 'hate speech' subject to content takedowns were statements of contempt, inferiority and disgust directed at White people and men."

They viewed this as a failure of the system because white people are supposed to be the targets of all hate.

From The Washington Post, "Facebook's race-blind practices around hate speech came at the expense of Black users, new documents show":
Facebook spokesman Andy Stone defended the company's decisions around its hate speech policies and how it conducted its relationship with the civil rights auditors.

"The Worst of the Worst project helped show us what kinds of hate speech our technology was and was not effectively detecting and understand what forms of it people believe to be the most insidious," Stone said in a statement.

He said progress on racial issues included policies such as banning white nationalist groups, prohibiting content promoting racial stereotypes — such as people wearing blackface or claims that Jews control the media — and reducing the prevalence of hate speech to 0.03 percent of content on the platform.

[...] These findings about the most objectionable content held up even among self-identified White conservatives that the market research team traveled to visit in Southern states. Facebook researchers sought out the views of White conservatives in particular because they wanted to overcome potential objections from the company's leadership, which was known to appease right-leaning viewpoints, two people said.

Yet racist posts against minorities weren't what Facebook's own hate speech detection algorithms were most commonly finding. The software, which the company introduced in 2015, was supposed to detect and automatically delete hate speech before users saw it. Publicly, the company said in 2019 that its algorithms proactively caught more than 80 percent of hate speech.

But this statistic hid a serious problem that was obvious to researchers: The algorithm was aggressively detecting comments denigrating White people more than attacks on every other group, according to several of the documents. One April 2020 document said roughly 90 percent of "hate speech" subject to content takedowns were statements of contempt, inferiority and disgust directed at White people and men, though the time frame is unclear.
We can't have that, now can we?
