“Bad hair,” “good hair,” “ugly skin” and “beautiful skin.” Have you ever paid attention to the images that appear online when you search for these terms? It is common to find black people.
Algorithmic racism is a contemporary manifestation of structural racism, perpetuated and reinforced by digital technologies. These technologies, often referred to as artificial intelligence – a term Silva personally disapproves of – can make discriminatory decisions, usually in pursuit of higher profits and more business for companies. “This behavior is largely unchecked and, in some cases, may even be intentional, driven by white supremacy and racism,” Silva emphasizes.
In Brazil, a striking example of the effects of racism and sexism on search engines occurred in 2019. Bahian businesswoman and public relations specialist Cáren Cruz searched online for “black woman teaching” and came across images associating black women with pornographic content. She was preparing a corporate presentation and had turned to the search because of the lack of images depicting black women in teaching positions. The platform has since removed the explicit images from its search results.
The incident provoked public outrage. At the time, Google informed the Bahia Notícias website that it was also surprised and acknowledged that the images should not have been explicit. “When people use search, our goal is to provide relevant results for the search terms used and we do not intend to show explicit results to users unless they are looking for them. Clearly, the result set for the above term does not comply with this principle and we apologize to those who felt impacted or offended,” the company wrote in a note sent to the website.
Despite search engines’ claims that their results are driven by relevance or keyword density, Safiya Noble, an American professor and researcher and a leading figure in the study of algorithmic racism, challenges the supposed “neutrality” of search engines in categorizing search results.
In her book “Algorithms of Oppression: How Search Engines Reinforce Racism,” first published in 2018, Noble illuminates the replication of structural inequalities in the digital realm. “Structural inequalities in society are being replicated on the Internet, and the struggle for a race-, gender- and class-free cyberspace can only ‘perpetuate and reinforce current systems of domination,’” says the expert, who also spent more than a decade in the marketing field.
Reporters contacted the Google Brazil team to ask how the platform’s search algorithm works and what efforts and studies the company has undertaken to prevent and correct the perpetuation of harmful results for historically marginalized groups. In response, Google noted that because its systems are organized around the “open internet,” the platform may reflect biases that already exist on the internet.
Google also mentioned that in May of last year it announced the launch of the Monk Skin Tone (MST) Scale, a 10-shade scale designed to better represent the range of skin tones in the platform’s image search. The tool was based on research by Harvard professor and sociologist Dr. Ellis Monk, who has studied how skin tone and colorism affect people’s lives for more than 10 years.
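As a rough illustration of how a shade scale like this can be used, the sketch below checks how evenly a set of image results covers a 10-shade tone scale. It is purely hypothetical – not Google’s actual implementation – and the file names, shade annotations and function names are invented for the example.

```python
from collections import Counter

# Hypothetical sketch: assume an upstream model has already annotated each
# image result with a shade from a 10-point scale (1 = lightest, 10 = darkest).
# These annotations are invented for illustration.
search_results = [
    {"url": "img_01.jpg", "shade": 2},
    {"url": "img_02.jpg", "shade": 2},
    {"url": "img_03.jpg", "shade": 3},
    {"url": "img_04.jpg", "shade": 8},
    {"url": "img_05.jpg", "shade": 1},
]

def shade_coverage(results, n_shades=10):
    """Return the fraction of results falling in each shade bucket."""
    counts = Counter(r["shade"] for r in results)
    total = len(results)
    return {shade: counts.get(shade, 0) / total for shade in range(1, n_shades + 1)}

coverage = shade_coverage(search_results)
missing = [shade for shade, frac in coverage.items() if frac == 0]
print("Coverage per shade:", coverage)
print("Shades with no results:", missing)
```

A coverage report like this is one simple way a platform could audit whether a result set skews toward lighter skin tones before deciding how to diversify it.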
Black women are 84% more likely than white women to be mentioned in negative tweets.
Google’s image recognition software has labeled black people as gorillas.
Black women are more likely to appear in sexually explicit contexts in online pornography.
Facial recognition technology is less accurate in identifying people with darker skin tones.
Black Women Are the Most Harassed in Virtual Environments
Although no specific data on algorithmic racism in Brazil is readily available, existing research points to an increasingly hostile virtual environment for black women. A doctoral thesis by sociologist Luiz Valério Trindade reveals a shocking fact: black women represent 81% of the victims of discriminatory discourse on social networks. Alarmingly, the majority (65%) of the online users spreading racial bigotry are men between 20 and 25 years old.
Against the backdrop of the 2020 municipal elections, the Marielle Franco Institute conducted a groundbreaking study on political violence. The results were disheartening: black female candidates were the most affected by virtual violence, reported by 78% of respondents, followed by moral and psychological violence (62%), institutional violence (55%) and racial violence (44%).
According to researcher Tarcízio Silva, the lack of transparency of digital platforms is a significant obstacle to formulating strategies and mapping algorithmic racism in Brazil.
“When we talk about the internet, digital platforms don’t provide transparency about these types of information, or about almost any type of information relevant to society and the potential harm, whether it’s discrimination, inappropriate moderation of content, misinformation, and so on. What is at stake now in platform regulation, for example, involves forcing platforms to provide data related to this. In Brazil, I would say that there is no quantitative data on algorithmic racism,” Silva emphasizes.
How Algorithms Perpetuate Racial Beauty Standards
Algorithms, like the social structures on which they are based, are not immune to the ingrained biases that pervade our world. Their apparent neutrality belies a disconcerting reality: algorithms can and often do perpetuate and amplify existing racist beauty standards.
Algorithms, by their very nature, are trained on large amounts of data, and that data reflects the human societies that produce it. As a result, these algorithms learn to favor the standards of beauty that prevail in society. In the Western world, those standards often lean heavily toward white features and aesthetics, marginalizing black women and other people of color.
“Algorithms are not born in a vacuum. They are created by human beings who carry with them their conscious and unconscious biases. And when these biases are fed into the algorithm, it learns to mimic them, often in ways that exacerbate existing inequalities,” says Dr. Ruha Benjamin, a leading sociologist and author on the subject of algorithmic racism.
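To make this mechanism concrete, here is a minimal, purely hypothetical sketch – not any real platform’s system – of a toy feed ranker “trained” on historical engagement data in which one group received less engagement. The group labels, numbers and scoring rule are all invented for illustration.

```python
import random
from collections import Counter

random.seed(42)

# 1. Historical training data: (group, engagement) pairs with a built-in gap,
#    because past audiences engaged less with posts from group "B".
training_data = [("A", random.gauss(0.60, 0.10)) for _ in range(1000)] + \
                [("B", random.gauss(0.45, 0.10)) for _ in range(1000)]

# 2. "Training": the model simply learns the average engagement per group.
def learn_group_scores(data):
    totals, counts = Counter(), Counter()
    for group, engagement in data:
        totals[group] += engagement
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in counts}

group_scores = learn_group_scores(training_data)

# 3. New posts of identical quality arrive from both groups...
new_posts = [{"id": i, "group": "A" if i % 2 == 0 else "B", "quality": 0.5}
             for i in range(200)]

# 4. ...but the ranker scores them with the biased historical averages,
#    so the visible top of the feed is filled almost entirely by group A.
ranked = sorted(new_posts,
                key=lambda p: p["quality"] + group_scores[p["group"]],
                reverse=True)
top_feed = ranked[:50]
share_b = sum(p["group"] == "B" for p in top_feed) / len(top_feed)
print(f"Group B share of new posts: 50%  |  share of top-of-feed: {share_b:.0%}")
```

Even though both groups produce posts of equal quality, the modest gap inherited from the training data becomes total exclusion once a ranking cutoff is applied – a simplified picture of how learned bias can be amplified rather than merely reproduced.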
Online, this bias manifests itself in multiple ways, as the examples in the next section illustrate.
Examples of Algorithmic Bias in Popular Social Networking Platforms
In the vast and complex global network of the Internet, several popular social networking platforms serve as important centers of human interaction. These platforms are driven by complex algorithms that have shown evidence of racial bias, especially against black women. Embedded invisibly in these digital landscapes, such biases not only reflect societal prejudices but also propagate them, intensifying the plight of black women online.
Instagram and Facebook
Instagram and Facebook, both owned by Meta Platforms, Inc., have been embroiled in controversy over the discriminatory effects of their algorithms. Researchers at AlgorithmWatch, a German nonprofit organization, found evidence that Instagram’s algorithm promotes images of black women less frequently than those of white women, contributing to harmful stereotypes and discriminatory practices.
“The algorithm is biased toward certain types of content and against others, and that bias is not neutral. It has impacts on people’s real lives,” says Matthias Spielkamp, CEO of AlgorithmWatch.
The possible consequences of ignoring algorithmic racism
Persistently ignoring algorithmic racism in our digital age has significant implications, especially for Black women, whose images and voices are often distorted or silenced by discriminatory algorithms. The potential consequences of ignoring such a pressing problem are far-reaching, jeopardizing not only individuals but also societal efforts to foster digital inclusion and combat systemic racism.
Systemic Disempowerment: By perpetuating racial biases, algorithmic racism effectively marginalizes Black women’s voices and images, undermining their representation and participation in the digital space. This systemic disempowerment causes harm that spreads…
“It’s not just about biased representation or invisibility,” says Dr. Safiya Noble, author of ‘Algorithms of Oppression.’ “It’s about how these biases embedded in algorithms can impact real lives, shape perceptions and reinforce racial inequalities.”
Perpetuation of Racial Stereotypes: When algorithms continue to project distorted images of black women, they perpetuate harmful racial stereotypes. These algorithms, coded with a predominantly white and Eurocentric perspective, often fail to correctly identify, categorize or value the beauty and diversity of black women. The result is a digital space that reflects and amplifies long-standing racial biases, further deepening inequality. This algorithmic racism can lead to the marginalization of black women, adversely affecting their social, economic and psychological well-being.