I know. We just finished talking about our doubts regarding autonomous vehicles, and now I’m asking if AI algorithms can be racist. The simple answer is “no.” The complete answer is more complex.
An algorithm on its own has no consciousness. It does what it is told to do, nothing more. So no, AI by itself is not racist. Things change, however, if it is applied with such intentions, or if we unintentionally pass our biases on to it and it amplifies them.
A study published in 2017 in the journal Science found that a program that taught itself English quickly began to show biases against African Americans.
How was this possible? The researchers used a data repository collected from countless internet sources through a technique known as scraping and compiled by the people at Common Crawl[72]. The sample contained more than two million words[73]. An entire ocean of words. The program learned to associate words from texts written by humans, and, as you may already suspect at this point, those associations reflected the lessons of their human authors. Thus, the program came to associate pleasantness with flowers and unpleasantness with insects. So far, so good. The problem arose when the program began to absorb stereotypes: the Artificial Intelligence found that the names of white, European, or American people were more likely to be associated with pleasant terms than the names of African Americans. Likewise, if words like “CEO” or “firefighter” appear more often alongside men in the texts, an AI designed to screen candidates for those positions might be inclined to accept more men than women for those roles.
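To make that idea concrete, here is a minimal sketch of the kind of association measure behind such findings. The three-dimensional vectors below are invented for illustration; in the actual study, a similar calculation (comparing cosine similarities) was run over word embeddings trained on real web text.

```python
# A minimal, toy sketch of a word-association test. The vectors here are
# made up for illustration; real studies use embeddings trained on
# billions of words of scraped text (e.g. the Common Crawl corpus).
import numpy as np

def cosine(u, v):
    # Standard cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy vectors; in practice these come from a trained model.
vectors = {
    "flower":     np.array([0.9, 0.1, 0.2]),
    "insect":     np.array([0.1, 0.9, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.2]),
}

def association(word, attr_a="pleasant", attr_b="unpleasant"):
    """Positive => closer to 'pleasant'; negative => closer to 'unpleasant'."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

for w in ("flower", "insect"):
    print(w, round(association(w), 3))
# In this toy setup, "flower" leans pleasant and "insect" leans unpleasant.
# The Science study applied the same style of measure to people's names,
# which is how the bias described above was detected.
```

The point of the sketch is that no one programs the bias explicitly: the arithmetic only surfaces whatever associations already sit in the vectors the model learned from us.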
These algorithms model how our vocabulary works by analyzing how often one word appears next to another. The problem will always be that the data we feed the algorithms is created and selected by humans, so biases can be there from the start. Paradoxically, both the source of the problem and its solution are us, human beings. Let’s advocate for a feminist AI, one in favor of diversity and minorities!
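If you are curious what “counting how often one word appears next to another” looks like in practice, here is a toy sketch. The two sentences and the window size are invented for illustration; real systems do the same counting over billions of scraped words.

```python
# A toy co-occurrence count: two made-up sentences stand in for the web-scale
# corpora that real systems are trained on.
from collections import Counter
from itertools import combinations

corpus = [
    "the ceo said he was confident",
    "the nurse said she would help",
]
window = 3  # words within 3 positions of each other count as co-occurring

counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i, j in combinations(range(len(tokens)), 2):
        if j - i <= window:
            counts[tuple(sorted((tokens[i], tokens[j])))] += 1

print(counts[("ceo", "he")], counts[("nurse", "she")])
# If the texts we scrape pair "ceo" with "he" more often than with "she",
# the statistics the model learns will reflect exactly that imbalance.
```

The counts themselves are neutral; the skew comes entirely from the texts we choose to feed in, which is why the fix has to start with us.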
[72] Common Crawl. Commoncrawl.org. Retrieved July 13, 2021, from https://commoncrawl.org.
[73] Hsu, J. (2017). AI Learns Gender and Racial Biases From Language. IEEE Spectrum: Technology, Engineering, and Science News. Retrieved July 15, 2021, from https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-learns-gender-and-racial-biases-from-language.