Algorithmic justice

 

Joy Buolamwini[74], a scientist at MIT and founder of the “Algorithmic Justice League,” discovered racial and gender biases in AI-powered systems sold and distributed by tech giants such as IBM, Microsoft, and Amazon. If we are not careful, then, we could throw away the progress made by social struggles throughout history. History is full of injustices and unequal treatment of human groups based on skin color, religious beliefs, gender, income, or country of origin. Make no mistake: cleaning the data we use to feed our AIs is as important as the knowledge that allowed us to create and work with them. Understanding feminism as part of the process of completing the legacy of the French Revolution in terms of rights is fundamental; applying that perspective, so that the tools that will govern our future offer everyone the same opportunities, is therefore of the utmost importance. Of course, not everything we consider right will be seen the same way by all of society, which is why an inclusive education, empathetic towards others, matters: it lets us set aside our differences, recognize the injustices others face, and repair them together. The values of one country will not apply to everyone, and what the West considers morally ethical may not be the same in China, and vice versa.

 

In the United States, some states use an algorithm to manage and profile offenders, predicting the risk that a person will reoffend if released and suggesting alternative sanctions to judges in each case. One could argue this is a good idea, since it could help judges deliver fairer sentences. The issue, once again, is not forgetting what information we feed this algorithm to make its decisions. If we load it with all past rulings and sentences, produced by a system marked by historically racist decisions, it is very likely that the algorithm will not be fair to everyone. Indeed, it has been shown that the risk scores calculated by this algorithm, called COMPAS, are inaccurate and at times more racially biased and controversial than the decisions usually seen in court[75].

 

The mechanism used by COMPAS is nevertheless interesting. Before sentencing, people who committed crimes in some U.S. states must answer a questionnaire of 137 questions on topics such as the type of crime they committed, their parents’ substance use while they were minors, their perception of their own happiness, and their economic situation, among other aspects. From these answers, the algorithm tries to predict the probability that a person will reoffend, so that sentences can be tailored to that risk, almost like an insurance provider. The questionnaire does not ask about the color of people’s skin, but the answers to questions about whether their parents were imprisoned before them, about crime in their neighborhood, or about their economic stability often act as proxies that place the African American population, historically disproportionately impoverished and incarcerated, at a disadvantage.
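To make this risk of proxy bias concrete, here is a minimal, hypothetical sketch in Python, using synthetic data and scikit-learn (it is not the real COMPAS model or questionnaire): even though the group attribute is never given to the model, correlated questionnaire-style features carry its signal, and the predicted risk ends up higher for one group than the other.

```python
# Hypothetical sketch with synthetic data -- NOT the real COMPAS model.
# It shows how proxy features correlated with a protected group can produce
# different average risk scores even when the group itself is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic protected attribute (two demographic groups).
group = rng.integers(0, 2, n)

# Proxy features, loosely inspired by the questionnaire items in the text.
neighborhood_crime = rng.normal(loc=1.0 * group, scale=1.0, size=n)
parent_incarcerated = rng.binomial(1, 0.1 + 0.2 * group, size=n)
income = rng.normal(loc=-0.8 * group, scale=1.0, size=n)

# Historical "reoffended" labels that also encode past bias against group 1.
logits = (0.5 * neighborhood_crime + 0.8 * parent_incarcerated
          - 0.3 * income + 0.7 * group)
reoffended = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Train WITHOUT the group column: the protected attribute is never a feature.
X = np.column_stack([neighborhood_crime, parent_incarcerated, income])
model = LogisticRegression().fit(X, reoffended)
risk = model.predict_proba(X)[:, 1]

print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
# The gap persists because the proxies carry the group signal into the model.
```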

 

As Ijeoma Oluo says:

 

The beauty of anti-racism is that you don’t have to pretend to be free of racism to be an anti-racist. Anti-racism is the commitment to fight racism wherever you find it, including in yourself. And that is the only way forward.

 

In this same vein, it is worth mentioning, as a wake-up call, that in 2018 Amazon had to stop using the algorithm it relied on to screen job candidates. The problem arose because Amazon had fed the algorithm the resumes it had processed over the previous decade, so the algorithm tended to favor profiles similar to those the company had already hired. As a result, it quickly discarded the resumes of people who had attended women’s colleges or played on women’s sports teams, because they did not match the profile of previous hires for certain positions.
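A tiny, purely illustrative sketch of how this happens (toy data and scikit-learn; not Amazon’s actual system): a text classifier trained only on past hiring outcomes assigns a negative weight to the word “women” simply because it rarely appeared in previously hired resumes.

```python
# Toy, hypothetical example -- NOT Amazon's actual screening system.
# A classifier trained on biased historical hiring decisions learns to
# penalize terms like "women's" that were rare among past hires.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, software engineering internship",
    "software engineering internship, hackathon winner",
    "captain of women's chess club, software engineering internship",
    "graduate of a women's college, hackathon winner",
]
hired = [1, 1, 0, 0]  # biased historical outcomes, not candidate quality

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the weight the model assigned to each term.
for term, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{term:12s} {weight:+.2f}")
# "women" ends up with a negative weight: the model reproduces the old bias.
```

Simply deleting the offending terms does not solve the problem either: as with COMPAS, other features correlated with gender can keep acting as proxies.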

 

We need the development of Artificial Intelligence to incorporate a perspective on the importance of diversity, in order to prevent and reduce harmful biases. If we want to create an AI that provides legal advice, then besides teaching it the current laws and all the court rulings available to us, it may also be appropriate to get the algorithm to understand, in some way, the senses of mercy and justice that are also part of the law. In other words, when building these algorithms we must also teach them values. The punishment for a person who, out of necessity, steals a piece of bread for the first time to feed their family cannot be the same as the penalty for a public official who, out of greed and corruption, steals a million dollars to safeguard their family’s future. On the surface both actions pursue the same goal, but the nature and the conditions in which they occur are entirely different.

 



[74] Buolamwini, J. (2016). How I’m fighting bias in algorithms. TED Talks. Retrieved July 27, 2021, from https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms.

[75] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.