Decision-Making in AI: Algorithmic Discrimination and Equality

In today’s world, AI is often treated as if it were separate from its social context, even though it is shaping society’s future. That divergence creates many problems, and biased decision-making in machine learning systems is one of them. Cases of bias and discrimination in AI have surfaced for years and take many forms; however, both in society at large and in AI, gender and racial discrimination remain the most common kinds and deserve closer examination.

[Image credit: https://towardsdatascience.com/https-medium-com-mauriziosantamicone-is-artificial-intelligence-racist-66ea8f67c7de]

The ImageNet project, launched in 2009, exposed such bias strikingly even though it was a success in terms of the sheer quantity of data processed and its algorithmic achievements. Paid workers labeled more than 14 million pictures, and with that impressive volume of data the resulting algorithms soon classified pictures nearly as accurately as a human. However, some of the project’s results made its developers question the accuracy of human decision-making itself. For instance, it turned out that the labels effectively assumed that “programmers” were only white men. The Excavating AI project also uncovered intolerably racist labels in the ImageNet data. The project’s maintainers later detected these biases and added more diverse data to the learning system to address the problem; nevertheless, such biases continue to appear and cause discrimination in other contexts.

Gender-stereotype bias has also appeared in systems built on machine learning in the years since. Word embeddings, in simple words systems that represent words as numerical vectors so that related words end up close to each other, offer clear examples. One widely cited embedding completed the analogy “man is to computer programmer as woman is to homemaker.” Another system associated female names with the word “family,” while male names were associated with “career.” Such stereotyped outputs show that the decision-making processes of these systems can cause serious discrimination.
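As an illustration, associations like these can be probed directly in pretrained embeddings. The following is a minimal sketch, assuming the gensim library and its downloadable GloVe vectors; the specific model name and the example words are illustrative choices, not the exact setups used in the original studies:

```python
# Probe a pretrained word embedding for stereotyped associations.
# Requires: pip install gensim (first run downloads ~130 MB of vectors).
import gensim.downloader as api

# Illustrative model choice; the cited studies used word2vec trained on news text.
vectors = api.load("glove-wiki-gigaword-100")

# Vector arithmetic behind "man is to programmer as woman is to ?":
# programmer - man + woman, then list the nearest words.
print(vectors.most_similar(positive=["woman", "programmer"],
                           negative=["man"], topn=5))

# Compare how strongly a female vs. male name associates with "career" and "family".
for name in ("emily", "greg"):  # hypothetical example names
    print(name,
          "career:", round(vectors.similarity(name, "career"), 3),
          "family:", round(vectors.similarity(name, "family"), 3))
```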

Biased AI decision-making has also appeared repeatedly in loan approval systems. These systems’ ability to store and process massive quantities of data has been suggested as the source of the problem: when the system holds large amounts of information about places and people, it returns a negative output for applicants associated with non-creditworthy records, even if the applicant is individually creditworthy. Many people of color have experienced such discrimination simply because of their place of birth.
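A minimal, self-contained sketch of how this “proxy” effect can arise, assuming scikit-learn and an entirely synthetic dataset (every variable and number below is invented for illustration, not drawn from any real lending system):

```python
# Proxy discrimination demo: the model never sees the protected attribute,
# but a correlated feature (a synthetic "zip code" flag) carries its signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute; the model is never shown it directly.
group = rng.integers(0, 2, size=n)

# A zip-code-like feature that correlates strongly, but imperfectly, with group.
zip_flag = np.where(rng.random(n) < 0.9, group, 1 - group).astype(float)

# True creditworthiness depends only on income, identical across groups.
income = rng.normal(50, 10, size=n)
repaid = (income + rng.normal(0, 5, size=n) > 50).astype(int)

# Historically biased labels: some creditworthy group-1 applicants were denied anyway.
label = repaid.copy()
flips = (group == 1) & (repaid == 1) & (rng.random(n) < 0.3)
label[flips] = 0

# Train only on income and the zip-code proxy -- no protected attribute in sight.
X = np.column_stack([income, zip_flag])
model = LogisticRegression(max_iter=1000).fit(X, label)

approved = model.predict(X).astype(bool)
for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.2%}")

# The proxy feature absorbs the historical bias against group 1.
print("zip-code proxy coefficient:", model.coef_[0][1])
```

Even though income is distributed identically across the two groups, the model approves group 1 at a lower rate, because the proxy feature lets it reproduce the bias baked into the historical labels.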

Much research suggests that AI could instead be a tool for fairer decisions, since machine learning algorithms evaluate variables consistently, exactly as the data they are given dictates. Seen this way, the main source of such discrimination is not the computer itself but the decision-making process encoded in the algorithms and their training data. For that reason, it is important for AI developers to be sensitive to social issues and for technology workplaces to be diverse.
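One concrete practice that follows from this is auditing a model’s decisions before deployment. The snippet below is a minimal sketch of one common check, the “four-fifths” disparate-impact rule from US employment guidance; the decision and group arrays are made up purely for illustration:

```python
# Disparate-impact audit: compare selection rates across groups.
# The 0.8 threshold comes from the US EEOC "four-fifths" rule of thumb.
import numpy as np

def disparate_impact(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = approved (toy data)
groups    = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected-attribute labels
ratio = disparate_impact(decisions, groups)
print(f"disparate impact ratio: {ratio:.2f} (values below 0.8 warrant review)")
```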

In the age of advanced technology, algorithmic discrimination should not be underestimated, since such systems now inform decisions in every segment of society. Whether in a hiring process or a credit loan, AI is a trusted source for many companies and government agencies. Decision-making in such institutions must not be biased if we collectively want to take a step toward a more equal world. We may have a long way to go in achieving an equal world for all; however, making the technology that shapes the future, AI, sensitive to these issues could be a momentous leap.

References:

https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

https://towardsdatascience.com/https-medium-com-mauriziosantamicone-is-artificial-intelligence-racist-66ea8f67c7de

https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

https://www.wired.com/story/ai-biased-how-scientists-trying-fix/

https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai
