Racial and gender biases seep into artificial intelligence

As use of AI widens, issues of bias arise

AP

OpenAI CEO Sam Altman speaks to the press during the introduction of the integration of the Bing search engine and Edge browser with OpenAI.

Artificial Intelligence, or AI, has grown to be a popular tool for private companies and the U.S. government. 

AI, a programmed set of algorithms used to automate tasks, has been used by police departments, government bodies, private companies and selective schools. To some, AI seems like a fantasy come true: a computer completing tedious and difficult tasks in a fraction of the time. It has already made job applications easier to sort through, school admissions offices have automated their admissions processes, and police have been able to find criminals through video footage in record time.

But the racial and gender biases of humans have seeped into the fabric of artificial intelligence.

The problem stems from the way that AI “learns.” To advance its capabilities, AI is fed a plethora of data, whether in the form of statistics, images, or sample problems. As AI processes the data, it becomes more and more accurate, learning to mirror the decisions it predicts people would make. In theory, this is a fool-proof method of training; in practice, it has serious flaws.
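To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and a made-up “group” attribute standing in for gender or ethnicity. It is not based on any system mentioned in this article; it only illustrates how a model trained to mirror past human decisions can inherit whatever bias those decisions contained.

```python
# Minimal sketch with synthetic data: a model trained to mirror biased
# historical decisions learns to reproduce the bias. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a qualification score and a binary group attribute
# (a stand-in for gender or ethnicity in the cases described here).
score = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical "human" decisions: driven mostly by the score, but with a
# penalty applied to group 1 -- the encoded prejudice in the training data.
admit = (score - 0.8 * group + rng.normal(0, 0.3, n)) > 0

# Train a model to mirror those past decisions, as described above.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, admit)

# The model matches the human decisions closely...
print("accuracy vs. human decisions:", round(model.score(X, admit), 3))

# ...yet for two applicants with an identical score, it admits the
# group-0 applicant far more readily than the group-1 applicant.
same_score = np.array([[0.0, 0], [0.0, 1]])
print("admit probability by group:", model.predict_proba(same_score)[:, 1])
```

Notice that nothing in the training code mentions bias at all: the prejudice lives entirely in the historical labels the model is asked to reproduce, which is exactly what makes it so hard to spot.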

When making admissions decisions, AI learns to mirror the choices that humans would make in the same situation. But humans, because of their own conscious or unconscious biases, make prejudiced decisions. As the Harvard Business Review details, in 1988 the UK Commission for Racial Equality found a British medical school guilty of discrimination: the computer program the school was using to choose which applicants would be invited for interviews was biased against women and against applicants with non-European names. The program had been developed to match human admissions decisions, and it did so with 90 to 95 percent accuracy.

AI does what it is taught to do. If we allow our implicit biases to be fed into artificial intelligence, it is no surprise that it will make decisions as flawed as our own. And it becomes even more dangerous when there is so much at stake.