
Just how biased are computer algorithms?

May 24, 2016, 2:55 PM EDT
A social network graph. (Source: justgrimes/flickr)

To the layperson, computer algorithms are enigmatic: sets of instructions that let a computer solve a problem (for example, suggesting movies, mapping a route, or making predictions). One might assume such programs are free of human bias. Over the last few months, reports have indicated the contrary.

A ProPublica report published this week says that risk assessment algorithms, which rate a defendant's risk of committing future crimes and are used in courts to inform judges during criminal sentencing, are biased against black people. The report notes that the Justice Department's National Institute of Corrections now "encourages the use of such combined assessments at every stage of the criminal justice process. And a landmark sentencing reform bill currently pending in Congress would mandate the use of such assessments in federal prisons."

Risk scores have long been suspected of reflecting racial bias, but efforts to measure exactly how biased, and how racist, they are have stagnated. ProPublica analyzed the risk scores of more than 7,000 people and found that the algorithm used to predict future crime made mistakes in opposite directions for the two groups: it falsely flagged black defendants as future criminals while mislabeling white defendants as low-risk.
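To make the kind of mistake ProPublica measured concrete, here is a minimal sketch in Python, using entirely hypothetical records rather than ProPublica's data, of how false positive rates (non-reoffenders wrongly flagged as high-risk) and false negative rates (reoffenders labeled low-risk) could be compared across groups:

    # Hypothetical records: (group, flagged_high_risk, actually_reoffended).
    # A real analysis like ProPublica's would use thousands of court records.
    records = [
        ("black", True, False), ("black", True, True), ("black", False, False),
        ("white", False, True), ("white", True, True), ("white", False, False),
    ]

    def false_positive_rate(records, group):
        # Share of people in `group` who did NOT reoffend but were flagged high-risk.
        non_reoffenders = [r for r in records if r[0] == group and not r[2]]
        return sum(r[1] for r in non_reoffenders) / len(non_reoffenders) if non_reoffenders else 0.0

    def false_negative_rate(records, group):
        # Share of people in `group` who DID reoffend but were labeled low-risk.
        reoffenders = [r for r in records if r[0] == group and r[2]]
        return sum(not r[1] for r in reoffenders) / len(reoffenders) if reoffenders else 0.0

    for group in ("black", "white"):
        print(group,
              "false positive rate:", false_positive_rate(records, group),
              "false negative rate:", false_negative_rate(records, group))

A skew like the one ProPublica describes would show up in this kind of tally as a higher false positive rate for black defendants and a higher false negative rate for white defendants.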

This is not the first report of widely used algorithms producing racist results.

A late April report from Bloomberg found that an algorithm employed by Amazon excluded minority neighborhoods in Boston, Atlanta, Chicago, Dallas, New York City, and Washington, D.C. from its Prime Free Same-Day Delivery service while extending the service to white neighborhoods. Amazon argued that the algorithm's data indicated the company wouldn't turn a profit offering same-day delivery in those neighborhoods, so it excluded them. After the subsequent uproar, lawmakers called on Amazon to change its system to include those neighborhoods; the company has complied in New York, Chicago, and Boston.

Some algorithms in Google services such as Google Photos have come under fire in recent months for labeling people of certain races as animals rather than recognizing them as people, as Google says the service is designed to do.

In a sexist turn, an NPR report from March describes studies finding that online advertising algorithms are more likely to show women ads for lower-paying jobs than men.

At the core of this issue is the fact that, even though algorithms should be as unbiased as mathematical equations, they are still created and implemented by humans, and humans are rarely unbiased. That realization should act as a red flag as the world looks ahead to the next generation of artificial intelligence.
