AI programs exhibit racial and gender biases, research reveals

According to The Guardian, “Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use.” The question is: will robots display human prejudices?

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.
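The bias the article describes shows up in word embeddings, the vector representations of words that these language systems learn from text. Here is a minimal sketch using made-up toy vectors (real embeddings are learned from huge corpora and have hundreds of dimensions): if the training text associates an occupation more with one gender, the learned vectors reproduce that skew, which can be measured with cosine similarity.

```python
# Toy illustration of association bias in word embeddings.
# The vectors below are invented for demonstration only; real systems
# learn them from large text corpora (e.g. word2vec, GloVe).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings.
embeddings = {
    "man":        [0.9, 0.1, 0.3],
    "woman":      [0.1, 0.9, 0.3],
    "programmer": [0.8, 0.2, 0.5],  # skewed toward "man" in this toy data
}

sim_man = cosine(embeddings["programmer"], embeddings["man"])
sim_woman = cosine(embeddings["programmer"], embeddings["woman"])
print(f"programmer~man:   {sim_man:.2f}")
print(f"programmer~woman: {sim_woman:.2f}")
```

Association tests of this kind (the research behind the article used a formalised version, the Word Embedding Association Test) quantify how much closer an ostensibly neutral word sits to one social group than another in the learned vector space.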

View article →