Researchers stated that hate speech detectors were biased against African-American Vernacular English, and that automated hiring systems were shown to be biased in favour of upholding the status quo.
Researchers at New York University (NYU) have identified how cultural stereotypes found their way into artificial intelligence (AI) models in the early years of their development.
The team’s findings help explain the factors that shape a search engine’s results page and other AI-powered tools, including translation systems, personal assistants, and resume-screening software.
In recent years, advances in applied language understanding technology have primarily been driven by the use of language representation models that are trained by exposing them to huge amounts of internet text.
These models not only learn about language during training; they also pick up ideas about how the world works from what people write. This makes for systems that perform well on typical AI benchmarks, but it also causes problems, the NYU team said in a study published in the Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Models acquire social biases that are reflected in the data. This can be dangerous when the models are used for decision making, especially when they’re asked to make a decision about some piece of text that describes people of colour, or any other social group that faces widespread stereotyping.
To measure social biases in AI-powered models, the team asked a group of writers to compose sentences expressing a stereotypical view of a specified social group, paired with incongruous ‘anti-stereotypical’ sentences that expressed the same view about a different social group.
Using these sentence pairs, they created a metric to measure bias in three widely used language representation models, and applied it to show that each of the three masked language models (MLMs) rated the stereotyped sentences as more typical than the anti-stereotyped ones, demonstrating their knowledge and use of the stereotypes.
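The pairwise comparison described above can be sketched in a few lines. This is a hedged illustration, not the researchers' actual code: the function and variable names (`bias_score`, `pll`, `fake_pll`) are assumptions, and the scorer here is a toy lookup table standing in for a real masked language model's pseudo-log-likelihood.

```python
from typing import Callable, List, Tuple

def bias_score(pairs: List[Tuple[str, str]],
               pll: Callable[[str], float]) -> float:
    """Percentage of pairs in which the model assigns a higher
    (pseudo-)likelihood to the stereotyped sentence.
    A value near 50 means no systematic preference; higher values
    indicate the model favours the stereotyped phrasing."""
    wins = sum(1 for stereo, anti in pairs if pll(stereo) > pll(anti))
    return 100.0 * wins / len(pairs)

# Toy stand-in scorer: a real implementation would compute each
# sentence's pseudo-log-likelihood under an MLM such as BERT.
fake_pll = {
    "stereotyped sentence A": -10.0, "anti-stereotyped sentence A": -12.0,
    "stereotyped sentence B": -11.0, "anti-stereotyped sentence B": -9.0,
}.get

pairs = [
    ("stereotyped sentence A", "anti-stereotyped sentence A"),
    ("stereotyped sentence B", "anti-stereotyped sentence B"),
]

print(bias_score(pairs, fake_pll))  # one of two pairs preferred -> 50.0
```

With a real MLM scorer substituted for `fake_pll`, a result well above 50 would indicate the kind of stereotype preference the study reports.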
The state-of-the-art model among the three, the one that does best on typical applied benchmarks, also demonstrated the most extensive use of stereotypes.