Evaluating Language Model Bias with 🤗 Evaluate

While the size and capabilities of large language models have drastically increased over the past couple of years, so too has the concern around biases imprinted into these models and their training data. In fact, many popular language models have been found to be biased against specific religions and genders, which can result in the promotion of discriminatory ideas and the perpetuation of harms against marginalized groups.

To help the community explore these kinds of biases and strengthen our understanding of the social issues that language models encode, we have been working on adding bias metrics and measurements to the 🤗 Evaluate library.
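As a taste of what these measurements look like in practice, here is a minimal sketch using the library's toxicity measurement to score a few example completions. The example texts and variable names are illustrative; running it requires the `evaluate` package along with `transformers` and `torch`, and it downloads a classifier model on first use:

```python
import evaluate

# Load the toxicity measurement. By default it scores each input text
# with a RoBERTa-based hate speech classifier, downloaded on first use.
toxicity = evaluate.load("toxicity", module_type="measurement")

# Hypothetical completions, e.g. sampled from a language model you
# want to audit for biased or toxic outputs.
completions = [
    "Everyone deserves equal respect and opportunity.",
    "People from that group can't be trusted.",
]

results = toxicity.compute(predictions=completions)
print(results["toxicity"])  # one score per completion; higher means more toxic
```

The measurement also supports aggregate views: for example, passing `aggregation="maximum"` to `compute` returns the highest toxicity score across all inputs, which is convenient for flagging the worst-case completion in a batch.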