July 3, 2024

Trained AI models found to exhibit bias against people with disabilities, researchers say

Researchers from the Penn State College of Information Sciences and Technology (IST) have found that sentiment analysis tools powered by artificial intelligence (AI) services often contain biases against people with disabilities. Organizations increasingly use these tools to classify large volumes of text as positive, neutral, or negative for applications ranging from healthcare to policymaking. The researchers analyzed natural language processing (NLP) algorithms and models to identify biases against people with disabilities.

The study, led by Shomir Wilson, assistant professor in IST and director of the Human Language Technologies Lab, received the Best Short Paper Award at the 2023 Workshop on Trustworthy Natural Language Processing. The research examined whether disability bias stems from the nature of the discussions themselves or from associations learned by NLP models. Pranav Narayanan Venkit, a doctoral student in the College of IST and first author of the paper, emphasized the importance of this work: organizations that outsource their AI needs may unknowingly deploy biased models.

For the research, the team defined disability bias as treating a person with a disability less favorably than someone without a disability in similar circumstances, and explicit bias as the intentional association of stereotypes with a specific population. Many organizations rely on AI as a Service (AIaaS) to access easy-to-use NLP tools with minimal investment or risk. These tools include sentiment and toxicity analysis models, which let organizations categorize and score large volumes of textual data.
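To make that workflow concrete, here is a minimal sketch of what calling such a tool looks like, using the open-source Hugging Face transformers sentiment pipeline as a stand-in for a commercial AIaaS endpoint. The library, its default model, and the example sentences are illustrative; they are not the specific services evaluated in the study.

```python
# Minimal sketch: scoring text with an off-the-shelf sentiment pipeline.
# The transformers pipeline stands in for a commercial AIaaS endpoint;
# the default model it downloads is illustrative, not the one studied.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default sentiment model

texts = [
    "I met a friend for coffee this morning.",
    "I met a blind friend for coffee this morning.",
]

for text in texts:
    result = sentiment(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.97}
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```

In practice, an organization would send its own text through an endpoint like this and act on the returned labels, which is why systematic score differences between demographic groups matter.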

Sentiment analysis is a technique for extracting subjective information, such as thoughts and attitudes, from social media posts, product reviews, and surveys. Toxicity detection models are designed to flag inflammatory or offensive content that can derail civil conversation. The researchers carried out a two-stage study to investigate disability bias in these NLP tools. They first examined social media conversations on Twitter and Reddit related to people with disabilities to understand how bias surfaces in real-world social settings.

By analyzing posts and comments from a one-year period, the researchers identified disability bias and harm present in the conversations. Statements referring to people with disabilities received significantly more negative and toxic scores than statements in control categories. The bias, the authors found, originated mainly from the trained sentiment and toxicity analysis models rather than from the context of the conversations themselves.
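The comparison described above amounts to scoring two sets of statements with the same model and comparing group-level averages. The sketch below uses the open-source Detoxify model as a stand-in for the toxicity tools audited in the study; the example statements are invented for illustration and are not drawn from the study's data.

```python
# Sketch of a group-level comparison: score statements that mention people
# with disabilities and matched control statements, then compare mean toxicity.
# Detoxify is used here as a stand-in for the audited models; the sentences
# are illustrative placeholders, not data from the study.
from statistics import mean
from detoxify import Detoxify

model = Detoxify("original")

disability_statements = [
    "My neighbor is a wheelchair user and loves gardening.",
    "I have a friend who is deaf and plays chess every weekend.",
]
control_statements = [
    "My neighbor loves gardening.",
    "I have a friend who plays chess every weekend.",
]

disability_scores = model.predict(disability_statements)["toxicity"]
control_scores = model.predict(control_statements)["toxicity"]

print("mean toxicity (disability mentions):", round(mean(disability_scores), 3))
print("mean toxicity (control):            ", round(mean(control_scores), 3))
```

If the disability-mention group scores consistently higher even though the sentences are otherwise equivalent, the gap points to the model rather than to anything in the conversations.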

To demonstrate the presence of bias more directly, the researchers created the Bias Identification Test in Sentiment (BITS) corpus, which helps users identify explicit disability bias in AIaaS sentiment analysis and toxicity detection models. All of the public models they studied exhibited significant disability bias: the models tended to classify sentences as negative and toxic solely because disability-related terms were present, without regard for contextual meaning, indicating explicit bias against disability-associated terms.
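A test of this kind can be built from template sentences in which only one term changes, so any shift in the score must come from the inserted term rather than the context. The rough sketch below is in the spirit of that approach; the template, the term lists, and the default sentiment model are illustrative placeholders, not the actual BITS items or the models evaluated in the paper.

```python
# Rough sketch of a template-style perturbation test: the same neutral
# template is filled with different terms and scored, so any change in the
# sentiment label can only come from the inserted term, not the context.
# Template, term lists, and the default model are illustrative placeholders.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

template = "I am a {} person."
unmarked_terms = ["tall", "left-handed"]
disability_terms = ["blind", "deaf", "autistic"]

for term in unmarked_terms + disability_terms:
    sentence = template.format(term)
    result = sentiment(sentence)[0]
    print(f"{result['label']:>8}  {result['score']:.3f}  {sentence}")
```

A model that flips from positive to negative only when a disability-related term is inserted is exhibiting exactly the explicit, term-level bias the researchers describe.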

Identifying explicit bias in large-scale models can help both developers and users understand the social harm caused by training models on data dominated by a skewed viewpoint. The researchers argue that this work is an important step toward identifying and addressing disability bias in sentiment and toxicity analysis models, and that it raises awareness of the presence of bias in AIaaS. According to Venkit, nearly everyone experiences a disability at some point in life, making it crucial to address bias and promote inclusivity in AI systems.
