Title | Challenges in applying explainability methods to improve the fairness of NLP models (PDF, 275 KiB)
Authors | Balkir, Esma; Kiritchenko, Svetlana; Nejadgholi, Isar; Fraser, Kathleen
Affiliation | National Research Council of Canada, Digital Technologies
Format | Text, Article
Conference | 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), July 14, 2022, Seattle, U.S.A.
Abstract | Motivations for methods in explainable artificial intelligence (XAI) often include detecting, quantifying and mitigating bias, and contributing to making machine learning models fairer. However, exactly how an XAI method can help in combating biases is often left unspecified. In this paper, we briefly review trends in explainability and fairness in NLP research, identify the current practices in which explainability methods are applied to detect and mitigate bias, and investigate the barriers preventing XAI methods from being used more widely in tackling fairness issues.
Publication date | 2022-07-14
Publisher | Association for Computational Linguistics
Language | English
Peer reviewed | Yes
Record identifier | 46dba455-4f74-4521-a227-5c02dd74108a
Record created | 2022-09-09
Record modified | 2022-09-14