Leveraging Bias in Pre-Trained Word Embeddings for Unsupervised Microaggression Detection

Document record

Date

20 October 2022

Collection

OpenEdition Books

Organization

OpenEdition

Licenses

https://creativecommons.org/licenses/by-nc-nd/4.0/, info:eu-repo/semantics/openAccess




Cite this document

Ògúnrẹ̀mí Tolúlọpẹ et al., « Leveraging Bias in Pre-Trained Word Embeddings for Unsupervised Microaggression Detection », Accademia University Press, ID : 10.4000/books.aaccademia.10749



Abstract

Microaggressions are subtle manifestations of bias (Breitfeller et al. 2019). These demonstrations of bias can often be classified as a subset of abusive language. However, comparatively little attention has been devoted to recognizing such instances. As a result, only limited data is available on the topic, and only in English. Being able to detect microaggressions without the need for labeled data would be advantageous, since it would enable content moderation also for languages lacking annotated data. In this study, we introduce an unsupervised method to detect microaggressions in natural language expressions. The algorithm relies on pre-trained word embeddings, leveraging the bias encoded in the model to detect microaggressions in unseen textual instances. We test the method on a dataset of racial and gender-based microaggressions, reporting promising results. We further run the algorithm on out-of-domain unseen data with the purpose of bootstrapping corpora of microaggressions "in the wild", and discuss the benefits and drawbacks of our proposed method.
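The abstract does not spell out the algorithm, but the core idea of "leveraging the bias encoded in pre-trained embeddings" can be illustrated with a minimal, hypothetical sketch: score each content word of a sentence by how much more strongly it associates (in cosine similarity) with one protected-group word set than another, in the spirit of WEAT-style association tests. The toy vectors, word sets, threshold, and function names below are all illustrative assumptions, not the authors' actual method; in practice one would load real pre-trained vectors (e.g. word2vec or fastText).

```python
import numpy as np

# Toy word vectors standing in for a pre-trained embedding model.
# These values are fabricated for illustration only.
EMB = {
    "doctor": np.array([0.9, 0.1, 0.0]),
    "nurse":  np.array([0.1, 0.9, 0.0]),
    "he":     np.array([1.0, 0.0, 0.0]),
    "she":    np.array([0.0, 1.0, 0.0]),
    "the":    np.array([0.0, 0.0, 1.0]),
    "is":     np.array([0.0, 0.0, 1.0]),
    "a":      np.array([0.0, 0.0, 1.0]),
}

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_score(sentence, group_a=("he",), group_b=("she",)):
    """Mean difference in association between each in-vocabulary content
    word and the two protected-group word sets (WEAT-style)."""
    diffs = []
    for w in sentence.lower().split():
        if w not in EMB or w in group_a or w in group_b:
            continue
        sim_a = np.mean([cos(EMB[w], EMB[g]) for g in group_a])
        sim_b = np.mean([cos(EMB[w], EMB[g]) for g in group_b])
        diffs.append(sim_a - sim_b)
    return float(np.mean(diffs)) if diffs else 0.0

def flag_microaggression(sentence, threshold=0.3):
    """Flag a sentence whose words skew strongly toward one group.
    The threshold is an arbitrary illustrative choice."""
    return abs(bias_score(sentence)) > threshold
```

For example, `bias_score("nurse")` is strongly negative under these toy vectors (the word associates with "she" far more than "he"), so `flag_microaggression("nurse")` fires, while a sentence of neutral function words does not. An unsupervised detector along these lines needs no labeled microaggression data, only the embedding model itself.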

