Download | View the final version: A multilingual evaluation of NER robustness to adversarial inputs (PDF, 634 KiB) |
---|
DOI | https://doi.org/10.18653/v1/2023.repl4nlp-1.4 |
---|
Author | Srinivasan, Akshay; Vajjala, Sowmya |
---|
Affiliation | National Research Council of Canada. Digital Technologies |
---|
Format | Text, Article |
---|
Conference | 8th Workshop on Representation Learning for NLP (RepL4NLP 2023), July 13, 2023, Toronto, Ontario |
---|
Abstract | Adversarial evaluations of language models typically focus on English alone. In this paper, we performed a multilingual evaluation of Named Entity Recognition (NER) in terms of its robustness to small perturbations in the input. Our results showed that the NER models we explored across three languages (English, German, and Hindi) are not very robust to such changes, as indicated by fluctuations in the overall F1 score as well as in a more fine-grained evaluation. With that knowledge, we further explored whether it is possible to improve existing NER models by using part of the generated adversarial data sets either as augmented training data to train a new NER model or as fine-tuning data to adapt an existing NER model. Our results showed that both approaches improve performance on the original as well as the adversarial test sets. While there is no significant difference between the two approaches for English, re-training is significantly better than fine-tuning for German and Hindi. |
---|
Publication date | 2023-07-13 |
---|
Publisher | Association for Computational Linguistics |
---|
Language | English |
---|
Peer reviewed | Yes |
---|
Record identifier | 6d202be0-1804-4124-90ee-7f46ddf69982 |
---|
Record created | 2023-07-17 |
---|
Record modified | 2023-11-03 |
---|