| Field | Value |
| --- | --- |
| Download | View the final version: Extended study on using pretrained language models and YiSi-1 for machine translation evaluation (PDF, 4.8 MiB) |
| Link | https://aclanthology.org/2020.wmt-1.99/ |
| Author | Lo, Chi-Kiu |
| Affiliation | National Research Council of Canada. Digital Technologies |
| Format | Text, Article |
| Conference | Fifth Conference on Machine Translation, November 19–20, 2020, Online |
| Abstract | We present an extended study on using pretrained language models and YiSi-1 for machine translation evaluation. Although the recently proposed contextual embedding based metric, YiSi-1, significantly outperforms BLEU and other metrics in correlating with human judgment on translation quality, we have yet to understand the full strength of using pretrained language models for machine translation evaluation. In this paper, we study YiSi-1’s correlation with human translation quality judgment by varying three major attributes (which architecture; which intermediate layer; whether it is monolingual or multilingual) of the pretrained language models. Results of the study show further improvements over YiSi-1 on the WMT 2019 Metrics shared task. We also describe the pretrained language model we trained for evaluating Inuktitut machine translation output. |
| Publication date | 2020-11-19 |
| Publisher | Association for Computational Linguistics |
| Language | English |
| Peer-reviewed publication | Yes |
| Record identifier | cd8d16f5-2b67-41aa-955f-84b4b6dc4e31 |
| Record created | 2022-05-16 |
| Record modified | 2023-06-22 |
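The abstract describes YiSi-1 as a contextual-embedding-based metric whose score depends on which pretrained model supplies the embeddings. As a rough illustration only (not the paper's exact formulation, and using random toy vectors in place of real pretrained-LM embeddings), a metric of this family can be sketched as an F-measure over greedy cosine-similarity matches between hypothesis and reference token embeddings; the `alpha` weighting parameter here is an assumption for the sketch.

```python
import numpy as np

def embedding_fscore(hyp_emb, ref_emb, alpha=0.5):
    """Toy YiSi-1-style score: greedily match each token's embedding to its
    most similar counterpart in the other sentence, average the cosine
    similarities to get precision/recall, and combine them with a weighted
    harmonic mean. hyp_emb, ref_emb: (n_tokens, dim) embedding matrices."""
    # Normalize rows so dot products become cosine similarities.
    h = hyp_emb / np.linalg.norm(hyp_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = h @ r.T  # (n_hyp, n_ref) cosine similarity matrix
    precision = sim.max(axis=1).mean()  # best reference match per hyp token
    recall = sim.max(axis=0).mean()     # best hypothesis match per ref token
    # Weighted harmonic mean; alpha=0.5 gives the usual balanced F1.
    return precision * recall / (alpha * recall + (1 - alpha) * precision)

# Identical "sentences" should score (numerically) 1.0.
rng = np.random.default_rng(0)
ref = rng.standard_normal((5, 8))
print(round(embedding_fscore(ref.copy(), ref), 6))
```

Swapping in embeddings from different architectures or intermediate layers, as the study above does, changes only the `hyp_emb`/`ref_emb` inputs; the scoring scheme stays the same.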