Post-edited quality, post-editing behaviour and human evaluation: a case study

Document record

Date

2014

Collection

Archives ouvertes


Keywords (Fr)

post-édition

Related subjects (En)

Assessment

Cite this document

Ilse Depraetere et al., « Post-edited quality, post-editing behaviour and human evaluation: a case study », HAL-SHS : linguistique, ID : 10670/1.hqjzxp


Abstract

In this chapter, we address the correlation between post-editing similarity and the human evaluation of machine translation. We investigated whether a high similarity score corresponded to a high quality score, and vice versa, in the sample compiled for the purposes of the case study. A group of translation trainees post-edited the sample, and a number of these informants also rated the MT output for quality on a five-point scale. We calculated Pearson's correlation coefficient as well as the relative standard deviation per informant for each activity, with a view to determining which of the two evaluation methods appeared to be the more reliable measurement given the project settings. Our sample also enabled us to test whether MT enhances the productivity of translation trainees, and whether the quality of post-edited sentences differs from that of sentences translated 'from scratch'.

