Human Annotation of ASR Error Regions: is "gravity" a Sharable Concept for Human Annotators?

Document record

Date: 26 May 2014

Collection: Archives ouvertes



Related subjects (En): Mistakes

Cite this document

Daniel Luzzati et al., « Human Annotation of ASR Error Regions: is "gravity" a Sharable Concept for Human Annotators? », HAL-SHS : linguistique, ID : 10670/1.guj874



Abstract (En)

This paper is concerned with human assessments of the severity of errors in ASR outputs. We deliberately provided no annotation guidelines, so that each annotator involved in the study could judge the "seriousness" of an ASR error according to their own scientific background. Eight human annotators carried out an annotation task on three distinct corpora, one of which was annotated twice without the annotators being told of the duplication. None of the computed results (inter-annotator agreement, edit distance, majority annotation) shows a strong correlation between the criteria considered and the level of seriousness, which underlines how difficult it is for a human to determine whether an ASR error is serious or not.
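The abstract mentions inter-annotator agreement among eight annotators. As a minimal illustrative sketch (not the authors' code; the three-point severity scale and the example counts are hypothetical), Fleiss' kappa is one standard way to measure agreement when more than two annotators rate each item:

```python
# Minimal sketch (not the authors' code): Fleiss' kappa for
# agreement among several annotators rating ASR error severity.
# The severity scale and the example data below are hypothetical.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of annotators who assigned category j
    to item i; every row must sum to the same number of annotators."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    assert np.all(counts.sum(axis=1) == n_raters)

    # Observed agreement P_i for each item, then its mean.
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement P_e from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 5 ASR error regions, 8 annotators,
# 3 severity categories (minor / moderate / serious).
ratings = np.array([
    [6, 2, 0],
    [1, 5, 2],
    [0, 3, 5],
    [4, 4, 0],
    [2, 2, 4],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```

A kappa near zero indicates chance-level agreement; a low value on severity labels would be consistent with the paper's conclusion that "seriousness" is not a concept annotators share.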
