An audiovisual talking head for augmented speech generation: models and animations based on a real speaker's articulatory data

Document record

Date

July 2008

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1007/978-3-540-70517-8_14

Collection

Archives ouvertes




Cite this document

Pierre Badin et al., « An audiovisual talking head for augmented speech generation: models and animations based on a real speaker's articulatory data », HALSHS : archive ouverte en Sciences de l’Homme et de la Société, ID : 10.1007/978-3-540-70517-8_14



Abstract (En)

We present a methodology developed to derive three-dimensional models of speech articulators from volume MRI and multiple-view video images acquired from one speaker. Linear component analysis is used to model these highly deformable articulators as the weighted sum of a small number of basic shapes corresponding to the articulators' degrees of freedom for speech. These models are assembled into an audiovisual talking head that can produce augmented audiovisual speech, i.e. it can display usually invisible articulators such as the tongue or velum. The talking head is then animated by recovering its control parameters by inversion from the coordinates of a small number of points on the articulators of the same speaker, tracked by Electro-Magnetic Articulography. The augmented speech produced points to promising applications in speech therapy for speech-delayed children, perception and production rehabilitation for hearing-impaired children, and pronunciation training for second-language learners.
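The two core ideas of the abstract — an articulator modeled as a weighted sum of a few basic shapes, and animation by inverting that model from a handful of tracked points — can be sketched as a linear least-squares problem. The following is a minimal illustration, not the authors' implementation: the mesh size, the number of degrees of freedom, the random basis, and the tracked-point indices are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a tongue mesh of 200 3D vertices controlled
# by 6 degrees of freedom (basic shapes from linear component analysis).
n_vertices, n_dof = 200, 6
mean_shape = rng.normal(size=(n_vertices, 3))       # average articulator shape
basis = rng.normal(size=(n_dof, n_vertices, 3))     # one basic shape per DoF

def synthesize(params):
    """Articulator shape = mean shape + weighted sum of basic shapes."""
    return mean_shape + np.tensordot(params, basis, axes=1)

# Inversion: recover the control parameters from the 3D coordinates of a
# small number of tracked points (e.g. EMA coils) by linear least squares.
tracked = [10, 50, 120]  # indices of the tracked vertices (assumed)

def invert(coil_coords):
    A = basis[:, tracked, :].reshape(n_dof, -1).T    # (3 * n_coils, n_dof)
    b = (coil_coords - mean_shape[tracked]).ravel()  # (3 * n_coils,)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# Round trip: 3 coils give 9 linear equations for 6 unknowns.
true_params = rng.normal(size=n_dof)
recovered = invert(synthesize(true_params)[tracked])
```

Because the model is linear in its control parameters, a few coils suffice as long as they provide at least as many scalar equations as there are degrees of freedom and the corresponding rows of the basis are independent.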

