Identifying Listener-informed Features for Modeling Time-varying Emotion Perception

Document record

Date

14 October 2019

Collection

Open archives

License

info:eu-repo/semantics/OpenAccess

Cite this document

Simin Yang et al., "Identifying Listener-informed Features for Modeling Time-varying Emotion Perception", HAL SHS (Sciences de l'Homme et de la Société), ID: 10670/1.ddfb7a...



Abstract (English)

Music emotion perception can be highly subjective and varies over time, making it challenging to find acoustic features that explain listeners' responses. In this paper, we examine why listeners produce different emotion annotations for a complex classical music piece, in order to better understand the factors that influence emotion perception in music performance. An initial study collected time-varying emotion ratings (valence and arousal) from listeners of a live performance of a classical trio; a follow-up study probed the reasons behind listeners' emotion ratings through re-evaluation of several preselected music segments, chosen from the initial study at various levels of inter-rater agreement. Thematic analysis of the time-stamped comments revealed that the musical features of loudness, tempo, and pitch contour were the main factors influencing emotion perception. The analysis also uncovered features such as instrument interaction, repetition, and expressive embellishments, which are rarely considered in computational music emotion recognition studies. Our findings lead to proposals for incorporating these features into existing models of emotion perception and into music information retrieval research. Better models of music emotion provide important information for music recommendation systems and for applications in music and music-supported therapy.
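To make the feature side concrete, the sketch below shows one plausible way to extract the three acoustic features listeners cited most (loudness, tempo, and pitch contour) from an audio recording. It is a minimal illustration, not code from the paper: it assumes the open-source librosa library, and the file name performance.wav is hypothetical.

```python
# Hedged sketch (not from the paper): extract the three acoustic features
# the abstract names as most salient, using librosa. "performance.wav" is
# a hypothetical placeholder for a recording of the performance.
import librosa
import numpy as np

y, sr = librosa.load("performance.wav", sr=None, mono=True)

# Loudness proxy: frame-wise RMS energy, converted to decibels.
rms = librosa.feature.rms(y=y)[0]
loudness_db = librosa.amplitude_to_db(rms, ref=np.max)

# Tempo: a single global estimate from librosa's beat tracker.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
tempo = float(np.atleast_1d(tempo)[0])  # scalar across librosa versions

# Pitch contour: fundamental frequency via probabilistic YIN (pYIN);
# unvoiced frames are returned as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

print(f"mean loudness: {loudness_db.mean():.1f} dB")
print(f"estimated tempo: {tempo:.1f} BPM")
print(f"voiced frames with a pitch estimate: {int(np.sum(~np.isnan(f0)))}")
```

Frame-level series like the loudness and pitch contours above could then be aligned in time with the valence-arousal ratings the studies collected; how to fold in the less-studied features the listeners raised (instrument interaction, repetition, embellishments) is exactly the open question the paper discusses.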
