Multimodal Affect Models: An Investigation of Relative Salience of Audio and Visual Cues for Emotion Prediction

Wu, Jingyao and Dang, Ting and Sethu, Vidhyasaharan and Ambikairajah, Eliathamby (2021) Multimodal Affect Models: An Investigation of Relative Salience of Audio and Visual Cues for Emotion Prediction. Frontiers in Computer Science, 3. ISSN 2624-9898

Text: fcomp-03-767767.pdf - Published Version
Download (1MB)

Abstract

People perceive emotions via multiple cues, predominantly speech and visual cues, and a number of emotion recognition systems utilize both audio and visual cues. Moreover, static aspects of emotion (e.g., the speaker's arousal level is high or low) and dynamic aspects of emotion (e.g., the speaker is becoming more aroused) might be perceived via different expressive cues, and these two aspects are integrated to provide a unified sense of emotional state. However, existing multimodal systems focus on only a single aspect of emotion perception, and the contributions of different modalities toward modeling static and dynamic emotion aspects are not well explored. In this paper, we investigate the relative salience of the audio and video modalities for emotion state prediction and emotion change prediction using a Multimodal Markovian affect model. Experiments conducted on the RECOLA database show that the audio modality is better at modeling the emotion state for arousal and the video modality for valence, whereas audio shows a clear advantage over video in modeling emotion changes for both arousal and valence.
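Since only the abstract is available on this page, the following is a minimal illustrative sketch of what a Markovian affect model fusing audio and video cues could look like: a quantized emotion state (e.g., low/medium/high arousal) evolves under a Markov transition matrix (emotion change), and per-frame audio and video likelihoods are fused with a relative-salience weight. All names, the state count, the likelihood inputs, and the weight alpha are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a discrete Markovian affect model with late fusion
# of audio and video cues. Not the paper's implementation.
import numpy as np

def fused_forward_filter(audio_lik, video_lik, transition, prior, alpha=0.6):
    """Filter a quantized emotion state (e.g., low/medium/high arousal).

    audio_lik, video_lik : (T, K) per-frame likelihoods p(cue_t | state = k)
                           from unimodal emotion models (assumed given).
    transition           : (K, K) Markov transition matrix modeling emotion
                           change (row i is the next-state distribution).
    prior                : (K,) initial state distribution.
    alpha                : hypothetical relative salience of audio vs. video.
    Returns the (T, K) filtered posterior over emotion states.
    """
    T, K = audio_lik.shape
    posterior = np.zeros((T, K))
    belief = prior.copy()
    for t in range(T):
        # Late fusion: geometric weighting of the two modalities by salience.
        fused = (audio_lik[t] ** alpha) * (video_lik[t] ** (1.0 - alpha))
        belief = belief * fused
        belief /= belief.sum()          # normalize to a probability distribution
        posterior[t] = belief
        belief = transition.T @ belief  # propagate emotion-change dynamics
    return posterior

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, T = 3, 10                                  # 3 quantized arousal levels
    transition = np.array([[0.80, 0.15, 0.05],
                           [0.10, 0.80, 0.10],
                           [0.05, 0.15, 0.80]])
    prior = np.full(K, 1.0 / K)
    audio_lik = rng.random((T, K))                # stand-in unimodal likelihoods
    video_lik = rng.random((T, K))
    print(fused_forward_filter(audio_lik, video_lik, transition, prior))
```

In this sketch, the transition matrix captures the dynamic aspect (emotion change) while the fused per-frame likelihoods capture the static aspect (emotion state); varying alpha is one simple way to probe the relative salience of the two modalities.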

Item Type: Article
Subjects: Eurolib Press > Computer Science
Depositing User: Managing Editor
Date Deposited: 02 Jan 2023 11:30
Last Modified: 02 Jan 2024 12:54
URI: http://info.submit4journal.com/id/eprint/695
