Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information

Nadine Lavan, Carolyn McGettigan

Research output: Contribution to journal › Article › peer-review


Abstract

We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, video-only) and multimodal (audiovisual) contexts. In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signaling through voices and faces, in the context of spontaneous and volitional behavior, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
Original language: English
Pages (from-to): 2159-2168
Number of pages: 10
Journal: The Quarterly Journal of Experimental Psychology
Volume: 70
Issue number: 10
DOIs
Publication status: Published - 1 Oct 2017
