Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information. / Lavan, Nadine; McGettigan, Carolyn.

In: The Quarterly Journal of Experimental Psychology, Vol. 70, No. 10, 01.10.2017, p. 2159-2168.

Research output: Contribution to journal › Article

Published

Author

Lavan, Nadine ; McGettigan, Carolyn. / Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information. In: The Quarterly Journal of Experimental Psychology. 2017 ; Vol. 70, No. 10. pp. 2159-2168.

BibTeX

@article{1c221e253bb448cfa9cc806f3a1ad0e5,
title = "Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information",
abstract = "We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, video-only) and multimodal (audiovisual) contexts. In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signaling through voices and faces, in the context of spontaneous and volitional behavior, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.",
author = "Nadine Lavan and Carolyn McGettigan",
year = "2017",
month = "10",
day = "1",
doi = "10.1080/17470218.2016.1226370",
language = "English",
volume = "70",
pages = "2159--2168",
journal = "The Quarterly Journal of Experimental Psychology",
issn = "1747-0218",
publisher = "Taylor and Francis Ltd.",
number = "10",

}

RIS

TY - JOUR
T1 - Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information
AU - Lavan, Nadine
AU - McGettigan, Carolyn
PY - 2017/10/1
Y1 - 2017/10/1
N2 - We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, video-only) and multimodal (audiovisual) contexts. In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signaling through voices and faces, in the context of spontaneous and volitional behavior, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
AB - We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, video-only) and multimodal (audiovisual) contexts. In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signaling through voices and faces, in the context of spontaneous and volitional behavior, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
U2 - 10.1080/17470218.2016.1226370
DO - 10.1080/17470218.2016.1226370
M3 - Article
VL - 70
SP - 2159
EP - 2168
JO - The Quarterly Journal of Experimental Psychology
JF - The Quarterly Journal of Experimental Psychology
SN - 1747-0218
IS - 10
ER -