TY - JOUR
T1 - Can AI Enhance People’s Support for Online Moderation and Their Openness to Dissimilar Political Views?
AU - Wojcieszak, Magdalena
AU - Thakur, Arti
AU - Gonçalves, João Fernando Ferreira
AU - Casas, Andreu
AU - Menchen-Trevino, Ericka
AU - Boon, Miriam
PY - 2021/7
Y1 - 2021/7
N2 - Although artificial intelligence is blamed for many societal challenges, it also has underexplored potential in political contexts online. We rely on six preregistered experiments in three countries (N = 6,728) to test the expectation that AI and AI-assisted humans would be perceived more favorably than humans (a) across various content moderation, generation, and recommendation scenarios and (b) when exposing individuals to counter-attitudinal political information. Contrary to the preregistered hypotheses, participants see human agents as more just than AI across the scenarios tested, with the exception of news recommendations. At the same time, participants are not more open to counter-attitudinal information attributed to AI rather than a human or an AI-assisted human. These findings, which—with minor variations—emerged across countries, scenarios, and issues, suggest that human intervention is preferred online and that people reject dissimilar information regardless of its source. We discuss the theoretical and practical implications of these findings.
KW - artificial intelligence
KW - ai
KW - content moderation
KW - polarization
U2 - 10.1093/jcmc/zmab006
DO - 10.1093/jcmc/zmab006
M3 - Article
SN - 1083-6101
VL - 26
SP - 223
EP - 243
JO - Journal of Computer-Mediated Communication
JF - Journal of Computer-Mediated Communication
IS - 4
ER -