Trust or Mistrust in Algorithmic Grading? An Embedded Agency Perspective

Stephen Jackson, Niki Panteli

Research output: Contribution to journal › Article › peer-review


Abstract

Artificial Intelligence (AI) has the potential to significantly impact the educational sector. One AI application that has been increasingly adopted is algorithmic grading. It is within this context that our study focuses on trust. While the concept of trust continues to grow in importance among AI researchers and practitioners, trust/mistrust in algorithmic grading across multiple levels of analysis has so far been under-researched. In this paper, we argue for a model that encompasses the multi-layered nature of trust/mistrust in AI. Drawing on an embedded agency perspective, we devise a model that examines the top-down and bottom-up forces that can influence trust/mistrust in algorithmic grading. We illustrate how the model can be applied through the case of the International Baccalaureate (IB) program in 2020, in which an algorithm was used to determine student grades. This paper contributes to the AI-trust literature by providing a fresh theoretical lens, grounded in institutional theory, for investigating the dynamic and multi-faceted nature of trust/mistrust in algorithmic grading, an area that has seldom been explored either theoretically or empirically. The study raises important implications for algorithmic design and awareness. Algorithms need to be designed in a transparent, fair, and ultimately trustworthy manner. Although an algorithm typically operates as a black box, whose underlying mechanisms are not apparent to those it affects, its purpose and an understanding of how it works should be communicated upfront and in a timely manner.
Original language: English
Article number: 102555
Journal: International Journal of Information Management
Volume: 69
Early online date: 13 Aug 2022
DOIs
Publication status: Published - Apr 2023
