A probabilistic argumentation framework for reinforcement learning agents. / Riveret, Régis; Gao, Yang; Governatori, Guido; Rotolo, Antonino; Pitt, Jeremy; Sartor, Giovanni.
In: Autonomous Agents and Multi-Agent Systems, Vol. 33, No. 1-2, 03.2019, p. 216-274.
Research output: Contribution to journal › Article › peer-review
TY - JOUR
T1 - A probabilistic argumentation framework for reinforcement learning agents
AU - Riveret, Régis
AU - Gao, Yang
AU - Governatori, Guido
AU - Rotolo, Antonino
AU - Pitt, Jeremy
AU - Sartor, Giovanni
PY - 2019/3
Y1 - 2019/3
N2 - A bounded-reasoning agent may face two dimensions of uncertainty: firstly, the uncertainty arising from partial information and conflicting reasons, and secondly, the uncertainty arising from the stochastic nature of its actions and the environment. This paper attempts to address both dimensions within a single unified framework, by bringing together probabilistic argumentation and reinforcement learning. We show how a probabilistic rule-based argumentation framework can capture Markov decision processes and reinforcement learning agents; and how the framework allows us to characterise agents and their argument-based motivations from both a logic-based perspective and a probabilistic perspective. We advocate and illustrate the use of our approach to capture models of agency and norms, and argue that, in addition to providing a novel method for investigating agent types, the unified framework offers a sound basis for taking a mentalistic approach to agent profiles.
AB - A bounded-reasoning agent may face two dimensions of uncertainty: firstly, the uncertainty arising from partial information and conflicting reasons, and secondly, the uncertainty arising from the stochastic nature of its actions and the environment. This paper attempts to address both dimensions within a single unified framework, by bringing together probabilistic argumentation and reinforcement learning. We show how a probabilistic rule-based argumentation framework can capture Markov decision processes and reinforcement learning agents; and how the framework allows us to characterise agents and their argument-based motivations from both a logic-based perspective and a probabilistic perspective. We advocate and illustrate the use of our approach to capture models of agency and norms, and argue that, in addition to providing a novel method for investigating agent types, the unified framework offers a sound basis for taking a mentalistic approach to agent profiles.
U2 - 10.1007/s10458-019-09404-2
DO - 10.1007/s10458-019-09404-2
M3 - Article
VL - 33
SP - 216
EP - 274
JO - Autonomous Agents and Multi-Agent Systems
JF - Autonomous Agents and Multi-Agent Systems
SN - 1387-2532
IS - 1-2
ER -