Abstract
A bounded-reasoning agent may face two dimensions of uncertainty: first, the uncertainty arising from partial information and conflicting reasons; and second, the uncertainty arising from the stochastic nature of its actions and the environment. This paper attempts to address both dimensions within a single unified framework by bringing together probabilistic argumentation and reinforcement learning. We show how a probabilistic rule-based argumentation framework can capture Markov decision processes and reinforcement learning agents, and how the framework allows us to characterise agents and their argument-based motivations from both a logic-based and a probabilistic perspective. We advocate and illustrate the use of our approach to capture models of agency and norms, and argue that, in addition to providing a novel method for investigating agent types, the unified framework offers a sound basis for taking a mentalistic approach to agent profiles.
| Original language | English |
| --- | --- |
| Pages (from-to) | 216-274 |
| Number of pages | 59 |
| Journal | Autonomous Agents and Multi-Agent Systems |
| Volume | 33 |
| Issue number | 1-2 |
| Early online date | 6 Mar 2019 |
| DOIs | |
| Publication status | Published - Mar 2019 |