Adapting Reinforcement Learning For Trust: Effective Modeling in Dynamic Environments (Short Paper)

Ozgur Kafali, Pinar Yolum

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


In open multiagent systems, agents need to model their environments in order to identify trustworthy agents. Models of the environment should be accurate so that decisions about whom to interact with can be made soundly. Traditional trust models are based on modeling specific properties of agents, such as their expertise or reliability; building such models accurately requires many prior interactions. This paper proposes an approach based on keeping track of the outcomes of an agent's actions toward others, rather than modeling other agents' performances explicitly. Contrary to existing modeling approaches that require domain knowledge to build models, our proposed approach can be effectively realized in multiagent systems whenever the agent's actions are clearly identified. Comparisons with other modeling approaches in various environments reveal that our proposed approach can create more precise models in a short time and can adjust its behavior quickly when other agents' behaviors change.
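The core idea in the abstract, tracking outcomes of one's own actions rather than modeling other agents' properties, can be sketched as a simple reinforcement-learning value update with exploratory partner selection. The class and parameter names below are illustrative assumptions for this sketch, not the paper's actual algorithm:

```python
import random

class OutcomeTrustModel:
    """Sketch: learn whom to trust from the outcomes of interactions,
    with no explicit model of each agent's expertise or reliability."""

    def __init__(self, agents, alpha=0.3, epsilon=0.1):
        # Estimated value of interacting with each agent (assumed to start at 0)
        self.values = {a: 0.0 for a in agents}
        self.alpha = alpha      # learning rate: larger values adapt faster to behavior changes
        self.epsilon = epsilon  # exploration rate

    def choose_partner(self):
        # Epsilon-greedy: occasionally explore, otherwise pick the highest-valued agent
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def record_outcome(self, agent, reward):
        # Incremental update from the observed outcome (e.g. +1 success, -1 failure)
        self.values[agent] += self.alpha * (reward - self.values[agent])
```

Because the update weights recent outcomes more heavily, the estimate tracks an agent whose behavior changes, which matches the adaptivity claim in the abstract.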
Original language: English
Title of host publication: IEEE / WIC / ACM International Conference on Web Intelligence
Number of pages: 4
Publication status: Published - 2009