Potential based reward shaping for hierarchical reinforcement learning

Yang Gao, Francesca Toni

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Hierarchical Reinforcement Learning (HRL) outperforms many ‘flat’ Reinforcement Learning (RL) algorithms in some application domains. However, HRL may take longer to obtain the optimal policy because of its large action space. Potential Based Reward Shaping (PBRS) has been widely used to incorporate heuristics into flat RL algorithms so as to reduce their exploration. In this paper, we investigate the integration of PBRS and HRL, and propose a new algorithm: PBRS-MAXQ-0. We prove that under certain conditions, PBRS-MAXQ-0 is guaranteed to converge. Empirical results show that PBRS-MAXQ-0 significantly outperforms MAXQ-0 given good heuristics, and can converge even when given misleading heuristics.
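As background for the abstract, the sketch below illustrates classic potential-based reward shaping (Ng et al., 1999) on top of flat tabular Q-learning, the setting the paper contrasts with its hierarchical PBRS-MAXQ-0 variant. It is not the paper's algorithm: the environment interface (reset/actions/step), the potential function phi, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of potential-based reward shaping (PBRS) on flat tabular
# Q-learning. Not the paper's PBRS-MAXQ-0 -- only the classic shaping term
# F(s, s') = gamma * phi(s') - phi(s), which preserves the optimal policy.
# The env interface (reset/actions/step) and phi are assumed for illustration.
from collections import defaultdict
import random

def q_learning_with_pbrs(env, phi, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning where each environment reward is augmented
    with the PBRS shaping term gamma * phi(s') - phi(s)."""
    Q = defaultdict(float)  # maps (state, action) -> estimated value

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(env.actions(s))
            else:
                a = max(env.actions(s), key=lambda act: Q[(s, act)])

            s_next, r, done = env.step(a)

            # potential-based shaping: encodes the heuristic phi without
            # changing which policy is optimal
            shaped_r = r + gamma * phi(s_next) - phi(s)

            target = shaped_r
            if not done:
                target += gamma * max(Q[(s_next, a2)] for a2 in env.actions(s_next))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```

A well-chosen phi (e.g., an admissible estimate of remaining return) steers exploration toward promising states; the abstract's convergence claim is that the MAXQ-0 analogue of this scheme remains sound even when phi is misleading.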
Original language: English
Title of host publication: IJCAI'15 Proceedings of the 24th International Conference on Artificial Intelligence
Publisher: AAAI Press
Pages: 3504-3510
Number of pages: 7
ISBN (Electronic): 978-1-57735-738-4
Publication status: Published - 25 Jul 2015