Exploiting Reward Machines with Deep Reinforcement Learning in Continuous Action Domains

Date

2023-09-07

Authors

Sun, Haolin
Lespérance, Yves

Publisher

Springer Cham

Abstract

In this paper, we address the challenges of non-Markovian rewards and learning efficiency in deep reinforcement learning (DRL) in continuous action domains by exploiting reward machines (RMs) and counterfactual experiences for reward machines (CRM), both proposed by Toro Icarte et al. A reward machine can decompose a task, convey its high-level structure to an agent, and support certain non-Markovian task specifications. We integrate state-of-the-art DRL algorithms with RMs to improve learning efficiency. Our experimental results show that Soft Actor-Critic with counterfactual experiences for RMs (SAC-CRM) learns better policies faster, while Deep Deterministic Policy Gradient with counterfactual experiences for RMs (DDPG-CRM) learns more slowly and achieves lower rewards, but is more stable. Option-based Hierarchical Reinforcement Learning for reward machines (HRM) and Twin Delayed Deep Deterministic Policy Gradient (TD3) with CRM generally underperform SAC-CRM and DDPG-CRM. This work contributes to the development of more efficient and robust DRL approaches by leveraging the potential of RMs in practical problem-solving scenarios.
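
To make the abstract's key ideas concrete, the following minimal Python sketch shows a reward machine as a finite-state machine over high-level events, and CRM-style relabelling of a single environment transition from every RM state to generate extra replay-buffer samples for an off-policy learner such as SAC or DDPG. The class and function names (`RewardMachine`, `counterfactual_experiences`) and the toy "reach A, then B" task are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a reward machine (RM) and CRM-style counterfactual
# experience generation, in the spirit of Toro Icarte et al. All names here
# are hypothetical, not taken from the paper's code.

class RewardMachine:
    """Finite-state machine over high-level events (labels)."""

    def __init__(self, transitions, rewards, initial_state, terminal_states):
        # transitions: (rm_state, event) -> next rm_state
        # rewards:     (rm_state, event) -> scalar reward
        self.transitions = transitions
        self.rewards = rewards
        self.initial_state = initial_state
        self.terminal_states = terminal_states
        self.states = sorted({u for (u, _) in transitions}
                             | set(transitions.values()))

    def step(self, u, event):
        """Advance RM state u on an observed event; self-loop if undefined."""
        u_next = self.transitions.get((u, event), u)
        reward = self.rewards.get((u, event), 0.0)
        return u_next, reward, u_next in self.terminal_states


def counterfactual_experiences(rm, s, a, s_next, event):
    """CRM: relabel one environment transition (s, a, s_next) from the
    perspective of every RM state, producing extra off-policy samples."""
    experiences = []
    for u in rm.states:
        u_next, r, done = rm.step(u, event)
        # Cross-product state: (environment observation, RM state).
        experiences.append(((s, u), a, r, (s_next, u_next), done))
    return experiences


# Toy task "reach A, then reach B": reward 1 only when B completes the task.
rm = RewardMachine(
    transitions={(0, "at_A"): 1, (1, "at_B"): 2},
    rewards={(1, "at_B"): 1.0},
    initial_state=0,
    terminal_states={2},
)
for exp in counterfactual_experiences(rm, s=(0.0, 0.0), a=0.5,
                                      s_next=(0.1, 0.0), event="at_B"):
    print(exp)
```

Because every transition is relabelled across all RM states, one environment step yields as many training tuples as the RM has states, which is the mechanism behind the learning-efficiency gains that SAC-CRM and DDPG-CRM exploit.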

Keywords

Deep reinforcement learning, Reward machines

Citation

Sun, H., Lespérance, Y. (2023). Exploiting Reward Machines with Deep Reinforcement Learning in Continuous Action Domains. In: Malvone, V., Murano, A. (eds) Multi-Agent Systems. EUMAS 2023. Lecture Notes in Computer Science, vol 14282. Springer, Cham. https://doi.org/10.1007/978-3-031-43264-4_6