Title: Exploiting Reward Machines with Deep Reinforcement Learning in Continuous Action Domains
Authors: Sun, Haolin; Lespérance, Yves
Type: Conference Paper
Language: English
Issued: 2023-09-07; Available in repository: 2024-11-04
Citation: Sun, H., Lespérance, Y. (2023). Exploiting Reward Machines with Deep Reinforcement Learning in Continuous Action Domains. In: Malvone, V., Murano, A. (eds) Multi-Agent Systems. EUMAS 2023. Lecture Notes in Computer Science, vol 14282. Springer, Cham. https://doi.org/10.1007/978-3-031-43264-4_6
ISBN: 978-3-031-43264-4 (online); 978-3-031-43263-7 (print)
ISSN: 1611-3349 (electronic); 0302-9743 (print)
DOI: https://doi.org/10.1007/978-3-031-43264-4_6
Handle: https://hdl.handle.net/10315/42389
Keywords: Deep reinforcement learning; Reward machines

Abstract: In this paper, we address the challenges of non-Markovian rewards and learning efficiency in deep reinforcement learning (DRL) in continuous action domains by exploiting reward machines (RMs) and counterfactual experiences for reward machines (CRM), both proposed by Toro Icarte et al. A reward machine can decompose a task, convey its high-level structure to an agent, and support certain non-Markovian task specifications. We integrate state-of-the-art DRL algorithms with RMs to enhance learning efficiency. Our experimental results demonstrate that Soft Actor-Critic with counterfactual experiences for RMs (SAC-CRM) learns better policies faster, while Deep Deterministic Policy Gradient with counterfactual experiences for RMs (DDPG-CRM) is slower and achieves lower rewards but is more stable. Option-based Hierarchical Reinforcement Learning for reward machines (HRM) and Twin Delayed Deep Deterministic Policy Gradient with counterfactual experiences for RMs (TD3-CRM) generally underperform SAC-CRM and DDPG-CRM. This work contributes to the ongoing development of more efficient and robust DRL approaches by leveraging the potential of RMs in practical problem-solving scenarios.
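
To make the abstract's central concepts concrete: a reward machine is a finite-state machine over high-level events whose transitions emit rewards, and CRM reuses each environment transition once per RM state as a counterfactual experience for off-policy learners such as SAC and DDPG. The following Python sketch is illustrative only; the class and function names are hypothetical and this is not the authors' implementation.

    # Minimal illustrative sketch of a reward machine (RM) and CRM-style
    # counterfactual experience generation. Names and structure are
    # assumptions for exposition, not the paper's code.

    class RewardMachine:
        """A finite-state machine over high-level propositions ("events")."""

        def __init__(self, states, initial, delta_u, delta_r):
            self.states = states      # list of RM states
            self.initial = initial    # initial RM state
            self.delta_u = delta_u    # dict: (rm_state, event) -> next rm_state
            self.delta_r = delta_r    # dict: (rm_state, event) -> reward

        def step(self, u, event):
            """Advance the RM on an observed event; return (next state, reward)."""
            u_next = self.delta_u.get((u, event), u)  # stay put on irrelevant events
            reward = self.delta_r.get((u, event), 0.0)
            return u_next, reward


    def crm_experiences(rm, s, a, s_next, event):
        """Generate counterfactual experiences (CRM) for one transition.

        For a single environment transition (s, a, s_next) whose labeling
        function detected `event`, emit one experience per RM state, as if
        the agent had been in that state. An off-policy learner can then
        learn about every stage of the task from one interaction.
        """
        experiences = []
        for u in rm.states:
            u_next, reward = rm.step(u, event)
            done = False  # RM termination handling omitted for brevity
            experiences.append(((s, u), a, reward, (s_next, u_next), done))
        return experiences

For example, a non-Markovian task "trigger event a, then event b" could be encoded with states {u0, u1} plus a terminal state, with delta_u[(u0, 'a')] = u1 and delta_u[(u1, 'b')] giving the terminal state and reward 1; the product of environment state and RM state restores the Markov property for the learner.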