Handling sparse rewards in reinforcement learning using model predictive control
Reinforcement learning (RL) has recently achieved great success in various domains. Yet, designing the reward function requires detailed domain expertise and tedious fine-tuning to ensure that agents are able to learn the desired behaviour. Using a sparse reward conveniently mitigates these challenges. However, the sparse reward poses a challenge of its own, often resulting in unsuccessful training of the agent. In this paper, we therefore address the sparse reward problem in RL. Our goal is to find an effective alternative to reward shaping that does not rely on costly human demonstrations and is applicable to a wide range of domains. Hence, we propose to use model predictive control (MPC) as an experience source for training RL agents in sparse reward environments.
Without the need for reward shaping, we successfully apply our approach to mobile robot navigation, both in simulation and in real-world experiments with a Kobuki TurtleBot 2. We furthermore demonstrate substantial improvements over pure RL algorithms in terms of success rate as well as the number of collisions and timeouts. Our experiments show that MPC as an experience source improves the agent's learning process for a given task in the case of sparse rewards.
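The key mechanism described in the abstract is that an MPC controller generates experience for the RL agent, so that reward-bearing transitions reach the replay buffer even when the reward signal is sparse. The sketch below illustrates only that data flow; the toy 1-D environment, the `ReplayBuffer` class, and the `mpc_action` stand-in are illustrative assumptions, not the paper's implementation. In practice, the MPC transitions would be mixed with the agent's own rollouts and consumed by an off-policy learner.

```python
import random
from collections import deque


class ToySparseEnv:
    """1-D toy environment with a sparse reward: +1 only near the goal."""

    def __init__(self, goal=5.0, max_steps=50):
        self.goal, self.max_steps = goal, max_steps
        self.pos, self.t = 0.0, 0

    def reset(self):
        self.pos, self.t = 0.0, 0
        return self.pos

    def step(self, action):
        self.pos += action
        self.t += 1
        at_goal = abs(self.pos - self.goal) < 0.5
        reward = 1.0 if at_goal else 0.0      # sparse: zero almost everywhere
        done = at_goal or self.t >= self.max_steps
        return self.pos, reward, done


class ReplayBuffer:
    """FIFO buffer of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=100_000):
        self.data = deque(maxlen=capacity)

    def add(self, transition):
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))


def mpc_action(state, goal=5.0):
    # Stand-in for an MPC controller: with a known 1-D integrator model, a
    # short-horizon controller simply steps toward the goal within actuation
    # limits. A real MPC would optimize a cost over a receding horizon.
    return max(-1.0, min(1.0, goal - state))


def collect_mpc_episode(env, buffer):
    """Roll out the MPC controller and store its transitions for the RL agent."""
    state, done = env.reset(), False
    while not done:
        action = mpc_action(state, env.goal)
        next_state, reward, done = env.step(action)
        buffer.add((state, action, reward, next_state, done))
        state = next_state


if __name__ == "__main__":
    env, buffer = ToySparseEnv(), ReplayBuffer()
    for _ in range(10):                   # MPC episodes seed the buffer with
        collect_mpc_episode(env, buffer)  # reward-bearing experience
    batch = buffer.sample(32)             # an off-policy agent would train on this
    print(f"{len(buffer.data)} MPC transitions stored; sampled batch of {len(batch)}")
```

Because the MPC rollouts reliably reach the goal, the buffer contains non-zero rewards from the start, which is exactly what a purely exploring agent struggles to obtain under a sparse reward.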
- Published in: IEEE International Conference on Robotics and Automation
- Type: Inproceedings
- Authors: Dawood, Murad; Dengler, Nils; de Heuvel, Jorge; Bennewitz, Maren
- Year: 2023
Citation information
Dawood, Murad; Dengler, Nils; de Heuvel, Jorge; Bennewitz, Maren: Handling sparse rewards in reinforcement learning using model predictive control. In: IEEE International Conference on Robotics and Automation, 2023, pp. 879–885, https://ieeexplore.ieee.org/document/10161492
@Inproceedings{Dawood.etal.2023a,
author={Dawood, Murad and Dengler, Nils and de Heuvel, Jorge and Bennewitz, Maren},
title={Handling sparse rewards in reinforcement learning using model predictive control},
booktitle={IEEE International Conference on Robotics and Automation},
pages={879--885},
url={https://ieeexplore.ieee.org/document/10161492},
year={2023},
abstract={Reinforcement learning (RL) has recently achieved great success in various domains. Yet, designing the reward function requires detailed domain expertise and tedious fine-tuning to ensure that agents are able to learn the desired behaviour. Using a sparse reward conveniently mitigates these challenges. However, the sparse reward poses a challenge of its own, often resulting in unsuccessful training of the agent. In this paper, we therefore address the sparse reward problem in RL. Our goal is to find an effective alternative to reward shaping that does not rely on costly human demonstrations and is applicable to a wide range of domains. Hence, we propose to use model predictive control (MPC) as an experience source for training RL agents in sparse reward environments. Without the need for reward shaping, we successfully apply our approach to mobile robot navigation, both in simulation and in real-world experiments with a Kobuki TurtleBot 2. We furthermore demonstrate substantial improvements over pure RL algorithms in terms of success rate as well as the number of collisions and timeouts. Our experiments show that MPC as an experience source improves the agent's learning process for a given task in the case of sparse rewards.}}