Guided Reinforcement Learning via Sequence Learning

Applications of Reinforcement Learning (RL) suffer from high sample complexity due to sparse reward signals and inadequate exploration. Novelty Search (NS) can serve as an auxiliary task in this regard, encouraging exploration towards unseen behaviors. However, NS methods suffer from critical drawbacks concerning scalability and generalizability since they are based on instance-based learning. To address these challenges, we previously proposed a generic approach that uses unsupervised learning to learn representations of agent behaviors and uses reconstruction losses as novelty scores. However, that approach considered only fixed-length sequences and did not exploit the sequential structure of behaviors. We therefore extend it here with sequential auto-encoders that capture sequential dependencies. Experimental results on benchmark tasks show that this sequence learning aids exploration and outperforms previous novelty search methods.
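
The following is a minimal sketch of the idea described in the abstract, not the authors' implementation: it assumes PyTorch, an LSTM-based sequence auto-encoder, and illustrative names (`SeqAutoEncoder`, `novelty_score`, `beta`) that do not come from the paper. The auto-encoder is trained on observed behavior sequences, and its reconstruction error serves as a novelty bonus added to the environment reward.

```python
# Minimal sketch (illustrative, not the paper's code): an LSTM auto-encoder
# over behavior sequences whose reconstruction error acts as a novelty score.
import torch
import torch.nn as nn

class SeqAutoEncoder(nn.Module):
    def __init__(self, behavior_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.LSTM(behavior_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, behavior_dim)

    def forward(self, seq):                    # seq: (batch, T, behavior_dim)
        _, (h, _) = self.encoder(seq)          # h: (1, batch, hidden_dim)
        T = seq.size(1)
        # Repeat the latent code at every time step and decode it back
        # into a behavior sequence.
        z = h.transpose(0, 1).repeat(1, T, 1)  # (batch, T, hidden_dim)
        dec, _ = self.decoder(z)
        return self.out(dec)                   # reconstructed sequence

def novelty_score(model, seq):
    """Reconstruction error of a behavior sequence: high error = novel."""
    with torch.no_grad():
        recon = model(seq)
        return ((recon - seq) ** 2).mean(dim=(1, 2))  # one score per sequence

# Usage (hypothetical shaping scheme): add the bonus to the task reward,
#   shaped_reward = env_reward + beta * novelty_score(model, behavior_seq)
# and periodically retrain the auto-encoder on recently observed behaviors
# so that previously seen behaviors stop appearing novel.
```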

  • Published in:
International Conference on Artificial Neural Networks (ICANN 2020), in: Artificial Neural Networks and Machine Learning – ICANN 2020
  • Type:
    Inproceedings
  • Authors:
    R. Ramamurthy, R. Sifa, M. Lübbering, C. Bauckhage
  • Year:
    2020

Citation information

R. Ramamurthy, R. Sifa, M. Lübbering, C. Bauckhage: Guided Reinforcement Learning via Sequence Learning, International Conference on Artificial Neural Networks (ICANN 2020), Artificial Neural Networks and Machine Learning – ICANN 2020, 2020, https://doi.org/10.1007/978-3-030-61616-8_27