ADARL: WHAT, WHERE, AND HOW TO ADAPT IN TRANSFER REINFORCEMENT LEARNING
ICLR 2022 - 10th International Conference on Learning Representations
  • Biwei Huang, Carnegie Mellon University
  • Fan Feng, City University of Hong Kong
  • Chaochao Lu, University of Cambridge
  • Sara Magliacane, Universiteit van Amsterdam
  • Kun Zhang, Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence
Document Type
Conference Proceeding
Abstract

One practical challenge in reinforcement learning (RL) is how to adapt quickly when faced with new environments. In this paper, we propose a principled framework for adaptive RL, called AdaRL, that adapts reliably and efficiently to changes across domains with only a few samples from the target domain, even in partially observable environments. Specifically, we leverage a parsimonious graphical representation that characterizes structural relationships over the variables in the RL system. Such graphical representations provide a compact way to encode what the changes across domains are and where they occur, and furthermore inform us of a minimal set of changes that one has to consider for the purpose of policy adaptation. We show that by explicitly leveraging this compact representation to encode changes, we can efficiently adapt the policy to the target domain, requiring only a few samples and avoiding further policy optimization. We illustrate the efficacy of AdaRL through a series of experiments that vary factors in the observation, transition, and reward functions for Cartpole and Atari games.

Publication Date
1-29-2022
Keywords
  • Transfer RL
  • Graphical models
  • Efficient adaptation
Comments

IR conditions: not described

Open Access version available on OpenReview.

Citation Information
B. Huang, F. Feng, C. Lu, S. Magliacane, and K. Zhang, "AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning," in Proc. 10th International Conference on Learning Representations (ICLR 2022), [Online], Apr. 2022.