Relativized Options: Choosing the Right Transformation
Proceedings of the Twentieth International Conference on Machine Learning
  • Balaraman Ravindran, University of Massachusetts - Amherst
  • Andrew G. Barto, University of Massachusetts - Amherst
Publication Date
2003
Abstract

Relativized options combine model minimization methods with a hierarchical reinforcement learning framework to derive compact, reduced representations of a related family of tasks. Relativized options are defined without an absolute frame of reference, and an option's policy is transformed suitably based on the circumstances under which the option is invoked. In earlier work we addressed the issue of learning the option policy online. In this article we develop an algorithm for choosing, from among a set of candidate transformations, the right transformation for each member of the family of tasks.
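
The abstract does not spell out the selection rule, so the sketch below shows one generic way the idea could be realized: maintain a weight for each candidate transformation and reweight candidates by how well the option's model, viewed through that transformation, predicts the transitions actually observed in the current task. All names (Transformation, option_model, choose_transformation, update_weights) are hypothetical illustrations, not the authors' algorithm.

```python
from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]   # e.g. coordinates in the option's relative frame
Action = int

# A candidate transformation maps a task state into the option's frame of reference.
Transformation = Callable[[State], State]


def choose_transformation(candidates: List[Transformation],
                          weights: List[float]) -> int:
    """Return the index of the currently most plausible transformation."""
    return max(range(len(candidates)), key=lambda i: weights[i])


def update_weights(weights: List[float],
                   candidates: List[Transformation],
                   option_model: Dict[Tuple[State, Action, State], float],
                   s: State, a: Action, s_next: State,
                   floor: float = 1e-6) -> List[float]:
    """Reweight each candidate by the probability the option's transition model
    assigns to the observed transition after mapping it through that candidate."""
    new_weights = []
    for h, w in zip(candidates, weights):
        p = option_model.get((h(s), a, h(s_next)), floor)
        new_weights.append(max(w * p, floor))
    total = sum(new_weights)
    return [w / total for w in new_weights]
```

As a usage pattern, an agent invoking the option would call update_weights after each observed transition and choose_transformation whenever it needs to commit to a frame of reference for the current task instance.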

Comments
This paper was harvested from CiteSeer
Citation Information
Balaraman Ravindran and Andrew G. Barto. "Relativized Options: Choosing the Right Transformation." Proceedings of the Twentieth International Conference on Machine Learning (2003).
Available at: http://works.bepress.com/andrew_barto/6/