Unpublished Paper
Dynamic Control Models as State Abstractions
NIPS'98 Workshop on Abstraction and Hierarchy in Reinforcement Learning (1998)
  • Jefferson A. Coelho
  • Roderic Grupen, University of Massachusetts - Amherst
Abstract
This work proposes a methodology for constructing state abstractions from a set of empirically derived models of system behavior. The idea is to treat the agent in its environment as a dynamical system and to augment the agent's observation space with contextual cues extracted empirically as the agent exercises each element of the set of available control policies--the control bias. Contextual cues are provided by the correlation between dynamic features of the agent-environment interaction and agent performance. The resulting state abstraction (observations + context information) also defines a temporal abstraction, and offers interesting answers to some of the issues pertinent to the development of hierarchical systems. Initial experiments involving an agent with impoverished sensing capabilities in a simulated, dynamic environment demonstrate that relevant contextual information can be extracted and used to enhance the agent's performance.
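The abstract's core idea can be sketched in a few lines: estimate, per control policy, the correlation between a dynamic feature of the agent-environment interaction and the agent's performance, then append that contextual cue to the raw observation. This is a minimal illustrative sketch, not the authors' implementation; the `rollout` function, its noise model, and the single-feature setup are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(policy_id, n=200):
    # Hypothetical stand-in for exercising one policy from the control bias:
    # returns paired samples of a dynamic feature and the resulting performance.
    features = rng.normal(size=n)
    performance = policy_id * features + rng.normal(scale=0.5, size=n)
    return features, performance

def contextual_cues(policy_ids):
    # Empirically correlate the dynamic feature with performance for each
    # policy; the correlation coefficient serves as the contextual cue.
    cues = {}
    for pid in policy_ids:
        f, p = rollout(pid)
        cues[pid] = np.corrcoef(f, p)[0, 1]
    return cues

def augment(observation, cue):
    # State abstraction in the paper's sense: observation + context information.
    return np.append(observation, cue)

cues = contextual_cues([1, 2])
obs = np.array([0.3, -1.2])            # impoverished raw observation
abstract_state = augment(obs, cues[1]) # augmented state for policy 1's context
```

In the paper's setting the cues would be extracted from richer dynamic features of the interaction, but the augmentation step has this same shape: the abstract state carries both what the agent senses and which behavioral context it is in.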
Publication Date
1998
Comments
Harvested from Citeseer.
Citation Information
Jefferson A. Coelho and Roderic Grupen. "Dynamic Control Models as State Abstractions." NIPS'98 Workshop on Abstraction and Hierarchy in Reinforcement Learning (1998).
Available at: http://works.bepress.com/roderic_grupen/4/