Using policy gradient reinforcement learning on autonomous robot controllers
Departmental Papers (MEAM)
Document Type: Conference Paper
Date of this Version: 10-27-2003
Abstract: Robot programmers can often quickly program a robot to approximately execute a task under specific environment conditions. However, achieving robust performance under more general conditions is significantly more difficult. We propose a framework that starts with an existing control system and uses reinforcement feedback from the environment to autonomously improve the controller's performance. We use the Policy Gradient Reinforcement Learning (PGRL) framework, which estimates a gradient (in controller space) of improved reward, allowing the controller parameters to be incrementally updated to autonomously achieve locally optimal performance. Our approach is experimentally verified on a Cye robot executing a room entry and observation task, showing a significant reduction in task execution time and improved robustness with respect to unmodelled changes in the environment.
- Policy Gradient Reinforcement Learning (PGRL)
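The core idea in the abstract — estimating a reward gradient in controller-parameter space and incrementally updating the parameters — can be illustrated with a minimal sketch. This is not the paper's implementation: the `reward` function below is a hypothetical stand-in (on the real robot, reward would come from measured trial outcomes such as negative task execution time), and the finite-difference gradient estimator is one simple way to realize a PGRL-style update.

```python
import numpy as np

def reward(theta):
    # Hypothetical surrogate for measured task reward (e.g. negative
    # execution time); on a real robot this comes from running trials.
    return -np.sum((theta - np.array([1.0, -0.5])) ** 2)

def fd_policy_gradient(theta, eps=1e-2):
    # Finite-difference estimate of the reward gradient in
    # controller-parameter space: perturb each parameter up and down
    # and compare the resulting rewards.
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (reward(theta + d) - reward(theta - d)) / (2 * eps)
    return grad

def improve_controller(theta0, alpha=0.1, steps=200):
    # Incremental gradient ascent on estimated reward, converging to a
    # locally optimal controller, in the spirit of PGRL.
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        theta += alpha * fd_policy_gradient(theta)
    return theta

theta = improve_controller([0.0, 0.0])
```

For this quadratic surrogate the ascent converges to the reward-maximizing parameters; on hardware each `reward` evaluation is a noisy physical trial, which is why the paper emphasizes starting from a working hand-built controller rather than from scratch.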
Citation Information: Gregory Z. Grudic, R. Vijay Kumar and Lyle H. Ungar. "Using policy gradient reinforcement learning on autonomous robot controllers" (2003)
Available at: http://works.bepress.com/vijay_kumar/29/