The ability to adjust to changing environments and unforeseen circumstances is likely to be an important component of a successful autonomous space robot. This paper shows how to augment reinforcement learning algorithms with a method for automatically discovering certain types of subgoals online. By creating useful new subgoals while learning, the agent is able both to accelerate learning on its current task and to transfer its expertise to related tasks by reusing its ability to attain those subgoals. Subgoals are created from commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to solve it. We introduced this approach in [10]; here we present additional results for a simulated mobile-robot task.
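To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of discovering a subgoal via diverse density over discrete states. Successful trajectories are treated as positive bags and unsuccessful ones as negative bags; a candidate state scores highly when it appears in most positive bags and few negative ones. The smoothing constant `eps` and the helper names `diverse_density` and `best_subgoal` are illustrative assumptions, not from the paper.

```python
from math import prod

def diverse_density(candidate, pos_bags, neg_bags, eps=0.1):
    """Simplified diverse density of a candidate state.

    A bag 'supports' the candidate if the trajectory contains it.
    eps smooths the product so a single miss does not zero the score.
    (eps is an illustrative choice, not a value from the paper.)
    """
    p_pos = prod((1 - eps) if candidate in bag else eps for bag in pos_bags)
    p_neg = prod(eps if candidate in bag else (1 - eps) for bag in neg_bags)
    return p_pos * p_neg

def best_subgoal(pos_bags, neg_bags, exclude=()):
    """Pick the state with maximal diverse density as a subgoal candidate,
    optionally excluding trivial states such as start and goal."""
    candidates = {s for bag in pos_bags for s in bag} - set(exclude)
    return max(candidates, key=lambda s: diverse_density(s, pos_bags, neg_bags))

# Toy example: every successful path passes through a doorway,
# while the failed path does not, so "door" emerges as the subgoal.
pos = [["start", "a", "door", "goal"], ["start", "b", "door", "goal"]]
neg = [["start", "a", "b"]]
print(best_subgoal(pos, neg, exclude=("start", "goal")))  # -> door
```

In a gridworld navigation task this typically singles out bottleneck states such as doorways, which is the kind of subgoal the online method is designed to discover.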
Available at: http://works.bepress.com/andrew_barto/8/