Proto-Value Functions: Developmental Reinforcement Learning
Computer Science Department Faculty Publication Series
  • Sridhar Mahadevan, University of Massachusetts Amherst
Publication Date
2005
Abstract

This paper presents a novel framework called proto-reinforcement learning (PRL), based on a mathematical model of proto-value functions: task-independent basis functions that form the building blocks of all value functions on a given state space manifold. Proto-value functions are learned not from rewards, but from analyzing the topology of the state space. Formally, proto-value functions are Fourier eigenfunctions of the Laplace-Beltrami diffusion operator on the state space manifold. Proto-value functions facilitate structural decomposition of large state spaces, and form geodesically smooth orthonormal basis functions for approximating any value function. The theoretical basis for proto-value functions combines insights from spectral graph theory, harmonic analysis, and Riemannian manifolds. Proto-value functions enable a novel generation of algorithms called representation policy iteration, unifying the learning of representation and behavior.
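The central construction described in the abstract, learning basis functions from state-space topology rather than from rewards, can be illustrated on a small example. What follows is a minimal Python sketch, assuming a 4-connected grid world and the normalized graph Laplacian as the discrete analogue of the Laplace-Beltrami operator; the grid size, the number of basis functions, and all function names are illustrative choices, not taken from the paper.

import numpy as np

def grid_adjacency(rows, cols):
    """Adjacency matrix of a 4-connected grid-world state graph."""
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if r + 1 < rows:                  # neighbor below
                j = (r + 1) * cols + c
                A[i, j] = A[j, i] = 1.0
            if c + 1 < cols:                  # neighbor to the right
                j = r * cols + (c + 1)
                A[i, j] = A[j, i] = 1.0
    return A

def proto_value_functions(A, k):
    """Return the k smoothest eigenvectors of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}. These reward-independent eigenvectors
    play the role of proto-value functions over the state space."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return eigvecs[:, :k]                     # columns are basis functions

A = grid_adjacency(10, 10)
Phi = proto_value_functions(A, k=8)           # 8 task-independent basis functions
# A value function V on this state space can then be approximated as
# V ≈ Phi @ w, with the weights w fit by a policy evaluation method.

In this sketch the eigenvectors associated with the smallest eigenvalues vary most smoothly over the graph, which is what makes them a plausible orthonormal basis for approximating value functions on the same state space.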

Comments
This paper was harvested from CiteSeer
Citation Information
Sridhar Mahadevan. "Proto-Value Functions: Developmental Reinforcement Learning" (2005)
Available at: http://works.bepress.com/sridhar_mahadevan/4/