Reinforcement Learning for Categorical Data and Marginalized Transition Models
Joint Statistical Meetings (JSM)
  • Stephen W. Carden, Georgia Southern University
Document Type
Presentation
Presentation Date
August 12, 2015
Abstract or Description

Reinforcement learning concerns algorithms tasked with learning optimal control policies by interacting with or observing a system. Fitted Q-iteration is a framework in which a regression method is applied iteratively to approximate the value of states and actions. Because the state-action value function rarely has a predictable shape, non-parametric supervised learning methods are typical. This greater modeling flexibility comes at the cost of large data requirements. If only a small amount of data is available, the supervised learning method is likely to over-generalize and approximate the value function poorly. In this paper, we propose using Marginalized Transition Models to estimate the process that produces the observations. From this estimated process, additional observations are generated. Our contention is that these additional observations reduce the bias produced by the regression method's over-smoothing and can yield better policies than the original data alone. As a proof of concept, this approach is applied to a scenario mimicking medical prescription policies for a disease with sporadically appearing symptoms.
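To make the proposal concrete, the sketch below is a minimal, hypothetical illustration of the general idea, not the presented method: fitted Q-iteration with a non-parametric regressor (scikit-learn's ExtraTreesRegressor) on a small batch of categorical transitions, where the batch is augmented with synthetic transitions drawn from an estimated transition process. A smoothed multinomial maximum-likelihood estimate stands in for the marginalized transition model fit, and the toy environment, sample sizes, and all identifiers are assumptions for illustration only.

    # Hypothetical sketch, not the presented method: fitted Q-iteration
    # on a small batch of categorical transitions, augmented with
    # synthetic transitions drawn from an estimated transition model.
    # A smoothed multinomial MLE stands in for a marginalized transition
    # model; the toy environment and all names are assumptions.
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 4, 2, 0.9

    # True (unknown) dynamics, used only to simulate the small batch.
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.normal(size=(n_states, n_actions))

    def simulate(n):
        s = rng.integers(n_states, size=n)
        a = rng.integers(n_actions, size=n)
        s2 = np.array([rng.choice(n_states, p=P[si, ai])
                       for si, ai in zip(s, a)])
        return s, a, R[s, a], s2

    s, a, r, s2 = simulate(50)  # the limited observed data

    # Estimate the transition process (add-one smoothing keeps every
    # (state, action) row a proper distribution even if unobserved).
    counts = np.ones((n_states, n_actions, n_states))
    np.add.at(counts, (s, a, s2), 1)
    P_hat = counts / counts.sum(axis=2, keepdims=True)

    # Estimate mean reward per (state, action); 0 where unobserved.
    r_sum = np.zeros((n_states, n_actions))
    r_cnt = np.zeros((n_states, n_actions))
    np.add.at(r_sum, (s, a), r)
    np.add.at(r_cnt, (s, a), 1)
    R_hat = r_sum / np.maximum(r_cnt, 1)

    # Generate additional observations from the estimated process.
    s_g = rng.integers(n_states, size=500)
    a_g = rng.integers(n_actions, size=500)
    s2_g = np.array([rng.choice(n_states, p=P_hat[si, ai])
                     for si, ai in zip(s_g, a_g)])
    r_g = R_hat[s_g, a_g]

    S = np.concatenate([s, s_g])
    A = np.concatenate([a, a_g])
    RW = np.concatenate([r, r_g])
    S2 = np.concatenate([s2, s2_g])
    X = np.column_stack([S, A])

    # Fitted Q-iteration: regress bootstrapped targets, re-evaluate Q.
    Q = np.zeros((n_states, n_actions))
    grid = np.array([[si, ai] for si in range(n_states)
                     for ai in range(n_actions)])
    for _ in range(50):
        y = RW + gamma * Q[S2].max(axis=1)
        model = ExtraTreesRegressor(n_estimators=50,
                                    random_state=0).fit(X, y)
        Q = model.predict(grid).reshape(n_states, n_actions)

    print("greedy policy:", Q.argmax(axis=1))

In this sketch the comparison of interest would be the greedy policy obtained with and without the synthetic transitions: with only the 50 original observations, the tree ensemble must extrapolate over rarely visited state-action pairs, which is the over-smoothing the augmentation is meant to counteract.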

Location
Seattle, WA
Citation Information
Stephen W. Carden. "Reinforcement Learning for Categorical Data and Marginalized Transition Models." Joint Statistical Meetings (JSM), 2015.
Available at: http://works.bepress.com/stephen_carden/10/