Unpublished Paper
Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference
  • Michael Wick
  • Khashayar Rohanimanesh
  • Sameer Singh
  • Andrew McCallum, University of Massachusetts - Amherst
Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses causes exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local optima: the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, the parameters of the factor graph are treated as a log-linear function approximator and learned with temporal-difference (TD) methods; MAP inference is performed by executing the resulting policy on held-out test data. Our method allows efficient gradient updates, since only factors in the neighborhood of variables affected by an action need to be computed; we bypass the need to compute marginals entirely. Our method yields dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over the previous state of the art in that domain.
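The abstract's core idea, learning a linear (log-linear) scoring function with TD updates and then acting greedily on it at test time, can be illustrated with a deliberately tiny sketch. The environment below (a random-walk chain with a one-hot feature map) and all names are invented for illustration; this is not the paper's actual model of factor graphs or ontology alignment, only a minimal TD(0) update with linear function approximation.

```python
import random

def td0_linear(num_states=11, goal=10, alpha=0.1, gamma=0.9,
               episodes=500, seed=0):
    """Toy TD(0) with a linear value approximator over one-hot features.

    Hypothetical setup: a random walk on states 0..num_states-1, with
    reward 1.0 for reaching `goal` and 0 elsewhere. The learned weight
    vector w plays the role of the log-linear parameters: V(s) = w . phi(s),
    where phi(s) is a one-hot feature vector.
    """
    rng = random.Random(seed)
    w = [0.0] * num_states  # one weight per one-hot feature

    def value(s):
        return w[s]  # dot(w, one_hot(s)) collapses to a single weight

    for _ in range(episodes):
        s = rng.randrange(num_states - 1)  # start anywhere except the goal
        while s != goal:
            # random jump to a neighboring configuration
            s_next = max(0, min(num_states - 1, s + rng.choice((-1, 1))))
            r = 1.0 if s_next == goal else 0.0
            # TD(0) update: only the weight for the current state's active
            # feature changes -- a local update, analogous to touching only
            # factors in the neighborhood of the affected variables
            td_error = r + gamma * value(s_next) - value(s)
            w[s] += alpha * td_error
            s = s_next
    return w

weights = td0_linear()
# states near the goal should end up with higher estimated value
assert weights[9] > weights[1]
```

In the paper's setting the feature vector would come from the factors touched by a jump, so each TD update remains cheap even in a large relational model; the chain above is just the smallest environment in which the update rule is visible.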
Publication Date
2009
This is the pre-published version harvested from CIIR.
Citation Information
Michael Wick, Khashayar Rohanimanesh, Sameer Singh and Andrew McCallum. "Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference" (2009)