Unpublished Paper
Piecewise Training for Undirected Models
(2005)
  • Charles Sutton
  • Andrew McCallum, University of Massachusetts - Amherst
Abstract
For many large undirected models that arise in real-world applications, exact maximum-likelihood training is intractable, because it requires computing marginal distributions of the model. Conditional training is even more difficult, because the partition function depends not only on the parameters but also on the observed input, requiring repeated inference over each training example. An appealing idea for such models is to independently train a local undirected classifier over each clique, afterwards combining the learned weights into a single global model. In this paper, we show that this piecewise method can be justified as minimizing a new family of upper bounds on the log partition function. On three natural-language data sets, piecewise training is more accurate than pseudolikelihood, and often performs comparably to global training using belief propagation.
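To make the idea concrete, here is a sketch of the bound behind the piecewise objective; the notation below is illustrative rather than quoted from the paper. Writing the model as a product of nonnegative factors \psi_a(\mathbf{y}_a; \theta_a) = \exp(\theta_a \cdot f_a(\mathbf{y}_a)), summing each factor over an independent copy of its own variables can only add nonnegative terms, so the global log partition function is bounded by the sum of per-piece log partition functions, and substituting that bound into the log-likelihood yields the piecewise objective:

```latex
% Bound: the global partition function is at most the product of local ones,
% because the right-hand sum ranges over independent copies of each factor's
% variables and all terms are nonnegative.
\log Z(\theta)
  = \log \sum_{\mathbf{y}} \prod_a \psi_a(\mathbf{y}_a; \theta_a)
  \;\le\; \sum_a \log \sum_{\mathbf{y}_a} \psi_a(\mathbf{y}_a; \theta_a)
  \;=\; \sum_a \log Z_a(\theta_a).

% Substituting the bound into the log-likelihood gives the piecewise objective,
% a lower bound on the true likelihood; each piece is an independently
% normalized local model that can be trained on its own.
\ell_{\mathrm{PW}}(\theta)
  = \sum_a \Bigl[\, \theta_a \cdot f_a(\mathbf{y}_a) - \log Z_a(\theta_a) \,\Bigr]
  \;\le\; \theta \cdot f(\mathbf{y}) - \log Z(\theta)
  = \ell(\theta).
```

Maximizing \ell_{\mathrm{PW}} therefore maximizes a lower bound on the exact log-likelihood, which is what licenses training each clique's local classifier in isolation and then combining the learned weights.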
Publication Date
2005
Comments
This is the pre-publication version harvested from arXiv.
Citation Information
Charles Sutton and Andrew McCallum. "Piecewise Training for Undirected Models" (2005)
Available at: http://works.bepress.com/andrew_mccallum/23/