Unpublished Paper
Efficient Computation of Entropy Gradient for Semi-Supervised Conditional Random Fields
(2007)
  • Gideon S. Mann
  • Andrew McCallum, University of Massachusetts - Amherst
Abstract
Entropy regularization is a straightforward and successful method of semi-supervised learning that augments the traditional conditional likelihood objective function with an additional term that aims to minimize the predicted label entropy on unlabeled data. It has previously been demonstrated to provide positive results in linear-chain CRFs, but the published method for calculating the entropy gradient requires significantly more computation than supervised CRF training. This paper presents a new derivation and dynamic program for calculating the entropy gradient that is significantly more efficient: it has the same asymptotic time complexity as supervised CRF training. We also present efficient generalizations of this method for calculating the label entropy of all sub-sequences, which is useful for active learning, among other applications.
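The quantity being regularized is the Shannon entropy of the model's distribution over whole label sequences, H(p(y|x)) = -Σ_y p(y|x) log p(y|x). As a minimal illustration of that quantity (with made-up potentials; this is naive brute-force enumeration, not the paper's model or its efficient dynamic program), it can be computed exactly on a tiny chain:

```python
import itertools
import math

# Toy linear-chain scores: illustrative values only, not from the paper.
# 3 positions, 2 labels; unary[t][y] is the score of label y at position t.
unary = [[1.0, 0.2], [0.5, 0.8], [0.3, 1.1]]
# trans[a][b] is the score of transitioning from label a to label b.
trans = [[0.4, -0.1], [0.2, 0.6]]

def score(y):
    """Unnormalized log-score of a full label sequence y."""
    s = sum(unary[t][y[t]] for t in range(len(y)))
    s += sum(trans[y[t - 1]][y[t]] for t in range(1, len(y)))
    return s

# Enumerate all 2^3 label sequences to get p(y|x) exactly,
# then compute H = -sum_y p(y|x) log p(y|x).
seqs = list(itertools.product(range(2), repeat=3))
scores = [score(y) for y in seqs]
z = sum(math.exp(s) for s in scores)          # partition function
probs = [math.exp(s) / z for s in scores]
entropy = -sum(p * math.log(p) for p in probs)
print(round(entropy, 4))
```

Enumeration costs O(|labels|^n) and is shown only to make the target quantity concrete; the point of the paper is computing this entropy (and its gradient) in the same asymptotic time as forward-backward.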
Comments
This is the pre-publication version harvested from CIIR.
Citation Information
Gideon S. Mann and Andrew McCallum. "Efficient Computation of Entropy Gradient for Semi-Supervised Conditional Random Fields" (2007)
Available at: http://works.bepress.com/andrew_mccallum/110/