Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
Departmental Papers (CIS)
  • John Lafferty, Carnegie Mellon University
  • Andrew McCallum, WhizBang! Labs
  • Fernando C.N. Pereira, University of Pennsylvania
Date of this Version
6-28-2001
Document Type
Conference Paper
Comments

Postprint version. Copyright ACM, 2001. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 18th International Conference on Machine Learning 2001 (ICML 2001), pages 282-289.
Publisher URL: http://portal.acm.org/citation.cfm?id=655813

Abstract

We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models (HMMs) and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
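For readers who want the model's form at a glance: the paper defines a conditional random field as a single exponential model over the entire label sequence y given the observation sequence x, with transition features t_j and per-state features s_k,

    p(y \mid x) \propto \exp\Big( \sum_i \sum_j \lambda_j \, t_j(y_{i-1}, y_i, x, i) + \sum_i \sum_k \mu_k \, s_k(y_i, x, i) \Big),

normalized once per input sequence rather than once per state. The Python sketch below illustrates that global normalization by brute-force enumeration; the labels, observations, features, and weights are toy assumptions chosen for illustration, not anything from the paper's experiments.

    import itertools
    import math

    LABELS = ["A", "B"]

    def score(y, x, w_emit, w_trans):
        # Unnormalized log-score: per-position ("state") feature weights
        # plus transition feature weights, summed over the whole sequence.
        s = sum(w_emit[(y[i], x[i])] for i in range(len(x)))
        s += sum(w_trans[(y[i - 1], y[i])] for i in range(1, len(x)))
        return s

    def crf_prob(y, x, w_emit, w_trans):
        # p(y | x) with one sequence-level normalizer Z(x). Normalizing
        # once over all label sequences, instead of per state as an MEMM
        # does, is what avoids the label bias problem.
        Z = sum(math.exp(score(yp, x, w_emit, w_trans))
                for yp in itertools.product(LABELS, repeat=len(x)))
        return math.exp(score(y, x, w_emit, w_trans)) / Z

    # Toy weights, assumed purely for illustration.
    w_emit = {(l, o): (1.0 if l.lower() == o else 0.0)
              for l in LABELS for o in ("a", "b")}
    w_trans = {(u, v): (0.5 if u == v else 0.0)
               for u in LABELS for v in LABELS}

    x = ("a", "b", "b")
    print(crf_prob(("A", "B", "B"), x, w_emit, w_trans))

In practice Z(x) is computed with the forward algorithm rather than by enumerating all label sequences; the enumeration here only makes the sequence-level normalization explicit.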

Citation Information
John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data." In Proceedings of the 18th International Conference on Machine Learning (ICML 2001), pages 282-289, 2001.
Available at: http://works.bepress.com/andrew_mccallum/4/