Unpublished Paper
Composition of Conditional Random Fields for Transfer Learning
(2005)
  • Charles Sutton
  • Andrew McCallum, University of Massachusetts - Amherst
Abstract
Many learning tasks have subtasks for which much training data exists. We therefore want to transfer learning from the old, general-purpose subtask to a new, more specific task, for which there is often less data. While work in transfer learning often considers how the old task should affect learning on the new task, in this paper we show that it helps to take into account how the new task affects the old. Specifically, we perform joint decoding of separately-trained sequence models, preserving uncertainty between the tasks and allowing information from the new task to affect predictions on the old task. On two standard text data sets, we show that joint decoding outperforms cascaded decoding.
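The contrast the abstract draws can be illustrated with a toy sketch: cascaded decoding runs Viterbi on the old task and feeds only its single best label sequence into the new task, while joint decoding runs Viterbi over product states, summing both models' scores so evidence from the new task can change which old-task labels win. All labels, score functions, and numbers below are hypothetical illustrations, not the paper's trained CRFs or data.

```python
# Toy sketch: cascaded vs. joint decoding of two separately-trained
# linear-chain models (all potentials below are made-up for illustration).
import itertools

T = 3                    # sequence length
YA = ["N", "V"]          # hypothetical labels for the old, general task
YB = ["O", "E"]          # hypothetical labels for the new, specific task

def score_a(t, prev, cur):
    # hypothetical log-potential for the old task's chain model
    return {("N", "V"): 1.0, ("V", "N"): 1.0}.get((prev, cur), 0.2)

def score_b(t, prev, cur, a_label):
    # hypothetical log-potential for the new task, conditioned on
    # the old task's label at the same position
    base = {("O", "E"): 0.8, ("E", "O"): 0.8}.get((prev, cur), 0.1)
    return base + (0.5 if (cur == "E" and a_label == "N") else 0.0)

def viterbi(labels, score):
    # Generic max-sum decoder: score(t, prev, cur) is a log-potential;
    # returns (best total score, best label sequence).
    best = {y: (score(0, None, y), [y]) for y in labels}
    for t in range(1, T):
        best = {y: max(((s + score(t, p, y), path + [y])
                        for p, (s, path) in best.items()),
                       key=lambda v: v[0])
                for y in labels}
    return max(best.values(), key=lambda v: v[0])

# Cascaded decoding: the new task sees only the old task's argmax,
# so uncertainty in the old model is discarded.
_, a_seq = viterbi(YA, score_a)
_, b_casc = viterbi(YB, lambda t, p, c: score_b(t, p, c, a_seq[t]))

# Joint decoding: decode over (old, new) label pairs, summing both
# models' scores, so the new task's evidence can flip old-task labels.
pairs = list(itertools.product(YA, YB))

def joint_score(t, prev, cur):
    pa, pb = prev if prev is not None else (None, None)
    return score_a(t, pa, cur[0]) + score_b(t, pb, cur[1], cur[0])

_, joint_seq = viterbi(pairs, joint_score)
```

The joint decoder pays an O(|YA| x |YB|) state-space cost per position in exchange for preserving the coupling between the two tasks, which is the trade-off the abstract's comparison is about.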
Publication Date
2005
Comments
This is the pre-published version harvested from CIIR.
Citation Information
Charles Sutton and Andrew McCallum. "Composition of Conditional Random Fields for Transfer Learning" (2005)
Available at: http://works.bepress.com/andrew_mccallum/146/