We describe a data-driven approach that allows us to quantify the costs of various types of errors made by the utterance-level confidence annotator in the Carnegie Mellon Communicator system. Knowing these costs, we can determine the optimal tradeoff point between these errors and tune the confidence annotator accordingly. We describe several models based on concept transmission efficiency. The models fit our data quite well, and the relative costs of errors are in accordance with our intuition. We also find,
surprisingly, that for a mixed-initiative system such as the CMU Communicator, false positive and false negative errors trade off equally over a wide operating range.
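The tuning idea behind the abstract can be sketched as a cost-weighted threshold search: given per-utterance confidence scores, pick the acceptance threshold that minimizes the combined cost of false accepts and false rejects. This is a minimal illustrative sketch, not the paper's method; the function name, the toy data, and the unit costs are all assumptions.

```python
# Hypothetical sketch: pick the confidence threshold that minimizes
# cost_fp * (# false accepts) + cost_fn * (# false rejects).
# Scores, labels, and costs here are illustrative, not from the paper.

def optimal_threshold(scores, labels, cost_fp, cost_fn):
    """Return (threshold, cost) minimizing the weighted error cost.

    scores: per-utterance confidence scores in [0, 1]
    labels: 1 if the utterance was correctly understood, else 0
    """
    # Candidate thresholds: every observed score, plus one that rejects all.
    candidates = sorted(set(scores)) + [1.1]
    best_t, best_cost = None, float("inf")
    for t in candidates:
        # False accept: low-quality utterance accepted (score >= t, label 0).
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        # False reject: good utterance rejected (score < t, label 1).
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
```

With equal costs for the two error types, as the abstract reports holds over a wide operating range for the CMU Communicator, the search reduces to minimizing the total error count.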
Available at: http://works.bepress.com/alexander_rudnicky/19/