Many classifiers are trained with massive training sets only to be applied at test time on data from a different distribution. How can we rapidly and simply adapt a classifier to a new test distribution, even when we do not have access to the original training data? We present an on-line approach for rapidly adapting a “black box” classifier to a new test data set without retraining the classifier or examining the original optimization criterion. Assuming the original classifier outputs a continuous number for which a threshold gives the class, we reclassify points near the original boundary using a Gaussian process regression scheme. We show how this general procedure can be used in the context of a classifier cascade, demonstrating performance that far exceeds state-of-the-art results in face detection on a standard data set. We also draw connections to work in semi-supervised learning, domain adaptation, and information regularization.
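The core idea described above can be sketched as follows: points whose scores fall far from the threshold are treated as confidently labeled, and a Gaussian process regression on those points reassigns the labels of near-boundary points. This is a minimal illustrative sketch, not the paper's actual algorithm; the margin parameter, RBF kernel, and function name are assumptions introduced here.

```python
# Illustrative sketch (not the paper's exact method): reclassify near-boundary
# test points using GP regression fit on confidently classified test points.
# The `margin` parameter and RBF kernel choice are assumptions for this example.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def adapt_scores(X_test, scores, threshold=0.0, margin=0.5):
    """Return adapted binary labels for X_test given black-box scores."""
    confident = np.abs(scores - threshold) >= margin
    uncertain = ~confident
    labels = scores > threshold
    if not uncertain.any() or confident.sum() < 2:
        return labels  # nothing to adapt, or too few confident points
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    # Regress the continuous classifier outputs of confident points on features,
    # then re-threshold the GP's smoothed prediction at the uncertain points.
    gp.fit(X_test[confident], scores[confident])
    labels[uncertain] = gp.predict(X_test[uncertain]) > threshold
    return labels
```

On a toy one-dimensional test set, a near-boundary point close to a cluster of confidently positive points is pulled to the positive class by the GP's smoothed score.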
Available at: http://works.bepress.com/erik_learned_miller/50/