Article
Super Learner
Statistical Applications in Genetics and Molecular Biology (2007)
  • Mark J. van der Laan, University of California, Berkeley
  • Eric C. Polley, University of California, Berkeley
  • Alan E. Hubbard, University of California, Berkeley
Abstract
When trying to learn a model for the prediction of an outcome given a set of covariates, a statistician has many estimation procedures in their toolbox. A few examples of these candidate learners are least squares, least angle regression, random forests, and spline regression. Previous articles (van der Laan and Dudoit (2003); van der Laan et al. (2006); Sinisi et al. (2007)) theoretically validated the use of cross-validation to select an optimal learner from among many candidates. Motivated by this use of cross-validation, we propose a new prediction method that builds a weighted combination of many candidate learners: the super learner. This article proposes a fast algorithm for constructing a super learner that uses V-fold cross-validation to select the weights combining an initial set of candidate learners. In addition, the paper contains a practical demonstration of the adaptivity of this so-called super learner to various true data-generating distributions. This approach to constructing a super learner generalizes to any parameter that can be defined as the minimizer of a loss function.
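To make the abstract's recipe concrete, the following is a minimal sketch of the idea (not the authors' implementation): fit each candidate learner within V-fold cross-validation to obtain out-of-fold "level-one" predictions, choose convex weights that minimize the cross-validated squared-error loss, then refit the candidates on the full data and combine them with those weights. The two candidate learners, the simulated data, and the grid search over the weight simplex are illustrative assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data: the outcome is nonlinear in x, so neither
# candidate learner below is correctly specified on its own.
n = 200
x = rng.uniform(-2, 2, size=n)
y = x**2 + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x])  # intercept + covariate

# Two toy candidate learners; each returns a prediction function.
def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda Xnew: Xnew @ beta

def fit_mean(X, y):
    m = y.mean()
    return lambda Xnew: np.full(len(Xnew), m)

candidates = [fit_ols, fit_mean]

# Level-one matrix Z: V-fold cross-validated predictions per candidate.
V = 10
folds = np.array_split(rng.permutation(n), V)
Z = np.empty((n, len(candidates)))
for fold in folds:
    train = np.setdiff1d(np.arange(n), fold)
    for j, fit in enumerate(candidates):
        Z[fold, j] = fit(X[train], y[train])(X[fold])

# Select convex weights minimizing cross-validated squared error.
# With two candidates the simplex is one-dimensional, so a grid suffices.
grid = np.linspace(0.0, 1.0, 101)
losses = [np.mean((y - (a * Z[:, 0] + (1 - a) * Z[:, 1])) ** 2) for a in grid]
a = grid[int(np.argmin(losses))]
weights = np.array([a, 1 - a])

# Final super learner: refit each candidate on all data, combine with weights.
fits = [fit(X, y) for fit in candidates]
def super_learner(Xnew):
    return sum(w * f(Xnew) for w, f in zip(weights, fits))
```

Because the grid includes the corner points (weight 1 on a single candidate), the cross-validated loss of the selected combination is never worse than that of the best single candidate on the level-one data, which is the practical appeal of the weighted combination over plain selection.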
Keywords
  • cross-validation,
  • loss-based estimation,
  • machine learning,
  • prediction
Publication Date
2007
Citation Information
Mark J. van der Laan, Eric C. Polley, and Alan E. Hubbard. "Super Learner." Statistical Applications in Genetics and Molecular Biology Vol. 6, Iss. 1 (2007).
Available at: http://works.bepress.com/mark_van_der_laan/201/