Recently, a class of machine learning-inspired procedures, termed kernel machine methods, has been extensively developed in the statistical literature. These methods have been shown to have high power for a wide class of problems and applications in genomics and brain imaging. Many authors have exploited an equivalence between kernel machines and mixed effects models and used the attendant estimation and inferential procedures. In this note, we explore the theoretical foundations of the kernel machine using a spectral decomposition. This decomposition leads to simple characterizations of the kernel machine procedure and of its bias and variance properties. In addition, we construct a so-called 'adaptively minimax' kernel machine. Such a construction highlights the role of thresholding in the observation space and the limits on the interpretability of such kernel machines.
Keywords: Data mining; Decision theory; Hard-thresholding; High-dimensional data; Nonparametric regression; Support vector machines
Available at: http://works.bepress.com/debashis_ghosh/57/