Contribution to Book
Robust L_{2}E Parameter Estimation of Gaussian Mixture Models: Comparison with Expectation Maximization
Proceedings of the International Conference on Neural Information Processing (ICONIP 2015, Part III, LNCS 9491) (2015)
  • Umashanger Thayasivam, Rowan University
  • Chinthaka Kuruwita, Hamilton College
  • Ravi P. Ramachandran, Rowan University
Abstract
The purpose of this paper is to discuss the use of L2E estimation, which minimizes the integrated squared distance, as a practical robust estimation tool for unsupervised clustering. Comparisons to the expectation maximization (EM) algorithm are made. The L2E approach for mixture models is particularly useful in the study of big data sets, especially those containing a consistent number of outliers. The focus is on the comparison of L2E and EM for parameter estimation of Gaussian mixture models. Simulation examples show that the L2E approach is more robust than EM when there is noise in the data (particularly outliers) and when the underlying probability density function of the data does not match a mixture of Gaussians.
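As a rough illustration of the idea described in the abstract (not the authors' implementation), the L2E criterion for a density estimate f(x|θ) fitted to a sample x_1,…,x_n is ∫f(x|θ)²dx − (2/n)Σf(x_i|θ); for a Gaussian mixture the squared-density integral has a closed form. The sketch below, a hypothetical example assuming a two-component 1D mixture and SciPy's general-purpose Nelder-Mead optimizer, shows how the criterion can be minimized directly on data contaminated with gross outliers:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def l2e_criterion(params, x):
    """L2E loss for a two-component 1D Gaussian mixture.

    params = [w, mu1, mu2, log_s1, log_s2]; the second weight is 1 - w.
    Uses the closed form
      int f^2 dx = sum_j sum_k w_j w_k * phi(mu_j - mu_k; 0, s_j^2 + s_k^2),
    where phi is the normal density.
    """
    w = np.clip(params[0], 1e-6, 1.0 - 1e-6)
    mu = np.array([params[1], params[2]])
    s = np.exp(params[3:5])           # log-scale keeps sigmas positive
    wts = np.array([w, 1.0 - w])

    # Integral of the squared mixture density (closed form).
    quad = 0.0
    for j in range(2):
        for k in range(2):
            quad += wts[j] * wts[k] * norm.pdf(
                mu[j] - mu[k], 0.0, np.sqrt(s[j] ** 2 + s[k] ** 2))

    # Empirical mean of the fitted density over the sample.
    dens = wts[0] * norm.pdf(x, mu[0], s[0]) + wts[1] * norm.pdf(x, mu[1], s[1])
    return quad - 2.0 * dens.mean()

rng = np.random.default_rng(0)
# Two clean Gaussian clusters plus a small fraction of gross outliers.
x = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(5.0, 1.0, 300),
                    rng.uniform(20.0, 30.0, 15)])

res = minimize(l2e_criterion, x0=[0.5, -1.0, 6.0, 0.0, 0.0], args=(x,),
               method="Nelder-Mead", options={"maxiter": 5000})
w, m1, m2 = res.x[0], res.x[1], res.x[2]
s1, s2 = np.exp(res.x[3]), np.exp(res.x[4])
```

Because the criterion rewards only probability mass placed near the bulk of the data, the fitted means stay close to the true cluster centers rather than being dragged toward the outliers, which is the robustness property the paper contrasts with EM.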
Keywords
  • Robust L2E estimation,
  • Gaussian mixture model,
  • Expectation maximization,
  • Unsupervised learning,
  • Big data
Publication Date
November 9, 2015
Editor
S. Arik et al. (Eds.)
Publisher
Springer International Publishing
DOI
10.1007/978-3-319-26555-1_32
Citation Information
Umashanger Thayasivam, Chinthaka Kuruwita and Ravi P. Ramachandran. "Robust L_{2}E Parameter Estimation of Gaussian Mixture Models: Comparison with Expectation Maximization." Proceedings of the International Conference on Neural Information Processing (ICONIP 2015, Part III, LNCS 9491). Springer International Publishing, Switzerland (2015), pp. 281-288.
Available at: http://works.bepress.com/ravi-ramachandran/4/