Contribution to Book
Robust L2E Parameter Estimation of Gaussian Mixture Models: Comparison with Expectation Maximization
Neural Information Processing (2015)
  • Umashanger Thayasivam, Rowan University
  • Chinthaka Kuruwita
  • Ravi P. Ramachandran, Rowan University
Abstract
The purpose of this paper is to discuss the use of L2E estimation, which minimizes the integrated squared distance, as a practical robust estimation tool for unsupervised clustering. Comparisons to the expectation maximization (EM) algorithm are made. The L2E approach for mixture models is particularly useful in the study of big data sets, especially those with a consistent number of outliers. The focus is on the comparison of L2E and EM for parameter estimation of Gaussian mixture models. Simulation examples show that the L2E approach is more robust than EM when there is noise in the data (particularly outliers) and when the underlying probability density function of the data does not match a mixture of Gaussians.
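The L2E criterion described in the abstract minimizes the integrated squared distance between the model density and the true density, which reduces to minimizing L2E(θ) = ∫ f_θ(x)² dx − (2/n) Σᵢ f_θ(xᵢ). For Gaussian mixtures the first term has a closed form, since the integral of a product of two Gaussian densities is itself a Gaussian density evaluated at the difference of the means. The following sketch (not the authors' code; all variable names and the optimizer choice are assumptions for illustration) fits a two-component 1D Gaussian mixture by direct minimization of the L2E criterion on data contaminated with outliers:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical illustration of L2E fitting for a two-component 1D
# Gaussian mixture.  The L2E criterion is
#   L2E(theta) = integral f_theta(x)^2 dx - (2/n) * sum_i f_theta(x_i),
# and for Gaussian mixtures the integral term is closed-form:
#   integral N(x; mu_j, s_j^2) N(x; mu_k, s_k^2) dx
#     = N(mu_j - mu_k; 0, s_j^2 + s_k^2).

def mixture_pdf(x, w, mu, sigma):
    """Density of sum_k w_k * N(mu_k, sigma_k^2)."""
    return sum(wk * norm.pdf(x, mk, sk) for wk, mk, sk in zip(w, mu, sigma))

def l2e_objective(params, x):
    """L2E criterion; params use an unconstrained parameterization
    (logit weight, means, log standard deviations)."""
    w1 = 1.0 / (1.0 + np.exp(-params[0]))
    w = np.array([w1, 1.0 - w1])
    mu = params[1:3]
    sigma = np.exp(params[3:5])
    # Closed-form integral of the squared mixture density.
    int_f2 = 0.0
    for j in range(2):
        for k in range(2):
            int_f2 += w[j] * w[k] * norm.pdf(
                mu[j] - mu[k], 0.0, np.sqrt(sigma[j] ** 2 + sigma[k] ** 2))
    return int_f2 - 2.0 * np.mean(mixture_pdf(x, w, mu, sigma))

rng = np.random.default_rng(0)
# Two well-separated components plus ~5% gross outliers.
clean = np.concatenate([rng.normal(0, 1, 300), rng.normal(8, 1, 300)])
data = np.concatenate([clean, rng.uniform(30, 40, 30)])

res = minimize(l2e_objective, x0=[0.0, 1.0, 7.0, 0.0, 0.0],
               args=(data,), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-12})
mu_hat = np.sort(res.x[1:3])
print("estimated means:", mu_hat)  # should land near (0, 8) despite outliers
```

Because the outliers contribute only a vanishing amount to the data term of the criterion (their density under any reasonable fit is near zero), the estimated means stay close to the true component means, whereas a likelihood-based EM fit would pull a component toward the contamination; this is the robustness contrast the paper examines.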
Keywords
  • Robust L2E estimation
  • Gaussian mixture model
  • Expectation maximization
  • Unsupervised learning
  • Big data
Publication Date
2015
Editor
Sabri Arik, Tingwen Huang, Weng Kin Lai, Qingshan Liu
Publisher
Springer
Series
Lecture Notes in Computer Science
DOI
10.1007/978-3-319-26555-1_32
Citation Information
Umashanger Thayasivam, Chinthaka Kuruwita, and Ravi P. Ramachandran. "Robust L2E Parameter Estimation of Gaussian Mixture Models: Comparison with Expectation Maximization." Neural Information Processing, Vol. 9491 (2015), pp. 281-288.
Available at: http://works.bepress.com/umashanger-thayasivam/3/