Article
Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning
arXiv: Machine Learning
(2018)
Abstract
As the prevalence and everyday use of machine learning algorithms, along with our reliance on them, grow dramatically, so do malicious efforts to attack and undermine these algorithms, fueling a growing interest in adversarial machine learning. A number of approaches have been developed that can render a machine learning algorithm ineffective through poisoning or other types of attacks. Most attack algorithms use sophisticated optimization approaches whose objective function is designed to cause maximum damage to the accuracy and performance of the algorithm on a given task. In this effort, we show that while such an objective function is indeed brutally effective in causing maximum damage on an embedded feature selection task, it often results in an attack mechanism that can be easily detected with an embarrassingly simple novelty or outlier detection algorithm. We then propose an equally simple yet elegant solution by adding a regularization term to the attacker’s objective function that penalizes outlying attack points.
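To make the trade-off concrete, the regularized attacker's objective described in the abstract can be sketched in generic notation; the symbols below (loss L, clean training set D_tr, validation set D_val, poisoning point x_c, distance measure d, and trade-off weight lambda) are illustrative assumptions rather than the paper's exact formulation.

% Hedged sketch of a regularized poisoning objective (assumed notation):
% the attacker seeks a poisoning point x_c that maximizes damage to the
% learner trained on the poisoned data, minus a penalty on how far x_c lies
% from the clean training data, trading raw attack strength against
% detectability by a simple outlier or novelty detector.
\[
x_c^{*} \;=\; \arg\max_{x_c}\;
\mathcal{L}\!\left(D_{\mathrm{val}},\, \theta^{*}\!\left(D_{\mathrm{tr}} \cup \{x_c\}\right)\right)
\;-\; \lambda\, d\!\left(x_c,\, D_{\mathrm{tr}}\right)
\]

Setting lambda to zero recovers the unregularized, maximum-damage attack; increasing lambda pulls the attack point toward the clean data distribution, making it harder to flag as an outlier.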
Publication Date
February 20, 2018
Citation Information
Christopher Frederickson, Michael Moore, Glenn Dawson, and Robi Polikar. "Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning." arXiv: Machine Learning (2018). Available at: http://works.bepress.com/robi-polikar/41/