Reliable Explanations via Adversarial Examples on Robust Networks
Student Research Symposium
  • Walt Woods, Portland State University
  • Jack H Chen, Portland State University
  • Christof Teuscher, Portland State University
Location
Portland State University
Start Date
7-5-2019 11:00 AM
End Date
7-5-2019 1:00 PM
Subjects
  • Neural networks (Computer science) -- Algorithms
  • Machine learning
  • Cooperating objects (Computer systems)
  • Human-computer interaction

Abstract
Neural networks (NNs) increasingly underpin advanced machine learning techniques in sensitive fields such as autonomous vehicles and medical imaging. However, NNs are vulnerable to a class of imperceptible attacks, called adversarial examples, which can arbitrarily alter the output of the network. To close the gap between the reliability demanded by real-world applications and the fragility of NNs, we propose a new method for stabilizing networks, and show that, as an added benefit, our technique yields reliable, high-fidelity explanations for the NN's decisions. Compared to the state of the art, this technique increased the area under the curve of accuracy versus the root-mean-squared error (RMSE) of allowed attacks by a factor of 1.8, and we demonstrate that it enables new Human-In-The-Loop (HITL) training techniques for NNs. On medical imaging, we show that our technique produces explanations that are significantly more sensible to a human operator than the explanations from previously proposed algorithms. The combination of increased network robustness and the ability to demonstrate decision boundaries to a human observer should pave the way for greatly improved HITL decision processes in future work.
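To make the robustness measurement above concrete, here is a minimal, hypothetical PyTorch sketch of the kind of evaluation the abstract describes: an L2-bounded projected-gradient-descent (PGD) attack is swept over increasing perturbation budgets, and classifier accuracy is traced against the RMSE of the allowed attack to yield an area under the curve. The `model`, the NCHW image batch, and the budget values are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps, steps=20):
    """PGD attack confined to an L2 ball of radius eps around x (NCHW input)."""
    step = 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Normalized gradient-ascent step, then project back onto the ball.
            g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += step * g / g_norm
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta *= (eps / d_norm).clamp(max=1.0)
    return (x + delta).detach()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def accuracy_rmse_auc(model, x, y, budgets):
    """One (RMSE, accuracy) point per L2 budget; RMSE = eps / sqrt(#pixels)."""
    n = x[0].numel()
    rmse = [eps / n ** 0.5 for eps in budgets]
    acc = [accuracy(model, pgd_l2(model, x, y, eps), y) for eps in budgets]
    # Trapezoidal area under the accuracy-vs-RMSE curve.
    return sum((rmse[i + 1] - rmse[i]) * (acc[i + 1] + acc[i]) / 2
               for i in range(len(rmse) - 1))
```

With budgets such as [0.25, 0.5, 1.0, 2.0] on a normalized image batch, a larger AUC indicates a network whose accuracy degrades more slowly under attack; the abstract's 1.8x figure compares an area of this kind between methods.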

Citation Information
Walt Woods, Jack H. Chen, and Christof Teuscher. "Reliable Explanations via Adversarial Examples on Robust Networks" (2019).
Available at: