Explanation Methods for Neural Networks
Student Research Symposium
  • Jack H Chen, Portland State University
  • Christof Teuscher, Portland State University
Start Date
7-5-2019 11:00 AM
End Date
7-5-2019 1:00 PM
Subjects
  • Neural networks -- Algorithms
  • Machine learning
  • Categories (Mathematics)
  • Neural networks -- Classifiers
  • Explanation

Neural Networks (NNs) have become the basis of almost all state-of-the-art machine learning algorithms and classifiers. While NNs have been shown to generalize well to real-world examples, researchers have struggled to show why they work on an intuitive level. We designed several methods to explain the decisions of two state-of-the-art NN classifiers, ResNet and an All-CNN, in the context of the Japanese Society of Radiological Technology (JSRT) lung nodule dataset and the CIFAR-10 image dataset. The leading explanation methods LIME and Grad-CAM generate variations of heat maps that represent the regions of the input the NN determined to be salient. We analyze the salient regions highlighted by these algorithms, show how these explanations may be misleading, and discuss future directions, including methods that construct full-color images rather than heat maps to provide more complete explanations of NN classifiers. This work is relevant to sensitive problems and fields that require validity in the decisions made by a classifier, such as medical imaging or fraud detection.
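To illustrate the kind of heat map the abstract refers to: Grad-CAM weights the feature maps of a convolutional layer by the gradient of the class score with respect to those maps, then applies a ReLU. The following is a minimal NumPy sketch of that combination step only; it assumes the feature maps and gradients have already been extracted from the network (the function name and shapes are illustrative, not from the authors' code).

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Combine conv feature maps and class-score gradients into a heat map.

    feature_maps: (C, H, W) activations of a convolutional layer
    gradients:    (C, H, W) gradient of the class score w.r.t. those activations
    """
    # Channel weights: global-average-pool the gradients over spatial dims
    weights = gradients.mean(axis=(1, 2))               # shape (C,)
    # Weighted sum of feature maps, then ReLU so only features with a
    # positive influence on the class score remain in the map
    cam = np.tensordot(weights, feature_maps, axes=1)   # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] so the map can be rendered as a heat map overlay
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random activations and gradients
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heat = grad_cam(fmaps, grads)
print(heat.shape)
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on it; LIME instead perturbs superpixels of the input and fits a local surrogate model, but both methods ultimately communicate saliency as a region-level map, which is the limitation the abstract points to.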

Persistent Identifier
Citation Information
Jack H Chen and Christof Teuscher. "Explanation Methods for Neural Networks" (2019)
Available at: