- Neural networks -- Algorithms,
- Machine learning,
- Categories (Mathematics),
- Neural networks -- Classifiers,
Neural Networks (NNs) have become the basis of almost all state-of-the-art machine learning algorithms and classifiers. While NNs have been shown to generalize well to real-world examples, researchers have struggled to explain why they work at an intuitive level. We designed several methods to explain the decisions of two state-of-the-art NN classifiers, ResNet and an All-CNN, on the Japanese Society of Radiological Technology (JSRT) lung nodule dataset and the CIFAR-10 image dataset. The leading explanation methods LIME and Grad-CAM generate variants of heat maps that represent the input regions the NN deems salient. We analyze the salient regions these algorithms highlight, show how their explanations can be misleading, and discuss future directions, including methods that construct full-color images rather than heat maps to provide more complete explanations of NN classifiers. This work is relevant to sensitive problems and fields, such as medical imaging and fraud detection, that require the decisions made by a classifier to be validated.
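For readers unfamiliar with how such heat maps arise, a Grad-CAM-style computation can be sketched as follows. This is a minimal illustrative sketch with toy random arrays, not the authors' implementation: it assumes access to a convolutional layer's activations and the gradients of the class score with respect to them.

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Grad-CAM-style heat map from a conv layer's feature maps and the
    gradients of the class score with respect to those feature maps.

    activations: array of shape (K, H, W) -- K spatial feature maps.
    gradients:   array of shape (K, H, W) -- same shape as activations.
    """
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be rendered as a heat map overlay.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random stand-ins for real activations and gradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))
G = rng.standard_normal((8, 7, 7))
heatmap = grad_cam_heatmap(A, G)
print(heatmap.shape)  # (7, 7)
```

In practice the low-resolution map is upsampled to the input image size and overlaid on it, which is the heat-map visualization discussed in the abstract.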
Available at: http://works.bepress.com/christof-teuscher/40/