Understanding more about human and machine attention in deep neural networks
IEEE Transactions on Multimedia
  • Qiuxia Lai, Chinese University of Hong Kong
  • Salman Khan, Mohamed Bin Zayed University of Artificial Intelligence
  • Yongwei Nie, South China University of Technology
  • Hanqiu Sun, University of Electronic Science and Technology of China
  • Jianbing Shen, Inception Institute of Artificial Intelligence
  • Ling Shao, Inception Institute of Artificial Intelligence
Document Type
Article
Abstract

The human visual system can selectively attend to parts of a scene for quick perception, a biological mechanism known as human attention. Inspired by this, recent deep learning models incorporate attention mechanisms that focus on the most task-relevant parts of the input signal for further processing, commonly called machine (or neural/artificial) attention. Understanding the relationship between human and machine attention is important for interpreting and designing neural networks. Many works claim that the attention mechanism offers an extra dimension of interpretability by explaining where neural networks look. However, recent studies demonstrate that artificial attention maps do not always coincide with common intuition. In view of this conflicting evidence, we conduct a systematic study of artificial and human attention in neural network design. Using three example computer vision tasks (salient object segmentation, video action recognition, and fine-grained image classification), diverse representative backbones (AlexNet, VGGNet, ResNet), well-known architectures (two-stream, FCN), corresponding real human gaze data, and systematically conducted large-scale quantitative studies, we quantify the consistency between artificial attention and human visual attention and offer novel insights into existing artificial attention mechanisms by giving preliminary answers to several key questions about human and artificial attention. Overall, the results demonstrate that human attention can serve as a meaningful 'ground truth' for attention-driven tasks, where the closer artificial attention is to human attention, the better the performance; for higher-level vision tasks, the relationship is case-by-case.
For attention-driven tasks, it would therefore be advisable to explicitly enforce better alignment between artificial and human attention to boost performance; such alignment would also improve network explainability for higher-level computer vision tasks.
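To illustrate the kind of consistency measurement the abstract describes, here is a minimal sketch (not the paper's code; the function names and metric choices are this editor's assumptions) that compares a human gaze map with a machine attention map using two standard saliency-evaluation metrics, Pearson correlation coefficient (CC, higher is closer) and KL divergence (lower is closer):

```python
import numpy as np

def normalize_map(m):
    """Scale a non-negative attention map so its entries sum to 1 (a distribution)."""
    m = np.asarray(m, dtype=np.float64)
    return m / (m.sum() + 1e-12)

def attention_cc(human, machine):
    """Pearson correlation coefficient between two attention maps (hypothetical helper)."""
    h = np.asarray(human, dtype=np.float64).ravel()
    g = np.asarray(machine, dtype=np.float64).ravel()
    h = (h - h.mean()) / (h.std() + 1e-12)  # standardize to zero mean, unit variance
    g = (g - g.mean()) / (g.std() + 1e-12)
    return float((h * g).mean())

def attention_kl(human, machine, eps=1e-12):
    """KL divergence D(human || machine) after normalizing both maps to distributions."""
    p = normalize_map(human).ravel()
    q = normalize_map(machine).ravel()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

In a study like this one, such metrics would be averaged over a dataset of (gaze map, attention map) pairs; identical maps give CC ≈ 1 and KL ≈ 0, while anti-correlated maps give negative CC.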

DOI
10.1109/TMM.2020.3007321
Publication Date
1-1-2021
Keywords
  • artificial attention
  • attention mechanism
  • deep learning
  • human attention
Comments

IR deposit conditions:

  • OA version (pathway a)
  • Accepted version
  • No embargo
  • When accepted for publication, set statement to accompany deposit (see policy)
  • Must link to publisher version with DOI
  • Publisher copyright and source must be acknowledged
Citation Information
Q. Lai, S. Khan, Y. Nie, H. Sun, J. Shen and L. Shao, "Understanding more about human and machine attention in deep neural networks," in IEEE Transactions on Multimedia, vol. 23, pp. 2086-2099, 2021, doi: 10.1109/TMM.2020.3007321.