Article
An Effective Video Transformer With Synchronized Spatiotemporal and Spatial Self-Attention for Action Recognition
IEEE Transactions on Neural Networks and Learning Systems (2022)
  • Saghir Alfasly
  • Charles K. Chui, University of Missouri–St. Louis
  • Qingtang Jiang, University of Missouri–St. Louis
  • Jian Lu
  • Chen Xu
Abstract
Convolutional neural networks (CNNs) have dominated vision-based deep neural network architectures for both image and video models over the past decade. Recently, however, convolution-free vision Transformers (ViTs) have outperformed CNN-based models in image recognition. Despite this progress, the design of video Transformers has not yet received the same research attention as that of image-based Transformers. Although image-based Transformers have been adapted for video understanding, the resulting models still lack efficiency because of the large gap between CNN-based models and Transformers in the number of parameters and the training settings. In this work, we propose three techniques to improve video understanding with video Transformers. First, to derive a better spatiotemporal feature representation, we propose a new spatiotemporal attention scheme, termed synchronized spatiotemporal and spatial attention (SSTSA), which derives spatiotemporal features with temporal and spatial multiheaded self-attention (MSA) modules. It also preserves the best spatial attention through another spatial self-attention module running in parallel, resulting in an effective Transformer encoder. Second, we propose a motion-spotlighting module that embeds the short-term motion of consecutive input frames into the regular RGB input, which is then processed with a single-stream video Transformer. Third, we propose a simple intraclass frame-interlacing method for the input clips that serves as an effective video augmentation. Finally, the proposed techniques are evaluated and validated with a set of extensive experiments. Our video Transformer outperforms its previous counterparts on two well-known datasets, Kinetics400 and Something-Something-v2.
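To make the SSTSA idea concrete, the following is a minimal NumPy sketch of the attention pattern the abstract describes: one branch applies temporal self-attention per patch followed by spatial self-attention per frame, while a parallel branch applies spatial-only self-attention, and the two are fused. This is an illustration only, not the paper's implementation: it uses single-head attention with identity projections (Q = K = V = input), and the function names, the sum fusion, and the tensor layout are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head self-attention with identity projections (Q = K = V = x),
    # purely to show the attention pattern; the paper uses learned
    # projections and multiple heads (MSA).
    d = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(d)  # (..., L, L)
    return softmax(scores, axis=-1) @ x               # (..., L, d)

def sstsa_block(clip):
    # clip: (T frames, N patches, D channels).
    # Branch 1: temporal attention per patch, then spatial attention per frame.
    temporal = self_attention(np.swapaxes(clip, 0, 1))            # attend over T; (N, T, D)
    spatiotemporal = self_attention(np.swapaxes(temporal, 0, 1))  # attend over N; (T, N, D)
    # Branch 2: parallel spatial-only attention, preserving pure spatial cues.
    spatial = self_attention(clip)                                # attend over N; (T, N, D)
    # Fuse the branches; the paper's exact fusion is not given in the
    # abstract, so a simple sum stands in for it here.
    return spatiotemporal + spatial

clip = np.random.default_rng(0).normal(size=(8, 16, 32))  # 8 frames, 16 patches, 32-dim
out = sstsa_block(clip)
print(out.shape)  # (8, 16, 32)
```

The key design point the sketch captures is that the spatial-only branch runs in parallel with the factorized temporal-then-spatial branch, so purely spatial cues are not washed out by the temporal mixing.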
Keywords
  • Action recognition,
  • frame interlacing,
  • motion spotlighting,
  • video augmentation,
  • video transformers
Publication Date
July 20, 2022
DOI
10.1109/TNNLS.2022.3190367
Citation Information
Saghir Alfasly, Charles K. Chui, Qingtang Jiang, Jian Lu, et al. "An Effective Video Transformer With Synchronized Spatiotemporal and Spatial Self-Attention for Action Recognition" IEEE Transactions on Neural Networks and Learning Systems (2022)
Available at: http://works.bepress.com/qingtang-jiang/83/