STAP: Spatial-Temporal Attention-Aware Pooling for Action Recognition
IEEE Transactions on Circuits and Systems for Video Technology
  • Tam Nguyen, University of Dayton
  • Zheng Song, Visenze Pte.
  • Shuicheng Yan, National University of Singapore
Document Type
Article
Publication Date
1-1-2015
Abstract

Human action recognition is valuable for numerous practical applications, e.g., gaming, video surveillance, and video search. In this paper, we hypothesize that action classification can be boosted by a smart feature-pooling strategy under the prevalently used bag-of-words representation. Building on automatic video saliency analysis, we propose a spatial-temporal attention-aware pooling scheme. First, video saliency is predicted with a video saliency model; the localized spatial-temporal features are then pooled at different saliency levels, forming video-saliency-guided channels. Saliency-aware matching kernels are derived as the similarity measure over these channels. Intuitively, the proposed kernels compute the similarity of the video foreground (salient areas) or background (nonsalient areas) at each level. Finally, the kernels are fed into popular support vector machines for action classification. Extensive experiments on three popular action-classification datasets validate the effectiveness of the proposed method, which outperforms the state-of-the-art methods: 95.3% on UCF Sports (better by 4.0%) and 87.9% on the YouTube dataset (better by 2.5%), with comparable results on the Hollywood2 dataset.

Inclusive pages
77-86
ISBN/ISSN
1051-8215
Comments

Permission documentation is on file.

Publisher
IEEE
Peer Reviewed
Yes
Citation Information
Tam Nguyen, Zheng Song and Shuicheng Yan. "STAP: Spatial-Temporal Attention-Aware Pooling for Action Recognition" IEEE Transactions on Circuits and Systems for Video Technology Vol. 25 Iss. 1 (2015)
Available at: http://works.bepress.com/tam-nguyen/17/