On improving adversarial transferability of vision transformers
arXiv
  • Muzammal Naseer, Australian National University & Mohamed bin Zayed University of Artificial Intelligence
  • Kanchana Ranasinghe, Stony Brook University
  • Salman Khan, Mohamed bin Zayed University of Artificial Intelligence
  • Fahad Shahbaz Khan, Linköping University & Mohamed bin Zayed University of Artificial Intelligence
  • Fatih Porikli, Qualcomm
Document Type
Article
Abstract

Vision transformers (ViTs) process input images as sequences of patches via self-attention, a radically different architecture from that of convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and their transferability. In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability, even for large ViT models. We show that this phenomenon is due solely to sub-optimal attack procedures that do not leverage the true representation potential of ViTs. A deep ViT is composed of multiple blocks with a consistent architecture comprising self-attention and feed-forward layers, where each block is capable of independently producing a class token. Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in the earlier tokens, leading to poor adversarial transferability of ViTs. Exploiting the compositional nature of ViT models, we enhance the transferability of existing attacks by introducing two novel strategies specific to the ViT architecture. (i) Self-Ensemble: We propose a method to find multiple discriminative pathways by dissecting a single ViT model into an ensemble of networks. This allows class-specific information to be explicitly utilized at each ViT block. (ii) Token Refinement: We then propose to refine the tokens to further enhance the discriminative capacity at each block of the ViT. Our token refinement systematically combines the class tokens with structural information preserved within the patch tokens. An adversarial attack applied to such refined tokens, within the ensemble of classifiers found in a single vision transformer, has significantly higher transferability and thereby brings out the true generalization potential of the ViT's adversarial space. Code: https://git.io/JZmG3. Copyright © 2021, The Authors. All rights reserved.
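The self-ensemble idea described above can be sketched in a few lines of PyTorch: each block's class token is passed through a shared classifier head, and a single attack step maximizes the loss averaged over all blocks rather than only the final one. This is a minimal illustrative sketch, not the authors' released code; the names (`ToyViT`, `self_ensemble_fgsm`) and the tiny model configuration are hypothetical, and token refinement is omitted.

```python
import torch
import torch.nn as nn

class ToyViT(nn.Module):
    """Minimal stand-in for a ViT: a class token is prepended to the
    (pre-flattened) patch embeddings, each transformer block updates the
    token sequence, and a shared head classifies the class token emerging
    from every block -- the 'ensemble of networks' inside one model."""
    def __init__(self, dim=16, depth=4, num_classes=10):
        super().__init__()
        self.patch_embed = nn.Linear(dim, dim)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            for _ in range(depth)
        )
        self.head = nn.Linear(dim, num_classes)  # shared across all blocks

    def forward(self, x):
        # x: (batch, num_patches, dim) of pre-flattened patch features
        tokens = torch.cat(
            [self.cls.expand(x.size(0), -1, -1), self.patch_embed(x)], dim=1
        )
        logits_per_block = []
        for blk in self.blocks:
            tokens = blk(tokens)
            logits_per_block.append(self.head(tokens[:, 0]))  # class token
        return logits_per_block

def self_ensemble_fgsm(model, x, y, eps=8 / 255):
    """One FGSM step on the loss averaged over per-block classifiers,
    instead of the conventional last-token-only loss."""
    x = x.clone().requires_grad_(True)
    losses = [nn.functional.cross_entropy(l, y) for l in model(x)]
    torch.stack(losses).mean().backward()
    return (x + eps * x.grad.sign()).detach()
```

Replacing the averaged loss with `losses[-1]` recovers the conventional attack; the paper's observation is that the averaged (self-ensemble) variant transfers far better across black-box models.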

DOI
https://doi.org/10.48550/arXiv.2106.04169
Publication Date
6-8-2021
Keywords
  • Artificial Intelligence (cs.AI),
  • Computer Vision and Pattern Recognition (cs.CV),
  • Machine Learning (cs.LG)
Comments

Preprint: arXiv

Citation Information
M. Naseer, K. Ranasinghe, S. Khan, F. S. Khan, and F. Porikli, "On improving adversarial transferability of vision transformers," arXiv preprint arXiv:2106.04169, 2021.