Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space
arXiv
  • Arnav Chavan, Indian Institute of Technology, Dhanbad, India & Mohamed bin Zayed University of Artificial Intelligence
  • Zhiqiang Shen, Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence
  • Zhuang Liu, Carnegie Mellon University
  • Kwang-Ting Cheng, Hong Kong University of Science and Technology
  • Eric Xing, Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence
Document Type
Article
Abstract

This paper explores the feasibility of finding an optimal sub-model within a vision transformer and introduces a pure vision transformer slimming (ViT-Slim) framework that searches for such a sub-structure from the original model end-to-end across multiple dimensions, including the input tokens and the MHSA and MLP modules, with state-of-the-art performance. Our method is based on a learnable and unified ℓ1 sparsity constraint with pre-defined factors that reflect the global importance across the continuous searching spaces of the different dimensions. The searching process is highly efficient thanks to a single-shot training scheme. For instance, on DeiT-S, ViT-Slim takes only ∼43 GPU hours for the searching process, and the searched structure is flexible, with diverse dimensionalities in different modules. A budget threshold is then applied according to the accuracy-FLOPs trade-off required on the target devices, and a retraining process is performed to obtain the final models. Extensive experiments show that ViT-Slim can compress up to 40% of the parameters and 40% of the FLOPs of various vision transformers while increasing accuracy by ∼0.6% on ImageNet. We also demonstrate the advantage of our searched models on several downstream datasets. Our source code will be publicly available later. Copyright © 2022, The Authors. All rights reserved.
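The core mechanism the abstract describes, a learnable ℓ1 sparsity constraint over continuous importance scores that are later thresholded against a budget before retraining, can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation; the MaskedLinear module, its mask parameter, and the l1_sparsity_loss helper are illustrative names chosen here, and details such as mask initialization and the per-dimension pre-defined factors are assumptions.

# Minimal sketch (not the authors' code) of a learnable soft mask with an
# l1 sparsity penalty, as described for ViT-Slim's search stage.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose output channels are gated by learnable soft masks."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One continuous importance score per output channel (assumed init to 1).
        self.mask = nn.Parameter(torch.ones(out_features))

    def forward(self, x):
        # Soft gating keeps the search space continuous during training.
        return self.linear(x) * self.mask

def l1_sparsity_loss(model, weight=1e-4):
    """Global l1 penalty over all mask parameters (the sparsity constraint)."""
    return weight * sum(m.mask.abs().sum()
                        for m in model.modules() if isinstance(m, MaskedLinear))

# During search: total_loss = task_loss + l1_sparsity_loss(model)
# After search: keep only channels whose |mask| exceeds a budget-derived
# threshold, then retrain the slimmed model.

In practice the paper applies such masks across several dimensions at once (input tokens, MHSA heads, MLP channels); the single linear layer above only demonstrates the ℓ1-penalized, continuous-mask idea on one dimension.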

Publication Date
1-1-2022
Keywords
  • Economic and social effects; Machine learning; Optimization; Continuous optimization; End to end; Multi dimensions; Multiple dimensions; Original model; Searching spaces; Sparsity constraints; State-of-the-art performance; Sub-structures; Submodels; Budget control; Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Comments

Preprint: arXiv

Citation Information
A. Chavan, Z. Shen, Z. Liu, K.-T. Cheng, and E. Xing, "Vision transformer slimming: multi-dimension searching in continuous optimization space," 2022, arXiv:2201.00814.