Article
SDQ: Stochastic Differentiable Quantization with Mixed Precision
Proceedings of Machine Learning Research
  • Xijie Huang, Hong Kong University of Science and Technology
  • Zhiqiang Shen, Hong Kong University of Science and Technology
  • Shichao Li, Hong Kong University of Science and Technology
  • Zechun Liu, Meta
  • Xianghong Hu, Hong Kong University of Science and Technology
  • Jeffry Wicaksana, Hong Kong University of Science and Technology
  • Eric Xing, Mohamed Bin Zayed University of Artificial Intelligence
  • Kwang Ting Cheng, Hong Kong University of Science and Technology
Document Type
Conference Proceeding
Abstract

In order to deploy deep models in a computationally efficient manner, model quantization approaches have been widely used. In addition, as new hardware supporting mixed-bitwidth arithmetic operations emerges, recent research on mixed-precision quantization (MPQ) has begun to fully leverage the representational capacity of networks by searching for optimized bitwidths for different layers and modules. However, previous studies mainly search for the MPQ strategy with costly schemes such as reinforcement learning or neural architecture search, or simply rely on partial prior knowledge for bitwidth assignment, which may be biased toward local information and yield sub-optimal results. In this work, we present a novel Stochastic Differentiable Quantization (SDQ) method that automatically learns the MPQ strategy in a more flexible and globally optimized space with smoother gradient approximation. In particular, Differentiable Bitwidth Parameters (DBPs) serve as probability factors for stochastic quantization between adjacent bitwidth choices. After the optimal MPQ strategy is acquired, we further train the network with Entropy-aware Bin Regularization and knowledge distillation. We extensively evaluate our method on several networks, different hardware platforms (GPUs and FPGA), and datasets. SDQ outperforms all state-of-the-art mixed- or single-precision quantization methods at lower bitwidths and even surpasses the full-precision counterparts across various ResNet and MobileNet families, demonstrating its effectiveness and superiority.

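The following is a minimal sketch of the core idea described in the abstract: a per-layer Differentiable Bitwidth Parameter acts as the probability of stochastically selecting the higher of two adjacent bitwidths, with a straight-through-style surrogate so gradients reach the bitwidth parameter. The function names (`quantize`, `stochastic_mixed_quantize`) and the specific quantizer and gradient surrogate are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def quantize(x, bits):
    # Uniform symmetric quantization of a tensor to the given bitwidth (illustrative).
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

def stochastic_mixed_quantize(x, dbp_logit, low_bits=2, high_bits=4):
    # dbp_logit: a hypothetical Differentiable Bitwidth Parameter (one per layer).
    # p is the probability of choosing the higher of the two adjacent bitwidths.
    p = torch.sigmoid(dbp_logit)
    use_high = (torch.rand(()) < p).float()
    # Straight-through-style surrogate: the hard Bernoulli draw is used in the
    # forward pass, while gradients flow to the DBP through p.
    gate = use_high + (p - p.detach())
    return gate * quantize(x, high_bits) + (1.0 - gate) * quantize(x, low_bits)

# Usage: one DBP per layer, learned jointly with the weights.
w = torch.randn(64, 64, requires_grad=True)
dbp = torch.zeros((), requires_grad=True)
loss = stochastic_mixed_quantize(w, dbp).pow(2).mean()
loss.backward()
print(dbp.grad)  # gradient on the bitwidth parameter
```

In this sketch, the learned probability can later be thresholded to pick a fixed bitwidth per layer, after which the network would be retrained (in the paper, with Entropy-aware Bin Regularization and knowledge distillation).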
Publication Date
7-1-2022
Keywords
  • Program processors,
  • Reinforcement learning,
  • Stochastic systems
Comments

IR conditions: not described

Access available in PMLR

Citation Information
X. Huang et al., "SDQ: Stochastic Differentiable Quantization with Mixed Precision," in Proceedings of the 39th International Conference on Machine Learning, PMLR, vol. 162, July 2022. https://proceedings.mlr.press/v162/huang22h/huang22h.pdf