Natural Compression for Distributed Deep Learning
Proceedings of Machine Learning Research
  • Samuel Horváth, Mohamed Bin Zayed University of Artificial Intelligence
  • Chen-Yu Ho, King Abdullah University of Science and Technology
  • Ľudovít Horváth, Dell Technologies
  • Atal Narayan Sahu, King Abdullah University of Science and Technology
  • Marco Canini, King Abdullah University of Science and Technology
  • Peter Richtárik, King Abdullah University of Science and Technology
Document Type
Conference Proceeding
Abstract

Modern deep learning models are often trained in parallel over a collection of distributed machines to reduce training time. In such settings, communication of model updates among machines becomes a significant performance bottleneck, and various lossy update compression techniques have been proposed to alleviate this problem. In this work, we introduce a new, simple yet theoretically and practically effective compression technique: natural compression (Cnat). Our technique is applied individually to all entries of the to-be-compressed update vector. It works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a “natural” way by ignoring the mantissa. We show that compared to no compression, Cnat increases the second moment of the compressed vector by not more than the tiny factor 9/8, which means that the effect of Cnat on the convergence speed of popular training algorithms, such as distributed SGD, is negligible. However, the communications savings enabled by Cnat are substantial, leading to 3-4× improvement in overall theoretical running time. For applications requiring more aggressive compression, we generalize Cnat to natural dithering, which we prove is exponentially better than the common random dithering technique. Our compression operators can be used on their own or in combination with existing operators for a more aggressive combined effect while offering new state-of-the-art theoretical and practical performance.
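The element-wise randomized rounding rule described in the abstract admits a short implementation. The following is a minimal NumPy sketch, not the authors' code: the function name natural_compression is illustrative, and the only detail taken from the abstract is that each entry is rounded, unbiasedly, to the nearest (signed) power of two.

```python
# Minimal sketch of natural compression (Cnat), assuming the unbiased
# randomized-rounding rule implied by the abstract: for |x| in [2^a, 2^(a+1)],
# round up to 2^(a+1) with probability (|x| - 2^a) / 2^a, else down to 2^a.
import numpy as np

def natural_compression(x, rng=np.random.default_rng()):
    """Round each entry of x to a signed power of two, unbiased in expectation."""
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)
    out = np.zeros_like(mag)
    nonzero = mag > 0
    # Exponent a such that 2**a <= |x| < 2**(a+1)
    a = np.floor(np.log2(mag[nonzero]))
    low = 2.0 ** a
    high = 2.0 ** (a + 1)
    # Probability of rounding up, chosen so that
    # E[C(x)] = low*(1-p) + high*p = |x|  when  p = (|x| - low) / low
    p = (mag[nonzero] - low) / low
    round_up = rng.random(p.shape) < p
    out[nonzero] = np.where(round_up, high, low)
    return sign * out

# Example: compress a small update vector; every entry becomes a signed power of two.
g = np.array([0.75, -3.3, 1.0, 0.0])
print(natural_compression(g))  # e.g. [ 1.  -4.   1.   0.] (randomized)
```

Because the output magnitudes are powers of two, only the sign and exponent of each entry need to be communicated (the mantissa is dropped), which is the source of the bandwidth savings discussed in the abstract.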

Publication Date
8-1-2022
Keywords
  • Distributed Optimization,
  • Gradient Compression,
  • Non-convex Optimization,
  • Stochastic Optimization
Comments

Access available at PMLR site

Citation Information
S. Horváth, C.-Y. Ho, Ľ. Horváth, A. N. Sahu, M. Canini, and P. Richtárik, "Natural Compression for Distributed Deep Learning", in 3rd Annual Conf. on Mathematical and Scientific Machine Learning (MSML 2022), PMLR, vol. 190, pp. 129-141, Aug. 2022. Available online at: https://proceedings.mlr.press/v190/horvoth22a/horvoth22a.pdf