Asynchronous Distributed ADMM for Large-Scale Optimization—Part I: Algorithm and Convergence Analysis
IEEE Transactions on Signal Processing
  • Tsung-Hui Chang, The Chinese University of Hong Kong
  • Mingyi Hong, Iowa State University
  • Wei-Cheng Liao, University of Minnesota - Twin Cities
  • Xiangfeng Wang, East China Normal University
Document Type
Article
Publication Version
Submitted Manuscript
Publication Date
1-1-2016
DOI
10.1109/TSP.2016.2537271
Abstract

Aiming at solving large-scale optimization problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By reformulating the optimization problem as a consensus problem, ADMM can solve it in a fully parallel fashion over a computer network with a star topology. However, traditional synchronous computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest worker. This is particularly true in a heterogeneous network where the computing nodes experience different computation and communication delays. In this paper, we propose an asynchronous distributed ADMM (AD-ADMM), which can effectively improve the time efficiency of distributed optimization. Our main interest lies in analyzing the convergence conditions of the AD-ADMM under the popular partially asynchronous model, which is defined based on a maximum tolerable delay of the network. Specifically, by considering general and possibly non-convex cost functions, we show that the AD-ADMM is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points as long as the algorithm parameters are chosen appropriately according to the network delay. We further illustrate that the asynchrony of the ADMM has to be handled with care, as slightly modifying the implementation of the AD-ADMM can jeopardize the algorithm convergence, even under the standard convex setting.

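For readers unfamiliar with the consensus formulation mentioned in the abstract, the following is a minimal sketch of consensus ADMM over a star network (one master and N workers), written here as a synchronous loop on a distributed least-squares problem. All names (solve_local, rho, the problem data) are illustrative and not taken from the paper; the asynchronous variant studied in the article would instead let the master update the consensus variable as soon as a subset of workers report, subject to the maximum tolerable delay.

```python
import numpy as np

# Illustrative consensus-ADMM sketch (synchronous loop), not the paper's AD-ADMM:
# minimize sum_i 0.5 * ||A_i x - b_i||^2 over workers i = 1..N,
# with local copies x_i constrained to agree with a consensus variable z.

rng = np.random.default_rng(0)
n, d, N = 200, 10, 4                      # samples per worker, dimension, number of workers
A = [rng.standard_normal((n, d)) for _ in range(N)]
x_true = rng.standard_normal(d)
b = [Ai @ x_true + 0.01 * rng.standard_normal(n) for Ai in A]

rho = 1.0                                  # ADMM penalty parameter (hypothetical choice)
z = np.zeros(d)                            # consensus variable kept by the master
x = [np.zeros(d) for _ in range(N)]        # local primal variables at the workers
u = [np.zeros(d) for _ in range(N)]        # scaled dual variables at the workers

def solve_local(Ai, bi, zk, ui):
    # Closed-form x-update for f_i(x) = 0.5 * ||A_i x - b_i||^2:
    # argmin_x f_i(x) + (rho/2) * ||x - zk + ui||^2
    return np.linalg.solve(Ai.T @ Ai + rho * np.eye(d),
                           Ai.T @ bi + rho * (zk - ui))

for k in range(100):
    # Workers: local primal updates (done in parallel in a real deployment)
    for i in range(N):
        x[i] = solve_local(A[i], b[i], z, u[i])
    # Master: average the workers' messages to update the consensus variable
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
    # Workers: scaled dual updates
    for i in range(N):
        u[i] += x[i] - z

print("consensus error:", np.linalg.norm(z - x_true))
```

In an asynchronous deployment, the master would perform its z-update using the most recently received (possibly delayed) worker messages, which is exactly the setting whose convergence conditions the paper analyzes.
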
Comments

This is a manuscript of an article from IEEE Transactions on Signal Processing 64 (2016): 3118, DOI: 10.1109/TSP.2016.2537271. Posted with permission.

Rights
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Copyright Owner
IEEE
Language
en
File Format
application/pdf
Citation Information
Tsung-Hui Chang, Mingyi Hong, Wei-Cheng Liao, and Xiangfeng Wang, "Asynchronous Distributed ADMM for Large-Scale Optimization—Part I: Algorithm and Convergence Analysis," IEEE Transactions on Signal Processing, vol. 64, no. 12, pp. 3118-3130, 2016.
Available at: http://works.bepress.com/mingyi_hong/16/