Stochastic Bilevel Distributed Optimization over a Network
arXiv
  • Hongchang Gao, Temple University, United States
  • Bin Gu, Mohamed bin Zayed University of Artificial Intelligence
  • My T. Thai, University of Florida, United States
Document Type
Article
Abstract

Bilevel optimization has been applied to a wide variety of machine learning models, and numerous stochastic bilevel optimization algorithms have been developed in recent years. However, most of them restrict their focus to the single-machine setting, so they are incapable of handling distributed data. To address this issue, under a setting where all participants form a network and perform peer-to-peer communication within it, we develop two novel distributed stochastic bilevel optimization algorithms based on the gradient-tracking communication mechanism and two different gradient estimators. Moreover, we show that they achieve O(Equation presented) convergence rates, respectively, for obtaining an ε-accurate solution, where 1 − λ denotes the spectral gap of the communication network. To our knowledge, this is the first work to achieve these theoretical results. Finally, we applied our algorithms to practical machine learning models, and the experimental results confirm their efficacy. Copyright © 2022, The Authors. All rights reserved.
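The gradient-tracking mechanism the abstract refers to can be illustrated on a toy problem. The sketch below is a simplification of mine, not the paper's bilevel algorithm: each node in a ring network holds a simple local quadratic objective (the paper tracks hypergradients of a stochastic bilevel objective instead), and an auxiliary variable at each node tracks the network-average gradient while iterates are mixed with neighbors.

```python
import numpy as np

# Hedged sketch of decentralized gradient tracking on a toy single-level
# problem (the ring topology, step size, and quadratic objectives are
# illustrative assumptions, not the paper's setup).
# Node i holds f_i(x) = 0.5 * (x - b_i)^2; the global minimizer is mean(b).

rng = np.random.default_rng(0)
n = 5                        # number of nodes in the network
b = rng.normal(size=n)       # local targets

# Doubly stochastic mixing matrix for a ring (uniform neighbor weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1.0 / 3
    W[i, (i - 1) % n] = 1.0 / 3
    W[i, (i + 1) % n] = 1.0 / 3

def grad(x):
    # Stacked local gradients: node i only evaluates its own entry.
    return x - b

x = np.zeros(n)              # local iterates, one per node
y = grad(x)                  # trackers, initialized to local gradients
alpha = 0.1                  # step size

for _ in range(200):
    x_new = W @ x - alpha * y            # mix with neighbors, then descend
    y = W @ y + grad(x_new) - grad(x)    # update the average-gradient tracker
    x = x_new

# All nodes reach consensus near the global minimizer b.mean(); how fast
# the consensus error decays is governed by the spectral gap 1 - λ of W.
```

The tracker update preserves the invariant that the average of the `y` variables equals the average of the current local gradients, which is what lets every node descend along an estimate of the global gradient despite only seeing its own data.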

DOI
10.48550/arXiv.2206.15025
Publication Date
6-30-2022
Keywords
  • Data handling,
  • Learning algorithms,
  • Machine learning,
  • Optimization,
  • Peer to peer networks,
  • Scheduling algorithms
Comments

IR Deposit conditions: not described

Preprint available on arXiv

Citation Information
H. Gao, B. Gu, and M. T. Thai, "Stochastic Bilevel Distributed Optimization over a Network," arXiv:2206.15025, 2022.