Decentralized Personalized Federated Min-Max Problems
  • Ekaterina Borodich, MIPT, Russia
  • Aleksandr Beznosikov, MIPT, Russia
  • Abdurakhmon Sadiev, MIPT, Russia
  • Vadim Sushko, Bosch, Germany
  • Nikolay Savelyev, MIPT, Russia
  • Martin Takáč, Mohamed bin Zayed University of Artificial Intelligence, UAE
  • Alexander V. Gasnikov, MIPT, Russia
Document Type

Personalized Federated Learning (PFL) has recently seen tremendous progress, enabling the design of novel machine learning applications that preserve the privacy of the training data. Existing theoretical results in this field mainly focus on distributed optimization for minimization problems. This paper is the first to study PFL for saddle point problems, which cover a broader class of optimization problems and thus allow for a richer class of applications requiring more than just solving minimization problems. In this work, we consider a recently proposed PFL setting with a mixing objective function, an approach that combines the learning of a global model with locally distributed learners. Unlike most previous work, which considered only the centralized setting, we work in a more general, decentralized setup that allows us to design and analyze more practical and federated ways to connect devices to the network. We propose new algorithms for this problem and provide a theoretical analysis of smooth (strongly-)convex-(strongly-)concave saddle point problems in the stochastic and deterministic cases. Numerical experiments on bilinear problems and on neural networks with adversarial noise demonstrate the effectiveness of the proposed methods. Copyright © 2021, The Authors. All rights reserved.
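The mixing-objective setting described above can be illustrated with a toy sketch. This is not the authors' algorithm, only plain decentralized gradient descent-ascent on regularized bilinear local saddle problems: each device holds a personal min-player and max-player, a gossip matrix `W` (here a hypothetical ring topology) couples neighbors, and a mixing penalty `lam` pulls each local model toward its neighborhood average. All names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                                          # devices, model dimension
A = [rng.standard_normal((d, d)) for _ in range(n)]  # local bilinear couplings

mu, lam, lr, T = 0.5, 1.0, 0.05, 2000    # regularization, mixing, step, iters

# Ring-topology gossip matrix (symmetric, doubly stochastic) -- an assumption
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = rng.standard_normal((n, d))          # personal min-players, one per device
y = rng.standard_normal((n, d))          # personal max-players, one per device
for _ in range(T):
    # Local saddle objective: x_i^T A_i y_i + (mu/2)||x_i||^2 - (mu/2)||y_i||^2,
    # plus mixing penalties (lam/2) x^T (I - W) x and -(lam/2) y^T (I - W) y
    # that couple each device to its gossip neighbors.
    gx = np.stack([A[i] @ y[i] for i in range(n)]) + mu * x + lam * (x - W @ x)
    gy = np.stack([A[i].T @ x[i] for i in range(n)]) - mu * y - lam * (y - W @ y)
    x -= lr * gx                         # simultaneous descent on x ...
    y += lr * gy                         # ... and ascent on y

# The unique saddle point of this regularized toy problem is x = y = 0,
# so both iterates should shrink toward zero.
```

With the strongly-convex-strongly-concave regularization `mu`, plain descent-ascent contracts linearly toward the saddle point; without it, the purely bilinear game would cycle or diverge, which is why the paper's setting analyzes the (strongly-)convex-(strongly-)concave regime.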

Publication Date

Disciplines
  • Distributed, Parallel, and Cluster Computing
  • Machine Learning
  • Optimization and Control

Preprint: arXiv

Citation Information
E. Borodich et al., "Decentralized personalized federated min-max problems," 2022, arXiv:2106.07289.