Article
Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm
International Conference on Information and Knowledge Management, Proceedings
  • Qingsong Zhang, Xidian University
  • Bin Gu, Mohamed bin Zayed University of Artificial Intelligence
  • Zhiyuan Dang, Xidian University
  • Cheng Deng, Xidian University
Document Type
Conference Proceeding
Abstract

Vertical federated learning (VFL) is attracting increasing attention due to the emerging demand for multi-party collaborative modeling and concerns about privacy leakage. A complete set of criteria for evaluating VFL algorithms should cover model applicability, privacy security, communication cost, and computation efficiency, with privacy security being especially important to VFL. However, to the best of our knowledge, no existing VFL algorithm satisfies all of these criteria well. To address this challenging problem, in this paper we reveal that zeroth-order optimization (ZOO) is a desirable companion for VFL. Specifically, ZOO can 1) improve the model applicability of the VFL framework, 2) protect the VFL framework from privacy leakage under curious, colluding, and malicious threat models, and 3) support inexpensive communication and efficient computation. On this basis, we propose a novel and practical VFL framework with black-box models that is inseparably tied to these promising properties of ZOO. We believe it takes a stride toward designing a practical VFL framework that matches all the criteria. Under this framework, we propose two novel asynchronous zeroth-order algorithms for vertical federated learning (AsyREVEL) with different smoothing techniques. We theoretically derive the convergence rates of the AsyREVEL algorithms in the nonconvex setting. More importantly, we prove the privacy security of the proposed framework under existing VFL attacks at different levels. Extensive experiments on benchmark datasets demonstrate the favorable model applicability, satisfactory privacy security, inexpensive communication, efficient computation, scalability, and losslessness of our framework.
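To make the core idea concrete: zeroth-order optimization estimates gradients from function evaluations alone, which is what lets the framework treat party models as black boxes. Below is a minimal NumPy sketch of a standard two-point estimator with Gaussian smoothing, one of the smoothing techniques in the ZOO family; the function name `zo_gradient` and the quadratic test objective are illustrative assumptions, not the paper's actual AsyREVEL implementation.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_samples=2000, seed=None):
    """Two-point zeroth-order gradient estimate with Gaussian smoothing.

    Approximates grad f(x) using only function queries:
        g ~ mean over u ~ N(0, I) of (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u
    No access to f's internals (e.g. a black-box party model) is needed.
    """
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape[0])       # random probe direction
        diff = f(x + mu * u) - f(x - mu * u)      # two function queries
        g += diff / (2.0 * mu) * u                # directional-derivative estimate
    return g / n_samples

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
g_est = zo_gradient(lambda v: float(np.dot(v, v)), x, n_samples=20000, seed=0)
```

In expectation the estimator recovers the smoothed gradient, and for smooth objectives the error shrinks as the smoothing radius `mu` and the sampling noise do; the averaging over `n_samples` directions here is only to make the single-point estimate stable for the check.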

DOI
10.1145/3459637.3482249
Publication Date
10-26-2021
Keywords
  • asynchronous parallel
  • vertical federated learning
  • vertical federated neural network
  • zeroth-order optimization
Comments

IR deposit conditions: none described

Citation Information
Q. Zhang, B. Gu, Z. Dang, C. Deng, and H. Huang, “Desirable companion for vertical federated learning: new zeroth-order gradient based algorithm,” International Conference on Information and Knowledge Management, Proceedings, pp. 2598–2607, Oct. 2021, doi: 10.1145/3459637.3482249.