Optimizing Large-Scale Hyperparameters Via Automated Learning Algorithm
  • Bin Gu, Mohamed bin Zayed University of Artificial Intelligence & JD Finance America Corporation, USA
  • Guodong Liu, University of Pittsburgh
  • Yanfu Zhang, University of Pittsburgh
  • Xiang Geng, Nanjing University of Information Science & Technology
  • Heng Huang, JD Finance America Corporation & University of Pittsburgh
Abstract

Modern machine learning algorithms usually involve tuning multiple hyperparameters (from one to thousands), which play a pivotal role in model generalizability. Black-box optimization and gradient-based algorithms are the two dominant approaches to hyperparameter optimization, and they have distinct advantages. How to design a new hyperparameter optimization technique that inherits the benefits of both approaches is still an open problem. To address this challenging problem, in this paper we propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG). Specifically, we first exactly formulate hyperparameter optimization as an A-based constrained optimization problem, where A is a black-box optimization algorithm (such as a deep neural network). Then, we use averaged zeroth-order hyper-gradients to update hyperparameters. We provide a feasibility analysis of using HOZOG to achieve hyperparameter optimization. Finally, experimental results on three representative hyperparameter optimization tasks (with hyperparameter dimensions from 1 to 1250) demonstrate the benefits of HOZOG in terms of simplicity, scalability, flexibility, effectiveness, and efficiency compared with state-of-the-art hyperparameter optimization methods. © 2021, CC0.
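The abstract's key idea is to treat the whole training pipeline as a black box and update hyperparameters with averaged zeroth-order gradient estimates. The sketch below illustrates a generic averaged two-point zeroth-order gradient estimator applied to a toy "validation loss"; the function names, constants, and the toy objective are illustrative assumptions, not the authors' HOZOG implementation.

```python
import numpy as np

def zo_hypergradient(f, lam, mu=1e-2, num_samples=10, rng=None):
    """Average two-point zeroth-order finite differences over random
    unit directions to estimate the hyper-gradient of f at lam.
    In the HOZOG setting, f(lam) would run the black-box training
    algorithm A with hyperparameters lam and return validation loss."""
    rng = np.random.default_rng() if rng is None else rng
    d = lam.shape[0]
    f0 = f(lam)                      # one baseline evaluation, reused
    grad = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)       # random direction on the unit sphere
        grad += d * (f(lam + mu * u) - f0) / mu * u
    return grad / num_samples

# Toy example: pretend the validation loss is ||lam - 1||^2,
# so the optimal hyperparameter vector is all ones.
f = lambda lam: float(np.sum((lam - 1.0) ** 2))

rng = np.random.default_rng(0)
lam = np.zeros(3)
for _ in range(200):                 # plain gradient descent on the estimate
    lam -= 0.05 * zo_hypergradient(f, lam, rng=rng)
```

After 200 steps `lam` is close to the optimum `[1, 1, 1]`, despite never evaluating an analytic gradient; only function values of the black box are used, which is what makes the approach applicable when the inner training algorithm is not differentiable.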

Keywords
  • Bi-level optimization; Black-box optimization; Hyperparameter optimization; Zeroth-order optimization

Preprint: arXiv

  • Archived with thanks to arXiv
  • Preprint License: CC0
  • Uploaded 24 March 2022
Citation Information
B. Gu, G. Liu, Y. Zhang, X. Geng, and H. Huang, "Optimizing large-scale hyperparameters via automated learning algorithm," 2021, arXiv:2102.09026