Efficient Semi-Supervised Adversarial Training without Guessing Labels
Proceedings - IEEE International Conference on Data Mining, ICDM
  • Huimin Wu, Nanjing University of Information Science & Technology
  • William Vazelhes, Mohamed Bin Zayed University of Artificial Intelligence
  • Bin Gu, Mohamed Bin Zayed University of Artificial Intelligence
Document Type
Conference Proceeding
Abstract

Adversarial training has proven to be the most effective defensive strategy for protecting models from adversarial attacks. In practical applications of adversarial training, we face an enormous amount of unlabeled data in addition to labeled data. However, existing adversarial training methods are inherently designed for supervised learning problems. To adapt them to semi-supervised learning, labels must be estimated for the unlabeled data in advance, which inevitably degrades the performance of the learned model because of the bias in these label estimates. To mitigate this issue, we propose a new semi-supervised adversarial training framework based on AUC maximization. The resulting objective is still a minimax problem, but it treats each unlabeled sample as both a positive and a negative one, which allows us to avoid guessing labels for unlabeled data. The minimax problem can naturally be solved with a traditional adversarial training algorithm by extending singly stochastic gradients to triply stochastic gradients, matching the three data sources (positive, negative, and unlabeled). To further accelerate training, we transform the minimax adversarial training problem into an equivalent minimization problem from a kernel perspective. For this minimization problem, we discuss scalable and efficient algorithms not only for deep neural networks but also for kernel support vector machines. Extensive experimental results show that our algorithms not only achieve better generalization performance against various adversarial attacks, but also enjoy efficiency and scalability when viewed from the kernel perspective.
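To make the idea concrete, the following is a minimal toy sketch (not the authors' implementation) of the ingredients the abstract names: a pairwise AUC surrogate loss, triply stochastic sampling from positive, negative, and unlabeled sources, an unlabeled point used in both the positive and negative role, and an FGSM-style inner maximization specialized to a linear scorer. All function names, the squared-hinge surrogate, and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sq_hinge(t):
    # Squared-hinge surrogate for the 0-1 pairwise AUC loss, with derivative.
    # (Illustrative choice; the paper's actual surrogate may differ.)
    m = max(0.0, 1.0 - t)
    return m * m, -2.0 * m

def train_ss_auc_adv(X_pos, X_neg, X_unl, eps=0.1, lr=0.01, iters=3000, seed=0):
    """Toy triply stochastic semi-supervised adversarial AUC training.

    Each step draws one positive, one negative, and one unlabeled example
    (three independent data sources), crafts worst-case L-inf perturbations
    for a linear scorer f(x) = w.x, and descends the sum of three pairwise
    losses, using the unlabeled point once as positive and once as negative.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X_pos.shape[1])
    for _ in range(iters):
        xp = X_pos[rng.integers(len(X_pos))]
        xn = X_neg[rng.integers(len(X_neg))]
        xu = X_unl[rng.integers(len(X_unl))]
        # For a linear scorer the exact L-inf worst case is eps * sign(w):
        # shift each point against the role it plays in the ranking loss.
        s = np.sign(w)
        xp_a, xn_a = xp - eps * s, xn + eps * s
        xu_p, xu_n = xu - eps * s, xu + eps * s  # unlabeled plays both roles
        grad = np.zeros_like(w)
        for a, b in ((xp_a, xn_a), (xp_a, xu_n), (xu_p, xn_a)):
            _, dl = sq_hinge(w @ (a - b))  # pairwise margin loss l(f(a)-f(b))
            grad += dl * (a - b)
        w -= lr * grad
    return w

def auc(w, X_pos, X_neg):
    # Empirical AUC: fraction of (positive, negative) pairs ranked correctly.
    sp, sn = X_pos @ w, X_neg @ w
    return float((sp[:, None] > sn[None, :]).mean())
```

Note that no label is ever guessed for the unlabeled point: it simply enters one pairwise term as the positive element and another as the negative element, which is the key structural trick the abstract describes.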

DOI
10.1109/ICDM54844.2022.00064
Publication Date
2-1-2023
Keywords
  • adversarial training
  • semi-supervised learning
Citation Information
H. Wu, W. Vazelhes and B. Gu, "Efficient Semi-Supervised Adversarial Training without Guessing Labels," 2022 IEEE International Conference on Data Mining (ICDM), Orlando, FL, USA, 2022, pp. 538-547, doi: 10.1109/ICDM54844.2022.00064.