Article
On the Number of Linear Regions of Convolutional Neural Networks
37th International Conference on Machine Learning, ICML 2020
  • Huan Xiong, Mohamed bin Zayed University of Artificial Intelligence
  • Lei Huang, Inception Institute of Artificial Intelligence
  • Mengyang Yu, Inception Institute of Artificial Intelligence
  • Li Liu, Inception Institute of Artificial Intelligence
  • Fan Zhu, Inception Institute of Artificial Intelligence
  • Ling Shao, Mohamed bin Zayed University of Artificial Intelligence
Document Type
Conference Proceeding
Abstract

One fundamental problem in deep learning is understanding the outstanding performance of deep neural networks (NNs) in practice. One explanation for this superiority is that NNs can realize a large class of complicated functions, i.e., they have powerful expressivity. The expressivity of a ReLU NN can be quantified by the maximal number of linear regions into which it can partition its input space. In this paper, we provide several mathematical results needed for studying the linear regions of CNNs, and use them to derive the maximal and average numbers of linear regions for one-layer ReLU CNNs. Furthermore, we obtain upper and lower bounds on the number of linear regions of multi-layer ReLU CNNs. Our results suggest that deeper CNNs have more powerful expressivity than their shallow counterparts, and that CNNs have greater expressivity per parameter than fully-connected NNs.
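
To illustrate the notion of linear regions quantifying expressivity, the sketch below (not from the paper; layer sizes and sampling range are illustrative assumptions) empirically lower-bounds the number of linear regions of a one-layer fully-connected ReLU network by counting distinct ReLU activation patterns over a grid of inputs, and compares the count with the classical hyperplane-arrangement bound.

```python
# Minimal sketch: each distinct ReLU on/off pattern over the sampled box
# corresponds to a distinct linear region intersecting that box, so the
# count of patterns is a lower bound on the number of linear regions.
import numpy as np
from math import comb

rng = np.random.default_rng(0)

n_in, n_hidden = 2, 8                       # input dimension, number of ReLU units (assumed)
W = rng.standard_normal((n_hidden, n_in))   # weights of the single ReLU layer
b = rng.standard_normal(n_hidden)           # biases

# Dense grid over a bounded box in input space.
xs = np.linspace(-3.0, 3.0, 400)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, n_in)

patterns = (grid @ W.T + b > 0)             # boolean activation pattern per input point
num_regions = len({tuple(p) for p in patterns})

print(f"distinct activation patterns (lower bound on linear regions): {num_regions}")

# For comparison: the maximal number of regions that n_hidden hyperplanes can
# cut R^{n_in} into (Zaslavsky's bound), i.e. sum_{i=0}^{n_in} C(n_hidden, i).
print("hyperplane-arrangement bound:", sum(comb(n_hidden, i) for i in range(n_in + 1)))
```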

Publication Date
1-1-2020
Comments

IR deposit conditions: none described

Proceedings for ICML available on PMLR (OA)

Citation Information
H. Xiong, L. Huang, M. Yu, L. Liu, F. Zhu, and L. Shao, "On the Number of Linear Regions of Convolutional Neural Networks," Proceedings of the 37th International Conference on Machine Learning (ICML 2020), PMLR 119. https://proceedings.mlr.press/v119/