When analyzing high-dimensional datasets with a deep neural network (NN), increased sparsity is desirable but requires careful selection of "sparsity parameters." In this paper, a novel distributed learning methodology is proposed that optimizes the NN while addressing this challenge: the optimal sparsity in the NN is estimated via a two-player zero-sum game. In the proposed game, the sparsity parameter is the first player, aiming to increase sparsity in the NN, while the NN weights are the second player, aiming to improve performance in the presence of increased sparsity. To solve the game, additional variables are introduced into the optimization problem such that the output at every layer in the NN depends on these variables instead of on the previous layer. Using these additional variables, layer-wise cost functions are derived and then independently optimized to learn the additional variables, the NN weights, and the sparsity parameters. To implement the proposed learning procedure in a parallelized and distributed environment, a novel computational algorithm is also proposed. The efficiency of the proposed approach is demonstrated using a total of six datasets.
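The two-player zero-sum idea can be illustrated with a minimal sketch: one player (the sparsity parameter) performs gradient ascent to increase the sparsity penalty, while the other player (the model weights) performs gradient descent on the penalized loss. The toy linear-regression setup, the L1 penalty, step sizes, and the cap on the sparsity parameter are all illustrative assumptions, not the paper's actual formulation or its layer-wise auxiliary-variable scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with a sparse ground-truth weight vector
# (shapes and values are illustrative assumptions).
X = rng.normal(size=(100, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 0.5]
y = X @ true_w + 0.1 * rng.normal(size=100)

w = 0.1 * rng.normal(size=20)  # player 2: NN/model weights (minimizes loss)
lam = 0.01                     # player 1: sparsity parameter (maximizes penalty)
lr_w, lr_lam = 0.01, 0.001

for _ in range(500):
    # Weight player: descend the penalized loss  0.5*||Xw - y||^2/n + lam*||w||_1.
    resid = X @ w - y
    grad_w = X.T @ resid / len(y) + lam * np.sign(w)
    w -= lr_w * grad_w
    # Sparsity player: ascend the same objective in lam (its gradient is ||w||_1);
    # the cap at 0.1 is an ad hoc stabilizer for this toy example.
    lam = min(lam + lr_lam * np.abs(w).sum(), 0.1)

sparsity = np.mean(np.abs(w) < 1e-2)  # fraction of near-zero weights
```

In this sketch the adversarial ascent drives `lam` up until the penalty zeroes out the coordinates the data does not support, so most of the recovered weights end up near zero while the large true coefficients survive.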
- Big data,
- Clustering algorithms,
- Cost functions,
- Computational algorithm,
- Distributed environments,
- Distributed learning,
- High dimensional data,
- Learning procedures,
- Optimal sparsities,
- Optimization problems,
- Sparse neural networks,
- Deep neural networks
Available at: http://works.bepress.com/jagannathan-sarangapani/200/