In this paper, the generalization error of classifiers trained under traditional learning regimes is shown to increase in the presence of big data challenges such as noise and heterogeneity. To reduce this error while mitigating vanishing gradients, a deep neural network (NN)-based framework with a direct error-driven learning scheme is proposed. To reduce the impact of heterogeneity, an overall cost comprising the learning error and an approximate generalization error is defined, with two NNs used to estimate the two costs, respectively. To mitigate the issue of vanishing gradients, a direct error-driven learning regime is proposed in which the error is used directly for learning. The proposed approach is demonstrated to improve accuracy by 7% over traditional learning regimes, mitigate the vanishing-gradient problem, and improve generalization by 6%.
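The abstract states that the error is used directly for learning at each layer rather than being propagated back through the full gradient chain. A minimal sketch of one way such a direct error-driven update can work, in the spirit of feedback-alignment methods; the network sizes, the synthetic data, and the fixed random feedback matrix `B1` are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; all sizes and the update rule below are
# illustrative stand-ins, not taken from the paper.
n_in, n_hid, n_out = 4, 8, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
# Fixed random feedback matrix that carries the output error straight
# to the hidden layer, replacing the backpropagated Jacobian of W2.
B1 = rng.normal(0.0, 0.5, (n_hid, n_out))

# Synthetic regression data (hypothetical stand-in for a noisy dataset).
X = rng.normal(size=(64, n_in))
y = np.sin(X.sum(axis=1, keepdims=True))

lr, losses = 0.02, []
for _ in range(300):
    h = np.tanh(X @ W1.T)             # hidden activations
    out = h @ W2.T                    # linear output layer
    e = out - y                       # output error
    losses.append(float(np.mean(e ** 2)))
    # Direct error-driven updates: each layer is driven by the raw
    # output error, so no long gradient chain exists that could vanish.
    dh = (e @ B1.T) * (1.0 - h ** 2)  # error projected to the hidden layer
    W2 -= lr * (e.T @ h) / len(X)
    W1 -= lr * (dh.T @ X) / len(X)
```

Because every layer receives the output error through a short, fixed path, the depth-dependent shrinking of gradients that backpropagation suffers from does not arise in this update.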
- Big data
- Cost-benefit analysis
- Errors
- Error-driven learning
- Generalization error
- Heterogeneity
- Learning error
- Noise
- Overall costs
- Traditional learning
- Vanishing gradients
- Deep neural networks
Available at: http://works.bepress.com/jagannathan-sarangapani/199/