Robustness Analysis of Classification Using Recurrent Neural Networks with Perturbed Sequential Input
arXiv
Document Type
Article
Abstract
For a given stable recurrent neural network (RNN) trained to perform a classification task on sequential inputs, we quantify explicit robustness bounds as a function of the trainable weight matrices. The sequential inputs can be perturbed in various ways; e.g., streaming images can be deformed due to robot motion or an imperfect camera lens. Using the notion of the Voronoi diagram and Lipschitz properties of stable RNNs, we provide a thorough analysis and characterize the maximum allowable perturbations while guaranteeing full accuracy of the classification task. We illustrate and validate our theoretical results using a map dataset with clouds as well as the MNIST dataset. © 2022, CC BY.
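The abstract's core idea can be sketched numerically. The following is a minimal illustration, not the paper's exact bound: it assumes a linear RNN h_{t+1} = W h_t + U x_t whose stability means the spectral norm of W is below one, treats classification as nearest-center assignment (a Voronoi partition of the state space), and certifies a per-step input perturbation budget from a Lipschitz bound and the Voronoi margin. All weights, centers, and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_rnn(W, U, xs):
    # Linear RNN state recursion h_{t+1} = W h_t + U x_t, zero initial state.
    h = np.zeros(W.shape[0])
    for x in xs:
        h = W @ h + U @ x
    return h

def lipschitz_bound(W, U, T):
    # If each of the T inputs is perturbed by at most eps in Euclidean norm,
    # the final state moves by at most L * eps, with
    # L = ||U|| * (1 + rho + ... + rho^{T-1}),  rho = ||W|| < 1 (stability).
    rho = np.linalg.norm(W, 2)
    assert rho < 1.0, "stability assumption: spectral norm of W below 1"
    return np.linalg.norm(U, 2) * (1 - rho**T) / (1 - rho)

# Hypothetical stable weights (W rescaled so ||W|| = 0.8 < 1).
W = rng.standard_normal((4, 4))
W *= 0.8 / np.linalg.norm(W, 2)
U = rng.standard_normal((4, 3))
T = 8
xs = [rng.standard_normal(3) for _ in range(T)]
h = run_rnn(W, U, xs)

# Voronoi view of classification: the label is the nearest class center, so
# the prediction is preserved as long as the final state moves by less than
# half the gap between the nearest and second-nearest centers.
centers = np.stack([h + 0.1 * rng.standard_normal(4) for _ in range(3)])
dists = np.linalg.norm(centers - h, axis=1)
label = int(np.argmin(dists))
margin = np.partition(dists, 1)[1] - dists[label]  # gap to runner-up center

L = lipschitz_bound(W, U, T)
eps_max = margin / (2 * L)  # certified per-step input perturbation budget

# Sanity check: perturbing every input by eps_max / 2 cannot flip the label.
def bounded_noise(size, radius):
    d = rng.standard_normal(size)
    return d * (radius / np.linalg.norm(d))

xs_pert = [x + bounded_noise(3, eps_max / 2) for x in xs]
h_pert = run_rnn(W, U, xs_pert)
label_pert = int(np.argmin(np.linalg.norm(centers - h_pert, axis=1)))
```

By the Lipschitz bound, the perturbed state deviates by at most L * eps_max / 2 = margin / 4, which is strictly inside the half-margin, so `label_pert` must equal `label`.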
DOI
10.48550/arXiv.2203.05403
arXiv Identifier
arXiv:2203.05403v1
Publication Date
3-10-2022
Keywords
- Classification tasks
- Lipschitz property
- Robot motion
- Robustness analysis
- Robustness bound
- Streaming images
- Voronoi diagram properties
- Weight matrices
- Recurrent neural networks
- Machine Learning (cs.LG)
Citation Information
G. Liu, A. Amini, M. Takac, and N. Motee, "Robustness Analysis of Classification Using Recurrent Neural Networks with Perturbed Sequential Input", 2022, arXiv, doi: 10.48550/arXiv.2203.05403
Preprint: arXiv
Archived with thanks to arXiv
Preprint License: CC BY 4.0
Uploaded 18 May 2022