In recent years, significant progress has been achieved in 3D object detection on point clouds, thanks to advances in 3D data collection and deep learning techniques. Nevertheless, 3D scenes exhibit considerable variation and are prone to sensor inaccuracies as well as information loss during pre-processing. It is therefore crucial to design techniques that are robust to these variations, which in turn requires a detailed analysis and understanding of their effects. This work analyzes and benchmarks popular point-based 3D object detectors against several data corruptions. To the best of our knowledge, we are the first to investigate the robustness of point-based 3D object detectors. To this end, we design and evaluate corruptions involving data addition, reduction, and alteration. We further study the robustness of different modules against local and global variations. Our experimental results reveal several intriguing findings. For instance, we show that methods integrating Transformers at a patch or object level exhibit increased robustness compared to methods using Transformers at the point level. The code is available at https://github.com/sultanabughazal/robustness3d.
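To make the three corruption families mentioned in the abstract (data addition, reduction, and alteration) concrete, the sketch below shows minimal illustrative examples applied to an (N, 3) point cloud. The function names and parameters are assumptions for illustration only, not the authors' benchmark implementation; the actual corruption code is in the linked repository.

```python
# Illustrative sketch (assumed, not the authors' implementation) of the three
# corruption families: addition, reduction, and alteration of points.
import numpy as np

def add_noise_points(points: np.ndarray, ratio: float = 0.1) -> np.ndarray:
    """Data addition: append uniformly sampled points within the scene bounds."""
    n_new = int(len(points) * ratio)
    lo, hi = points.min(axis=0), points.max(axis=0)
    noise = np.random.uniform(lo, hi, size=(n_new, 3))
    return np.concatenate([points, noise], axis=0)

def drop_points(points: np.ndarray, ratio: float = 0.1) -> np.ndarray:
    """Data reduction: randomly discard a fraction of the points."""
    keep = np.random.rand(len(points)) >= ratio
    return points[keep]

def jitter_points(points: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Data alteration: perturb coordinates with Gaussian noise (sensor inaccuracy)."""
    return points + np.random.normal(0.0, sigma, size=points.shape)
```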
- 3D Detectors
- 3D Object Detection
- Point Clouds
- Robustness
- Deep Learning
- Learning Systems
- Object Recognition
https://doi.org/10.1145/3551626.3564956
preprint version: https://arxiv.org/abs/2207.10205