In recent years, color and depth camera systems have attracted intensive attention because of their wide applications in image-based rendering, 3D model reconstruction, and human tracking and pose estimation. These applications often require multiple color and depth cameras to be placed with wide separation so as to capture the scene objects from different perspectives. The difference in modality and the wide baseline make calibration a challenging problem. In this paper, we present an algorithm that simultaneously and automatically calibrates the extrinsic parameters of all color and depth cameras in the network. Rather than using the standard checkerboard, we use a sphere as the calibration object to identify correspondences across different views. We experimentally demonstrate that our calibration framework seamlessly integrates wide-baseline views and outperforms other techniques in the literature.
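Once the sphere is detected in each view, a pair of depth cameras yields matched 3D sphere-center positions, and the extrinsic rotation and translation between them can be recovered from those correspondences, for instance with the Kabsch (orthogonal Procrustes) algorithm. The sketch below illustrates only this generic rigid-alignment step, not the paper's full pipeline; the function name and setup are illustrative assumptions.

```python
import numpy as np

def estimate_extrinsics(centers_a, centers_b):
    """Estimate rotation R and translation t mapping 3D points from
    camera A's frame to camera B's frame, given N matched sphere-center
    positions (N x 3 arrays) via the Kabsch / Procrustes algorithm.
    Illustrative sketch, not the paper's method."""
    A = np.asarray(centers_a, dtype=float)
    B = np.asarray(centers_b, dtype=float)
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    # Cross-covariance of the centered point sets (3x3).
    H = (A - mu_a).T @ (B - mu_b)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_b - R @ mu_a
    return R, t
```

With at least three non-collinear sphere positions, the least-squares rotation and translation are determined uniquely; in practice more positions are collected so the estimate averages out depth noise.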
Available at: http://works.bepress.com/ju_shen/12/