Visual Servoing via Navigation Functions
Copyright 2002 IEEE. Reprinted from IEEE Transactions on Robotics and Automation, Volume 18, Issue 4, August 2002, pages 521-533.
This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Pennsylvania's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to firstname.lastname@example.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.
NOTE: At the time of publication, Daniel Koditschek was affiliated with the University of Michigan. Currently, he is a faculty member of the School of Engineering at the University of Pennsylvania.
This paper presents a framework for visual servoing that guarantees convergence to a visible goal from almost every initially visible configuration while maintaining full view of all the feature points along the way. The method applies to first- and second-order fully actuated plant models. The solution entails three components: a model for the "occlusion-free" configurations; a change of coordinates from image to model coordinates; and a navigation function for the model space. We present three example applications of the framework, along with experimental validation of its practical efficacy.
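The navigation-function idea underlying the abstract can be illustrated with a minimal sketch. This is not the paper's image-based construction; it is gradient descent of a Rimon-Koditschek-style navigation function on a planar sphere world with a single circular obstacle, with all parameters (goal, obstacle, gains) chosen purely for illustration:

```python
import numpy as np

GOAL = np.array([0.0, 0.0])   # goal configuration (illustrative)
OBS = np.array([1.5, 0.0])    # circular obstacle center (illustrative)
R = 0.5                       # obstacle radius (illustrative)
KAPPA = 3.0                   # navigation-function sharpening parameter

def navfn(q):
    """Rimon-Koditschek-style navigation function: 0 at the goal and
    1 on the obstacle boundary, so descent cannot cross the obstacle."""
    d2 = np.sum((q - GOAL) ** 2)
    beta = np.sum((q - OBS) ** 2) - R ** 2   # obstacle function, > 0 in free space
    return d2 / (d2 ** KAPPA + beta) ** (1.0 / KAPPA)

def grad(f, q, eps=1e-6):
    """Central-difference gradient (avoids hand-derived algebra in a sketch)."""
    g = np.zeros_like(q)
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (f(q + dq) - f(q - dq)) / (2 * eps)
    return g

# First-order fully actuated plant q_dot = -grad(phi), Euler-integrated.
q = np.array([0.8, 0.3])      # initial configuration in the free space
for _ in range(3000):
    q = q - 0.01 * grad(navfn, q)
```

Because the function equals 1 on the obstacle boundary and descent only decreases it, the trajectory stays in the free space while converging to the goal. The paper's contribution is constructing such a function in image coordinates, so that loss of feature visibility plays the role of the obstacle.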
Noah J. Cowan, Daniel E. Koditschek, and Joel D. Weingarten. "Visual Servoing via Navigation Functions," Departmental Papers (ESE) (2002).
Available at: http://works.bepress.com/daniel_koditschek/38