Extracting User Intent in Mixed Initiative Teleoperator Control
Proceedings of the American Institute of Aeronautics and Astronautics Intelligent Systems Technical Conference (2004)
  • Andrew H. Fagg
  • Michael Rosenstein
  • Robert Platt, Jr.
  • Roderic Grupen, University of Massachusetts - Amherst
Abstract
User fatigue is common with robot teleoperation interfaces. Mixed-initiative control approaches attempt to reduce this fatigue by allowing control responsibility to be shared between the user and an intelligent control system. A critical challenge is how the user can communicate her intentions to the control system in as intuitive a manner as possible. In the context of control of a humanoid robot, we propose an interface that uses the movement currently commanded by the user to assess the intended outcome. Specifically, given the observation of the motion of the teleoperated robot over a given period of time, we would like to automatically generate an abstract explanation of that movement. Such an explanation should facilitate the execution of the same movement under the same or similar conditions in the future.

How do we translate these observations of teleoperator behavior into a deep representation of the teleoperator's intentions? Neurophysiological evidence suggests that in primates, the mechanisms for recognizing actions performed by others overlap with the mechanisms for executing the same actions. For example, Rizzolatti et al. (1988) identified neurons within the ventral premotor cortex of the monkey that fired during the execution of specific grasping movements. Although this area is traditionally thought of as a motor execution area, Rizzolatti et al. (1996) showed that neurons in a subarea were active not only when the monkey executed certain grasping actions, but also when the monkey observed others making similar movements. These and other results suggest that generators of action could also facilitate the recognition of motor actions taken by another entity (in our case, the teleoperator).

The foci of this study are teleoperated pick-and-place tasks using the UMass Torso robot. This robot consists of an articulated stereo BiSight head; two 7-DOF Whole Arm Manipulators (WAMs); two 3-fingered hands (each finger is equipped with a six-axis force/torque sensor); and a quadraphonic audio input system. The teleoperator interface consists of a red/blue stereo display and a P5 Essential Reality glove that senses the position and orientation of the user's hand, as well as the flexion of the user's fingers.
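The paper itself describes the recognition machinery; as a rough illustration of the idea that generators of action can double as recognizers, the following minimal Python sketch scores a partially observed teleoperator movement against the trajectories that a small library of candidate actions would generate, and picks the best match. All names, the straight-line reach model, and the squared-error scoring rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical movement generator: a straight-line reach from start to goal.
# A deliberately simple stand-in for whatever generator produces the robot's
# nominal end-effector trajectory for a candidate action.
def reach_trajectory(start, goal, n_steps=50):
    alphas = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - alphas) * start + alphas * goal

# Score how well a generated trajectory explains the observed motion so far:
# negative mean squared distance over the observed prefix (higher is better).
def score(observed, predicted):
    n = min(len(observed), len(predicted))
    return -np.mean(np.sum((observed[:n] - predicted[:n]) ** 2, axis=1))

# Recognition via generation: run every candidate action's generator and
# return the index of the goal whose movement best matches the observation.
def infer_intent(observed, start, candidate_goals):
    scores = [score(observed, reach_trajectory(start, g))
              for g in candidate_goals]
    return int(np.argmax(scores)), scores

# Example: two candidate target objects; the teleoperator has completed only
# the first 40% of a reach toward the second one.
start = np.array([0.0, 0.0, 0.0])
goals = [np.array([0.5, 0.2, 0.1]), np.array([0.1, 0.6, 0.1])]
observed = reach_trajectory(start, goals[1])[:20]
best, _ = infer_intent(observed, start, goals)
print("inferred target object:", best)  # -> 1
```

Because the same generators could also drive autonomous execution, an assistive controller built along these lines could hand off control once the inferred intent is sufficiently unambiguous, which is the mixed-initiative handoff the abstract motivates.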
Publication Date
2004
Publisher Statement
Harvested from CiteSeer.
Citation Information
Andrew H. Fagg, Michael Rosenstein, Robert Platt and Roderic Grupen. "Extracting User Intent in Mixed Initiative Teleoperator Control" Proceedings of the American Institute of Aeronautics and Astronautics Intelligent Systems Technical Conference (2004)
Available at: http://works.bepress.com/roderic_grupen/2/