Assessing Multiple Participant View Positioning in Virtual Reality-Based Training
Industrial and Manufacturing Systems Engineering Conference Proceedings and Posters
  • Jonathan W. Kelly, Iowa State University
  • Eliot Winer, Iowa State University
  • Stephen B. Gilbert, Iowa State University
  • Michael Curtis, Iowa State University
  • Eduardo Rubio, Iowa State University
  • Ken Kopecky, Iowa State University
  • Joseph Scott Holub, Iowa State University
  • Julio de la Cruz, United States Army Research Laboratory
Document Type
Conference Proceeding
Publication Version
Published Version
Publication Date
1-1-2013
Conference Title
Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC)
Conference Date
December 2-5, 2013
Geolocation
(28.5383355, -81.3792365)
Abstract

As cost, time, and other resource constraints are placed on U.S. Joint forces training, simulations will play an even greater role than they do today. To effectively help a Warfighter gain critical skills, and to assess proficiency in those skills, computer-based training must advance beyond traditional desktop simulations and monoscopic projection technology. Virtual Reality (VR) based training has been shown in fields such as medicine and engineering to increase a trainee's level of immersion and to improve training performance on several metrics, including accuracy and efficiency, while simultaneously decreasing cost.

Warfighter training presents a unique set of challenges that demand additional study before they can be correctly addressed in a VR environment. Chief among them is the ability to have multiple Warfighters train together. While VR systems typically include monocular and binocular depth cues, the imagery is drawn correctly for only a single viewer. Imprecision in Warfighter training can result in incorrect acquisition of an enemy avatar's position and/or target location. These errors can carry over into future training as well as actual missions.

In this paper, a formal method to produce a combined viewpoint, suitable for multiple participants, in VR simulation-based training is presented. The concepts of monocular and stereoscopic depth cues and their effect on Warfighter training are discussed, along with a comprehensive review of current research into simulation-based training environments. Lastly, new results are presented from a formal user study comparing the proposed combined viewpoint with that of a typical VR system in a Warfighter training task involving shooting virtual targets. Initial results of this study show significant advantages to using the combined viewpoint: the maximum shooting error committed by an individual participant was reduced by up to 47%.
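The combined viewpoint is only described at a high level in this abstract. As an illustrative sketch only (the centroid strategy and the function name `combined_viewpoint` are assumptions for illustration, not the authors' published method), one simple way to form a single viewpoint for several tracked participants is to render from the centroid of their head positions, so that perspective error is shared evenly among viewers rather than being correct for only one:

```python
def combined_viewpoint(head_positions):
    """Average a list of (x, y, z) tracked head positions into one eye point.

    Illustrative sketch, not the paper's published algorithm: rendering from
    the centroid distributes perspective error across all viewers instead of
    drawing the scene correctly for a single tracked head.
    """
    if not head_positions:
        raise ValueError("at least one tracked head position is required")
    n = len(head_positions)
    return tuple(sum(p[i] for p in head_positions) / n for i in range(3))


# Two participants standing a meter apart in front of the display surface:
viewers = [(-0.5, 1.7, 2.0), (0.5, 1.7, 2.0)]
print(combined_viewpoint(viewers))  # midpoint between the two heads
```

The resulting eye point would then feed the (off-axis) projection in place of a single tracked head position.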

Comments

This proceeding is published as Kelly, J., Winer, E., Gilbert, S., *Curtis, M., *Rubio, E., *Kopecky, K., *Holub, J., de la Cruz, J. (2013) "Assessing Multiple Participant View Positioning in Virtual Reality-Based Training," Paper No. 13209. The Interservice/Industry Training, Simulation & Education Conference (I/ITSEC), Orlando, FL, December 2-5, 2013.

Rights
Works produced by employees of the U.S. Government as part of their official duties are not copyrighted within the U.S. The content of this document is not copyrighted.
Language
en
File Format
application/pdf
Citation Information
Jonathan W. Kelly, Eliot Winer, Stephen B. Gilbert, Michael Curtis, et al. "Assessing Multiple Participant View Positioning in Virtual Reality-Based Training" Orlando, FL (2013)
Available at: http://works.bepress.com/stephen_b_gilbert/54/