Comparing Visual Assembly Aids for Augmented Reality Work Instructions
Proceedings of the 2017 Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC)
  • Anastacia MacAllister, Iowa State University
  • Melynda Hoover, Iowa State University
  • Stephen Gilbert, Iowa State University
  • James Oliver, Iowa State University
  • Rafael Radkowski, Iowa State University
  • Timothy Garrett, Iowa State University
  • Joseph Holub, Iowa State University
  • Eliot Winer, Iowa State University
  • Scott Terry, The Boeing Company
  • Paul Davies, The Boeing Company
Document Type
Presentation
Conference
Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) 2017
Publication Version
Published Version
Publication Date
1-1-2017
Conference Title
Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) 2017
Conference Date
November 27-December 1, 2017
Geolocation
(28.5383355, -81.3792365)
Abstract

Increased product complexity and the focus on zero defects, especially when manufacturing complex engineered products, mean new tools are required to help workers conduct challenging assembly tasks. Augmented reality (AR) has shown considerable promise for delivering work instructions over traditional methods. Many proof-of-concept systems have demonstrated the feasibility of AR, but little work has been devoted to understanding how users perceive different AR work instruction interface elements. This paper presents a between-subjects study examining how interface elements for object depth placement in a scene affect a user’s ability to quickly and accurately assemble a mock aircraft wing in a standard work cell. For object depth placement, modes with varying degrees of 3D modeled occlusion were tested: a control group with no occlusion, virtual occlusion, and occlusion by contours. Results for total assembly time and total errors indicated no statistically significant difference between interfaces, leading the authors to conclude that a floor has been reached for optimizing the current assembly when using AR for work instruction delivery. However, examining a handful of highly error-prone steps showed the impact different types of occlusion have on helping users correctly complete an assembly task. The results of the study provide insight into how to construct an interface for delivering AR work instructions using occlusion. Based on these results, the authors recommend customizing the occlusion method to the features of the required assembly task. The authors also identified a floor effect for the steps of the assembly process that involved picking the necessary parts from tables and bins. For these “picking” steps, the authors recommend interface elements such as vibrant outlines and large textual cues (e.g., numbers on parts bins) to guide users.
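The implementation behind these occlusion modes is not described on this page, but the distinction between them can be made concrete with a small sketch. Below is a minimal, hypothetical per-pixel compositing example in Python/NumPy; the `composite` function, its inputs, and the silhouette test are illustrative assumptions, not the authors' system. The control condition always draws the virtual part on top, virtual occlusion hides it wherever real geometry is closer, and occlusion by contours draws only the part's outline.

```python
# Minimal sketch (not the authors' implementation) of the three occlusion
# modes compared in the study, expressed as per-pixel compositing with NumPy.
# Assumed inputs: a camera frame, a rendered overlay of the virtual part
# (RGB plus per-pixel depth), and a depth map of the real scene.

import numpy as np

def composite(frame, overlay_rgb, overlay_depth, scene_depth, mode):
    """Blend AR work-instruction graphics into the camera frame.

    mode = "none"     -> control condition: overlay always drawn on top
    mode = "virtual"  -> virtual occlusion: real geometry hides the overlay
                         wherever the scene is closer than the virtual part
    mode = "contours" -> occlusion by contours: only the overlay's outline
                         is drawn, so the real part stays visible
    """
    drawn = np.isfinite(overlay_depth)          # pixels covered by the overlay
    if mode == "none":
        visible = drawn
    elif mode == "virtual":
        visible = drawn & (overlay_depth < scene_depth)
    elif mode == "contours":
        # Keep only overlay pixels on the silhouette boundary: a simple
        # 4-neighbour test on the coverage mask (np.roll wraps at image
        # borders, which a real renderer would handle properly).
        interior = (np.roll(drawn, 1, 0) & np.roll(drawn, -1, 0) &
                    np.roll(drawn, 1, 1) & np.roll(drawn, -1, 1))
        visible = drawn & ~interior
    else:
        raise ValueError(f"unknown mode: {mode}")

    out = frame.copy()
    out[visible] = overlay_rgb[visible]
    return out

# Toy usage: a 4x4 frame, a 2x2 virtual part at depth 1.0, and a real scene
# that is closer than the part in the top two rows of the image.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
overlay_rgb = np.full((4, 4, 3), 255, dtype=np.uint8)
overlay_depth = np.full((4, 4), np.inf)
overlay_depth[1:3, 1:3] = 1.0
scene_depth = np.full((4, 4), 2.0)
scene_depth[:2, :] = 0.5
print(composite(frame, overlay_rgb, overlay_depth, scene_depth, "virtual")[..., 0])
```

In practice, the scene depth used for virtual occlusion would come from a depth sensor or a registered “phantom” CAD model of the real parts; the sketch simply treats it as a given array.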

Comments

This proceeding is published as MacAllister, Anastacia, Melynda Hoover, Stephen Gilbert, James Oliver, Rafael Radkowski, Timothy Garrett, Joseph Holub, Eliot Winer, Scott Terry, and Paul Davies. "Comparing Visual Assembly Aids for Augmented Reality Work Instructions." In Proceedings of the 2017 Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC). Volume 2017, Paper no. 17208. Arlington, VA: National Training and Simulation Association. Posted with permission.

Copyright Owner
I/ITSEC
Language
en
File Format
application/pdf
Citation Information
Anastacia MacAllister, Melynda Hoover, Stephen Gilbert, James Oliver, et al. "Comparing Visual Assembly Aids for Augmented Reality Work Instructions." Proceedings of the 2017 Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), Orlando, FL, Vol. 2017 (2017), p. 17208.
Available at: http://works.bepress.com/stephen_b_gilbert/75/