Article
Visualization of 3D Images from Multiple Texel Images Created from Fused Ladar/Digital Imagery
Proc. SPIE (2015)
  • Cody C Killpack, Utah State University
  • Scott Budge, Utah State University
Abstract
The ability to create 3D models, using registered texel images (fused ladar and digital imagery), is an important
topic in remote sensing. These models are automatically generated by matching multiple texel images into a
single common reference frame. However, rendering a sequence of independently registered texel images often
presents challenges. Although accurately registered, the model textures are often incorrectly overlapped and
interwoven when using standard rendering techniques. Consequently, corrections must be made after all the
primitives have been rendered, by determining the best texture for each viewable fragment in the model.
Determining the best texture is difficult, as each texel image remains independent after registration. The
depth data is not merged to form a single 3D mesh, thus eliminating the possibility of generating a fused texture
atlas. It is therefore necessary to determine which textures are overlapping and how to best combine them
dynamically during the render process. The best texture for a particular pixel can be defined using 3D geometric
criteria, in conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture
fragments can now be hidden, exposed, or blended according to their computed measure of reliability.
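To make the ranking idea concrete, the sketch below shows one way a per-pixel, view-dependent reliability score and blend could look. It is a hypothetical illustration under assumed criteria (incidence angle and sensor range), not the paper's algorithm or its render-to-texture shader implementation; the names reliability and blend_fragments and the score formula are assumptions.

    import numpy as np

    def reliability(view_dir, surface_normal, range_m):
        # Assumed score: favor texel images that view the surface nearly
        # head-on (large cosine of the incidence angle) from a short range.
        # view_dir and surface_normal are assumed to be unit vectors.
        cos_angle = max(0.0, float(np.dot(-np.asarray(view_dir), np.asarray(surface_normal))))
        return cos_angle / (1.0 + range_m)

    def blend_fragments(candidates):
        # candidates: list of (rgb, view_dir, surface_normal, range_m) tuples,
        # one per overlapping texel image covering the same screen pixel.
        scores = np.array([reliability(v, n, r) for _, v, n, r in candidates])
        if scores.sum() == 0.0:
            return np.asarray(candidates[0][0], dtype=float)  # no reliable view; fall back
        weights = scores / scores.sum()                       # convex combination weights
        colors = np.array([c for c, *_ in candidates], dtype=float)
        return weights @ colors                               # blended RGB for this pixel

Thresholding the scores instead of normalizing them would hide or exclusively expose a fragment rather than blend it, mirroring the hide/expose/blend choices described in the abstract.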
Keywords
  • lidar,
  • ladar,
  • texel image,
  • 3D texel model,
  • dynamic texture,
  • view-dependent rendering,
  • render to texture
Publication Date
May 19, 2015
Citation Information
Cody C Killpack and Scott Budge. "Visualization of 3D Images from Multiple Texel Images Created from Fused Ladar/Digital Imagery" Proc. SPIE Vol. 9465 (2015)
Available at: http://works.bepress.com/scott_budge/52/