Article
Dynamic visualization of three-dimensional images from multiple texel images created from fused ladar/digital imagery
Optical Engineering (2016)
  • Cody C. Killpack
  • Scott E. Budge
Abstract
The ability to create three-dimensional (3-D) image models, using registered texel images (fused ladar
and digital imagery), is an important topic in remote sensing. These models are automatically generated
by matching multiple texel images into a single common reference frame. However, rendering a sequence of
independently registered texel images often presents challenges. Although accurately registered, the model
textures are often incorrectly overlapped and interwoven when using standard rendering techniques.
Consequently, corrections must be made after all the primitives have been rendered by determining the best
texture for any viewable fragment in the model. This paper describes a technique to visualize a 3-D model
image created from a set of registered texel images. The visualization is determined for each viewpoint. It is,
therefore, necessary to determine which textures are overlapping and how best to combine them dynamically
during the rendering process. The best texture for a particular pixel can be defined using 3-D geometric criteria, in
conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture fragments can
now be hidden, exposed, or blended according to their computed measure of reliability. The advantages of this
technique are illustrated using artificial and real data examples.
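The abstract's central idea is that, for each viewable pixel, the overlapping texture fragments are ranked by a view-dependent reliability measure derived from 3-D geometric criteria, and are then hidden, exposed, or blended according to that ranking. The sketch below is a minimal illustration of that idea, not the authors' algorithm; the specific reliability criterion (alignment of the texel sensor's viewing direction with the surface normal, discounted by sensor range), the blending threshold, and all names are assumptions made only for illustration.

    import numpy as np

    def reliability(view_dir, surface_normal, sensor_range):
        # Hypothetical per-fragment reliability: textures captured more nearly
        # head-on (viewing direction aligned with the surface normal) and from
        # a shorter range are assumed more trustworthy.
        cos_angle = max(0.0, float(np.dot(-view_dir, surface_normal)))
        return cos_angle / (1.0 + sensor_range)

    def blend_overlapping_textures(fragments, threshold=0.1):
        # Rank the overlapping texture fragments for one pixel by reliability,
        # blend those within `threshold` of the best one, and hide the rest.
        # Each fragment is (rgb, view_dir, surface_normal, sensor_range), where
        # view_dir points from the texel sensor toward the surface point.
        scored = [(reliability(v, n, r), rgb) for rgb, v, n, r in fragments]
        scored.sort(key=lambda s: s[0], reverse=True)
        best = scored[0][0]
        kept = [(w, rgb) for w, rgb in scored if best - w <= threshold]
        total = sum(w for w, _ in kept)
        if total == 0.0:
            return np.asarray(scored[0][1], dtype=float)
        return sum(w * np.asarray(rgb, dtype=float) for w, rgb in kept) / total

    # Example: two texel images overlap at a pixel; the nearer, more head-on
    # view receives the larger blending weight.
    frags = [
        ((0.9, 0.1, 0.1), np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), 5.0),
        ((0.1, 0.1, 0.9), np.array([0.7, 0.0, -0.7]), np.array([0.0, 0.0, 1.0]), 9.0),
    ]
    print(blend_overlapping_textures(frags))

In the paper itself this selection runs per fragment on the GPU during rendering; the CPU-side sketch above only conveys the ranking-and-blending logic.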
Keywords
  • lidar; ladar; texel image; three-dimensional texel model; dynamic texture; view-dependent rendering; render to texture
Publication Date
October 26, 2016
DOI
10.1117/1.OE.56.3.031209
Citation Information
Cody C. Killpack, Scott E. Budge, "Dynamic visualization of three-dimensional images from multiple texel images created from fused ladar/digital imagery," Opt. Eng. 56(3), 031209 (2017), doi: 10.1117/1.OE.56.3.031209.