In low-light conditions, a conventional camera imaging pipeline produces sub-optimal images that are usually dark and noisy due to a low photon count and low signal-to-noise ratio (SNR). We present a data-driven approach that learns the desired properties of well-exposed images and reflects them in images captured in extremely low ambient light, thereby significantly improving the visual quality of these low-light images. Recent works on this problem consider only pixel-level loss metrics, which ignore perceptual quality and thus generate outputs susceptible to visual artifacts. To address this problem, we propose a new loss function that exploits the characteristics of both pixel-wise and perceptual metrics, enabling our deep neural network to learn the camera processing pipeline and transform short-exposure, low-light RAW sensor data into well-exposed sRGB images. The results show that our method outperforms the state of the art according to psychophysical tests as well as standard pixel-wise metrics and recent learning-based perceptual image quality measures. In essence, the proposed model can potentially replace the conventional digital camera pipeline for the specific case of extreme low-light imaging.
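To make the combined objective concrete, the sketch below shows one way a pixel-wise term and a perceptual (deep-feature) term could be joined into a single training loss. This is an illustrative assumption, not the paper's exact formulation: the class name CombinedLoss, the choice of an L1 pixel term, the VGG-16 relu3_3 feature layer, and the weight lambda_perc are all placeholders.

```python
# Illustrative sketch of a combined pixel-wise + perceptual loss.
# CombinedLoss, the L1 terms, the VGG-16 feature depth, and lambda_perc
# are assumptions, not the paper's specification.
import torch
import torch.nn as nn
from torchvision import models

class CombinedLoss(nn.Module):
    def __init__(self, lambda_perc=0.1):
        super().__init__()
        # Pixel-wise term: L1 distance in sRGB space.
        self.pixel_loss = nn.L1Loss()
        # Perceptual term: distance between frozen VGG-16 feature maps
        # (layers up to relu3_3).
        vgg = models.vgg16(weights="IMAGENET1K_V1").features[:16]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.lambda_perc = lambda_perc

    def forward(self, pred, target):
        # pred, target: (N, 3, H, W) sRGB tensors in [0, 1].
        l_pix = self.pixel_loss(pred, target)
        l_perc = nn.functional.l1_loss(self.vgg(pred), self.vgg(target))
        return l_pix + self.lambda_perc * l_perc
```

In training, the network output (the reconstructed sRGB image) and the long-exposure reference image would be passed to this loss in place of a plain pixel-wise objective.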
- Digital camera pipeline
- Learning-based ISP modeling
- Low-light image enhancement
- RAW to sRGB mapping