Color remapping turns night into day
Night vision cameras are a vital source of information for a wide range of critical military and law enforcement applications such as surveillance, reconnaissance, intelligence gathering, and security. Currently, monochrome display of night imagery is still the standard. However, monochrome images often look unnatural, provide lower feature contrast, and tend to induce visual illusions and fatigue. Intuitive color representations of night-vision imagery may alleviate these problems.
The increasing availability of multi-spectral, night-vision systems has led to a growing interest in the color display of night imagery. Color may improve feature contrast and reduce visual clutter, thus enabling better scene recognition, object detection, and depth perception. Most current techniques to colorize multi-band, night-time imagery are computationally expensive or do not yield natural and stable color renderings. To resolve these issues, we developed a simple color remapping technique that provides colored night-time imagery with an intuitive and stable appearance.1 The method is computationally efficient and can easily be deployed in real time.
Our color remapping technique assumes a fixed relation between false color tuples and natural color triplets for bands near the visual spectrum. This allows its implementation as a simple color-table swapping operation. For bands that are not correlated with the visual spectrum, color remapping can be used to enhance the detectability of targets through contrast enhancement and color highlighting.
We achieved color remapping by associating the multi-band sensor signal to an indexed false color image and swapping its color table with that of a regular daylight color image of a similar scene (see Figure 1). A wide range of environments can be represented with only a limited number of color tables. These tables need to be constructed only once before the system is deployed.
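The core of this scheme can be sketched in a few lines of NumPy. The sketch below is illustrative only: the function names, the simple uniform binning of two normalized sensor bands into palette indices, and the random stand-in color table are all assumptions for the example, not the actual implementation described in the article.

```python
import numpy as np

def build_index_image(band_a, band_b, levels=16):
    """Quantize a two-band sensor image into a single index image.

    Each pixel's (band_a, band_b) value pair is binned into one of
    levels * levels palette entries. Hypothetical helper: the real
    system may use a different quantization.
    """
    a = np.clip((band_a * levels).astype(int), 0, levels - 1)
    b = np.clip((band_b * levels).astype(int), 0, levels - 1)
    return a * levels + b  # index into a color lookup table

def remap(index_image, color_table):
    """Swap in a (daylight) color table: a pure table lookup,
    which is why the method runs easily in real time."""
    return color_table[index_image]  # shape (H, W, 3)

# Toy example with two normalized sensor bands.
rng = np.random.default_rng(0)
band_a = rng.random((4, 4))
band_b = rng.random((4, 4))
idx = build_index_image(band_a, band_b, levels=16)

# Stand-in for a daylight color table derived beforehand.
daylight_table = rng.random((16 * 16, 3))
rgb = remap(idx, daylight_table)
print(rgb.shape)  # (4, 4, 3)
```

Because the per-pixel work reduces to a single array lookup, switching environments amounts to loading a different pre-built table, consistent with the observation that tables need to be constructed only once before deployment.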
The derivation of the color transformation requires a color photograph representing the intended operating theater or a similar, but not necessarily the same, environment. Then there are two options: either transfer the color statistics of this photograph to the false color multi-spectral image (when both images represent different but similar scenes), or establish a sample-based mapping between corresponding pixel values.1
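The first option, transferring color statistics, can be illustrated with a minimal first-order sketch: shift and scale each channel of the false-color image so that its mean and standard deviation match those of the daylight reference. This is an assumption-laden simplification (working directly in RGB, matching only first-order statistics); the names and the random toy images are invented for the example.

```python
import numpy as np

def transfer_statistics(false_color, daylight):
    """Match each channel's mean and standard deviation to the
    daylight reference image. Sketch of first-order statistics
    transfer; a practical method would typically work in a
    decorrelated color space rather than raw RGB."""
    out = np.empty(false_color.shape, dtype=float)
    for c in range(3):
        src = false_color[..., c].astype(float)
        ref = daylight[..., c].astype(float)
        s_mu, s_sd = src.mean(), src.std()
        r_mu, r_sd = ref.mean(), ref.std()
        scale = r_sd / s_sd if s_sd > 0 else 1.0
        out[..., c] = (src - s_mu) * scale + r_mu
    return out

# Toy images standing in for a false-color frame and a daylight photo.
rng = np.random.default_rng(1)
false_color = rng.random((8, 8, 3))
daylight = rng.random((8, 8, 3)) * 0.5 + 0.25
matched = transfer_statistics(false_color, daylight)
```

Note that the two source images only need to depict similar scenes, not the same scene, since only their global color statistics are compared.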
To achieve an efficient real-time implementation, we used indexed, color-image representations and performed all required operations on their corresponding color lookup tables. Although the sample-based mapping approach yields more specific colors than the statistical method, both techniques produce intuitively correct and stable color mappings (see Figure 2). The specificity of the sample-based color remapping also allows us to selectively enhance and emphasize certain details in a scene.
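The sample-based variant can likewise be sketched as a lookup-table operation: given pixel correspondences between an indexed false-color image and a registered daylight image, each table entry becomes the mean daylight color of the pixels carrying that index. The function name and the tiny hand-built example below are hypothetical.

```python
import numpy as np

def table_from_samples(index_image, daylight_rgb, n_entries):
    """Build a color table from corresponding pixels: entry i is the
    mean daylight color over all pixels whose false-color index is i."""
    table = np.zeros((n_entries, 3))
    counts = np.zeros(n_entries)
    flat_idx = index_image.ravel()
    flat_rgb = daylight_rgb.reshape(-1, 3).astype(float)
    np.add.at(table, flat_idx, flat_rgb)   # accumulate colors per index
    np.add.at(counts, flat_idx, 1)         # count samples per index
    filled = counts > 0
    table[filled] /= counts[filled, None]  # average; unseen entries stay 0
    return table

# Tiny 2x2 example: indices and the daylight colors observed at them.
idx = np.array([[0, 1],
                [1, 2]])
rgb = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]])
table = table_from_samples(idx, rgb, n_entries=4)
recolored = table[idx]  # applying the learned table reproduces the colors
```

Since the mapping is specific to each index, individual table entries can also be overwritten by hand, which is one way to selectively emphasize chosen scene details, such as hot targets.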
We implemented our color-mapping procedure in three portable real-time, multi-band night-vision systems2–4 that were deployed in several night-time field trials.5,6 In these trials, color made it significantly easier to distinguish details and evaluate a scene.7 Mappings emphasizing hot targets facilitate the detection of persons and vehicles in a scene. The inherent color constancy provided by the method gives dynamic imagery a stable appearance when the sensor suite pans over or moves through a scene. By integrating a multi-band, night-vision system with a surveillance and observation system that generates real-time, synthetic 3D environment views from a geometric 3D scene model, we were able to demonstrate that synthetic imagery can also serve to derive appropriate color mappings (see Figure 3).8
The color remapping procedure was initially developed to colorize multi-band, night-vision imagery. However, since the method can be applied to enhance specific image details, there are also law enforcement, surveillance, medical, and industrial applications. Going forward, we will use newly developed color image quality metrics to derive optimal mappings for each of these applications.
Alexander Toet received his PhD from Utrecht University, The Netherlands. He is currently a senior research scientist at TNO and has worked on image fusion for 25 years. His background is in visual perception and image processing.