The monochromatic black and green that defined night vision for decades is fast receding into the past.
The US military is already offering night-vision goggles that depict people and other objects in bright white, and researchers around the world are racing to develop even more advanced ways to see in the dark. A new proof-of-principle study offers intriguing clues about how the next generation of this technology might work.
In an article published Wednesday in the academic journal PLOS ONE, the researchers demonstrate that a deep learning algorithm can reconstruct the colors of a scene using only infrared images that the human eye cannot see.
These findings suggest an exciting new future for night vision technology.
Human eyes face many limitations
Humans may seem to see every color, but our eyes can only detect a narrow slice of the electromagnetic spectrum. The light waves we can see range from about 400 nanometers (which register in the human brain as violet) to about 700 nanometers (perceived as red). If someone were in a windowless room with a bright bulb emitting light at a wavelength of 800 nanometers, they would experience total darkness.
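Those boundaries can be stated as a trivial check. The 400 and 700 nanometer cutoffs come from the article; the exact limits vary between individuals and viewing conditions, so treat these as rough approximations:

```python
# Rough boundaries of human-visible light, in nanometers (approximate
# values from the article; real limits vary by person and conditions).
VISIBLE_MIN_NM = 400  # perceived as violet
VISIBLE_MAX_NM = 700  # perceived as red

def is_visible_to_humans(wavelength_nm: float) -> bool:
    """Return True if light at this wavelength falls in the (approximate)
    range the human eye can detect."""
    return VISIBLE_MIN_NM <= wavelength_nm <= VISIBLE_MAX_NM

print(is_visible_to_humans(550))  # True  (green light)
print(is_visible_to_humans(800))  # False (the near-infrared bulb above)
```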
A mosquito or viper, on the other hand, could see very well. (Just like a cyborg mouse.) A human could also see a version of the scene if they looked through an infrared camera. That’s because capturing infrared light is not a technical challenge. The challenge is rendering these images in visible light so that a human viewer can make sense of what they see. For example, thermal imaging uses a technique called pseudo-color to make an infrared image visible. Although the resulting image contains multiple colors, it is essentially a recolored black-and-white image: the colors do not match what the scene would look like if viewed in visible light.
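The pseudo-color idea is simple: each pixel carries a single infrared intensity, and a colormap assigns an arbitrary color to each intensity level. Here is a minimal sketch using a hand-rolled "hot"-style ramp (black to red to yellow to white); the colormap itself is an assumption for illustration, not the one any particular thermal camera uses:

```python
import numpy as np

def pseudo_color(intensity: np.ndarray) -> np.ndarray:
    """Map a normalized infrared intensity image (values in [0, 1]) to RGB
    using a simple 'hot'-style colormap. The output colors label intensity
    only; they say nothing about the scene's true visible colors."""
    t = np.clip(intensity, 0.0, 1.0)
    r = np.clip(3.0 * t, 0.0, 1.0)        # red ramps up first
    g = np.clip(3.0 * t - 1.0, 0.0, 1.0)  # then green (red + green = yellow)
    b = np.clip(3.0 * t - 2.0, 0.0, 1.0)  # finally blue (all three = white)
    return np.stack([r, g, b], axis=-1)

# A tiny 2x2 "thermal" image: cold, warm, hot, hottest.
ir = np.array([[0.0, 0.4],
               [0.7, 1.0]])
rgb = pseudo_color(ir)
print(rgb.shape)  # (2, 2, 3): one RGB triple per pixel
```

Because the mapping depends only on intensity, two objects with the same temperature but very different visible colors come out identical, which is exactly the limitation the new study tries to overcome.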
New technology could make infrared light perfectly visible
The researchers behind the new study are doing something much more sophisticated with infrared images. They started by printing images of color palettes and faces. Then they created a dataset by taking pictures of these images using a monochromatic camera that can be tuned to take pictures at very specific wavelengths. They took pictures of the faces under monochromatic light sources of different wavelengths in the visible and near-infrared spectra.
With these digital files in hand, they drew on decades of computer science research to develop and test a deep learning algorithm that could start with infrared images of a scene and infer what that scene would look like in the visible spectrum. And it worked! Under these admittedly ideal conditions, the researchers found that one of their algorithms – using U-Net-based deep architectures – was able to transform a set of three infrared images into a color photo that closely resembled a normal photo of the same scene.
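At its core, the learning problem is a mapping from three infrared measurements per pixel to a visible RGB color. The sketch below shows that setup with a deliberately crude stand-in: a per-pixel linear map fit by least squares on synthetic data. The paper's U-Net learns a far richer, spatially aware mapping from real photographs; everything here (the data, the `true_map` relation) is an invented toy to illustrate the inputs and outputs only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: N pixels, each with reflectance measured at
# three near-infrared wavelengths (inputs) and the corresponding visible
# RGB values (targets). The study used real photos of printed faces; this
# synthetic, exactly-linear data just illustrates the learning problem.
n = 1000
ir = rng.uniform(0.0, 1.0, size=(n, 3))   # 3 infrared bands per pixel
true_map = np.array([[0.6, 0.2, 0.1],     # assumed IR -> RGB relation
                     [0.1, 0.7, 0.2],     # (pure invention for the demo)
                     [0.2, 0.1, 0.5]])
rgb = ir @ true_map                        # visible-color targets

# "Training": fit a linear map from IR to RGB by least squares.
learned_map, *_ = np.linalg.lstsq(ir, rgb, rcond=None)

# Inference: predict the visible color of a new pixel from its three
# infrared readings, as the U-Net does for whole images at once.
new_ir = np.array([[0.3, 0.5, 0.2]])
predicted_rgb = new_ir @ learned_map
print(np.allclose(predicted_rgb, new_ir @ true_map))  # True
```

A real colorization network replaces the 3x3 matrix with millions of learned convolutional weights, letting it exploit spatial context (edges, textures, the shape of a face) rather than treating each pixel independently.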
We probably won’t see this technology in night vision goggles anytime soon, but this proof of concept shows that color night vision is on the horizon.