Humans are, let’s face it, kind of the dunces of the animal kingdom. We can’t sniff stuff as well as dogs or bees; we can’t hear as well as bats; even our primary sense, sight, pales in comparison to that of animals that can see ultraviolet or infrared. In fact, the only advantage we have really isn’t a sense at all, but our big old brains.
Sometimes, though, that’s all you need as a species. We can’t see in the dark, but we can invent infrared cameras to do it for us – and now, researchers from the University of California, Irvine, have developed a way to make those images even closer to the real thing.
“Some night vision systems use infrared light that is not perceptible to humans and the images rendered are transposed to a digital display presenting a monochromatic image in the visible spectrum,” explains a paper describing the technology, published this week in the journal PLOS ONE.
“We sought to develop an imaging algorithm powered by optimized deep learning architectures whereby infrared spectral illumination of a scene could be used to predict a visible spectrum rendering of the scene as if it were perceived by a human with visible spectrum light,” the paper continues. “This would make it possible to digitally render a visible spectrum scene to humans when they are otherwise in complete ‘darkness’ and only illuminated with infrared light.”
So: a camera that can reconstruct color images from infrared light? Well, actually, no – not quite. The important bit isn’t the camera, but the algorithm the team used to reconstruct the images. They built a neural network – a kind of deep learning algorithm loosely modeled on how the brain processes information – and trained it to spot correlations between how images look under infrared light and under the visible spectrum.
“We … optimized a convolutional neural network with a U-Net-like architecture [an architecture designed to allow fast and precise image processing] to predict visible spectrum images from only near-infrared images,” says the paper. “This study serves as a first step towards predicting human visible spectrum scenes from imperceptible near-infrared illumination.”
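To get a feel for what “learning correlations between infrared and visible images” means, here is a drastically simplified toy sketch – not the paper’s U-Net, which is a deep convolutional network, and not its real data. It just learns a per-pixel linear map from three hypothetical infrared channels to RGB using least squares on synthetic paired examples, which captures the basic idea of training on image pairs and then colorizing new infrared input:

```python
import numpy as np

# Toy illustration only: the real system is a U-Net-style convolutional
# neural network trained on paired infrared/visible photographs. Here we
# learn a simple per-pixel linear mapping from 3 made-up infrared
# channels to RGB, to show the "learn from pairs, then predict" idea.

rng = np.random.default_rng(0)

# Pretend "physics": some fixed mixing of 3 IR channels into visible RGB.
true_map = rng.normal(size=(3, 3))

# Paired training data: N pixels seen under IR (inputs) and visible (targets).
N = 10_000
ir_pixels = rng.uniform(size=(N, 3))   # 3 infrared intensities per pixel
rgb_pixels = ir_pixels @ true_map      # corresponding visible colors

# "Training": solve for the map that best predicts RGB from IR.
learned_map, *_ = np.linalg.lstsq(ir_pixels, rgb_pixels, rcond=None)

# "Inference": colorize a never-before-seen infrared pixel.
new_ir = rng.uniform(size=(1, 3))
predicted_rgb = new_ir @ learned_map
```

Because the toy data really is linear, least squares recovers the mapping almost exactly; the hard part the researchers tackle is that real scenes are not linear at all, which is why they needed a deep network – and lots of training data – instead.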
But while the reconstructed images are no doubt impressive, the researchers acknowledge that this is only a “proof-of-principle study using printed images with a limited optical pigment context” – or to put it another way, it probably can’t be used for much just yet. So far, its success has been limited to just faces.
“Human faces are, of course, a very constrained group of objects, if you like. It doesn’t immediately translate to coloring a general scene,” Professor Adrian Hilton, Director of the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey, told New Scientist.
“As it stands at the moment, if you apply the method trained on faces to another scene, it probably wouldn’t work, it probably wouldn’t do anything sensible.”
As an example, he explained, an AI trained on bowls of fruit rather than faces would be fooled by a random blue banana, because its training would have included only yellow bananas. As is so often the case, AI is only as intelligent and objective as we make it.
Still, while Andrew Browne, lead author of the study, cautions that these results are very early, he says that with further study the technique could become extremely accurate.
“I think this technology could be used for precise color evaluation if the amount and variety of data used to train the neural network is sufficiently large to increase accuracy,” he told New Scientist.
Which just leaves one question – how would the new AI fare against The Dress?