
Sensor fusion

We can enhance data captured in one domain (like the visible spectrum) by adding information collected in another (like the near-infrared).


The Sum Is Greater Than the Parts: Enhancing Images with Sensor Fusion

Deep learning is one of several machine learning techniques that have advanced computer vision in leaps and bounds. It has made it possible to combine different types of input data to significantly enhance an image, a technique known as sensor fusion.

We can use it for things like adding detail from the near-infrared to RGB images, or blending outputs from close-range radar with those from thermal sensors.
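As a minimal sketch of the idea (not Digica's actual model), a small convolutional network in TensorFlow/Keras can fuse the two modalities by stacking an aligned RGB frame and a near-infrared frame along the channel axis; the layer sizes, names, and residual design below are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(height=256, width=256):
    # Two aligned inputs: a 3-channel RGB frame and a 1-channel NIR frame.
    rgb = layers.Input(shape=(height, width, 3), name="rgb")
    nir = layers.Input(shape=(height, width, 1), name="nir")

    # Early fusion: stack the modalities along the channel axis.
    x = layers.Concatenate(axis=-1)([rgb, nir])
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)

    # Predict a residual correction and add it to the dark, noisy RGB input.
    residual = layers.Conv2D(3, 3, padding="same")(x)
    enhanced = layers.Add()([rgb, residual])
    return Model(inputs=[rgb, nir], outputs=enhanced)

model = build_fusion_model()
# Would be trained on (dark RGB + NIR, well-lit RGB) image pairs.
model.compile(optimizer="adam", loss="mae")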


Case studies

 

Sensor fusion for RGB + near-infrared

 

What Digica wanted to achieve

Images taken in low-light environments suffer from:

  • high noise level
  • low contrast
  • poor visibility

While it’s possible to improve image quality, the standard workarounds have serious drawbacks. Increasing the exposure time introduces motion blur, which you can only avoid by stabilising the camera, and even then it only works for stationary subjects. A flash can illuminate the subject, but it isn’t appropriate in every situation.

We wanted to find a way around these limitations, so we conducted a research project to explore methods of brightening up low-light mobile images without extending the exposure time or resorting to traditional flash.

How Digica met this challenge

Modern smartphones have advanced cameras whose sensors are also sensitive to near-infrared light. It’s a waveband that humans can’t see, but it carries a lot of valuable information that can help us see in dark conditions. Pictures become much brighter when we merge colour information from the visible spectrum with structural detail from the near-infrared. We did use a flash in the end, but not a normal one: it lights up the subject in the near-infrared only, so while your eyes don’t notice it, the final image certainly does.
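A simplified, non-learned version of this colour-plus-structure merge can be written directly in TensorFlow. The sketch below assumes the two frames are already aligned and stored as float32 in [0, 1], and the blend weight is purely illustrative.

import tensorflow as tf

def fuse_luma(rgb, nir, nir_weight=0.7):
    """rgb: [H, W, 3] visible-light image; nir: [H, W, 1] near-infrared image."""
    yuv = tf.image.rgb_to_yuv(rgb)
    luma, chroma = yuv[..., :1], yuv[..., 1:]
    # Keep colour from the visible spectrum; take brightness and
    # structure mostly from the (much brighter) NIR frame.
    fused_luma = (1.0 - nir_weight) * luma + nir_weight * nir
    fused = tf.concat([fused_luma, chroma], axis=-1)
    return tf.clip_by_value(tf.image.yuv_to_rgb(fused), 0.0, 1.0)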

What we achieved

Compared to traditional methods, PSNR (peak signal-to-noise ratio) rose from 8.12 dB to 25.51 dB, and SSIM (structural similarity index) rose from 0.044 to 0.833.
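For reference, both metrics are available as built-in TensorFlow ops. The snippet below shows how they can be computed, assuming float32 images in [0, 1]; the file names are placeholders.

import tensorflow as tf

def load_image(path):
    # Decode a PNG and scale pixel values to float32 in [0, 1].
    img = tf.image.decode_png(tf.io.read_file(path), channels=3)
    return tf.image.convert_image_dtype(img, tf.float32)

reference = load_image("well_lit_reference.png")  # ground-truth bright image
enhanced = load_image("fused_output.png")         # output of the fusion pipeline

psnr = tf.image.psnr(reference, enhanced, max_val=1.0)
ssim = tf.image.ssim(reference, enhanced, max_val=1.0)
print(f"PSNR: {float(psnr):.2f} dB, SSIM: {float(ssim):.3f}")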

Technologies used

Sony IMX219 sensor, 850 nm near-infrared LED flash, TensorFlow

How can we help you?

To find out more about Digica, or to discuss how we may be of service to you, please get in touch.