Have you ever wondered why an image looks different under visible, infrared, or ultraviolet light, and why we limit ourselves to just these three parts of the spectrum? Why not use a continuous spectrum of light to capture many images and analyze the differences between them? The technology that addresses these questions is called Hyperspectral Imaging, an emerging technology that is being researched and applied by many companies, including FPT.

What is Hyperspectral Imaging?

Hyperspectral Imaging (HSI) is one of the fastest-developing areas in image processing. It is a spectral sensing technique that captures hundreds or even thousands of images across a continuous range of the electromagnetic spectrum, from the visible region to the infrared. The image pixels form spectral vectors that represent the spectral characteristics of the materials in the imaged object. HSI has been applied in industrial manufacturing, including food and pharmaceutical quality inspection, and is now drawing interest in remote sensing fields such as mining, precision agriculture, and geographic information system (GIS) monitoring. A dedicated hyperspectral camera can be used for these purposes, but such devices are expensive. In this article, we present a simple version of a hyperspectral camera capable of capturing ultraviolet and near-infrared light at a high spatial resolution, offering a peek at things that are invisible to our naked eyes.

(Read more: CCTV System with the Power of Video Analytics in DX Era)

What is FPT's approach to this technology?

The idea behind FPT's hyperspectral camera is based on research conducted by Microsoft and the University of Washington. Our hyperspectral camera consists of two main parts: a camera and a lighting system. The camera's sensor should be able to capture light from as broad a spectrum as possible. To this end, we chose the Blackfly S Mono 1.3 MP USB3 Vision, a camera with a sensitivity range from 300 nm to 1000 nm and a peak quantum efficiency at 560 nm.
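As a rough illustration of how such a camera is driven in software, the sketch below uses FLIR's PySpin (Spinnaker SDK) Python bindings, which the Blackfly S family supports. It is a minimal example only; exposure settings, device selection, and error handling are omitted, and the exact setup on our system may differ.

```python
# Minimal single-frame capture sketch with PySpin (FLIR Spinnaker SDK).
import PySpin

system = PySpin.System.GetInstance()
cameras = system.GetCameras()
cam = cameras.GetByIndex(0)   # assumes the Blackfly S is the only camera attached
cam.Init()

cam.BeginAcquisition()
image = cam.GetNextImage()
frame = image.GetNDArray()    # monochrome frame as a NumPy array
image.Release()
cam.EndAcquisition()

cam.DeInit()
del cam
cameras.Clear()
system.ReleaseInstance()
```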

The lighting system illuminates the captured object using 17 narrow-band LEDs. These are 3 W LEDs with wavelengths ranging from 365 nm to 940 nm, which mostly cover the camera's sensitivity range. The LEDs are arranged in a ring to save space and produce the spectral bands efficiently. Although this arrangement results in different lighting directions and paths, we compensate for it by mounting a hemisphere in front of the LED ring to diffuse and even out the LED light. The light first hits the hemisphere, reflects many times, and finally exits through a lampshade placed at the center of the LED ring. The hemisphere and the lampshade are modeled and then 3D printed; to avoid light absorption, we polish their surfaces and spray them with chrome.
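In software, the LED bank can be represented as a list of band definitions plus a controller that keeps exactly one LED lit at a time. The sketch below is hypothetical: the BandLED/LEDRing names and the driver interface are assumptions, and since the article only specifies the 365 nm and 940 nm endpoints, no intermediate wavelengths are listed.

```python
from dataclasses import dataclass

@dataclass
class BandLED:
    index: int          # position in the 17-LED ring
    wavelength_nm: int  # nominal centre wavelength (within the 365-940 nm range)

class LEDRing:
    """Hypothetical controller that keeps at most one LED lit at a time."""
    def __init__(self, leds, driver):
        self.leds = leds      # list of BandLED, 17 entries in our setup
        self.driver = driver  # placeholder for the real GPIO/serial LED driver

    def light_only(self, index):
        # Switch the requested LED on and every other LED off.
        for led in self.leds:
            self.driver.set(led.index, on=(led.index == index))

    def all_off(self):
        for led in self.leds:
            self.driver.set(led.index, on=False)
```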

In addition to the hyperspectral camera, we are developing a general-purpose system that can serve multiple applications. The system provides APIs to control the LEDs and the camera, as well as to process the captured images. We turn on one LED at a time, keeping the others off, to capture 17 images at 17 wavelengths. Then, using the Principal Component Analysis technique, we combine those 17 images into a single image that emphasizes what differs most from ordinary visual perception, similar in spirit to the way an HDR image is built from photos taken at different exposures. The final hyperspectral image makes it substantially easier to find hidden information about a particular object, revealing not only surface details that are invisible to our naked eyes but also deeper details beneath the surface.
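The capture-and-fuse loop could look roughly like the following, assuming the LEDRing sketch above and a capture_frame() helper that returns one monochrome frame (for example via PySpin). The PCA step uses scikit-learn; projecting each pixel's 17-band spectrum onto the first principal component is one simple way to realize the fusion described above, not necessarily the exact pipeline used in our system.

```python
import numpy as np
from sklearn.decomposition import PCA

def acquire_cube(ring, capture_frame, n_bands=17):
    """Capture one frame per LED and stack them into an H x W x n_bands cube."""
    frames = []
    for i in range(n_bands):
        ring.light_only(i)              # one LED on, all others off
        frames.append(capture_frame())  # one monochrome H x W frame
    ring.all_off()
    return np.stack(frames, axis=-1)

def fuse_first_component(cube):
    """Project each pixel's spectrum onto the first principal component."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float32)
    fused = PCA(n_components=1).fit_transform(pixels).reshape(h, w)
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / (rng + 1e-8)  # normalize to [0, 1] for display
```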

How is this technology applied worldwide?

This emerging technology has been researched and applied in different domains. One case study addresses the complexities of aviation fuel. The demand stems from two aviation fuel incidents: on 17 January 2008, a British airplane crashed while landing due to ice formation in its fuel, and on 13 April 2010, a Hong Kong airplane had to make an emergency landing at Hong Kong International Airport after losing engine control. Investigation of both cases revealed that the fuel lines had been contaminated by water present in the fuel. More precisely, along its long and complex transportation route, the fuel had been contaminated with water that could be neither visualized nor analyzed. HSI can help along the transportation chain by improving the ability to detect such contamination, information that is nearly impossible to obtain from RGB image analysis.

Conclusion

The process of hyperspectral imaging involves several stages: it requires efficient data acquisition to extract the per-pixel spectral distribution, perform spectral matching and anomaly detection, and sufficient computational resources for analysis. Our solution is still in its research and implementation phase. More information will be shared as soon as the images are analyzed and results are obtained.

 

For more reading on technology, click here to explore!

Author Pham Thanh Dai Linh