Robotics combined with AI is set to transform how manufacturing, quality control, and many other tasks are performed. A foundational piece of the puzzle is environment sensing, often done with multiple cameras, typically one per type of light: RGB, infrared, thermal, etc.

Multiple cameras are typically arranged in an array, with the different camera types placed next to each other, similar to what you see on some phones but bigger. Because each camera captures the scene from a slightly different position, the images suffer from parallax, and combining the information from all the cameras into one view via software can be difficult.
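To see why this is hard, consider a simple pinhole-camera estimate (the Python sketch below uses made-up numbers, not the specs of any real camera): the pixel shift between two side-by-side cameras depends on how far away the object is, so no single global offset can re-align the two images.

```python
# Back-of-envelope parallax estimate for two side-by-side cameras.
# Pinhole model: disparity (pixels) = focal_length_px * baseline_m / depth_m.
# All numbers are illustrative assumptions.

focal_length_px = 1400.0  # focal length expressed in pixels
baseline_m = 0.05         # 5 cm spacing between the two cameras

for depth_m in (0.5, 1.0, 2.0, 5.0):
    disparity_px = focal_length_px * baseline_m / depth_m
    print(f"object at {depth_m:4.1f} m -> shifted by {disparity_px:6.1f} px")

# The shift shrinks with distance (140 px at 0.5 m, 14 px at 5 m here),
# so re-aligning the images correctly requires per-pixel depth knowledge.
```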

At CEATEC, JVCKenwood is taking an entirely different approach with its Advanced Sensor Fusion Camera System (PDF PR). It has developed a way to connect a single lens to multiple sensors, optically funneling the different light spectra entering that one lens to separate sensors.

As a result, the capture is parallax-free, and little or no processing is required to re-align the data captured by the different sensors. The accuracy of the combined depth + color data increases, which is ideal for autonomous robotics: machines may have to perform extremely precise actions, and every percentage point of added accuracy can translate into extra productivity over the robot's lifetime.
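If that description holds, the fusion step itself becomes almost trivial. As an illustration (a generic sketch, not JVCKenwood's actual pipeline; the shapes and sensor names are hypothetical), pixel-aligned streams can be combined by simply stacking channels, with no warping or depth-dependent re-alignment:

```python
import numpy as np

# Hypothetical frames from sensors sharing one lens: because all sensors
# see the scene from the same viewpoint, pixel (y, x) refers to the same
# scene point in every array.
h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.uint8)        # color frame
thermal = np.zeros((h, w, 1), dtype=np.float32)  # thermal frame
depth = np.zeros((h, w, 1), dtype=np.float32)    # depth frame

# Fusion reduces to channel-wise concatenation.
fused = np.concatenate(
    [rgb.astype(np.float32) / 255.0, thermal, depth], axis=-1
)
print(fused.shape)  # (480, 640, 5)
```

With a conventional camera array, the same step would require calibrated reprojection of each stream into a common view, and its accuracy would be limited by the depth estimate used for the warp.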

We expect to see this camera used in industrial applications, where the needs and benefits are already clearly understood. It will be interesting to see whether this kind of technology makes its way into the consumer market someday.

Filed in Photo-Video.
