One of the main hardware components that determines how well a camera captures images is its sensor. While companies' marketing departments tend to focus on the megapixel count of their sensors, there is more to a sensor than that, and some recent innovations are arguably more interesting than raw resolution.
In fact, Sony and Prophesee (formerly known as Chronocam, a company that Intel invested in) recently announced that they are working together to develop a stacked event-based vision sensor. What does this mean? For those unfamiliar, a stacked sensor is one built from several layers.
This means that there is a layer dedicated to collecting image data, a layer for processing that data, and in some cases, a layer for DRAM that will enable faster video frame rates (amongst other things).
With this new sensor, Sony and Prophesee are claiming to have achieved the “industry’s smallest 4.86μm pixel size and the industry’s highest 124dB (or more) HDR performance”. The sensor will be capable of detecting luminance changes in each pixel asynchronously, and will output data such as the coordinates and time at which a change is detected.
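To make the idea concrete, here is a minimal sketch of how an event-based pixel model behaves: instead of outputting full frames, each pixel fires an event (coordinates, timestamp, polarity) only when its luminance changes beyond a threshold. This is an illustration of the general principle only; the function name, threshold value, and event format are assumptions for the example, not Sony or Prophesee's actual output format or API.

```python
def generate_events(prev_frame, curr_frame, timestamp, threshold=0.15):
    """Compare two grayscale frames (2D lists of floats in [0, 1]) and
    return an event for each pixel whose luminance changed by more than
    `threshold`. Event format (illustrative): (x, y, timestamp, polarity)."""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (prev_val, curr_val) in enumerate(zip(prev_row, curr_row)):
            delta = curr_val - prev_val
            if abs(delta) >= threshold:
                polarity = 1 if delta > 0 else -1  # got brighter or darker
                events.append((x, y, timestamp, polarity))
    return events

# A static scene produces no events; only the pixel that changed fires.
frame_a = [[0.5, 0.5], [0.5, 0.5]]
frame_b = [[0.5, 0.9], [0.5, 0.5]]
print(generate_events(frame_a, frame_b, timestamp=0.001))
# → [(1, 0, 0.001, 1)]
```

Because unchanged pixels produce no data at all, the output stream stays sparse, which is where the efficiency and latency advantages described below come from.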
Ultimately, this is expected to result in higher efficiency, faster speeds, and lower-latency data output. The companies envision the sensor being used for machine learning purposes, such as detecting fast-moving objects in a variety of environments and conditions.