At SIGGRAPH 2016, NVIDIA researchers will demonstrate a new VR rendering technique called perceptually-based foveated rendering. The fovea is the part of the human retina where visual acuity is highest, because it contains the densest concentration of cones (light receptors).
Foveated rendering is based on the idea that we only need to render full detail where the gaze is currently focused, and can decrease detail progressively away from that point. Ideally, you should not even realize that foveated rendering is happening. NVIDIA researchers say they can render 70% fewer pixels when using this technique – impressive.
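As an illustration of the basic idea (not NVIDIA's actual algorithm), here is a minimal sketch of a shading-rate falloff: full detail near the gaze point, progressively coarser shading farther away. The radii and rates are made-up values for the example:

```python
import math

def shading_rate(pixel_xy, gaze_xy, inner_radius=200, outer_radius=600):
    """Toy foveation falloff. Returns the fraction of full shading
    resolution to use at a pixel, given the current gaze point.
    All distances are in pixels; the radii are illustrative only."""
    dist = math.dist(pixel_xy, gaze_xy)
    if dist <= inner_radius:
        return 1.0            # foveal zone: full resolution
    if dist >= outer_radius:
        return 0.25           # far periphery: 1 shaded sample per 4 pixels
    # transition zone: blend linearly between the two rates
    t = (dist - inner_radius) / (outer_radius - inner_radius)
    return 1.0 - 0.75 * t
```

A real renderer would apply something like this per tile or per region rather than per pixel, but the falloff shape is the essence of the technique.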
How does Foveated Rendering Work?
The idea of foveated rendering is not new. It has been published about for years, but only now are excellent eye tracking and very good VR headsets both available. NVIDIA has partnered with SMI (SensoMotoric Instruments), a company that has been working on eye tracking and gaze detection for 20 years. SMI showed its own foveated rendering demo back in January 2016.
SMI’s hardware can track eye movement at 250Hz and can be integrated into a VR headset. This sampling frequency is much higher than the 90Hz (or FPS) typically required for VR applications, so your eyes will not “outrun” the tracking system. This also helps minimize lag and provides a natural sensation. Although there is always room for improvement, SMI’s hardware is sufficiently advanced today.
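As a quick back-of-the-envelope check using the rates quoted above, the tracker delivers nearly three gaze samples per rendered frame, and the gaze data the renderer sees is never more than a few milliseconds stale:

```python
tracker_hz = 250   # SMI eye-tracker sampling rate
display_hz = 90    # typical VR headset refresh rate

# how many gaze samples arrive during one rendered frame
samples_per_frame = tracker_hz / display_hz

# worst-case age of the latest gaze sample when a frame starts
max_gaze_age_ms = 1000 / tracker_hz

print(f"{samples_per_frame:.1f} samples/frame, "
      f"gaze at most {max_gaze_age_ms:.0f} ms old")
```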
What’s special about NVIDIA’s technique?
There are many ways to do foveated rendering, and all have “full detail” zones along with “partial detail” ones. The problem is that most techniques end up with small artifacts that are perceptible, and more or less distracting. We’re trying to be immersed here, so any “odd” perception can have magnified effects.
It is very hard to build a foveated renderer that does not produce perceptible flickering or odd side effects such as a sensation of tunnel vision.
This is where NVIDIA Research steps in: during a chat, the research team explained that typical approaches often generate flickering in the peripheral vision if the detail reduction is too aggressive. This is bad because humans have evolved over millions of years to be wary of movement in our peripheral vision (a predator coming at you), so this can be very distracting.
Blurring the pixelated parts of the image does reduce the flickering problem, but it creates another one: a tunnel vision sensation, where things are unnaturally blurry outside the center of vision.
NVIDIA’s researchers looked into why this tunnel vision was happening and realized that the blurring typically lowered the overall contrast of the low-detail sections of the image, and that the user perceives this contrast loss as the tunnel-vision effect. This makes sense, because blurring averages light and dark pixels, thus reducing contrast.
By preserving the contrast after the blur, the tunnel effect is gone, and things look natural. NVIDIA Research has not yet provided the implementation details, which is the crunchy part for graphics engineers, but that’s what SIGGRAPH is for. In the meantime, look at the video demo:
SMI also came up with a demo in which you can see the distinct zones of detail:
Here is another demo, made with the Unity engine. Notice how details pop in and out, with clearly distinct (and visible) zones:
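Since NVIDIA has not published its implementation, the following is only a rough sketch of the general idea described above: blur the peripheral region, measure how much contrast the blur removed, then rescale deviations from the mean to restore it. The box blur and the global (rather than local) contrast matching are my own simplifications, not NVIDIA's method:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur with an odd kernel size k, edge-padded
    so the output has the same shape as the input."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    kern = np.ones(k) / k
    # horizontal pass, then vertical pass
    p = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'), 0, p)
    return p[pad:-pad, pad:-pad]

def contrast_preserving_blur(img, k=9):
    """Blur, then restore the image's overall contrast (std deviation)."""
    blurred = box_blur(img, k)
    m = blurred.mean()
    # blurring averages light and dark pixels, shrinking the spread of
    # values; rescale deviations from the mean to undo that contrast loss
    scale = img.std() / max(blurred.std(), 1e-6)
    return m + scale * (blurred - m)
```

A production version would presumably match contrast locally and per color channel, and run on the GPU, but this shows why restoring contrast after the blur counteracts the tunnel-vision sensation.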
What are the benefits to users?
There’s a huge difference between having Foveated Rendering that “mostly works” but produces side effects, and one that is not perceptible to the user. NVIDIA’s perceptually-based foveated rendering has the potential to render as much as 70% fewer pixels, with no perceptible effect.
This is a huge number, and it is a most welcome optimization because VR currently requires extremely fast PCs and GPUs to work well. The computational savings could be used either to make VR work on mid-range PCs, or to render even more detail at the point of gaze. Computing efficiency is always good.
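To put the 70% figure in perspective, here is a quick calculation; the combined panel resolution is an assumed example, roughly that of 2016-era consumer headsets:

```python
total_pixels = 2160 * 1200   # combined panel resolution (assumed example)
savings = 0.70               # NVIDIA's claimed pixel reduction
shaded = total_pixels * (1 - savings)

print(f"{shaded:,.0f} of {total_pixels:,} pixels shaded per frame")
```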
Finally, making the gaze information (where your eyes are looking in the virtual world) available to the application could have a big influence on how VR games are built. If SMI’s tracking (or something like it) were to become a standard feature of VR headsets, games and apps could know where you’re looking and react to it. Right now, most games assume that the user is looking straight ahead.
At the moment, NVIDIA’s perceptually-based foveated rendering remains a research project, and the team is using its own mini-engine to build demos. The exact implementation details have not been presented yet, so it’s a bit too early to have an opinion on how this will transfer to games and game engines, but barring some unforeseen complexity, I don’t see an immediate reason why games couldn’t integrate it relatively quickly.
Although NVIDIA’s approach isn’t the first, it does tackle real problems such as flickering and progressive blurring, which must be solved for “real-world” use. Foveated rendering should be completely invisible to the user if done right, and that is what this technique should achieve. NVIDIA will have live demos at SIGGRAPH.