Computer graphics and AI are cornerstones of NVIDIA. Combined, they’re bringing creators closer to the goal of cinema-quality 3D imagery rendered in real time.
At a series of graphics conferences this summer, NVIDIA Research is sharing groundbreaking work in real-time path tracing and content creation, much of it based on cutting-edge AI techniques. These projects are tackling the hardest unsolved problems in graphics with new tools that advance the state of the art in real-time rendering.
One goal is improving the realism of rendered light as it passes through complex materials like fur or fog. Another is helping artists more easily turn their creative visions into lifelike models and scenes.
Presented at this week’s SIGGRAPH 2021 — as well as the recent High-Performance Graphics conference and the Eurographics Symposium on Rendering — these research advancements highlight how NVIDIA RTX GPUs make it possible to further the frontiers of photorealistic real-time graphics.
Rendering photorealistic images in real time requires accurate simulation of light, modeling the same laws that govern light in the physical world. The most effective approach known so far, path tracing, requires massive computational resources but can deliver spectacular imagery.
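At its core, path tracing is Monte Carlo integration: average the contributions of many randomly chosen light paths, and the estimate converges to the physically correct image. A minimal sketch of that idea, estimating the irradiance reaching a flat surface from a uniformly bright sky (the constant-sky setup and function names are illustrative assumptions, not production renderer code):

```python
import math, random

def sample_hemisphere():
    # Uniformly sample a direction on the unit hemisphere (z >= 0);
    # the pdf over solid angle is 1 / (2 * pi).
    u1, u2 = random.random(), random.random()
    z = u1                                   # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(sky_radiance, n_samples=100_000):
    # Monte Carlo estimate of E = integral of L * cos(theta) d(omega)
    # over the hemisphere; dividing by the pdf gives the 2*pi factor.
    total = 0.0
    for _ in range(n_samples):
        _, _, cos_theta = sample_hemisphere()
        total += sky_radiance * cos_theta * 2.0 * math.pi
    return total / n_samples
```

For a constant sky of radiance L, the exact answer is pi * L, so the estimate should hover near 3.14159 for L = 1 — and the slow, noisy convergence of estimators like this is exactly why real-time path tracing needs the smarter sampling described below.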
The NVIDIA RTX platform, with dedicated ray-tracing hardware and high-performance Tensor Cores for efficient evaluation of AI models, is tailor-made for this task. Yet there are still situations where creating high-fidelity rendered images remains challenging.
Consider, for one, a tiger prowling through the woods.
Seeing the Light: Real-Time Path Tracing
To make a scene completely realistic, creators must render complex lighting effects such as reflections, shadows and visible haze.
In a forest scene, dappled sunlight filters through the leaves on the trees and grows hazy among the water molecules suspended in the foggy air. Rendering realistic real-time imagery of clouds, dusty surfaces or mist like this was once out of reach. But NVIDIA researchers have developed techniques that often compute the visual effect of these phenomena 10x more efficiently.
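One reason fog and haze are costly is transmittance: how much light survives a trip through a medium whose density varies along the ray. A classic unbiased way to estimate this without fixed ray-marching steps is ratio tracking, sketched below. This is a standard estimator from the volume-rendering literature, not the specific algorithm in NVIDIA's paper; the majorant-based setup (an upper bound on the extinction coefficient) is an assumption of the sketch.

```python
import math, random

def ratio_tracking(sigma, t_max, sigma_majorant, rng=random.random):
    # Unbiased transmittance estimate along [0, t_max] through a medium
    # with spatially varying extinction sigma(t) <= sigma_majorant.
    t, transmittance = 0.0, 1.0
    while True:
        # Exponentially distributed free-flight step under the majorant.
        t -= math.log(1.0 - rng()) / sigma_majorant
        if t >= t_max:
            return transmittance
        # Attenuate by the probability of a "null" (fictitious) collision.
        transmittance *= 1.0 - sigma(t) / sigma_majorant
```

Averaged over many runs, the estimate matches the analytic Beer–Lambert transmittance; for a homogeneous medium with extinction 0.5 over a ray of length 2, that is exp(-1).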
The tiger itself is both illuminated by sunlight and shadowed by trees. As it strides through the woods, its reflection is visible in the pond below. Lighting these kinds of rich visuals with both direct and indirect reflections can require calculating thousands of paths for every pixel in the scene.
It’s a task far too resource-hungry to solve in real time. So our research team created a path-sampling algorithm that prioritizes the light paths and reflections most likely to contribute to the final image, rendering images over 100x more quickly than before.
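The idea of prioritizing the most important paths can be illustrated with resampled importance sampling over a set of lights: draw a handful of cheap candidates, then keep one in proportion to its estimated contribution. The sketch below is a generic textbook version of that idea, not NVIDIA's published algorithm; the light model and function names are assumptions.

```python
import random

def ris_sample(lights, contribution, n_candidates=8, rng=random.random):
    """One resampled-importance-sampling step over a set of lights.

    Draws n_candidates lights uniformly, weights each by a cheap estimate
    of its contribution, and keeps a single survivor with probability
    proportional to that weight.  Also returns sum(weights)/n_candidates,
    an unbiased estimate of the total contribution of all lights.
    """
    n = len(lights)
    candidates, weights = [], []
    for _ in range(n_candidates):
        i = int(rng() * n)                          # uniform pdf q = 1/n
        candidates.append(lights[i])
        weights.append(contribution(lights[i]) * n)  # w = p_hat / q
    total = sum(weights)
    # Weighted pick of one survivor to actually shade with.
    pick, acc = candidates[-1], rng() * total
    for light, w in zip(candidates, weights):
        acc -= w
        if acc <= 0.0:
            pick = light
            break
    return pick, total / n_candidates
```

The payoff is that expensive work (tracing a shadow ray, evaluating the full BSDF) is spent almost entirely on the lights that matter, while the weighting keeps the result unbiased.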
AI of the Tiger: Neural Radiance Caching
Another group of NVIDIA researchers achieved a breakthrough in global illumination with a new technique named neural radiance caching. This method uses both NVIDIA RT Cores for ray tracing and Tensor Cores for AI acceleration to train a tiny neural network live while rendering a dynamic scene.
The neural network learns how light is distributed throughout the scene. It evaluates over a billion global illumination queries per second when running on an NVIDIA GeForce RTX 3090 GPU, depicting the tiger’s dense fur with rich lighting detail previously unattainable at interactive frame rates.
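The train-while-rendering idea can be sketched with a tiny fully connected network fitted online to streamed radiance samples. Everything below is a toy stand-in — NumPy on the CPU, a made-up scalar radiance field, illustrative layer sizes and learning rate — not the paper's fused Tensor Core implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer MLP acting as a radiance cache.
W1 = rng.normal(0.0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

def cache_query(x):
    # x: (n, 3) positions -> (n, 1) cached radiance.
    h = np.maximum(0.0, x @ W1 + b1)          # ReLU hidden layer
    return h @ W2 + b2

def cache_train(x, target, lr=0.01):
    # One SGD step on mean squared error, with hand-written backprop.
    global W1, b1, W2, b2
    h = np.maximum(0.0, x @ W1 + b1)
    err = (h @ W2 + b2) - target              # d(loss)/d(prediction)
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0.0)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def radiance(x):
    # Stand-in "ground truth" the renderer would compute by path tracing.
    return np.sin(x.sum(axis=1, keepdims=True))

# Train live: each "frame" contributes a small batch of fresh samples,
# so the cache tracks the scene even as lighting changes.
for _ in range(6000):
    batch = rng.uniform(-1.0, 1.0, (64, 3))
    cache_train(batch, radiance(batch))
```

The key property this toy shares with the real technique is that there is no offline training phase: the cache is optimized concurrently with rendering, using samples the renderer is producing anyway.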
Seamless Creation of Tough Textures
As rendering algorithms advance, 3D content must keep pace with the complexity and richness those algorithms can deliver.
NVIDIA researchers are diving into this area by developing a variety of techniques that support content creators in their efforts to model rich and realistic 3D environments. One area of focus is on materials with rich geometric complexity, which can be difficult to simulate using traditional techniques.
The weave of a polo shirt, the texture of a carpet, or blades of grass have features often much smaller than the size of a pixel, making it difficult to efficiently store and render representations of them. NVIDIA researchers are addressing this with NeRF-Tex, an approach that uses neural networks to represent these challenging materials and encode how they respond to lighting.
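Part of why a neural representation helps here is sheer storage: explicit sub-pixel geometry for fur or fabric is enormous, while a small network has a fixed parameter budget. A back-of-the-envelope comparison — every figure below is an illustrative assumption, not a number from NeRF-Tex:

```python
# Illustrative storage arithmetic: explicit fiber geometry for one furry
# character vs. a fixed-size neural material (all figures assumed).

fibers = 500_000            # fur strands on one character
segments = 16               # line segments per strand
bytes_per_vertex = 12       # xyz as 3 x float32
explicit_bytes = fibers * (segments + 1) * bytes_per_vertex

params = 200_000            # weights of a small material network
neural_bytes = params * 2   # stored at fp16 precision

print(f"explicit geometry: {explicit_bytes / 1e6:.1f} MB")  # 102.0 MB
print(f"neural material:   {neural_bytes / 1e6:.1f} MB")    # 0.4 MB
```

Under these assumptions the explicit representation is hundreds of times larger — and unlike raw geometry, the neural encoding also bakes in how the material responds to light.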
Seeing the Forest for the Trees
Complex geometric objects also vary in their appearance depending on how close they are to the viewer. A leafy tree is one example: Close up, there’s enormous detail in its branches, leaves and bark. From afar, it may appear to be little more than a green blob.
It would be a waste of time to render detailed bark and leaves on a tree that’s on the other end of the forest in a scene. But when zooming in for a close-up, the model should be as realistic as possible.
This is a classic problem in computer graphics known as level of detail. Artists have often been burdened with this challenge, manually modeling multiple versions of each 3D object to enable efficient rendering.
NVIDIA researchers have developed a new approach that generates simplified models automatically based on an inverse rendering method. With it, creators can generate simplified models that are optimized to appear indistinguishable from the originals, but with drastic reductions in their geometric complexity.
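The selection side of level of detail can be sketched with a classic screen-space-error test: pick the coarsest version of a model whose world-space simplification error projects to less than a pixel on screen. The function and thresholds below are a generic illustration of that test, not NVIDIA's inverse-rendering method:

```python
import math

def select_lod(levels, distance, fov_y, screen_height, max_error_px=1.0):
    # levels: (name, world_space_error) pairs sorted finest -> coarsest,
    # where world_space_error is the max deviation from the full model.
    # Pixels per world unit at this depth, for a symmetric projection.
    px_per_unit = screen_height / (2.0 * distance * math.tan(fov_y / 2.0))
    chosen = levels[0][0]                 # fall back to the finest level
    for name, world_error in levels:
        if world_error * px_per_unit <= max_error_px:
            chosen = name                 # coarser level still looks right
    return chosen
```

For example, with three levels of increasing simplification error, a nearby tree selects the detailed mesh while a distant one selects the "green blob" — which is precisely why automatically generated simplified models that match the original's appearance are so valuable.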
NVIDIA at SIGGRAPH 2021
More than 200 scientists around the globe make up the NVIDIA Research team, focusing on AI, computer graphics, computer vision, self-driving cars, robotics and more. At SIGGRAPH, which runs from Aug. 9-13, our researchers are presenting the following papers:
- Real-Time Neural Radiance Caching for Path Tracing
- Neural Scene Graph Rendering
- An Unbiased Ray-Marching Transmittance Estimator
- StrokeStrip: Joint Parameterization and Fitting of Stroke Clusters
Don’t miss NVIDIA’s special address at SIGGRAPH on Aug. 10 at 8 a.m. Pacific, revealing our latest technology, demos and more. Catch our Real Time Live demo on Aug. 10 at 4:30 p.m. Pacific to see how NVIDIA Research creates AI-driven digital avatars.
For more, check out the full lineup of NVIDIA events at SIGGRAPH 2021.