Regularizing Discontinuities in Ray Traced Global Illumination for Differentiable Rendering

Abstract

In recent years, neural-network-based machine learning (ML) models have achieved revolutionary performance on computer vision tasks such as object recognition. These methods typically operate on 2D images alone and lack an understanding of the 3D world underlying each image. Rendering, by contrast, is the computer graphics process of creating a 2D image from a digital description of a 3D scene.

Viewing computer vision as an inverse rendering problem has led to growing interest in differentiable rendering. The key idea is that incorporating information about how 3D scenes give rise to 2D images may improve ML models that analyse those images. Differentiable renderers, like traditional renderers, generate images from digital descriptions of 3D scenes, but they additionally allow the computation of gradients of the output image with respect to the scene's input parameters, such as object positions, material properties, and camera pose. These gradients can be used in end-to-end training of machine learning models, in applications such as single-view 3D object reconstruction or analysis-by-synthesis approaches to inverse graphics.
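The analysis-by-synthesis idea can be illustrated with a deliberately tiny sketch: a one-pixel "renderer" whose intensity is a smooth function of an object position, so the gradient of an image loss with respect to that position can drive gradient descent back to the position that produced a reference image. All names here (`render`, the Gaussian falloff, the pixel centre `C`) are our illustrative assumptions, not the paper's renderer.

```python
import math

C = 0.0      # pixel centre (illustrative constant)
SIGMA = 0.5  # falloff width of the toy appearance model

def render(x):
    """One-pixel 'image': intensity as a smooth function of object position x."""
    return math.exp(-((x - C) ** 2) / (2 * SIGMA ** 2))

def loss_and_grad(x, target):
    """Squared error against a target intensity, with the analytic
    gradient d(loss)/dx obtained via the chain rule."""
    img = render(x)
    dimg_dx = img * (-(x - C) / SIGMA ** 2)  # derivative of the Gaussian
    loss = (img - target) ** 2
    grad = 2.0 * (img - target) * dimg_dx
    return loss, grad

# Analysis-by-synthesis: recover the object position that produced a
# reference image by gradient descent on the rendering loss.
target = render(0.3)  # "observed" image from the unknown position 0.3
x = 1.2               # initial guess
for _ in range(2000):
    _, grad = loss_and_grad(x, target)
    x -= 0.1 * grad

print(f"recovered position: {x:.3f}")
```

A full differentiable renderer plays the same role as `render` here, only with millions of pixels and many scene parameters, and with gradients obtained by automatic differentiation rather than by hand.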

We introduce a novel differentiable path tracing algorithm in which discontinuities in the rendering process are regularized by blurring the geometry. Our differentiable renderer implements full global illumination and exposes parameters that control the regularization, and hence the smoothness of the loss landscape. We also explore how differentiable renderers can be adapted to camera effects such as motion blur and depth of field. We successfully apply our system to several challenging inverse rendering optimization problems involving complex light transport.
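Why discontinuities need regularizing can be seen in a minimal sketch (our illustration, not the paper's formulation): a silhouette pixel's visibility is a hard step in the edge position, so its derivative is zero almost everywhere and provides no optimization signal. Replacing the step with a sigmoid of bandwidth `beta` (a hypothetical smoothing parameter standing in for the paper's regularization controls) makes coverage, and hence the loss, smooth, with larger `beta` spreading gradient signal over a wider neighbourhood of the edge.

```python
import math

def hard_coverage(t):
    """Discontinuous coverage: the pixel is covered iff the edge has
    crossed its centre. Gradient w.r.t. t is zero for every t != 0."""
    return 1.0 if t > 0.0 else 0.0

def soft_coverage(t, beta):
    """Sigmoid-regularized coverage; beta controls the blur width."""
    return 1.0 / (1.0 + math.exp(-t / beta))

def grad_soft_coverage(t, beta):
    """Analytic derivative of the smoothed coverage w.r.t. t."""
    s = soft_coverage(t, beta)
    return s * (1.0 - s) / beta

# At a point some distance from the edge, the hard model gives no
# gradient at all, while the blurred model still does - and a wider
# blur carries usable gradient further from the discontinuity.
t = 0.4
print(grad_soft_coverage(t, beta=0.1))  # narrow blur: nearly vanished
print(grad_soft_coverage(t, beta=0.5))  # wide blur: still informative
```

This trade-off is the point of exposing the regularization parameters: more blur yields a smoother loss landscape that is easier to optimize, at the cost of a more heavily smoothed image.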

Full Paper Examples