Volumetric differentiable rendering


Learning-based 3D reconstruction methods have recently shown impressive results. Over the last couple of years, a line of research has emerged that, unlike most traditional and other learning-based methods, does not require 3D supervision, which is often hard to obtain for real-world datasets. In particular, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images alone. Approaches restricted to voxel- or mesh-based representations suffer from discretization artifacts or low resolution. In this webinar, we will discuss a differentiable rendering formulation for implicit shape and texture representations, which have recently gained popularity because they represent shape and texture continuously. I will present DVR (Niemeyer et al. 2020), which shows that depth gradients can be derived analytically using the concept of implicit differentiation. This allows neural networks to learn implicit shape and texture representations directly from RGB images. DVR can be used for multi-view 3D reconstruction and directly yields watertight meshes.
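To give a flavor of the implicit-differentiation idea mentioned above, here is a minimal numerical sketch (not DVR's actual implementation): the "implicit shape" is a toy sphere function rather than a neural network, the surface depth along a ray is found by root-finding, and the gradient of that depth with respect to the shape parameter is obtained from the implicit-function relation dd/dθ = -(∇_p f · w)⁻¹ ∂f/∂θ, where f is the shape function, w the ray direction, and θ the shape parameter. All names and the specific setup are illustrative assumptions.

```python
import numpy as np

def occupancy(p, radius):
    """Toy 'implicit shape': signed distance to a sphere of the given
    radius centered at the origin (the surface is the zero level set)."""
    return np.linalg.norm(p) - radius

def find_depth(o, w, radius, lo=0.0, hi=2.0, iters=60):
    """Find the depth d with occupancy(o + d*w, radius) == 0 by bisection,
    standing in for the ray-surface intersection search.
    Assumes f > 0 at lo (outside) and f < 0 at hi (inside)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if occupancy(o + mid * w, radius) > 0.0:
            lo = mid   # still outside the shape
        else:
            hi = mid   # already inside the shape
    return 0.5 * (lo + hi)

def depth_grad(o, w, radius, d, eps=1e-5):
    """Gradient of the surface depth w.r.t. the shape parameter via
    implicit differentiation: dd/dtheta = -(grad_p f . w)^(-1) df/dtheta.
    A neural network would supply both partials by autodiff; finite
    differences are used here only to keep the sketch self-contained."""
    p_hat = o + d * w
    grad_p = np.array([
        (occupancy(p_hat + eps * e, radius)
         - occupancy(p_hat - eps * e, radius)) / (2 * eps)
        for e in np.eye(3)
    ])
    df_dtheta = (occupancy(p_hat, radius + eps)
                 - occupancy(p_hat, radius - eps)) / (2 * eps)
    return -df_dtheta / grad_p.dot(w)

# Camera at (0, 0, -2) looking down +z toward a sphere of radius 0.7:
o = np.array([0.0, 0.0, -2.0])
w = np.array([0.0, 0.0, 1.0])
radius = 0.7
d = find_depth(o, w, radius)      # analytically, d = 2 - radius = 1.3
g = depth_grad(o, w, radius, d)   # analytically, dd/dradius = -1
```

For this toy geometry the depth is d = 2 - radius, so the analytic gradient dd/dradius = -1, which the implicit-differentiation formula reproduces without ever backpropagating through the root-finding loop; this is the key property that makes the approach practical for training.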

Join the Zoom meeting via this link