Volumetric Isosurface Rendering with Deep Learning-Based Super-Resolution
Sebastian Weiss, Mengyu Chu, Nils Thuerey, Rüdiger Westermann
Presentation: 2020-10-29, 19:15 UTC

Fast forward
Direct link to video on YouTube: https://youtu.be/l-SzWAfPVcA
Keywords
Volume visualization, machine learning, ray tracing
Abstract
Rendering an accurate image of an isosurface in a volumetric field typically requires a large number of data samples. Reducing this number lies at the core of research in volume rendering. With the advent of deep learning networks, a number of architectures have recently been proposed to infer missing samples in multi-dimensional fields, for applications such as image super-resolution. In this paper, we investigate the use of such architectures for learning the upscaling of a low-resolution sampling of an isosurface to a higher resolution, with reconstruction of spatial detail and shading. We introduce a fully convolutional neural network that learns a latent representation generating smooth, edge-aware depth and normal fields as well as ambient occlusion from a low-resolution depth and normal field. By adding a frame-to-frame motion loss to the learning stage, upscaling can account for temporal variations and achieves improved frame-to-frame coherence. We assess the quality of the inferred results and compare it to bilinear and bicubic upscaling. We do this for isosurfaces that were never seen during training, and investigate the improvements when the network can train on the same or similar isosurfaces. We discuss remote visualization and foveated rendering as potential applications.
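
The abstract describes a fully convolutional network that maps a low-resolution depth and normal field to high-resolution depth, normal, and ambient-occlusion outputs, trained with an additional frame-to-frame motion loss for temporal coherence. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the channel layout (mask + normal + depth in, plus ambient occlusion out), the sub-pixel (pixel-shuffle) upscaling, the network depth, and the L1 temporal term are all assumptions made for illustration.

# Minimal sketch of a fully convolutional 4x upscaling network for
# low-resolution isosurface G-buffers. Hypothetical layout: input is
# mask(1) + normal(3) + depth(1) = 5 channels; output adds ambient
# occlusion for 6 channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IsoSuperRes(nn.Module):
    def __init__(self, in_channels=5, out_channels=6, features=64, scale=4):
        super().__init__()
        self.head = nn.Conv2d(in_channels, features, 3, padding=1)
        # A small stack of conv+ReLU blocks forms the feature extractor.
        self.body = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(features, features, 3, padding=1),
                nn.ReLU(inplace=True),
            ) for _ in range(4)
        ])
        # Sub-pixel convolution (pixel shuffle) performs the learned upscaling.
        self.up = nn.Sequential(
            nn.Conv2d(features, features * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.tail = nn.Conv2d(features, out_channels, 3, padding=1)

    def forward(self, x):
        f = F.relu(self.head(x))
        f = self.body(f) + f  # residual connection over the conv body
        return self.tail(self.up(f))

# Assumed temporal term: penalize the difference between the current
# prediction and the previous frame's prediction warped into the current
# frame (warping, e.g. via optical flow and grid_sample, done externally).
def temporal_loss(pred, prev_warped):
    return F.l1_loss(pred, prev_warped)

if __name__ == "__main__":
    net = IsoSuperRes()
    low_res = torch.randn(1, 5, 64, 64)  # batch of low-res G-buffers
    high_res = net(low_res)              # -> shape (1, 6, 256, 256)
    print(high_res.shape)

In this sketch the total training loss would combine a per-frame reconstruction loss on the upscaled depth, normal, and ambient-occlusion channels with the temporal term above, so the network is penalized both for spatial error and for flicker between consecutive frames.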