Volumetric Isosurface Rendering with Deep Learning-Based Super-Resolution

Sebastian Weiss, Mengyu Chu, Nils Thuerey, Rüdiger Westermann

Exemplar figure
Our super-resolution network upscales an input sampling of an isosurface, stored as mask, normal, and depth maps at low resolution (e.g., 320x240), to high-resolution mask, depth, and normal maps (e.g., 1280x960) with ambient occlusion. A fixed, differentiable shading stage in screen space computes the final output color.
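
The shading stage is only summarized here. As a minimal illustrative sketch, assuming PyTorch tensors in (B, C, H, W) layout and a simple diffuse-plus-ambient lighting model (both assumptions for illustration, not the paper's exact formulation), screen-space shading over the network outputs could look like this:

# Hypothetical sketch of a differentiable screen-space shading stage
# operating on the super-resolved mask, normal, and ambient-occlusion maps.
# The (B, C, H, W) layout and the diffuse+ambient model are assumptions.
import torch

def shade(mask, normal, ao, light_dir, ambient=0.1, diffuse=0.9):
    """mask: (B,1,H,W) in [0,1]; normal: (B,3,H,W) unit vectors;
    ao: (B,1,H,W) ambient occlusion; light_dir: (3,) unit vector."""
    l = light_dir.view(1, 3, 1, 1)
    n_dot_l = (normal * l).sum(dim=1, keepdim=True).clamp(min=0.0)
    color = ambient * ao + diffuse * n_dot_l   # scalar shading term per pixel
    return mask * color                        # background outside the surface stays black

Because every operation is differentiable, a loss on the shaded color can backpropagate through this stage into the upscaling network.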
Fast forward

Direct link to video on YouTube: https://youtu.be/l-SzWAfPVcA

Keywords

Volume visualization, Machine Learning, Raytracing

Abstract

Rendering an accurate image of an isosurface in a volumetric field typically requires large numbers of data samples. Reducing this number lies at the core of research in volume rendering. With the advent of deep learning, a number of network architectures have been proposed recently to infer missing samples in multi-dimensional fields, for applications such as image super-resolution. In this paper, we investigate the use of such architectures for learning the upscaling of a low-resolution sampling of an isosurface to a higher resolution, with reconstruction of spatial detail and shading. We introduce a fully convolutional neural network that learns a latent representation generating smooth, edge-aware depth and normal fields as well as ambient occlusion from a low-resolution depth and normal field. By adding a frame-to-frame motion loss to the learning stage, the upscaling can account for temporal variations and achieves improved frame-to-frame coherence. We assess the quality of the inferred results and compare it to bilinear and bicubic upscaling. We do this for isosurfaces that were never seen during training, and we investigate the improvements when the network can train on the same or similar isosurfaces. We discuss remote visualization and foveated rendering as potential applications.
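
The frame-to-frame motion loss is only described at a high level above. A rough sketch of one common way to realize such a term, warping the previous high-resolution output into the current frame with screen-space motion vectors and penalizing the difference, is given below; the grid_sample-based warping and the L2 penalty are assumptions for illustration, not the exact loss used in the paper.

# Hypothetical sketch of a warping-based frame-to-frame motion loss.
# The previous output is warped by per-pixel motion vectors (in pixels)
# and compared to the current output with an L2 penalty (an assumption).
import torch
import torch.nn.functional as F

def warp(prev, flow):
    """prev: (B,C,H,W); flow: (B,2,H,W) screen-space motion in pixels (x, y)."""
    b, _, h, w = prev.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=prev.device),
                            torch.arange(w, device=prev.device), indexing="ij")
    grid_x = (xs.float() + flow[:, 0]) / (w - 1) * 2 - 1   # normalize to [-1, 1]
    grid_y = (ys.float() + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)            # (B,H,W,2)
    return F.grid_sample(prev, grid, align_corners=True)

def temporal_loss(current, previous, flow):
    return F.mse_loss(current, warp(previous, flow))

Penalizing this difference during training encourages the network to produce outputs that stay consistent under camera or surface motion, which is what yields the improved frame-to-frame coherence mentioned in the abstract.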