SEED

Filter-Adapted Spatio-Temporal Sampling for Real-Time Rendering

This paper was presented at the 2024 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (i3D) in Philadelphia, USA, on May 8-10, 2024.

Authors: William Donnelly, Alan Wolfe, Judith Bütepage, Jon Valdés.

 


Download the paper (PDF 8 MB).

Stochastic sampling techniques are ubiquitous in real-time rendering, where performance constraints force the use of low sample counts, which leads to noisy intermediate results. To remove this noise, temporal and spatial denoising in post-processing is an integral part of the real-time graphics pipeline. 

This paper's main insight is that we can optimize the samples used in stochastic sampling to minimize the post-processing error.

The core of our method is an analytical loss function that measures post-filtering error for a class of integrands – multidimensional Heaviside functions. These integrands approximate the discontinuous functions commonly found in rendering. Our analysis applies to arbitrary spatial and spatiotemporal filters, scalar and vector sample values, and uniform and non-uniform probability distributions.
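To make the setting concrete, here is a minimal sketch (not taken from the paper) of what a multidimensional Heaviside integrand looks like and why low sample counts make it noisy: the function is 1 on one side of an edge and 0 on the other, much like a shadow or occlusion boundary, and a few-sample Monte Carlo estimate of its integral fluctuates from pixel to pixel. The edge normal and sample count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2D Heaviside integrand: 1 on one side of a line, 0 on the other.
# Step edges like this approximate the discontinuities (shadow edges,
# occlusion boundaries) common in rendering integrands.
normal = np.array([0.6, 0.8])   # unit normal of the step edge (arbitrary)
offset = 0.3                    # distance of the edge from the origin

def heaviside(points):
    """Evaluate the step function at an (N, 2) array of points."""
    return (points @ normal >= offset).astype(float)

# A low-sample-count Monte Carlo estimate over the unit square, as a
# real-time renderer might compute per pixel (here, 4 samples).
samples = rng.random((4, 2))
estimate = heaviside(samples).mean()
print(estimate)  # a noisy estimate of the area covered by the step
```

With only four samples the estimate can only take the values 0, 0.25, 0.5, 0.75, or 1, so neighboring pixels disagree visibly; this is the noise the subsequent denoising filter must remove.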

We show that the spectrum of Monte Carlo noise resulting from our sampling method is adapted to the shape of the filter, resulting in less noisy final images. We demonstrate improvements over state-of-the-art sampling methods in three representative rendering tasks: ambient occlusion, volumetric ray-marching, and color image dithering. 
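The key intuition can be demonstrated with a toy 1D experiment (an illustration of the principle, not the paper's optimization method): noise whose energy is concentrated in frequencies the post-filter suppresses leaves less residual error than white noise. Below, a crude "blue" noise signal is made by high-pass filtering white noise, and both signals are passed through the same low-pass box filter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
white = rng.standard_normal(n)

# Crude "blue" noise: zero out the low frequencies of white noise,
# then rescale to match the white noise's standard deviation.
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)          # cycles/sample, from 0 to 0.5
spectrum[freqs < 0.25] = 0.0        # suppress the low-frequency half
blue = np.fft.irfft(spectrum, n)
blue *= white.std() / blue.std()

# Post-process both signals with the same low-pass (box) filter,
# standing in for a spatial denoising pass.
kernel = np.ones(9) / 9.0
def filt(x):
    return np.convolve(x, kernel, mode="same")

# The low-pass filter attenuates the high-frequency (blue) noise far
# more than the white noise, so its filtered residual is much smaller.
print(filt(white).std(), filt(blue).std())
```

This is why adapting the noise spectrum to the shape of the denoising filter, rather than using white noise, yields less noisy final images.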

Common noise textures and noise-generation code are available at https://github.com/electronicarts/fastnoise.

Related News

Beyond White Noise for Real-Time Rendering

SEED
May 14, 2024
SEED’s Alan Wolfe discusses the use of different types of noise for random number generation, focusing on the application of blue noise in rendering images for gaming.

Evaluating Gesture Generation in a Large-Scale Open Challenge

SEED
May 9, 2024
This paper, published in Transactions on Graphics, reports on the second GENEA Challenge, a project to benchmark data-driven automatic co-speech gesture generation.

From Photo to Expression: Generating Photorealistic Facial Rigs

SEED
Apr 25, 2024
This presentation from GDC 2024 discusses how machine learning can improve facial animation.