
MM 2022: End-to-End 3D Face Reconstruction From Single Images

This research paper was accepted for publication by the ACM International Multimedia Conference 2022.
Authors: Qixin Deng (University of Houston), Aobo Jin (University of Houston-Victoria), Binh H. Le (SEED, Electronic Arts), Zhigang Deng (University of Houston)

End-to-End 3D Face Reconstruction With Expressions and Specular Albedos From Single In-the-Wild Images

Download the full research paper. (24 MB PDF)


The ability to recover 3D face models from photos taken outside of studio settings has many potential applications. However, properly modeling complex lighting effects from a single in-the-wild face image, including specular lighting, shadows, and occlusions, remains an open research challenge.

In this paper, we propose a framework based on a convolutional neural network to regress the face model from a single image taken in the wild. The output face model includes a dense 3D shape, head pose, expression, diffuse albedo, specular albedo, and the corresponding lighting conditions. 
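The network regresses all of these components as one parameter vector that is then split into its named parts. As a rough illustration only, the sketch below uses hypothetical dimensions (the post does not state the actual sizes used in the paper) to show how such a vector might be partitioned:

```python
import numpy as np

# Hypothetical sizes for each regressed component; the actual
# dimensions used in the paper are not given in this post.
DIMS = {
    "shape": 80,        # identity coefficients of a 3D morphable model
    "expression": 64,   # expression coefficients
    "pose": 6,          # head rotation + translation
    "diffuse": 80,      # diffuse albedo coefficients
    "specular": 80,     # specular albedo coefficients
    "lighting": 27,     # e.g. 2nd-order spherical harmonics, 3 color channels
}

def split_params(vec):
    """Split a single regressed parameter vector into named components."""
    out, i = {}, 0
    for name, dim in DIMS.items():
        out[name] = vec[i:i + dim]
        i += dim
    assert i == len(vec), "vector length must match the sum of DIMS"
    return out

params = split_params(np.zeros(sum(DIMS.values())))
```

Splitting one jointly regressed vector (rather than training separate heads) is what makes the disentangling loss terms, described below, necessary.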

Our approach uses novel hybrid loss functions to disentangle face shape identities, expressions, poses, albedos, and lighting. In addition to a carefully designed ablation study, we also conduct direct comparison experiments to show that our method can outperform state-of-the-art methods both quantitatively and qualitatively.
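A hybrid loss of this kind is typically a weighted sum of several terms. The sketch below is a generic illustration, not the paper's actual formulation: the specific terms, names, and weights are assumptions.

```python
import numpy as np

def hybrid_loss(pred_img, target_img, pred_lmk, target_lmk, coeffs, weights):
    """Weighted sum of photometric, landmark, and regularization terms.

    Generic illustration of a hybrid reconstruction loss; the terms and
    weights used in the paper are not specified in this post.
    """
    # Pixel-level difference between the rendered and input face image.
    photometric = np.mean((pred_img - target_img) ** 2)
    # Distance between projected 3D landmarks and detected 2D landmarks.
    landmark = np.mean(np.sum((pred_lmk - target_lmk) ** 2, axis=-1))
    # Penalize large coefficients to keep each component plausible.
    regularization = sum(np.sum(c ** 2) for c in coeffs)
    return (weights["photo"] * photometric
            + weights["lmk"] * landmark
            + weights["reg"] * regularization)
```

Weighting separate terms per component is one common way such losses keep identity, expression, albedo, and lighting from absorbing each other's errors.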


Related News

Evaluating Gesture Generation in a Large-Scale Open Challenge

SEED
May 9, 2024
This paper, published in Transactions on Graphics, reports on the second GENEA Challenge, a project to benchmark data-driven automatic co-speech gesture generation.

Filter-Adapted Spatio-Temporal Sampling for Real-Time Rendering

SEED
May 1, 2024
This paper, presented at I3D 2024, discusses how to tailor the frequencies of rendering noise to improve image denoising in real-time rendering.

From Photo to Expression: Generating Photorealistic Facial Rigs

SEED
Apr 25, 2024
This presentation from GDC 2024 discusses how machine learning can improve facial animation.