
GENEA Challenge 2022: Co-Speech Gesture Generation

This research paper was accepted for publication at the 24th ACM International Conference on Multimodal Interaction (ICMI 2022).

Authors: Youngwoo Yoon (ETRI), Pieter Wolfert (IDLab), Taras Kucherenko (SEED), Carla Viegas (Carnegie Mellon), Teodor Nikolov (Umeå), Mihail Tsakov (Umeå), and Gustav Eje Henter (KTH).

The GENEA Challenge 2022: A Large Evaluation of Data-Driven Co-Speech Gesture Generation

Download the full research paper (PDF, 1 MB).

This paper reports on the second GENEA Challenge, which benchmarks data-driven automatic co-speech gesture generation. Participating teams built gesture-generation systems from the same speech and motion dataset. Motion generated by all of these systems was rendered to video using a standardized visualization pipeline and evaluated in several large, crowdsourced user studies. Unlike comparisons across different research papers, any differences in results are due solely to differences between the methods, enabling direct comparison between systems.
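For readers curious how crowdsourced ratings of this kind can be turned into system comparisons, here is a minimal Python sketch of one common approach: computing per-condition median ratings with bootstrap confidence intervals. This is only an illustration under assumed data (the ratings below are synthetic and the condition names hypothetical), not the challenge's actual analysis code; the paper itself applies formal significance testing.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_ci(ratings, n_boot=10_000, alpha=0.05):
    """Bootstrap a confidence interval for the median of a set of ratings."""
    ratings = np.asarray(ratings)
    # Resample the ratings with replacement and take the median of each resample.
    boot_medians = np.median(
        rng.choice(ratings, size=(n_boot, ratings.size), replace=True),
        axis=1,
    )
    lo, hi = np.quantile(boot_medians, [alpha / 2, 1 - alpha / 2])
    return np.median(ratings), (lo, hi)

# Synthetic ratings for two hypothetical conditions on a 0-100 scale.
cond_a = rng.normal(70, 12, size=400).clip(0, 100)
cond_b = rng.normal(64, 12, size=400).clip(0, 100)

for name, data in [("condition A", cond_a), ("condition B", cond_b)]:
    med, (lo, hi) = median_ci(data)
    print(f"{name}: median {med:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Non-overlapping intervals suggest a reliable difference between conditions; a full analysis, like the one in the paper, would use proper statistical tests rather than eyeballing intervals.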

This year’s dataset was based on 18 hours of full-body motion capture (including fingers) of people engaged in dyadic conversation. Ten teams participated in the challenge across two tiers: full-body and upper-body gesticulation. For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal. Our evaluations decouple human-likeness from gesture appropriateness, something that previously was a major challenge in the field.
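One way to judge appropriateness separately from motion quality is to compare motion that matches the speech against motion of the same kind paired with unrelated speech. The sketch below shows how such matched/mismatched stimulus pairs might be constructed; it is a simplified illustration, not the challenge's evaluation code, and the file names are hypothetical.

```python
import random

def make_appropriateness_pairs(segments, seed=7):
    """For each speech segment, pair its matched motion with motion
    drawn from a different, randomly chosen segment (the mismatched foil)."""
    rng = random.Random(seed)
    pairs = []
    for i, seg in enumerate(segments):
        # Draw a foil index guaranteed to differ from i.
        j = rng.randrange(len(segments) - 1)
        if j >= i:
            j += 1
        pairs.append({
            "speech": seg["speech"],
            "matched_motion": seg["motion"],
            "mismatched_motion": segments[j]["motion"],
        })
    return pairs

# Hypothetical file names; the real dataset layout is documented on the project page.
segments = [{"speech": f"segment_{k}.wav", "motion": f"segment_{k}.bvh"}
            for k in range(5)]
for pair in make_appropriateness_pairs(segments):
    print(pair["speech"], "->", pair["mismatched_motion"])
```

Raters who cannot tell matched from mismatched motion are, in effect, reporting that the motion carries no speech-specific information, regardless of how human-like it looks.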

The evaluation results are a revolution and a revelation. Some synthetic conditions are rated as significantly more human-like than human motion capture. To the best of our knowledge, this has never before been shown on a high-fidelity avatar. On the other hand, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.

Also see the GENEA Challenge project page on GitHub for more information, including the dataset and code.
