SEED Research & Announcements Blogs Publications Careers Contact Us Research & Announcements Blogs Publications Careers Contact Us

GENEA Challenge 2023: Evaluating Gesture Generation Models in Monadic and Dyadic Settings

This research paper was accepted for publication at the 25th ACM International Conference on Multimodal Interaction.

Authors: Taras Kucherenko (SEED), Rajmund Nagy (KTH), Youngwoo Yoon (ETRI), Jieyeon Woo (ISIR), Teodor Nikolov (Umeå), Mihail Tsakov (Umeå), and Gustav Eje Henter (KTH).

GENEA Challenge 2023: A Large-Scale Evaluation of Gesture Generation Models in Monadic and Dyadic Settings

Download the full research paper. (2.7 MB PDF)

This paper reports on the third GENEA Challenge, which benchmarks data-driven automatic co-speech gesture generation.

In the GENEA Challenge, participating teams built speech-driven gesture-generation systems, which were then compared in a joint evaluation. This year’s challenge provided data on both sides of a dyadic interaction, allowing teams to generate full-body motion for an agent given its speech (text and audio) and the speech and motion of the interlocutor.
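
As a rough illustration of this setup, the sketch below shows one possible input/output interface for a dyadic gesture-generation system. The data structure, field names, and array shapes are assumptions made for illustration only and do not reflect the official challenge data format or any submitted system.

```python
# A minimal sketch of the dyadic setup described above. All names, shapes,
# and the placeholder "model" are illustrative assumptions, not the official
# challenge data format or API.
from dataclasses import dataclass

import numpy as np


@dataclass
class DyadicInput:
    agent_audio: np.ndarray          # agent speech audio features, e.g. shape (T_audio, n_mels)
    agent_text: list                 # time-aligned transcript of the agent's speech
    interlocutor_audio: np.ndarray   # interlocutor speech audio features
    interlocutor_text: list          # interlocutor transcript
    interlocutor_motion: np.ndarray  # interlocutor motion, e.g. shape (T_frames, n_joints, 3)


def generate_gestures(inputs: DyadicInput) -> np.ndarray:
    """Return full-body motion for the agent with shape (T_frames, n_joints, 3).

    A real submission would run a trained model here; this placeholder just
    outputs a static (zero) pose sequence matching the interlocutor's length.
    """
    t_frames, n_joints, dims = inputs.interlocutor_motion.shape
    return np.zeros((t_frames, n_joints, dims))
```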

We evaluated 12 submissions and two baselines together with held-out motion-capture data in several large-scale user studies.

The studies focused on three aspects:

  1. The human-likeness of the motion.
  2. The appropriateness of the motion for the agent’s own speech, while controlling for the human-likeness of the motion.
  3. The appropriateness of the motion for the behavior of the interlocutor in the interaction, using a setup that controls for both the human-likeness of the motion and the agent’s own speech.

We found a large span in human-likeness between challenge submissions, with a few systems rated close to human mocap. Appropriateness seems far from being solved, with most submissions performing in a narrow range slightly above chance, far behind natural motion. The effect of the interlocutor is even more subtle, with submitted systems at best performing barely above chance. Interestingly, a dyadic system being highly appropriate for agent speech does not necessarily imply high appropriateness for the interlocutor.
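
To make “slightly above chance” more concrete, the sketch below shows one way appropriateness could be scored in a matched-versus-mismatched preference test, with responses coded +1 (matched motion preferred), 0 (tie), and −1 (mismatched preferred) so that chance level corresponds to a score of 0. The pairing design, coding, and simulated numbers are assumptions for illustration; the paper describes the actual evaluation protocol and results.

```python
# Hedged sketch: scoring appropriateness from pairwise preference responses,
# assuming a matched-vs-mismatched design (an illustrative assumption; see the
# paper for the actual protocol). Chance-level performance corresponds to 0.
import numpy as np


def mean_appropriateness_score(responses: np.ndarray) -> float:
    """Mean of responses coded +1 (matched preferred), 0 (tie), -1 (mismatched preferred)."""
    return float(np.mean(responses))


# Example: simulated responses for a system whose gestures are only weakly
# coupled to the speech, yielding a score slightly above chance (0).
rng = np.random.default_rng(0)
responses = rng.choice([-1, 0, 1], size=500, p=[0.30, 0.32, 0.38])
print(f"score = {mean_appropriateness_score(responses):+.3f}")
```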

Additional material is available via the project website at svito-zar.github.io/GENEAchallenge2023/
