
IVA 2022: Evaluating Data-Driven Co-Speech Gestures of Embodied Conversational Agents

This is a research presentation from the International Conference on Intelligent Virtual Agents 2022.

Authors: Yuan He (KTH), André Pereira (KTH), Taras Kucherenko (SEED)

Download the full research paper. (2 MB PDF)

Do co-speech gestures really affect how people perceive animated characters? And if so, how do you measure their effectiveness?

Embodied Conversational Agents (ECAs) that make use of co-speech gestures can enhance human-machine interaction in many ways. In recent years, data-driven gesture generation for ECAs has attracted considerable research attention, and the underlying methods have steadily improved. ECA systems that generate rule-based gestures are typically evaluated through real-time interaction. When evaluating ECAs based on data-driven methods, however, participants are often only required to watch pre-recorded videos, which cannot adequately capture what a person perceives during an actual interaction.

To address this limitation, this paper explored the use of real-time interaction to assess data-driven gesturing ECAs. We provided a testbed framework and investigated whether gestures affect human perception of ECAs along four dimensions: human-likeness, animacy, perceived intelligence, and focused attention.

Our user study required participants to interact with two ECAs – one with and one without hand gestures. We collected subjective data from the participants’ self-report questionnaires and objective data from a gaze tracker. To our knowledge, this study represents the first attempt to evaluate data-driven gesturing ECAs through real-time interaction, and the first experiment to use gaze tracking to examine the effect of an ECA’s gestures.
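The study design described above is a within-subjects comparison: each participant experiences both the gesturing and the non-gesturing agent. As a rough, hypothetical illustration of how such paired data can be analysed (this is not the paper's actual analysis pipeline), the sketch below compares per-participant questionnaire ratings between the two conditions with a Wilcoxon signed-rank test, and summarises gaze-tracker output as the share of fixation time spent on a region of interest. All variable names and values are invented for illustration.

```python
# Hypothetical sketch of a within-subjects analysis for a two-condition
# ECA study (gestures vs. no gestures); not taken from the paper.

from scipy.stats import wilcoxon

# Hypothetical per-participant mean ratings (1-5 Likert scale) for one
# perception dimension, e.g. human-likeness, in each condition.
with_gestures    = [4.2, 3.8, 4.5, 3.9, 4.1, 3.6, 4.4, 4.0]
without_gestures = [3.5, 3.6, 4.0, 3.2, 3.8, 3.4, 3.9, 3.7]

# Paired, non-parametric test: does the gesture condition shift ratings?
statistic, p_value = wilcoxon(with_gestures, without_gestures)
print(f"Wilcoxon W = {statistic:.1f}, p = {p_value:.3f}")


def fixation_proportion(fixations, region):
    """Share of total fixation time spent on one region of interest."""
    total = sum(duration for _, duration in fixations)
    on_region = sum(duration for r, duration in fixations if r == region)
    return on_region / total if total else 0.0


# Hypothetical gaze-tracker log: (region of interest, duration in seconds).
fixations = [("face", 12.4), ("hands", 3.1), ("torso", 1.8), ("hands", 2.2)]
print(f"Hands fixation share: {fixation_proportion(fixations, 'hands'):.2%}")
```

Objective gaze measures computed this way can then be compared across the two conditions with the same kind of paired test as the questionnaire ratings.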

Evaluating Data-Driven Co-Speech Gestures of Embodied Conversational Agents through Real-Time Interaction

Related News

Evaluating Gesture Generation in a Large-Scale Open Challenge

SEED
May 9, 2024
This paper, published in Transactions on Graphics, reports on the second GENEA Challenge, a project to benchmark data-driven automatic co-speech gesture generation.

Filter-Adapted Spatio-Temporal Sampling for Real-Time Rendering

SEED
May 1, 2024
This paper, presented at I3D 2024, discusses how to tailor the frequencies of rendering noise to improve image denoising in real-time rendering.

From Photo to Expression: Generating Photorealistic Facial Rigs

SEED
Apr 25, 2024
This presentation from GDC 2024 discusses how machine learning can improve facial animation.