Research Article | Open Access

VarGes: Improving variation in co-speech 3D gesture generation via StyleCLIPS

School of Data Science and Media Intelligence, Communication University of China, Beijing 100024, China
Samsung Research America, San Jose, CA 95134, USA
Hainan International College, Communication University of China, Hainan 572423, China
Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
Hangzhou International Innovation Institute, Beihang University, Hangzhou 310051, China

* Ming Meng and Ke Mu contributed equally to this work.


Abstract

Generating expressive and diverse human gestures from audio is crucial in fields like human–computer interaction, virtual reality, and animation. While existing methods have achieved remarkable performance, they often exhibit limitations due to constrained dataset diversity and the restricted amount of information derived from audio inputs. To address these challenges, we present VarGes, a novel variation-driven framework designed to enhance co-speech gesture generation by integrating visual stylistic cues while maintaining naturalness. Our approach begins with a variation-enhanced feature extraction module, which seamlessly incorporates style-reference video data into a 3D human pose estimation network to extract StyleCLIPS, thereby enriching the input with stylistic information. Subsequently, we employ a variation-compensation style encoder, a transformer-based style encoder equipped with an additive-attention pooling layer, to robustly encode diverse StyleCLIPS representations and effectively manage stylistic variations. Finally, a variation-driven gesture predictor module fuses MFCC audio features with StyleCLIPS encodings via cross-attention and injects the fused representation into a cross-conditional autoregressive model, modulating 3D gesture generation according to both the audio input and stylistic cues. The efficacy of our approach is validated on benchmark datasets, on which it outperforms existing methods in terms of gesture diversity and naturalness. Our code and video results are publicly available at https://github.com/mookerr/VarGES/.
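To make the fusion step described above concrete, the following PyTorch snippet sketches how MFCC audio frames might attend to encoded StyleCLIPS tokens via cross-attention before conditioning an autoregressive gesture predictor. This is not the authors' released code; the module name StyleCrossAttentionFusion, the dimension d_model, and the toy tensor shapes are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of the cross-attention fusion
# described in the abstract: projected MFCC audio features act as queries over
# StyleCLIPS encodings (keys/values), producing a style-modulated conditioning
# signal. Dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn

class StyleCrossAttentionFusion(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, audio_feats: torch.Tensor, style_code: torch.Tensor) -> torch.Tensor:
        # audio_feats: (B, T, d_model) projected MFCC frames (queries)
        # style_code:  (B, S, d_model) encoded StyleCLIPS tokens (keys/values)
        fused, _ = self.cross_attn(query=audio_feats, key=style_code, value=style_code)
        return self.norm(audio_feats + fused)  # residual connection + layer norm

# Toy usage: 2 clips, 120 audio frames, 8 style tokens
fusion = StyleCrossAttentionFusion()
audio = torch.randn(2, 120, 256)   # stand-in for projected MFCC features
style = torch.randn(2, 8, 256)     # stand-in for encoded StyleCLIPS
cond = fusion(audio, style)        # (2, 120, 256) conditioning sequence
print(cond.shape)
```

In the full system, a conditioning sequence of this kind would be consumed by the cross-conditional autoregressive gesture model; see the linked repository for the actual implementation.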

Computational Visual Media
Pages 1263-1279


Cite this article:
Meng M, Mu K, Zhu Y, et al. VarGes: Improving variation in co-speech 3D gesture generation via StyleCLIPS. Computational Visual Media, 2025, 11(6): 1263-1279. https://doi.org/10.26599/CVM.2025.9450477

Views: 754 | Downloads: 14 | Citations: Crossref 0, Web of Science 0, Scopus 0, CSCD 0

Received: 23 September 2024
Accepted: 12 February 2025
Published: 12 December 2025
© The Author(s) 2025.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

To submit a manuscript, please go to https://jcvm.org.