Motion2Motion

Cross-topology Motion Transfer with Sparse Correspondence

SIGGRAPH Asia 2025
1Tsinghua University
2IDEA Research
3HKUST
4HKU
5ByteDance
6Shanghai AI Lab

Video demo of our cross-topology motion retargeting capabilities

Abstract

This work studies the challenge of transferring animations between characters whose skeletal topologies differ substantially. While retargeting techniques have advanced considerably over the past decades, transferring motion across diverse topologies remains under-explored. The primary obstacle lies in the inherent topological inconsistency between source and target skeletons, which restricts the establishment of straightforward one-to-one bone correspondences. In addition, the current lack of large-scale paired motion datasets spanning different topological structures severely constrains the development of data-driven approaches.

To address these limitations, we introduce Motion2Motion, a novel, training-free framework. Simple yet effective, Motion2Motion works with only one or a few example motions on the target skeleton and requires only a sparse set of bone correspondences between the source and target skeletons. Through comprehensive qualitative and quantitative evaluations, we demonstrate that Motion2Motion achieves efficient and reliable performance in both similar-skeleton and cross-species skeleton transfer scenarios.

Key Features & Results

Cross-Topology Motion Retargeting

We demonstrate both in-species and cross-species motion retargeting examples:

Anaconda attack motion → King Cobra (in-species) → T-Rex (cross-species)

Biped to Quadruped Animation Retargeting

High-quality motion transfer between different locomotion types:

Flamingo walking motion → Monkey: Coherent hind legs, natural tail & arm motion

Matching with Sparse Correspondence

Our system works with minimal bone binding - as few as 6 bound bones:

Dog animation from bear motion with only 6 bound bones (visualized in purple)
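To illustrate what such a sparse binding looks like as input, the snippet below lists six hypothetical source-to-target bone name pairs; the names are placeholders for illustration only, not the rig used in the example above.

import numpy as np  # not needed here; the binding is just a list of name pairs

# Hypothetical six-bone correspondence between a source (bear) and target (dog)
# rig; all bone names are illustrative placeholders.
sparse_correspondence = [
    ("bear_spine",       "dog_spine"),
    ("bear_head",        "dog_head"),
    ("bear_front_l_paw", "dog_front_l_paw"),
    ("bear_front_r_paw", "dog_front_r_paw"),
    ("bear_hind_l_paw",  "dog_hind_l_paw"),
    ("bear_hind_r_paw",  "dog_hind_r_paw"),
]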

Motion Phase Visualization

Phase coherence analysis across different species:

T-Rex → Human & Fox: Hind leg binding maintains phase coherence across different species

Sparse Source Key Frames

Retargeting with temporally sparse source motion:

Purple frames: Provided key frames | Blue frames: Ground truth
Dragon motion is successfully retargeted from sparse bat motion key frames

Comparison with Baselines

Our method significantly outperforms existing approaches:

Dragon → Bat retargeting: Our result shows stable alignment with source motion

System Overview

Motion2Motion works in a motion-matching fashion with sparse bone correspondence:

Motion2Motion System Overview

System overview of Motion2Motion. (A) The source motion sequence. (B) The source sequence is divided into overlapping motion patches. (C) Each source patch is projected to the target skeleton space via sparse mapping and noise initialization, serving as the query for retrieval. (D) For each source patch, we retrieve target patches from a pre-built motion patch database based on the sparse correspondences. (E) The matched target patches are averaged for blending. (F) The retargeted motion is reconstructed from the blended target patches. Steps (C)-(F) are executed L times.
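The overview above maps to a simple retrieve-and-blend loop. Below is a minimal NumPy sketch of that loop under several assumptions (fixed patch size, an L2 distance over all joints, top-k averaging, and a shared per-joint feature dimension for both skeletons); it illustrates the idea, not the authors' implementation.

# Hypothetical sketch of the patch-matching loop described in the overview.
# Array layouts, patch size, distance metric, and blending are assumptions.
import numpy as np

def retarget(source_motion, target_db, correspondence,
             patch_len=16, stride=8, n_iters=5, k=4):
    """source_motion: (T, J_src, C) per-joint features of the source clip.
    target_db: (N, patch_len, J_tgt, C) patches cut from the target examples.
    correspondence: sparse list of (source_joint_index, target_joint_index)."""
    src_idx = [s for s, _ in correspondence]
    tgt_idx = [t for _, t in correspondence]
    T = source_motion.shape[0]
    _, _, J_tgt, C = target_db.shape

    # (B) Slice the source sequence into overlapping patches.
    starts = list(range(0, T - patch_len, stride)) + [T - patch_len]
    src_patches = [source_motion[s:s + patch_len] for s in starts]

    # (C) Noise initialization of the target motion estimate.
    result = np.random.randn(T, J_tgt, C) * 0.01

    for _ in range(n_iters):  # steps (C)-(F), repeated L times
        blended = np.zeros_like(result)
        counts = np.zeros((T, 1, 1))
        for patch, s in zip(src_patches, starts):
            # (C) Build the query in target-skeleton space: mapped joints
            # come from the source patch, the rest keep the current estimate.
            query = result[s:s + patch_len].copy()
            query[:, tgt_idx] = patch[:, src_idx]
            # (D) Retrieve the k nearest patches from the target database.
            diff = target_db - query[None]
            dists = np.sqrt((diff ** 2).sum(axis=(1, 2, 3)))
            matched = target_db[np.argsort(dists)[:k]]
            # (E) Blend the matched target patches by averaging.
            blended[s:s + patch_len] += matched.mean(axis=0)
            counts[s:s + patch_len] += 1
        # (F) Reconstruct the sequence by normalizing overlapping contributions.
        result = blended / np.maximum(counts, 1)
    return result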

Applications

SMPL-Based Motion to Any Character

Bridging the gap between simple motion capture and complex game characters:

SMPL motion → Complex game characters with dynamic elements like clothing and hair

Blender Add-on

Professional workflow integration for real-time motion retargeting:

See our Blender add-on in action: real-time motion retargeting with intuitive interface
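For readers curious what this kind of integration involves at the scripting level, below is a minimal, hypothetical snippet for writing retargeted per-bone rotations onto a Blender armature with the standard bpy API; the add-on's actual interface and data flow are not shown here.

# Hypothetical snippet: bake retargeted per-bone quaternions onto an armature.
# The armature name and rotation data are placeholders, not the add-on's API.
import bpy

def apply_retargeted_motion(armature_name, bone_rotations, frame_start=1):
    """bone_rotations: {bone_name: [(w, x, y, z) quaternion per frame]}."""
    obj = bpy.data.objects[armature_name]
    for bone_name, quats in bone_rotations.items():
        pbone = obj.pose.bones[bone_name]
        pbone.rotation_mode = 'QUATERNION'
        for i, q in enumerate(quats):
            pbone.rotation_quaternion = q
            # Insert a keyframe for this bone's rotation at the given frame.
            pbone.keyframe_insert(data_path="rotation_quaternion",
                                  frame=frame_start + i)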

Citation

If you find our work useful, please consider citing:

@article{chen2025motion2motion,
  title     = {Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence},
  author    = {Chen, Ling-Hao and Zhang, Yuhong and Yin, Zixin and Dou, Zhiyang and Chen, Xin and Wang, Jingbo and Komura, Taku and Zhang, Lei},
  journal   = {ACM Transactions on Graphics (TOG)},
  volume    = {44},
  number    = {1},
  year      = {2025},
  publisher = {ACM},
  doi       = {10.1145/xxxxxxx}
}


Acknowledgments

Work done during Ling-Hao Chen's internship at IDEA Research. The author team would like to acknowledge all program committee members for their extensive efforts and constructive suggestions. In addition, Weiyu Li (HKUST), Shunlin Lu (CUHK-SZ), and Bohong Chen (ZJU) held many helpful discussions with the author team throughout the process, and the authors convey their sincere appreciation to them as well.