About Me


I'm Ling-Hao CHEN (陈凌灏 in Chinese, Evan in English)! I am currently a Ph.D. student at Tsinghua University, supervised by Prof. Heung-Yeung Shum (沈向洋). I obtained my bachelor's degree from the School of Computer Science and Technology at Xidian University in 2022. My research interests lie in Character Animation, Digital Generation, Embodied Intelligence, and Machine Learning. I am also a research intern at the International Digital Economy Academy (IDEA), working closely with Prof. Lei Zhang (张磊).

Address: Room 3908, Building 1, Chang Fu Jin Mao Tower, 5 Shihua Road, Futian District, Shenzhen, P.R. China.

Outside of research: I am a part-time marathon runner, and I have been making all my training logs public here since 2025.

Selected Publications

The full list of publications is available on my Google Scholar.

Highlight Preprints

MotionLLM

NEW MotionLLM: Understanding Human Behaviors from Human Motions and Videos

Ling-Hao CHEN*, Shunlin Lu*, Ailing Zeng, Hao Zhang, Benyou Wang, Ruimao Zhang, Lei Zhang

arXiv Preprint 2024

* Equal contribution.

TLDR: This is the FIRST LLM-based (billion-parameter) motion understanding model.

MotionCLR

NEW MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms

Ling-Hao CHEN, Shunlin Lu, Wenxun Dai, Zhiyang Dou, Xuan Ju, Jingbo Wang, Taku Komura, Lei Zhang

arXiv Preprint 2024

TLDR: Interactive human motion editing via understanding attention mechanisms.

Selected Publications

NEW Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence

Ling-Hao CHEN, Yuhong Zhang, Zixin Yin, Zhiyang Dou, Xin Chen, Jingbo Wang, Taku Komura, Lei Zhang

ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (ACM SIGGRAPH ASIA) 2025

TLDR: The FIRST cross-topology motion retargeting solution (WITHOUT deep models; it runs on a CPU!).

ConsistEdit

NEW ConsistEdit: Highly Consistent and Precise Training-free Visual Editing

Zixin Yin, Ling-Hao Chen, Lionel M. Ni, Xili Dai

ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (ACM SIGGRAPH ASIA) 2025, To Appear

TLDR: ConsistEdit is a training-free attention control method for MM-DiT that enables precise, structure-aware image and video editing.

MotionLCM

HOT MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model

Wenxun Dai, Ling-Hao CHEN#, Jingbo Wang, Jinpeng Liu, Bo Dai, Yansong Tang

European Conference on Computer Vision (ECCV) 2024

# Project lead.

TLDR: This is the FIRST work in the community to generate controllable motions in real time.

HumanTOMATO

HumanTOMATO: Text-aligned Whole-body Motion Generation

Shunlin Lu*, Ling-Hao CHEN*, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, Heung-Yeung Shum

International Conference on Machine Learning (ICML) 2024

* Equal contribution.

TLDR: We present the FIRST attempt to generate whole-body motions from text descriptions.

HumanMAC

HumanMAC: Masked Motion Completion for Human Motion Prediction

Ling-Hao CHEN*#, Jiawei Zhang*, Yewen Li, Yiren Pang, Xiaobo Xia#, Tongliang Liu

IEEE/CVF International Conference on Computer Vision (ICCV) 2023

* Equal contribution. # Project lead.

TLDR: This is the FIRST attempt to predict human motion via training-free diffusion models.

Education

Tsinghua University

Shenzhen, P.R. China

Ph.D. Student, from Sep. 2022

Majored in Computer Science and Technology, supervised by Prof. Heung-Yeung Shum (沈向洋).

Xidian University

Xi'an, P.R. China

B. Eng., from Sep. 2018 to June 2022

Majored in Software Engineering. Rank: 1/398 (top 1%); GPA: 3.9/4.0.

News

Aug. 2025 NEW Two papers (two submissions) received conditional acceptance at ACM SIGGRAPH ASIA 2025. See you in Hong Kong, China🇨🇳.
Oct. 2024 We release MotionCLR for interactive motion editing via understanding attention.
July 2024 ERASE is accepted by CIKM 2024.
July 2024 MotionLCM is accepted by ECCV 2024.
May 2024 We release MotionLLM, the first LLM-based (billion-parameter) motion understanding model.
May 2024 HOT HumanTOMATO is accepted by ICML 2024. 🤩 See you in Vienna🇦🇹!
May 2024 We release MotionLCM, the first real-time controllable motion generation method.
Jan. 2024 IDEAL is accepted by ICLR 2024.
Nov. 2023 I have been selected as a Top Reviewer for NeurIPS 2023.
July 2023 HumanMAC is accepted by ICCV 2023.
Dec. 2021 I was awarded the Principal Scholarship (given to only FIVE students at Xidian University).

Highlight Open-source Project

UniMoCap

HOT UniMoCap: Unifier for Text-Motion Datasets

TLDR: UniMoCap is a community implementation that unifies text-motion datasets (HumanML3D, KIT-ML, and BABEL) and supports both the SMPL and SMPL-X formats.

Awards

Nov. 2023

NeurIPS 2023 Top Reviewer

June 2022

Outstanding Graduate Scholarship

(the ONLY student in the Software Engineering department of Xidian University)

Dec. 2021

Principal Scholarship & First-class CASC Scholarship

(only FIVE students at Xidian University received the Principal Scholarship; I was the ONLY student at Xidian University to receive the CASC Scholarship)

2018-2021

National Scholarship (top 1%)

(Awarded for three years during my undergraduate studies!)

Academic Service

Conference Reviewer or PC Member

ACM SIGGRAPH (2025), ACM SIGKDD (2024), ICLR (2024, 2025), NeurIPS (2023, 2024, 2025), ICML (2024, 2025), CVPR (2024, 2025), ICCV (2023, 2025), ECCV (2024), ICDM (2023), WACV (2026), ACL ARR (2024), AAAI (2024, 2025, 2026), UAI (2023), IJCAI (2024, 2025), ACM MM (2024), AISTATS (2024, 2025), ACML (2023).

Journal Reviewer

TVCG, TPAMI, TASE, TNNLS, ACM TIST, TMM.

Collaborators and Friends

Jingbo Wang (Shanghai AI Lab), Zhiyang Dou (MIT), Zixin Yin (HKUST), Xuan Ju (CUHK), Wenxun Dai (THU), Shunlin Lu (CUHK-SZ), Xiaobo Xia (NUS), Liangcai Su (THU)