Ling-Hao CHEN     

Hi there, I'm Ling-Hao CHEN (陈凌灏 in Chinese, Evan in English)! I am currently a Ph.D. student at Tsinghua University, supervised by Prof. Heung-Yeung Shum (沈向洋 教授). I obtained my bachelor's degree from the School of Computer Science and Technology at Xidian University in 2022. My research interests lie in Character Animation, Digital Generation, Embodied Intelligence, and Machine Learning. I am also a research intern at the International Digital Economy Academy (IDEA), working closely with Prof. Lei Zhang (张磊 教授).

Address: Room 3908, Building 1, Chang Fu Jin Mao Tower, 5 Shihua Road, Futian District, Shenzhen, P.R. China.

Motto: Seek true knowledge, practice real skills, and make real things! (求真学问,练真本领,做真东西!)

Email  /  GitHub  /  ZhiHu (知乎)  /  Twitter  /  Google Scholar

profile photo
Education

Shenzhen International Graduate School, Tsinghua University, Shenzhen, P.R. China
Ph.D. Student, since Sep. 2022
Majored in Computer Science and Technology, supervised by Prof. Heung-Yeung Shum (沈向洋 教授).


School of Computer Science and Technology, Xidian University, Xi'an, P.R. China
B. Eng., from Sep. 2018 to June 2022
Majored in Software Engineering. Rank: 1/398 (less than 1%); GPA: 3.9/4.0.

News
  • Oct. 2024. We release MotionCLR for interactive motion editing via understanding attention.
  • July 2024. ERASE is accepted by CIKM 2024.
  • July 2024. MotionLCM is accepted by ECCV 2024.
  • May 2024. We release MotionLLM, the first LLM-based (billion level) motion understanding model.
  • May 2024. HumanTOMATO is accepted by ICML 2024. 🤩 See you in Vienna🇦🇹!
  • May 2024. We release MotionLCM, the first real-time controllable motion generation method.
  • Jan. 2024. IDEAL is accepted by ICLR 2024.
  • Dec. 2023. Check out our recent work ERASE.
  • Dec. 2023. Invited talk at MiHoYo Lumi Team (about HumanMAC, HumanTOMATO, and UniMoCap).
  • Dec. 2023. Invited talk at CUHK SEEM (about HumanMAC, HumanTOMATO, and UniMoCap).
  • Nov. 2023. I have been selected as a Top Reviewer for NeurIPS 2023.
  • Oct. 2023. Check out our recent work HumanTOMATO and IDEAL.
  • Oct. 2023. Check out my recent project UniMoCap.
  • Aug. 2023. Invited talk (in Chinese) at AI Time [Video link] and TechBeat [Video link].
  • July 2023. HumanMAC is accepted by ICCV 2023.
  • Feb. 2023. Check out our recent work HumanMAC.
  • Jan. 2023. AnomMAN is accepted by Information Sciences (JCR-Q1, IF=8.23).
  • June 2022. I successfully defended my thesis "Audio-driven Talking Head Reenactment Algorithm", which was selected as an OUTSTANDING thesis of XDU.
  • Dec. 2021. I was awarded the Principal Scholarship (only FIVE students in Xidian University).
  • Dec. 2021. I was awarded the First-class CASC Scholarship of China Aerospace Science and Technology Corporation (the ONLY student in Xidian University).
  • Sep. 2021. I was awarded the National Scholarship 2020-2021.
Highlighted Preprints

MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms.
Ling-Hao CHEN, Wenxun Dai, Xuan Ju, Shunlin Lu, Lei Zhang.
arXiv Preprint 2024.
[Project Page] | [Preprint] | [Code] | [Demo] | [Blogpost] | [Video]
TLDR: Interactive editing of human motions via understanding attention mechanisms.


MotionLLM: Understanding Human Behaviors from Human Motions and Videos.
Ling-Hao CHEN*, Shunlin Lu*, Ailing Zeng, Hao Zhang, Benyou Wang, Ruimao Zhang, Lei Zhang.
arXiv Preprint 2024.
[Project Page] | [Preprint] | [Code] | [Demo] | [Blogpost]
TLDR: This is the FIRST LLM-based (billion level) motion understanding model.

Selected Publications

          The full list of publications is available on my Google Scholar.

MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model.
Wenxun Dai, Ling-Hao CHEN#, Jingbo Wang, Jinpeng Liu, Bo Dai, Yansong Tang.
European Conference on Computer Vision (ECCV) 2024.
[Project Page] | [Preprint] | [Code] | [Video] | [Demo]
# Project lead.
TLDR: This is the FIRST work in the community to generate controllable motions in real time.


HumanTOMATO: Text-aligned Whole-body Motion Generation.
Shunlin Lu*, Ling-Hao CHEN*, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, Heung-Yeung Shum.
International Conference on Machine Learning (ICML) 2024.
[Project Page] | [Preprint] | [Code] | [Video] | [Slides] | [Poster]
TLDR: We present the FIRST attempt to generate whole-body motions from text descriptions.


HumanMAC: Masked Motion Completion for Human Motion Prediction.
Ling-Hao CHEN*#, Jiawei Zhang*, Yewen Li, Yiren Pang, Xiaobo Xia#, and Tongliang Liu.
IEEE/CVF International Conference on Computer Vision (ICCV) 2023.
[Project Page] | [Preprint] | [Code] | [Video] | [Slides] | [Poster] | [ZhiHu (知乎)]
# Project lead.
TLDR: This is the FIRST attempt to predict human motion via training-free diffusion models.

Highlighted Open-source Project

UniMoCap: Unifier for Text-Motion Datasets. [Code]
TLDR: UniMoCap is a community implementation to unify text-motion datasets (HumanML3D, KIT-ML, and BABEL), supporting both SMPL and SMPL-X formats.

Academic Service
    Conference Reviewer or PC Member: SIGKDD (2024), ICLR (2024, 2025), NeurIPS (2023, 2024), ICML (2024), CVPR (2024), ICCV (2023), ECCV (2024), ICDM (2023), ACL ARR (2024), AAAI (2024, 2025), UAI (2023), IJCAI (2024), ACM MM (2024), AISTATS (2024, 2025), ACML (2023).
    Journal Reviewer: TNNLS, ACM TIST, Information Fusion, Neural Networks.
Awards

  • Nov. 2023, NeurIPS 2023 Top Reviewer.
  • June 2022, Outstanding Graduate Scholarship (the ONLY student in the S.E. department of XDU).
  • Dec. 2021, Principal Scholarship (only FIVE students in Xidian University) & First-class CASC Scholarship (the ONLY student in Xidian University).
  • 2020 ~ 2021, 2019 ~ 2020, 2018 ~ 2019, National Scholarship (less than 1%), awarded in all three years of my undergraduate study!

Resource
Collaborators and Friends

    Here are some peers and collaborators who work or have worked very closely with me.

  • Jingbo Wang, Research Scientist at Shanghai AI Lab, Ph.D. from CUHK MMLab.
  • Xuan Ju, Ph.D. student from CUHK, working on generative models.
  • Wenxun Dai, Master's student at THU, working on human motion.
  • Shunlin Lu, Ph.D. student at CUHK-SZ, focusing on Human Motion Generation.
  • Haotian Zheng, Ph.D. student at HKU, focusing on Machine Learning.
  • Jiale Liu, Ph.D. student at PSU, focusing on LLMs and Coreset Selection.
  • Xiaobo Xia, Ph.D. candidate at USYD, focusing on Machine Learning, Google Ph.D. Fellowship 2022. (mentored me)
  • Wenhao Yang, Ph.D. student at LAMDA@NJU, focusing on Machine Learning (especially Online Learning) and Optimization.
  • Liangcai Su, Master's candidate at THU, focusing on Information Retrieval and Data Mining. (mentored me)
  • Yiren Pang, Master's student at TANKLAB@TJU, focusing on Operating Systems, FPGA, and Computer Networks.
  • Xieyang Sun (Sund), Ph.D. student at XJTU, focusing on Air-gapped Attacks, Covert Communication, and Wireless Sensing.
  • Boyuan Sun (BB Chan), Ph.D. student at NKU, focusing on Computer Vision.


(Last update: Oct. 2024)