About
Experiences
Education
News
- We are hosting two CVPR 2025 Video Understanding Challenges at LOVE: Track 1A and Track 1B.
- We release Video-MMLU, a Massive Multi-Discipline Lecture Understanding Benchmark.
- One paper accepted to the CVPR 2025 Workshop on Efficient Large Vision Models.
- Our paper AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark is accepted by ICLR 2025.
- Our paper Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis is accepted by ICLR 2025.
Selected Publications and Manuscripts
* Equal contribution. † Project lead. ‡ Corresponding author.
Also see Google Scholar.

Video-MMLU: A Massive Multi-Discipline Lecture Understanding Benchmark
AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark
ICLR, 2025
AuroraCap is a multimodal LLM designed for detailed image and video captioning. We also release VDC, the first benchmark for detailed video captioning.

MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
CVPR, 2024
MovieChat achieves state-of-the-art performance in extra-long video understanding (more than 10K frames) by introducing a memory mechanism.
Teaching Assistant
Teaching Assistant (TA), with Prof. Gaoang Wang
Selected Honors & Awards
- National Scholarship, 2024 (Zhejiang University)
- National Scholarship, 2021 (Dalian University of Technology)