About
Education
Experience
News
- We are hosting the CVPR 2025 Video Understanding Challenge @ LOVEU.
- Our paper AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark is accepted by ICLR 2025.
- Our paper Meissonic: Revitalizing masked generative transformers for efficient high-resolution text-to-image synthesis is accepted by ICLR 2025.
Selected Publications and Manuscripts
* Equal contribution. † Project lead. ‡ Corresponding author.
Also see Google Scholar.
AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark
ICLR, 2025
AuroraCap is a multimodal LLM designed for image and video detailed captioning. We also release VDC, the first benchmark for detailed video captioning.

Fantasy: Transformer Meets Transformer in Text-to-Image Generation
Preprint, 2024
Fantasy is an efficient text-to-image generation model marrying decoder-only large language models (LLMs) with transformer-based masked image modeling (MIM).

MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
CVPR, 2024
MovieChat achieves state-of-the-art performance on extra-long video understanding (more than 10K frames) by introducing a memory mechanism.
Teaching Assistant
Teaching Assistant (TA), with Prof. Gaoang Wang
Selected Honors & Awards
- National Scholarship, 2024 (Zhejiang University)
- National Scholarship, 2021 (Dalian University of Technology)