Junjie Qiu
Research Assistant, HKUST (GZ)
Research Intern, 2012 Laboratories, Huawei

Junjie Qiu is an incoming PhD student in Data Science and Analytics at the Hong Kong University of Science and Technology (Guangzhou). He is currently a Research Assistant in the Intelligent Transportation Thrust, working under the supervision of Prof. Yuxuan Liang. Previously, he was a Research Intern at Huawei’s 2012 Laboratories.


Education
  • Hong Kong University of Science and Technology (GZ)
    Ph.D. in Data Science and Analytics
    Upcoming
  • Southern University of Science and Technology
    B.S. in Data Science and Big Data Technology
    Sep. 2021 - Jul. 2025
Work Experience
  • 2012 Laboratories, Huawei
    Research Intern, Spatio-Temporal Foundation Model
    Mar. 2025 - Present
News
  • One submission was accepted by NeurIPS 2025 as a spotlight paper.
    Sep 18, 2025
Selected Publications
Learning to Factorize Spatio-Temporal Foundation Models

Siru Zhong, Junjie Qiu, Yangyu Wu, Xingchen Zou, Bin Yang, Chenjuan Guo, Hao Xu, Yuxuan Liang# (# corresponding author)

Neural Information Processing Systems (NeurIPS) 2025 Spotlight

Spatio-Temporal (ST) Foundation Models (STFMs) promise cross-dataset generalization, yet joint ST pretraining is computationally costly and struggles with domain-specific spatial correlations. To address this, we propose FactoST, a factorized STFM that decouples universal temporal pretraining from ST adaptation. The first stage trains a space-agnostic backbone via multi-task learning to capture multi-frequency, cross-domain temporal patterns at low cost. The second stage attaches a lightweight adapter that rapidly adapts the backbone to specific ST domains via metadata fusion, interaction pruning, domain alignment, and memory replay. Extensive forecasting experiments show that in few-shot settings, FactoST reduces MAE by up to 46.4% versus UniST, uses 46.2% fewer parameters, achieves 68% faster inference than OpenCity, and remains competitive with expert models. This factorized view offers a practical, scalable path toward truly universal STFMs.
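To make the two-stage recipe concrete, here is a minimal PyTorch sketch of the factorized design: a space-agnostic temporal backbone is pretrained once, then frozen while a lightweight adapter fuses per-node metadata and learns node interactions for a specific domain. All module names, dimensions, and the fusion scheme are illustrative assumptions, not the paper's actual implementation; interaction pruning, domain alignment, and memory replay are omitted for brevity.

```python
# Minimal sketch of a factorized spatio-temporal model (illustrative only).
import torch
import torch.nn as nn

class TemporalBackbone(nn.Module):
    """Stage 1: space-agnostic encoder. Each node's series is encoded
    independently, so pretraining needs no spatial graph."""
    def __init__(self, d_model=64, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                  # x: (batch*nodes, time, 1)
        return self.encoder(self.proj(x))  # (batch*nodes, time, d_model)

class SpatialAdapter(nn.Module):
    """Stage 2: lightweight adapter. Injects domain metadata and mixes
    information across nodes while the backbone stays frozen."""
    def __init__(self, n_nodes, d_model=64, d_meta=8):
        super().__init__()
        self.meta = nn.Linear(d_meta, d_model)  # metadata fusion (assumed additive)
        self.mix = nn.Linear(n_nodes, n_nodes)  # learned node interactions
        self.head = nn.Linear(d_model, 1)

    def forward(self, h, meta):            # h: (batch, nodes, time, d_model)
        h = h + self.meta(meta).unsqueeze(2)             # broadcast over time
        h = self.mix(h.transpose(1, 3)).transpose(1, 3)  # mix across nodes
        return self.head(h)                # (batch, nodes, time, 1)

B, N, T = 2, 10, 24
backbone, adapter = TemporalBackbone(), SpatialAdapter(n_nodes=N)
for p in backbone.parameters():            # freeze the pretrained backbone;
    p.requires_grad = False                # only the adapter is tuned per domain
x = torch.randn(B, N, T, 1)
meta = torch.randn(B, N, 8)                # per-node metadata (e.g., location)
h = backbone(x.reshape(B * N, T, 1)).reshape(B, N, T, -1)
print(adapter(h, meta).shape)              # torch.Size([2, 10, 24, 1])
```

Freezing the backbone and training only the small adapter per domain is, in this sketch, what keeps adaptation cheap, which is consistent with the parameter and inference savings reported above.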

Honors & Awards
  • Top Ten Graduate Award, Shude Residential College, SUSTech
    Jun. 2025
  • First Prize (10/300+) & Group Competitions Prize, ASC24
    Apr. 2024
Service
  • Student Member, IEEE
    Nov. 2024 - Present