Junjie Qiu
Research Assistant, HKUST (GZ)

Junjie Qiu is currently a Research Assistant in the Intelligent Transportation Thrust, working under the supervision of Prof. Yuxuan Liang.


Education
  • Southern University of Science and Technology
    B.S. in Data Science and Big Data Technology
    Sep. 2021 - Jul. 2025
Work Experience
  • 2012 Laboratories, Huawei
    Research Intern, Spatio-Temporal Foundation Model
    Mar. 2025 - Dec. 2025
News
  • Sep 18, 2025: One submission was accepted by NeurIPS 2025 as a spotlight paper.
Selected Publications
Learning to Factorize Spatio-Temporal Foundation Models

Siru Zhong, Junjie Qiu, Yangyu Wu, Xingchen Zou, Bin Yang, Chenjuan Guo, Hao Xu, Yuxuan Liang# (# corresponding author)

Neural Information Processing Systems (NeurIPS) 2025 Spotlight

Spatio-Temporal (ST) Foundation Models (STFMs) promise cross-dataset generalization, yet joint ST pretraining is computationally costly and struggles with domain-specific spatial correlations. To address this, we propose FactoST, a factorized STFM that decouples universal temporal pretraining from ST adaptation. The first stage trains a space-agnostic backbone via multi-task learning to capture multi-frequency, cross-domain temporal patterns at low cost. The second stage attaches a lightweight adapter that rapidly adapts the backbone to specific ST domains via metadata fusion, interaction pruning, domain alignment, and memory replay. Extensive forecasting experiments show that in few-shot settings, FactoST reduces MAE by up to 46.4% versus UniST, uses 46.2% fewer parameters, achieves 68% faster inference than OpenCity, and remains competitive with expert models. This factorized view offers a practical, scalable path toward truly universal STFMs.

Honors & Awards
  • Top Ten Graduate Award, Shude Residential College, SUSTech
    Jun. 2025
  • Outstanding Graduate of Class 2025, SUSTech
    Jun. 2025
  • Outstanding Undergraduate Thesis Award, SUSTech
    Jun. 2025
  • First Prize (10/300+) & Group Competitions Prize, ASC24
    Apr. 2024
Service
  • Reviewer, ICASSP 2026
    Dec. 2025
  • Reviewer, AI4TS Workshop, AAAI 2026
    Nov. 2025