Openings
We have multiple openings for research assistant professors (RAP), postdocs, and research assistants in our group in the following areas [Opening]:
AI agents for wireless networks
Multi-modality large models for wireless networks
Device-edge-cloud collaboration for large language models
Security and privacy issues of large models for wireless networks
Experience in both wireless communication and deep learning is required. Please send your CV to eejzhang@ust.hk. Applicants for a postdoc position should also provide 3 representative publications. Applicants for the RAP position must have at least one year of postdoc experience.
What's new
(Student Awards) Congratulations to Yuanfang, Yufei, Yuchang, Xinyu, Ruiqi, and Jiawei!
(Aug 2024) Yuanfang received the Hong Kong PhD Fellowship.
(July 2024) Yuanfang and Yufei received the HKUST RedBird PhD Award.
(July 2024) Yuchang received the HKUST RedBird Academic Excellence Award for Continuing PhD Students.
(May 2024) Yuchang and Xinyu passed their thesis exams. Congratulations, Dr. Sun and Dr. Bian!
(May 2024) Ruiqi received the Hong Kong PhD Fellowship.
(March 2024) Jiawei received the School of Engineering (SENG) PhD Research Excellence Award 2023-24! [SENG News]
(HarmoniCa) “HarmoniCa: Harmonizing training and inference for better feature cache in diffusion transformer acceleration,” preprint. [Paper]
(MEGA) “MEGA: Memory-efficient 4D Gaussian splatting for dynamic scenes,” preprint. [Paper]
(GI-GS) “GI-GS: Global illumination decomposition on Gaussian splatting for inverse rendering,” preprint. [Paper] [Project Page]
(EVA-Gaussian) “EVA-Gaussian: 3D Gaussian-based real-time human novel view synthesis under diverse camera settings,” preprint. [Paper] [Project Page]
(ReconX) “ReconX: Reconstruct any scene from sparse views with video diffusion model,” preprint. [Paper] [Code] [Project Page]
(WirelessAgent) “WirelessAgent: Large language model agents for intelligent wireless networks,” submitted. [Paper]
(WirelessLLM) “WirelessLLM: Empowering large language models towards wireless intelligence,” to appear. [Paper]
(NeurIPS 24) Two papers accepted to NeurIPS 2024.
“Vista: A generalizable driving world model with high fidelity and versatile controllability” [Paper] [Code] [Demo]
“Kaleidoscope: Learnable masks for heterogeneous multi-agent reinforcement learning” [Paper] [Code]
(ECCV 24) Two papers accepted to ECCV 2024.
(ICML 24) “Individual contributions as intrinsic exploration scaffolds for multi-agent reinforcement learning”, accepted by ICML 2024. [Paper] [Code]
(CVPR 24) Three papers accepted to CVPR 2024.
“Generalized predictive model for autonomous driving” (Highlight, Top 2.8%) [Paper]
“Boosting neural representations for videos with a conditional decoder” (Highlight, Top 2.8%) [Paper]
“Task-aware encoder control for deep video compression” [Paper]
(Nature Communications) “Selective knowledge sharing for privacy-preserving federated distillation without a good teacher,” accepted by Nature Communications. [Paper]
Our Group WeChat Account (in Chinese)
Brief updates on our group's research results.
Selected Publications
Reinforcement Learning
X. Li, L. Pan, and J. Zhang, “Kaleidoscope: Learnable masks for heterogeneous multi-agent reinforcement learning,” Advances in Neural Information Processing Systems (NeurIPS), Vancouver, Canada, Dec. 2024. (Acceptance Rate: 25.8%) [Paper] [Code]
X. Li, Z. Liu, S. Chen, and J. Zhang, “Individual contributions as intrinsic exploration scaffolds for multi-agent reinforcement learning,” International Conference on Machine Learning (ICML), Vienna, Austria, July 2024. (Acceptance Rate: 27.5%) [Paper] [Code] [Video]
X. Li and J. Zhang, “Context-aware communication for multi-agent reinforcement learning,” International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Auckland, New Zealand, May 2024. (Acceptance Rate: 25%) [Paper]
X. Wang*, X. Li*, J. Shao, and J. Zhang, “AC2C: Adaptively controlled two-hop communication for multi-agent reinforcement learning,” International Conference on Autonomous Agents and Multiagent Systems (AAMAS), London, United Kingdom, May-June 2023. (Acceptance Rate: 23.3%) (* equal contribution) [Paper]