Jun Zhang

PhD, FIEEE

Associate Professor, IEEE Fellow
Department of Electronic and Computer Engineering (ECE)
Computer Engineering Program (CPEG)
The Hong Kong University of Science and Technology (HKUST)

Distinguished Lecturer, IEEE Communications Society

Office: Room 2430
Email: eejzhang@ust.hk
Phone: +852 2358-7050
Google Scholar Citations

Openings

There are multiple openings for engineers/postdocs/RAs in our group in the following areas:

  • Large language models (LLMs) and their applications in communications and networking

  • O-RAN platforms and intelligent wireless networks

Applicants with experience in communication system prototyping will be given priority. Please send your CV to eejzhang@ust.hk. To apply for a postdoc position, please also include three representative publications.

What's new

  • (Call for Papers) “Digital Twins Meet Artificial Intelligence in 6G,” a feature topic in IEEE Communications Magazine. (Deadline: 31 March 2024)

  • (New Book) X. Lin, J. Zhang, Y. Liu, and J. Kim, Eds. Fundamentals of 6G Communications and Networking, Springer Cham, 2024.

  • (New Survey) “Green edge AI: A contemporary survey,” submitted. [Paper]

  • (New Survey) “A survey of what to share in federated learning: Perspectives on model utility, privacy leakage, and communication efficiency,” submitted. [Paper]

  • (EdgeGPT) Our EdgeGPT paper “Large language models empowered autonomous edge AI for connected intelligence” was accepted by IEEE Communications Magazine. [Paper]

  • (Nature Communications) Paper “Selective knowledge sharing for privacy-preserving federated distillation without a good teacher” was accepted by Nature Communications. [Paper]

  • (CVPR 24) Three papers accepted to CVPR 2024.

    • “Generalized predictive model for autonomous driving”

    • “Boosting neural representations for videos with a conditional decoder”

    • “Controlling encoder of deep video compression for machine”

  • (AAMAS 24) “Context-aware communication for multi-agent reinforcement learning,” accepted by AAMAS 2024. [Paper]

  • (CVPR 23) “Generalized relation modeling for transformer tracking,” accepted by CVPR 2023. [Paper] [GitHub]

  • (ICLR 23) Two papers accepted to ICLR 2023.

    • “Sparse Mixture-of-Experts are Domain Generalizable Learners” (notable-top-5%, oral) [Paper] [GitHub]

    • “LDMIC: Learning-based distributed multi-view image coding” [Paper] [GitHub]

Research Interests

  • Generative AI, Foundation Models

  • Reinforcement Learning

  • Edge AI and Edge Computing

  • Integrated AI and Communications

Selected Publications

  • GenAI, Foundation Models, and Applications

    • J. Yang, S. Gao, Y. Qiu, L. Chen, T. Li, B. Dai, K. Chitta, P. Wu, J. Zeng, J. Zhang, A. Geiger, Y. Qiao, and H. Li, “Generalized predictive model for autonomous driving,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, Jun. 2024.

    • Y. Shen, J. Shao, X. Zhang, Z. Lin, H. Pan, D. Li, J. Zhang, K. B. Letaief, “Large language models empowered autonomous edge AI for connected intelligence,” IEEE Commun. Mag., to appear. [Paper]

    • P. Jiang, C.-K. Wen, X. Yi, X. Li, S. Jin, and J. Zhang, “Semantic communications using foundation models: Design approaches and open issues,” submitted. [Paper]

  • Neural Data Representation and Compression

    • X. Zhang, R. Yang, D. He, X. Ge, T. Xu, Y. Wang, H. Qin, and J. Zhang, “Boosting neural representations for videos with a conditional decoder,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, Jun. 2024.

    • X. Ge, J. Luo, X. Zhang, T. Xu, G. Lu, D. He, J. Geng, Y. Wang, J. Zhang, and H. Qin, “Controlling encoder of deep video compression for machine,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, Jun. 2024.

    • X. Zhang, J. Shao, and J. Zhang, “Low-complexity deep video compression with a distributed coding architecture,” IEEE International Conference on Multimedia and Expo (ICME), Brisbane, Australia, July 2023. [Paper] [GitHub]

    • X. Zhang, J. Shao, and J. Zhang, “LDMIC: Learning-based distributed multi-view image coding,” International Conference on Learning Representations (ICLR), Kigali, Rwanda, May 2023. [Paper] [GitHub]

  • Reinforcement Learning

    • X. Li and J. Zhang, “Context-aware communication for multi-agent reinforcement learning,” International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Auckland, New Zealand, May 2024. (Acceptance Rate: 25%) [Paper]

    • X. Wang*, X. Li*, J. Shao, and J. Zhang, “AC2C: Adaptively controlled two-hop communication for multi-agent reinforcement learning,” International Conference on Autonomous Agents and Multiagent Systems (AAMAS), London, United Kingdom, May-June 2023. (Acceptance Rate: 23.3%) (* equal contribution) [Paper]

    • Y. Zhang, Z. Yu, J. Zhang, L. Wang, T. Luan, B. Guo, and C. Yuen, “Learning decentralized traffic signal controllers with multi-agent graph reinforcement learning,” IEEE Trans. Mobile Computing, to appear. [Paper]

  • Edge AI and Edge Computing

    • J. Shao, X. Zhang, and J. Zhang, “Task-oriented communication for edge video analytics,” IEEE Trans. Wireless Commun., to appear. [Paper] [GitHub]

    • J. Shao, Y. Mao, and J. Zhang, “Task-oriented communication for multi-device cooperative edge inference,” IEEE Trans. Wireless Commun., vol. 11, no. 1, pp. 73-87, Jan. 2023. [Paper]

    • J. Shao, Y. Mao, and J. Zhang, “Learning task-oriented communication for edge inference: An information bottleneck approach,” IEEE J. Sel. Areas Commun., vol. 40, no. 1, pp. 197-211, Jan. 2022. [Paper] [GitHub]

    • J. Zhang and K. B. Letaief, “Mobile edge intelligence and computing for the Internet of vehicles,” Proc. IEEE, vol. 108, no. 2, pp. 246–261, Feb. 2020. [Paper]

  • Federated Learning, Safe/Robust AI

    • J. Shao, F. Wu, and J. Zhang, “Selective knowledge sharing for privacy-preserving federated distillation without a good teacher,” Nature Communications, Jan. 2024. [Paper]

    • Y. Sun, Y. Mao, and J. Zhang, “MimiC: Combating client dropouts in federated learning by mimicking central updates,” IEEE Trans. Mobile Computing, to appear. [Paper]

    • T. Zhou, J. Zhang, and D. Tsang, “FedFA: Federated learning with feature anchors to align feature and classifier for heterogeneous data,” IEEE Trans. Mobile Computing, to appear. [Paper]

    • B. Li*, Y. Shen*, J. Yang, Y. Wang, J. Ren, T. Che, J. Zhang, and Z. Liu, “Sparse Mixture-of-Experts are Domain Generalizable Learners,” International Conference on Learning Representations (ICLR), Kigali, Rwanda, May 2023. (Oral presentation) [Paper] [GitHub]

    • J. Shao, Y. Sun, S. Li, and J. Zhang, “DReS-FL: Dropout-resilient secure federated learning for non-IID clients via secret data sharing,” NeurIPS 2022. [Paper] (Acceptance Rate: 25.6%)

  • Integrated AI and Communications

    • W. Yu, Y. Shen, H. He, X. Yu, S.H. Song, J. Zhang, and K. B. Letaief, “An adaptive and robust deep learning framework for THz ultra-massive MIMO channel estimation,” IEEE J. Sel. Topics Signal Process., to appear. [Paper] [GitHub]

    • Y. Shen, J. Zhang, S.H. Song, and K. B. Letaief, “Graph neural networks for wireless communications: From theory to practice,” IEEE Trans. Wireless Commun., vol. 22, no. 5, pp. 3554-3569, May 2023. [Paper] [GitHub]

    • Y. Shen, Y. Shi, J. Zhang, and K. B. Letaief, “Graph neural networks for scalable radio resource management: Architecture design and theoretical analysis,” IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 101–115, Jan. 2021. [Paper]