
Xue (Steve) Liu

Associate Academic Member
Full Professor, McGill University, School of Computer Science
Vice President Research and Development, Chief Scientist and Co-Director, Samsung's Montreal AI Center
Research Topics
Deep Learning

Biography

Xue (Steve) Liu is an associate academic member of Mila – Quebec Artificial Intelligence Institute and full professor at McGill University’s School of Computer Science.

He is also a William Dawson Scholar at McGill, as well as a professor (courtesy appointment) in the Department of Mathematics and Statistics, associate member of the Centre for Intelligent Machines (CIM), and associate member of the Centre for Advanced Systems and Technologies in Communications (SYTACom).

Liu is VP of R&D, chief scientist and co-director of Samsung AI Center Montréal. Before that, he was chief scientist in charge of research and innovation at Tinder Inc., the company behind the world’s largest dating and social discovery app, then valued at over US$10 billion.

He is a Fellow of the IEEE and of the Canadian Academy of Engineering, and the recipient of many awards, including the Mitacs Award for Exceptional Leadership – Professor (2017); the Outstanding Young Canadian Computer Science Researcher Prize from the Canadian Association of Computer Science (2014); and McGill’s Tomlinson Scientist Award in “recognition of excellence and scientific leadership.” He founded McGill’s Cyber-Physical Intelligence Lab in 2007 and still serves as its director.

Liu also briefly served as Samuel R. Thompson Chair Associate Professor in the Department of Computer Science and Engineering at the University of Nebraska-Lincoln, and worked at Hewlett-Packard Labs in Palo Alto (California) and IBM’s Thomas J. Watson Research Center (New York).


Publications

Probabilistic Mobility Load Balancing for Multi-band 5G and Beyond Networks
Saria Al Lahham
Ekram Hossain
Achieving United Nations' SDG3 Through Empowering Health Artificial Intelligence on Resource-Constrained Mobile Devices Without Connectivity
Tianyi Yang
Tianze Yang
Shaoshan Liu
At least half of the world's population does not have access to essential health services. Worse, large numbers of households are being pushed into poverty because they must pay for health care out of their own pockets.
FedSwarm: An Adaptive Federated Learning Framework for Scalable AIoT
Haizhou Du
Chengdong Ni
Chaoqian Cheng
Qiao Xiang
X. T. Chen
Federated learning (FL) is a key solution for the data-driven Artificial Intelligence of Things (AIoT). Although much progress has been made, scalability remains a core challenge for real-world FL deployments. Existing solutions either suffer from accuracy loss or do not fully address the connectivity dynamicity of FL systems. In this article, we tackle the scalability issue with a novel, adaptive FL framework called FedSwarm, which improves system scalability for AIoT by deploying multiple collaborative edge servers. FedSwarm has two novel features: 1) adaptiveness in the number of local updates and 2) dynamicity of the synchronization between edge devices and edge servers. We formulate FedSwarm as a local update adaptation and per-device dynamic server selection problem and prove FedSwarm's convergence bound. We further design a control mechanism consisting of a learning-based algorithm for collaboratively providing local update adaptation on the servers' side and a bonus-based strategy for spurring dynamic per-device server selection on the devices' side. Our extensive evaluation shows that FedSwarm significantly outperforms other approaches with better scalability, lower energy consumption, and higher model accuracy.
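The adaptive-local-updates idea at the heart of FedSwarm can be sketched in a few lines. The sketch below is an illustrative toy, not the paper's algorithm: it runs federated averaging on a linear-regression task and lets each device's per-round local-update count grow with its current loss (the function names and the loss-proportional heuristic are assumptions made for this sketch).

```python
import numpy as np

def local_sgd(w, X, y, steps, lr=0.1):
    """Run `steps` local gradient steps of least-squares regression on one device."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, devices, steps_per_device):
    """One round: each device runs its adapted number of local updates,
    then the server averages the models weighted by dataset size."""
    updates, sizes = [], []
    for (X, y), steps in zip(devices, steps_per_device):
        updates.append(local_sgd(w_global.copy(), X, y, steps))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    # Crude adaptation rule: devices with larger current loss take more steps.
    losses = [float(np.mean((X @ w - y) ** 2)) for X, y in devices]
    steps = [min(10, 1 + int(5 * l / (max(losses) + 1e-12))) for l in losses]
    w = fedavg_round(w, devices, steps)

print(np.round(w, 2))  # converges near w_true = [2, -1]
```

In the paper, the update counts and the device-to-server assignment are chosen by a learning-based controller with a proven convergence bound; the loss-proportional rule above only illustrates the interface such a controller would drive.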
ICE-SEARCH: A Language Model-Driven Feature Selection Approach
Tianze Yang
Tianyi Yang
Shaoshan Liu
Fuyuan Lyu
This study unveils the In-Context Evolutionary Search (ICE-SEARCH) method, the first work that melds language models (LMs) with evolutionary algorithms for feature selection (FS) tasks and demonstrates its effectiveness in Medical Predictive Analytics (MPA) applications. ICE-SEARCH harnesses the crossover and mutation capabilities inherent in LMs within an evolutionary framework, significantly improving FS through the model's comprehensive world knowledge and its adaptability to a variety of roles. Our evaluation of this methodology spans three crucial MPA tasks: stroke, cardiovascular disease, and diabetes, where ICE-SEARCH outperforms traditional FS methods in pinpointing essential features for medical applications. ICE-SEARCH achieves State-of-the-Art (SOTA) performance in stroke prediction and diabetes prediction; the Decision-Randomized ICE-SEARCH ranks as SOTA in cardiovascular disease prediction. Our results not only demonstrate the efficacy of ICE-SEARCH in medical FS but also underscore the versatility, efficiency, and scalability of integrating LMs in FS tasks. The study emphasizes the critical role of incorporating domain-specific insights, illustrating ICE-SEARCH's robustness, generalizability, and swift convergence. This opens avenues for further research into comprehensive and intricate FS landscapes, marking a significant stride in the application of artificial intelligence in medical predictive analytics.
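A minimal sketch of the evolutionary feature-selection loop, with the LM call stubbed out. Here `lm_propose` is a hypothetical stand-in that does uniform crossover plus a bit flip, whereas ICE-SEARCH would prompt a language model to produce the child mask; the toy fitness function simply rewards two designated "informative" features.

```python
import random

def fitness(mask):
    """Toy fitness: a real FS task would use, e.g., cross-validated accuracy
    of a model trained on the selected features. Here features 0 and 2 are
    informative by construction, and each kept feature costs 0.1."""
    kept = {i for i, m in enumerate(mask) if m}
    return len(kept & {0, 2}) - 0.1 * len(kept)

def lm_propose(parent_a, parent_b):
    """Hypothetical stand-in for the LM-driven crossover/mutation step.
    ICE-SEARCH would prompt an LM with the parent masks and task description;
    here: uniform crossover plus one random bit flip."""
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

def ice_search(n_features, generations=40, pop_size=8, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = [lm_propose(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = ice_search(n_features=6)
print(best)  # expected to keep the informative features 0 and 2
```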
AICOM-MP: an AI-based Monkeypox Detector for Resource-Constrained Environments
Tianyi Yang
Tianze Yang
Andrew Liu
Na An
Jie Tang
Shaoshan Liu
Real-time monitoring for the next core-collapse supernova in JUNO
Angel Abusleme
Thomas Adam
Shakeel Ahmad
Rizwan Ahmed
Sebastiano Aiello
Muhammad Akram
Abid Aleem
Fengpeng An
Qi An
Giuseppe Andronico
Nikolay Anfimov
Vito Antonelli
Tatiana Antoshkina
Burin Asavapibhop
João Pedro Athayde Marcondes de André
Didier Auguste
Weidong Bai
Nikita Balashov
Wander Baldini
Andrea Barresi
Davide Basilico
Eric Baussan
Marco Bellato
Marco Beretta
Antonio Bergnoli
Daniel Bick
Lukas Bieger
Svetlana Biktemerova
Thilo Birkenfeld
Iwan Morton-Blake
David Blum
Simon Blyth
Anastasia Bolshakova
Mathieu Bongrand
Clément Bordereau
Dominique Breton
Augusto Brigatti
Riccardo Brugnera
Riccardo Bruno
Antonio Budano
Jose Busto
Anatael Cabrera
Barbara Caccianiga
Hao Cai
Xiao Cai
Yanke Cai
Zucong Cai
Stéphane Callier
Antonio Cammi
Agustin Campeny
Guofu Cao
Jun Cao
Rossella Caruso
C. Cerna
Vanessa Cerrone
Chi Chan
Jinfan Chang
Yun Chang
Auttakit Chatrabhuti
Chao Chen
Guoming Chen
Pingping Chen
Shaomin Chen
Yixue Chen
Yu Chen
Zhangming Chen
Zhiyuan Chen
Zikang Chen
Jie Cheng
Yaping Cheng
Yuanyuan Zhang
Alexander Chepurnov
Alexey Chetverikov
Davide Chiesa
Pietro Chimenti
Yen-Ting Chin
Ziliang Chu
Artem Chukanov
Gérard Claverie
Catia Clementi
Barbara Clerbaux
Marta Colomer Molla
Selma Conforti Di Lorenzo
Alberto Coppi
Daniele Corti
Simon Csakli
Flavio Dal Corso
Olivia Dalager
Jaydeep Datta
C. Taille
Zhi Deng
Ziyan Deng
Xiaoyu Ding
Xuefeng Ding
Yayun Ding
Bayu Dirgantara
Carsten Dittrich
Sergey Dmitrievsky
Tadeas Dohnal
Dmitry Dolzhikov
Georgy Donchenko
Jianmeng Dong
Evgeny Doroshkevich
Wei Dou
Marcos Dracos
Frédéric Druillole
Ran Du
S. X. Du
Katherine Dugas
Stefano Dusini
Hongyue Duyang
Jessica Eck
Timo Enqvist
Andrea Fabbri
Ulrike Fahrendholz
Lei Fan
Jian Fang
Wen Fang
Marco Fargetta
Dmitry Fedoseev
Zhengyong Fei
Li-Cheng Feng
Qichun Feng
Federico Ferraro
Amélie Fournier
H. Gan
Feng Gao
Alberto Garfagnini
Arsenii Gavrikov
Marco Giammarchi
Nunzio Giudice
Maxim Gonchar
G. Gong
Hui Gong
Yuri Gornushkin
A. Gottel
Marco Grassi
Maxim Gromov
Vasily Gromov
Minghao Gu
Xiang Zhou
Yunting Gu
Mengyun Guan
Yuduo Guan
Nunzio Guardone
Cong Guo
Wanlei Guo
Xinheng Guo
Caren Hagner
Ran Han
Yang Han
Miao He
W. He
Tobias Heinz
Patrick Hellmuth
Yue-kun Heng
Rafael Herrera
Yuenkeung Hor
Shaojing Hou
Yee Hsiung
Bei-Zhen Hu
Hang Hu
Jianrun Hu
Jun Hu
Shouyang Hu
T. Hu
Yuxiang Hu
Zhuojun Hu
Guihong Huang
Hanxiong Huang
Jinhao Huang
Jun-Hao Huang
Kaixuan Huang
Xinting Huang
X. T. Huang
Yongbo Huang
Jiaqi Hui
Lei Huo
Wenju Huo
Cédric Huss
Safeer Hussain
Leonard Imbert
Ara Ioannisian
Roberto Isocrate
Arshak Jafar
Beatrice Jelmini
Ignacio Jeria
Xiaolu Ji
Huihui Jia
Junji Jia
Siyu Jian
Cailian Jiang
Di Jiang
Wei Jiang
Xiaoshan Jiang
Xiang Jing
Cécile Jollet
Philipp Kampmann
Li Kang
Rebin Karaparambil
Narine Kazarian
Ali Khan
Amina Khatun
Khanchai Khosonthongkee
Denis Korablev
K. Kouzakov
Alexey Krasnoperov
Sergey Kuleshov
Nikolay Kutovskiy
Loïc Labit
Tobias Lachenmaier
Cecilia Landini
Sébastien Leblanc
Victor Lebrin
Frederic Lefevre
Rupert Leitner
Jason Leung
Demin Li
Fei Li
Fule Li
Gaosong Li
Huiling Li
Jiajun Li
Mengzhao Li
Min Li
Nan Li
Qingjiang Li
Ruhui Li
Rui Li
Shanfeng Li
Tao Li
Teng Li
Weidong Li
Weiguo Li
Xiaomei Li
Xiao-Nan Li
Xinglong Li
Yi Li
Yichen Li
Yufeng Li
Zhaohan Li
Zhibing Li
Ziyuan Li
Zonghui Li
Hao Liang
Jiaming Yan
Ayut Limphirat
Gen Lin
Shengxin Lin
Tao Lin
Jiajie Ling
Xin Ling
Ivano Lippi
Caimei Liu
Yang Liu
Fengcheng Liu
Haidong Liu
Hongbang Liu
Hongjuan Liu
Hongtao Liu
H. Liu
Jianglai Liu
Jia-xing Liu
Jinchang Liu
Min Liu
Qian Liu
Qi Liu
Runxuan Liu
Sheng Liu
Shubin Liu
Shulin Liu
Xiaowei Liu
Xiwen Liu
Yankai Liu
Alexey Lokhov
Paolo Lombardi
Claudio Lombardo
Kai Loo
Chuan Lu
Haoqi Lu
Jingbin Lu
Junguang Lu
Peizhi Lu
Shuxian Du
Xianguo Lu
Bayarto Lubsandorzhiev
Sultim Lubsandorzhiev
Livia Ludhova
Arslan Lukanov
Daibin Luo
Feng Luo
Guang Luo
Jianyi Luo
Shu Luo
Wuming Luo
Xiaojie Luo
Vladimir Lyashuk
Biao Ma
Bing Ma
R. Q. Ma
Si Ma
Xiaoyan Ma
Xubo Ma
Jihane Maalmi
Marco Magoni
Jingyu Mai
Yury Malyshkin
Roberto Carlos Mandujano
Fabio Mantovani
Xin Mao
Yajun Mao
S. Mari
F. Marini
Agnese Martini
Matthias Mayer
Davit Mayilyan
Ints Mednieks
Yu Meng
Anita Meraviglia
Anselmo Meregaglia
Emanuela Meroni
David J. Meyhofer
Lino Miramonti
Nikhil Mohan
Michele Montuschi
Axel Muller
Massimiliano Nastasi
Dmitry V. Naumov
Elena Naumova
Diana Navas-Nicolas
Igor Nemchenok
Minh Thuan Nguyen Thi
Alexey Nikolaev
Feipeng Ning
Zhe Ning
Hiroshi Nunokawa
Lothar Oberauer
Juan Pedro Ochoa-Ricoux
Alexander Olshevskiy
Domizia Orestano
Fausto Ortica
Rainer Othegraven
Alessandro Paoloni
Sergio Parmeggiano
Y. P. Pei
Luca Pelicci
Anguo Peng
Yuekun Heng
Z-R Peng
Frédéric Perrot
P. Petitjean
Fabrizio Petrucci
Oliver Pilarczyk
Luis Felipe Piñeres Rico
Artyom Popov
Pascal Poussot
Ezio Previtali
Fazhi Qi
M. Qi
Xiaohui Qi
Sen Qian
Xiangyang Qian
Zhen Qian
Hao-xue Qiao
Zhonghua Qin
Shoukang Qiu
Manhao Qu
Z. Qu
Gioacchino Ranucci
Reem Rasheed
A. Re
Abdel Rebii
Mariia Redchuk
Bin Ren
Jie Ren
Barbara Ricci
Komkrit Rientong
Mariam Rifai
Mathieu Roche
Narongkiat Rodphai
Aldo M. Romani
Bedřich Roskovec
Xianhui Ruan
Arseniy Rybnikov
Andrey Sadovsky
Paolo Saggese
Deshan Sandanayake
Anut Sangka
Giuseppe Sava
Utane Sawangwit
Michaela Schever
Cédric Schwab
Konstantin Schweizer
Alexandr Selyunin
Andrea Serafini
Mariangela Settimo
Vladislav Sharov
Arina Shaydurova
Jingyan Shi
Yanan Shi
Vitaly Shutov
Andrey Sidorenkov
Fedor Šimkovic
Apeksha Singhal
Chiara Sirignano
Jaruchit Siripak
Monica Sisti
Mikhail Smirnov
Oleg Smirnov
Thiago Sogo-Bezerra
Sergey Sokolov
Julanan Songwadhana
Boonrucksar Soonthornthum
Albert Sotnikov
Ondřej Šrámek
Warintorn Sreethawong
Achim Stahl
Luca Stanco
Konstantin Stankevich
Hans Steiger
Jochen Steinmann
Tobias Sterr
M. Stock
Virginia Strati
Alexander Studenikin
Aoqi Su
Jun Su
Shifeng Sun
Xilei Sun
Yongjie Sun
Yongzhao Sun
Zhengyang Sun
Narumon Suwonjandee
Michal Szelezniak
Akira Takenaka
Qiang Tang
Quan Tang
Xiao Tang
Vidhya Thara Hariharan
Eric Theisen
Alexander Tietzsch
Igor Tkachev
Tomas Tmej
M. Torri
Francesco Tortorici
K. Treskov
Andrea Triossi
Riccardo Triozzi
Wladyslaw Trzaska
Y. Tung
Cristina Tuve
Nikita Ushakov
Vadim Vedin
Carlo Venettacci
Giuseppe Verde
Maxim Vialkov
Benoit Viaud
Cornelius Moritz Vollbrecht
Katharina von Sturm
Vit Vorobel
Dmitriy Voronin
Lucia Votano
Pablo Walker
Caishen Wang
Chung-Hsiang Wang
En Wang
Guoli Wang
Jian Wang
Jun Wang
Li Wang
Lucinda W. Wang
Meng Wang
Ruiguang Wang
Siguang Wang
Wei David Wang
Wenshuai Wang
Xi Wang
Xiangyue Wang
Yangfu Wang
Yaoguang Wang
Yi Xing Wang
Yifang Wang
Yong Wang
Yuyi Wang
Zhe Wang
Z. Wang
Zhimin Wang
Apimook Watcharangkool
Wei Wei
Wenlu Wei
Yadong Wei
Yuehuan Wei
The core-collapse supernova (CCSN) is considered one of the most energetic astrophysical events in the universe. The early and prompt detection of neutrinos before (pre-SN) and during the supernova (SN) burst presents a unique opportunity for multi-messenger observations of CCSN events. In this study, we describe the monitoring concept and present the sensitivity of the system to pre-SN and SN neutrinos at the Jiangmen Underground Neutrino Observatory (JUNO), a 20 kton liquid scintillator detector currently under construction in South China. The real-time monitoring system is designed to ensure both prompt alert speed and comprehensive coverage of progenitor stars. It incorporates prompt monitors on the electronic board as well as online monitors at the data acquisition stage. Assuming a false alert rate of 1 per year, this monitoring system exhibits sensitivity to pre-SN neutrinos up to a distance of approximately 1.6 (0.9) kiloparsecs and SN neutrinos up to about 370 (360) kiloparsecs for a progenitor mass of 30 solar masses, considering both normal and inverted mass ordering scenarios. The pointing ability of the CCSN is evaluated by analyzing the accumulated event anisotropy of inverse beta decay interactions from pre-SN or SN neutrinos. This, along with the early alert, can play a crucial role in facilitating follow-up multi-messenger observations of the next galactic or nearby extragalactic CCSN.
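The trade-off behind the quoted false-alert rate can be illustrated with a small Poisson calculation. The numbers below (background rate, window length) are invented for illustration, not JUNO's actual figures; the sketch shows how a count threshold in a sliding window is set so that background alone triggers at most one alert per year.

```python
import math

def poisson_tail(mu, k):
    """P(N >= k) for N ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

def alert_threshold(bkg_rate_per_s, window_s, false_alerts_per_year=1.0):
    """Smallest event-count threshold in a sliding window such that the
    expected number of background-only alerts stays below the target rate.
    The background rate and window length here are illustrative assumptions."""
    windows_per_year = 365.25 * 24 * 3600 / window_s
    mu = bkg_rate_per_s * window_s       # expected background counts per window
    k = 1
    while windows_per_year * poisson_tail(mu, k) > false_alerts_per_year:
        k += 1
    return k

k = alert_threshold(bkg_rate_per_s=0.01, window_s=10.0)
print(k)  # counts required in a 10 s window for <= 1 false alert/year
```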
AIoT Smart Home via Autonomous LLM Agents
Dmitriy Rivkin
Francois Hogan
Amal Feriani
Adam Sigal
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
Xi Chen
X. T. Chen
Boyu Wang
Jun Yan
AdaTeacher: Adaptive Multi-Teacher Weighting for Communication Load Forecasting
Ju Wang
Yan Xin
Charlie Zhang
To deal with notorious delays in communication systems, it is crucial to forecast key system characteristics, such as the communication load. Most existing studies aggregate data from multiple edge nodes to improve forecasting accuracy. However, the bandwidth cost of such data aggregation can be unacceptably high from the perspective of system operators. To achieve both high forecasting accuracy and bandwidth efficiency, this paper proposes an Adaptive Multi-Teacher Weighting in Teacher-Student Learning approach, namely AdaTeacher, for communication load forecasting at multiple edge nodes. Each edge node trains a local model on its own data. A target node collects multiple models from its neighbor nodes and treats these models as teachers. Then, the target node trains a student model from the teachers via Teacher-Student (T-S) learning. Unlike most existing T-S learning approaches that weight teachers evenly, resulting in limited performance, AdaTeacher introduces a bilevel optimization algorithm to dynamically learn an importance weight for each teacher toward a more effective and accurate T-S learning process. Compared to state-of-the-art methods, AdaTeacher not only reduces the bandwidth cost by 53.85%, but also improves load forecasting accuracy by 21.56% and 24.24% on two real-world datasets.
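The teacher-weighting idea can be sketched in flattened form. The paper uses bilevel optimization; this toy instead learns one softmax weight per teacher by plain gradient descent on the validation error of the weighted forecast. The traffic curve and teacher behaviors are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def learn_teacher_weights(teacher_preds, y_val, lr=0.5, steps=200):
    """Learn one importance weight per teacher (a flat stand-in for the
    paper's bilevel optimization) by minimizing ensemble validation MSE."""
    z = np.zeros(teacher_preds.shape[0])          # one logit per teacher
    for _ in range(steps):
        w = softmax(z)
        err = w @ teacher_preds - y_val           # weighted forecast error
        dw = 2 * (teacher_preds @ err) / len(y_val)   # d(MSE)/dw
        dz = w * (dw - w @ dw)                    # chain through the softmax
        z -= lr * dz
    return softmax(z)

rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 6, 100))                # "true" load curve
teachers = np.stack([
    y + 0.05 * rng.normal(size=y.size),           # accurate teacher
    y + 0.50 * rng.normal(size=y.size),           # noisy teacher
    np.roll(y, 10),                               # time-shifted teacher
])
w = learn_teacher_weights(teachers, y)
print(np.round(w, 2))  # most weight lands on the accurate teacher
```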
Energy Saving in Cellular Wireless Networks via Transfer Deep Reinforcement Learning
Yi Tian Xu
M. Jenkin
Seowoo Jang
Ekram Hossain
With the increasing use of data-intensive mobile applications and the number of mobile users, the demand for wireless data services has been increasing exponentially in recent years. In order to address this demand, a large number of new cellular base stations are being deployed around the world, leading to a significant increase in energy consumption and greenhouse gas emission. Consequently, energy consumption has emerged as a key concern in the fifth-generation (5G) network era and beyond. Reinforcement learning (RL), which aims to learn a control policy via interacting with the environment, has been shown to be effective in addressing network optimization problems. However, for reinforcement learning, especially deep reinforcement learning, a large number of interactions with the environment are required. This often limits its applicability in the real world. In this work, to better deal with dynamic traffic scenarios and improve real-world applicability, we propose a transfer deep reinforcement learning framework for energy optimization in cellular communication networks. Specifically, we first pre-train a set of RL-based energy-saving policies on source base stations and then transfer the most suitable policy to the given target base station in an unsupervised learning manner. Experimental results demonstrate that base station energy consumption can be reduced significantly using this approach.
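The pretrain-then-transfer recipe can be sketched with a toy policy library: pre-train one policy per source traffic profile, then pick the best-performing one for an unseen target cell. Everything here (the reward shape, sleep levels, and traffic profiles) is an invented illustration, not the paper's model.

```python
# Toy model: a base station picks a sleep level (action) each hour; reward
# trades energy saved against unserved traffic when capacity is reduced.
def reward(action, load):
    energy_saved = action                          # deeper sleep saves more
    qos_penalty = max(0.0, load - (1.0 - 0.4 * action))
    return energy_saved - 5.0 * qos_penalty

def pretrain_policy(profile, actions=(0, 1, 2)):
    """'Pre-train' one policy per source traffic profile: the best action
    for each hour's load (stands in for an RL-trained policy)."""
    return [max(actions, key=lambda a: reward(a, load)) for load in profile]

def evaluate(policy, profile):
    return sum(reward(a, load) for a, load in zip(policy, profile))

sources = {
    "residential": [0.2, 0.1, 0.1, 0.6, 0.9, 0.8],
    "office":      [0.1, 0.7, 0.9, 0.9, 0.5, 0.1],
}
library = {name: pretrain_policy(p) for name, p in sources.items()}

# Transfer step: evaluate each pre-trained policy on the unseen target cell
# and deploy the most suitable one.
target = [0.15, 0.6, 0.85, 0.9, 0.6, 0.2]          # office-like traffic
best = max(library, key=lambda name: evaluate(library[name], target))
print(best)  # the office policy fits the office-like target
```

The paper's transfer is richer (unsupervised policy selection plus deep RL fine-tuning on the target); the sketch only shows why matching a target cell to the closest source traffic pattern recovers most of the energy savings without retraining from scratch.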