Xue (Steve) Liu

Associate Academic Member
Full Professor, McGill University, School of Computer Science
Vice-President, Research and Development, Chief Scientist and Co-Director, Samsung's Montreal AI Center
Research Topics
Deep Learning

Biography

Xue (Steve) Liu is a Full Professor in the School of Computer Science at McGill University, as well as Vice-President of Research and Development, Chief Scientist, and Co-Director of Samsung's Montreal AI Center. He also holds a William Dawson Scholar award (Full Professor) at McGill University and is a Professor of Mathematics and Statistics (courtesy appointment) at the same institution. Previously, he was Chief Scientist at Tinder Inc., where he led research and innovation for the world's largest dating and social discovery app, valued at more than US$10 billion.

Prof. Liu is a member of the IEEE and an Associate Member of Mila – Quebec Artificial Intelligence Institute. At McGill University, he is also an associate member of the Centre for Intelligent Machines (CIM) and the Centre for Advanced Systems and Technologies in Communications (SYTACom). He has received several awards, including the 2017 Mitacs Award for Exceptional Leadership Among Faculty, the 2014 Outstanding Young Canadian Computer Science Researcher Prize from the Canadian Association of Computer Science, and the Tomlinson Scientist Award recognizing excellence and scientific leadership at McGill University. He is the director of McGill University's Cyber-Physical Intelligence Lab, which he founded in 2007. He also worked briefly as the Samuel R. Thompson Associate Professor in the Department of Computer Science and Engineering at the University of Nebraska-Lincoln, at Hewlett-Packard Labs in Palo Alto, California, and at IBM's T. J. Watson Research Center in New York.

Current Students

Research Master's - McGill
PhD - McGill
Co-supervisor:
PhD - McGill
Co-supervisor:
PhD - McGill
PhD - McGill
PhD - McGill
PhD - McGill
PhD - McGill
PhD - McGill
Research Master's - McGill
Research Master's - McGill
PhD - McGill
Co-supervisor:
Research Master's - McGill
Postdoctorate - McGill
Co-supervisor:
PhD - McGill

Publications

The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks
Ziquan Liu
Yufei Cui
Yan Yan
Yi Xu
Xiangyang Ji
Antoni B. Chan
In safety-critical applications such as medical imaging and autonomous driving, where decisions have profound implications for patient health and road safety, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making. With extensive research focused on enhancing adversarial robustness through various forms of adversarial training (AT), a notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models. To address this gap, this study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks within the adversarial defense community. It is first unveiled that existing CP methods do not produce informative prediction sets under the commonly used …
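The paper's own method is not reproduced here, but a minimal sketch of split conformal prediction, the kind of CP baseline this abstract evaluates, may help make the notion of a prediction set concrete. The function name, nonconformity score choice, and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p_y(x) nonconformity score.

    cal_probs:  (n, K) softmax outputs on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) softmax outputs on test inputs
    Returns a boolean (m, K) mask: True where a label is in the prediction set.
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true label.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(cal_scores, min(q_level, 1.0), method="higher")
    # A label enters the set if its score does not exceed the threshold.
    return (1.0 - test_probs) <= q_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = 10
    cal_probs = rng.dirichlet(np.ones(K), size=500)
    cal_labels = cal_probs.argmax(axis=1)        # toy "ground truth"
    test_probs = rng.dirichlet(np.ones(K), size=5)
    sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs)
    print("average prediction-set size:", sets.sum(axis=1).mean())
```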
Think Before You Act: Decision Transformers with Working Memory
Jikun Kang
Romain Laroche
Xingdi Yuan
Adam Trischler
Jie Fu
Decision Transformer-based decision-making agents have shown the ability to generalize across multiple tasks. However, their performance relies on massive data and computation. We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in parameters throughout training. As a result, training on a new task may deteriorate the model’s performance on previous tasks. In contrast to LLMs’ implicit memory mechanism, the human brain utilizes distributed memory storage, which helps manage and organize multiple skills efficiently, mitigating the forgetting phenomenon. Inspired by this, we propose a working memory module to store, blend, and retrieve information for different downstream tasks. Evaluation results show that the proposed method improves training efficiency and generalization in Atari games and Meta-World object manipulation tasks. Moreover, we demonstrate that memory fine-tuning further enhances the adaptability of the proposed architecture.
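The abstract does not spell out the memory architecture, so the following PyTorch fragment is only a generic sketch of a slot-based working memory that can store, blend, and retrieve information; the class name, gating rule, and slot-selection heuristic are all invented for illustration and are not the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WorkingMemory(nn.Module):
    """Illustrative slot memory: attention-based read plus a gated write."""
    def __init__(self, num_slots: int = 8, dim: int = 64):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_slots, dim))
        self.query = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, 1)

    def read(self, h: torch.Tensor) -> torch.Tensor:
        # Attend over memory slots with the current hidden state as the query.
        attn = F.softmax(self.query(h) @ self.memory.t(), dim=-1)   # (B, S)
        return attn @ self.memory                                    # (B, D)

    @torch.no_grad()
    def write(self, h: torch.Tensor) -> None:
        # Blend the batch-averaged hidden state into the most relevant slot.
        h_mean = h.mean(dim=0)
        slot = torch.argmax(self.memory @ h_mean)
        g = torch.sigmoid(self.gate(torch.cat([self.memory[slot], h_mean])))
        self.memory[slot] = (1 - g) * self.memory[slot] + g * h_mean

if __name__ == "__main__":
    mem = WorkingMemory()
    h = torch.randn(4, 64)             # stand-in for transformer hidden states
    out = mem.read(h)                  # memory-augmented representation
    mem.write(h)
    print(out.shape)                   # torch.Size([4, 64])
```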
FedSwarm: An Adaptive Federated Learning Framework for Scalable AIoT
Haizhou Du
Chengdong Ni
Chaoqian Cheng
Qiao Xiang
Xi Chen
Federated learning (FL) is a key solution for data-driven Artificial Intelligence of Things (AIoT). Although much progress has been made, scalability remains a core challenge for real-world FL deployments. Existing solutions either suffer from accuracy loss or do not fully address the connectivity dynamicity of FL systems. In this article, we tackle the scalability issue with a novel, adaptive FL framework called FedSwarm, which improves system scalability for AIoT by deploying multiple collaborative edge servers. FedSwarm has two novel features: 1) adaptiveness on the number of local updates and 2) dynamicity of the synchronization between edge devices and edge servers. We formulate FedSwarm as a local update adaptation and per-device dynamic server selection problem and prove FedSwarm's convergence bound. We further design a control mechanism consisting of a learning-based algorithm for collaboratively providing local update adaptation on the servers’ side and a bonus-based strategy for spurring dynamic per-device server selection on the devices’ side. Our extensive evaluation shows that FedSwarm significantly outperforms other studies with better scalability, lower energy consumption, and higher model accuracy.
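As a rough illustration of the two features the abstract highlights, the sketch below mimics one round of per-device server selection plus local-update adaptation. The scoring rule, bonus table, and step heuristic are assumptions made for this example and are not FedSwarm's actual control mechanism.

```python
import random

def run_round(devices, servers, bonus, base_local_steps=1, max_local_steps=8):
    """One illustrative FedSwarm-style round (names and rules are made up here).

    devices: dict device_id -> observed loss from the previous round
    servers: dict server_id -> estimated load in [0, 1]
    bonus:   dict (device_id, server_id) -> bonus encouraging exploration
    """
    assignments, local_steps = {}, {}
    for dev, prev_loss in devices.items():
        # Dynamic per-device server selection: low load plus accumulated bonus.
        assignments[dev] = max(
            servers, key=lambda s: bonus.get((dev, s), 0.0) - servers[s]
        )
        # Adaptive number of local updates: train longer while the loss is high.
        local_steps[dev] = min(max_local_steps,
                               base_local_steps + int(4 * prev_loss))
    return assignments, local_steps

if __name__ == "__main__":
    devices = {f"dev{i}": random.random() for i in range(5)}
    servers = {"edge_a": 0.2, "edge_b": 0.7}
    bonus = {("dev0", "edge_b"): 0.9}          # spur dev0 to explore edge_b
    print(run_round(devices, servers, bonus))
```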
ICE-SEARCH: A Language Model-Driven Feature Selection Approach
Tianze Yang
Tianyi Yang
Shaoshan Liu
Fuyuan Lyu
This study unveils the In-Context Evolutionary Search (ICE-SEARCH) method, the first work that melds language models (LMs) with evolutionary algorithms for feature selection (FS) tasks and demonstrates its effectiveness in Medical Predictive Analytics (MPA) applications. ICE-SEARCH harnesses the crossover and mutation capabilities inherent in LMs within an evolutionary framework, significantly improving FS through the model's comprehensive world knowledge and its adaptability to a variety of roles. Our evaluation of this methodology spans three crucial MPA tasks: stroke, cardiovascular disease, and diabetes, where ICE-SEARCH outperforms traditional FS methods in pinpointing essential features for medical applications. ICE-SEARCH achieves State-of-the-Art (SOTA) performance in stroke prediction and diabetes prediction; the Decision-Randomized ICE-SEARCH ranks as SOTA in cardiovascular disease prediction. Our results not only demonstrate the efficacy of ICE-SEARCH in medical FS but also underscore the versatility, efficiency, and scalability of integrating LMs in FS tasks. The study emphasizes the critical role of incorporating domain-specific insights, illustrating ICE-SEARCH's robustness, generalizability, and swift convergence. This opens avenues for further research into comprehensive and intricate FS landscapes, marking a significant stride in the application of artificial intelligence in medical predictive analytics.
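A hedged sketch of the surrounding evolutionary loop is shown below, with the language-model call stubbed out by random crossover and mutation; in an ICE-SEARCH-style system that stub would instead be an actual LM prompt reasoning over the feature names. All names and the toy fitness function are illustrative.

```python
import random

def llm_propose(parent_a, parent_b, feature_names):
    """Placeholder for the LM call: here it just does random crossover + mutation.
    A real ICE-SEARCH-style system would query a language model over the
    feature names instead of using this random rule."""
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    i = random.randrange(len(child))
    child[i] = 1 - child[i]                      # flip one feature on/off
    return child

def evolutionary_feature_selection(feature_names, fitness, pop_size=10, gens=20):
    pop = [[random.randint(0, 1) for _ in feature_names] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)       # keep the fittest masks first
        survivors = pop[: pop_size // 2]
        children = [llm_propose(random.choice(survivors), random.choice(survivors),
                                feature_names) for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    names = ["age", "bmi", "glucose", "smoker", "noise1", "noise2"]
    # Toy fitness: reward selecting the first four features, penalize set size.
    fit = lambda mask: sum(mask[:4]) - 0.1 * sum(mask)
    print(dict(zip(names, evolutionary_feature_selection(names, fit))))
```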
AICOM-MP: an AI-based Monkeypox Detector for Resource-Constrained Environments
Tianyi Yang
Tianze Yang
Andrew Liu
Na An
Jie Tang
Shaoshan Liu
Probabilistic Mobility Load Balancing for Multi-band 5G and Beyond Networks
Saria Al Laham
Di Wu
Ekram Hossain
A Survey of Diversification Techniques in Search and Recommendation
Haolun Wu
Yansen Zhang
Chen Ma
Fuyuan Lyu
Bowei He
Bhaskar Mitra
Diversifying search results is an important research topic in retrieval systems in order to satisfy both the various interests of customers and the equal market exposure of providers. There has been growing attention to diversity-aware research in recent years, accompanied by a proliferation of literature on methods to promote diversity in search and recommendation. However, the diversity-aware studies in retrieval systems lack a systematic organization and are rather fragmented. In this survey, we are the first to propose a unified taxonomy for classifying the metrics and approaches of diversification in both search and recommendation, which are two of the most extensively researched fields of retrieval systems. We begin the survey with a brief discussion of why diversity is important in retrieval systems, followed by a summary of the various diversity concerns in search and recommendation, highlighting their relationship and differences. For the survey’s main body, we present a unified taxonomy of diversification metrics and approaches in retrieval systems, from both the search and recommendation perspectives. In the later part of the survey, we discuss the open research questions of diversity-aware research in search and recommendation in an effort to inspire future innovations and encourage the implementation of diversity in real-world systems.
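As one classic example of the approaches such a survey covers, the sketch below implements Maximal Marginal Relevance (MMR) re-ranking, a widely used relevance-versus-diversity trade-off; it is offered only as background and is not the survey's own taxonomy or method, and the toy scores are invented.

```python
def mmr(candidates, relevance, similarity, k=5, lam=0.7):
    """Maximal Marginal Relevance: greedily pick items that are relevant but
    dissimilar to what has already been selected."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda c: lam * relevance[c]
            - (1 - lam) * max((similarity(c, s) for s in selected), default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    rel = {"d1": 0.9, "d2": 0.85, "d3": 0.5, "d4": 0.4}
    # Toy similarity: d1 and d2 are near-duplicates, everything else unrelated.
    sim = lambda a, b: 0.95 if {a, b} == {"d1", "d2"} else 0.0
    print(mmr(rel.keys(), rel, sim, k=3))        # d1 is picked, d2 is demoted
```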
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
Chengming Hu
Haolun Wu
Xuan Li
Chen Ma
Xi Chen
Jun Yan
Boyu Wang
Anomaly Detection for Scalable Task Grouping in Reinforcement Learning-based RAN Optimization
Jimmy Li
Igor Kozlov
Di Wu
The use of learning-based methods for optimizing cellular radio access networks (RAN) has received increasing attention in recent years. This coincides with a rapid increase in the number of cell sites worldwide, driven largely by dramatic growth in cellular network traffic. Training and maintaining learned models that work well across a large number of cell sites has thus become a pertinent problem. This paper proposes a scalable framework for constructing a reinforcement learning policy bank that can perform RAN optimization across a large number of cell sites with varying traffic patterns. Central to our framework is a novel application of anomaly detection techniques to assess the compatibility between sites (tasks) and the policy bank. This allows our framework to intelligently identify when a policy can be reused for a task, and when a new policy needs to be trained and added to the policy bank. Our results show that our approach to compatibility assessment leads to an efficient use of computational resources, by allowing us to construct a performant policy bank without exhaustively training on all tasks, which makes it applicable under real-world constraints.
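The following sketch illustrates the general reuse-or-train decision described in the abstract, using a simple distance-based novelty check as a stand-in for the paper's anomaly detection technique; the class, threshold rule, and toy trainer are assumptions made for illustration.

```python
import numpy as np

class PolicyBank:
    """Illustrative policy bank: reuse a policy when a new site's traffic
    features look in-distribution for that policy, otherwise train a new one."""
    def __init__(self, threshold=2.0):
        self.entries = []            # list of (policy_name, training_features)
        self.threshold = threshold   # max normalized distance allowed for reuse

    def compatible(self, feats, train_feats):
        mean, std = train_feats.mean(axis=0), train_feats.std(axis=0) + 1e-8
        # A large normalized distance from the policy's training data is
        # treated as an anomaly, i.e. the policy should not be reused here.
        return np.max(np.abs((feats - mean) / std)) < self.threshold

    def assign(self, site_feats, train_new_policy):
        for name, train_feats in self.entries:
            if self.compatible(site_feats, train_feats):
                return name                       # reuse an existing policy
        name = f"policy_{len(self.entries)}"
        self.entries.append((name, train_new_policy(site_feats)))
        return name

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical trainer: returns the traffic samples the policy was trained on.
    trainer = lambda feats: feats + 0.1 * rng.normal(size=(50, feats.size))
    bank = PolicyBank()
    print(bank.assign(np.array([1.0, 0.5]), trainer))   # policy_0 (new)
    print(bank.assign(np.array([1.1, 0.6]), trainer))   # policy_0 reused
    print(bank.assign(np.array([9.0, 9.0]), trainer))   # policy_1 (new)
```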
AdaTeacher: Adaptive Multi-Teacher Weighting for Communication Load Forecasting
Chengming Hu
Ju Wang
Di Wu
Yan Xin
Charlie Zhang
To deal with notorious delays in communication systems, it is crucial to forecast key system characteristics, such as the communication load. Most existing studies aggregate data from multiple edge nodes for improving the forecasting accuracy. However, the bandwidth cost of such data aggregation could be unacceptably high from the perspective of system operators. To achieve both high forecasting accuracy and bandwidth efficiency, this paper proposes an Adaptive Multi-Teacher Weighting in Teacher-Student Learning approach, namely AdaTeacher, for communication load forecasting of multiple edge nodes. Each edge node trains a local model on its own data. A target node collects multiple models from its neighbor nodes and treats these models as teachers. Then, the target node trains a student model from teachers via Teacher-Student (T-S) learning. Unlike most existing T-S learning approaches that treat teachers evenly, resulting in limited performance, AdaTeacher introduces a bilevel optimization algorithm to dynamically learn an importance weight for each teacher toward a more effective and accurate T-S learning process. Compared to the state-of-the-art methods, AdaTeacher not only reduces the bandwidth cost by 53.85%, but also improves the load forecasting accuracy by 21.56% and 24.24% on two real-world datasets.
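The sketch below shows a simplified weighted multi-teacher distillation loop in PyTorch, with one learnable importance weight per teacher. Note that it trains the weights jointly with the student rather than via the bilevel optimization the abstract describes, and the toy models, data, and loss are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup: forecast a scalar load from an 8-dimensional feature vector.
teachers = [nn.Linear(8, 1) for _ in range(3)]      # stand-ins for neighbor models
student = nn.Linear(8, 1)
teacher_logits_w = nn.Parameter(torch.zeros(3))      # one learnable weight per teacher

opt = torch.optim.Adam(list(student.parameters()) + [teacher_logits_w], lr=1e-2)
x, y = torch.randn(64, 8), torch.randn(64, 1)

for step in range(100):
    with torch.no_grad():
        t_preds = torch.stack([t(x) for t in teachers])        # (3, 64, 1)
    w = torch.softmax(teacher_logits_w, dim=0).view(3, 1, 1)   # importance weights
    soft_target = (w * t_preds).sum(dim=0)                     # weighted teacher forecast
    pred = student(x)
    # Distillation loss: match the weighted teachers; supervised loss: match labels.
    loss = F.mse_loss(pred, soft_target) + F.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned teacher weights:", torch.softmax(teacher_logits_w, dim=0).detach())
```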
Energy Saving in Cellular Wireless Networks via Transfer Deep Reinforcement Learning
Di Wu
Yi Tian Xu
M. Jenkin
Seowoo Jang
Ekram Hossain
With the increasing use of data-intensive mobile applications and the growing number of mobile users, the demand for wireless data services has been increasing exponentially in recent years. In order to address this demand, a large number of new cellular base stations are being deployed around the world, leading to a significant increase in energy consumption and greenhouse gas emission. Consequently, energy consumption has emerged as a key concern in the fifth-generation (5G) network era and beyond. Reinforcement learning (RL), which aims to learn a control policy via interacting with the environment, has been shown to be effective in addressing network optimization problems. However, for reinforcement learning, especially deep reinforcement learning, a large number of interactions with the environment are required. This often limits its applicability in the real world. In this work, to better deal with dynamic traffic scenarios and improve real-world applicability, we propose a transfer deep reinforcement learning framework for energy optimization in cellular communication networks. Specifically, we first pre-train a set of RL-based energy-saving policies on source base stations and then transfer the most suitable policy to the given target base station in an unsupervised learning manner. Experimental results demonstrate that base station energy consumption can be reduced significantly using this approach.
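As a toy illustration of the transfer step, the sketch below picks the pre-trained source policy whose traffic profile most resembles the target base station's; the matching rule, policy names, and profiles are invented for this example and are not the paper's unsupervised transfer method.

```python
import numpy as np

def select_source_policy(target_traffic, source_policies):
    """Pick the pre-trained policy whose source traffic profile is closest to
    the target base station's traffic (illustrative nearest-profile rule)."""
    def distance(profile):
        return np.linalg.norm(np.asarray(profile) - np.asarray(target_traffic))
    return min(source_policies, key=lambda name: distance(source_policies[name]))

if __name__ == "__main__":
    # Hourly traffic profiles (toy, 6 buckets) for three source base stations.
    source_policies = {
        "residential_policy": [0.2, 0.1, 0.3, 0.6, 0.9, 0.8],
        "business_policy":    [0.1, 0.7, 0.9, 0.9, 0.5, 0.2],
        "stadium_policy":     [0.0, 0.1, 0.1, 0.2, 1.0, 0.9],
    }
    target = [0.15, 0.6, 0.85, 0.9, 0.55, 0.25]
    print("transfer from:", select_source_policy(target, source_policies))
```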
Learning to Adapt: Communication Load Balancing via Adaptive Deep Reinforcement Learning
Di Wu
Yi Tian Xu
Jimmy Li
M. Jenkin
Ekram Hossain
Seowoo Jang
Yan Xin
Charlie Zhang
The association of mobile devices with network resources (e.g., base stations, frequency bands/channels), known as load balancing, is critical for reducing communication traffic congestion and improving network performance. Reinforcement learning (RL) has been shown to be effective for communication load balancing and achieves better performance than currently used rule-based methods, especially when the traffic load changes quickly. However, RL-based methods usually need to interact with the environment for a large number of time steps to learn an effective policy and can be difficult to tune. In this work, we aim to improve the data efficiency of RL-based solutions to make them more suitable and applicable for real-world applications. Specifically, we propose a simple, yet efficient and effective deep RL-based wireless network load balancing framework. In this solution, a set of good initialization values for control actions is selected with some cost-efficient approach to center the training of the RL agent. Then, a deep RL-based agent is trained to find offsets from the initialization values that optimize the load balancing problem. Experimental evaluation on a set of dynamic traffic scenarios demonstrates the effectiveness and efficiency of the proposed method.
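The sketch below illustrates the "offsets from a good initialization" idea in isolation: the agent's raw action is clipped and added to rule-based starting thresholds. Parameter names, ranges, and values are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def apply_offset_action(init_thresholds, offset_action, max_offset=3.0,
                        low=-10.0, high=10.0):
    """Combine rule-based initial handover thresholds (dB) with the RL agent's
    learned offsets, keeping both the offsets and the final values bounded."""
    offsets = np.clip(offset_action, -max_offset, max_offset)
    return np.clip(np.asarray(init_thresholds) + offsets, low, high)

if __name__ == "__main__":
    init = [2.0, 0.0, -1.0]                  # cost-efficiently chosen starting point
    rl_action = np.array([0.7, -0.4, 5.0])   # raw agent output (last entry is clipped)
    print(apply_offset_action(init, rl_action))   # [ 2.7 -0.4  2. ]
```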