Publications

Accelerating Digital Twin Calibration with Warm-Start Bayesian Optimization
Abhisek Konar
Amal Feriani
Di Wu
Seowoo Jang
Digital twins are expected to play an important role in the widespread adoption of AI-based networking solutions in the real world. The calibration of these virtual replicas is critical to ensure a trustworthy replication of the real environment. This work focuses on the input parameter calibration of radio access network (RAN) simulators using real network performance metrics as supervision signals. Usually, the RAN digital twin is considered a black-box function and each calibration problem is viewed as a standalone search problem. RAN simulators are slow and non-differentiable, often constituting the bottleneck in the execution time of these search problems. In this work, we aim to accelerate the search process by reducing the number of interactions with the simulator, leveraging RAN interactions from previous problems. We present a sequential Bayesian optimization framework that uses information from the past to warm-start the calibration process. Assuming that the network performance exhibits gradual and periodic changes, the stored information can be reused in future calibrations. We test our method across multiple physical sites over one week and show that, using the proposed framework, we can obtain better calibration with fewer interactions with the simulator during the search phase.
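A minimal sketch of the warm-start idea, assuming a scikit-learn Gaussian process surrogate, an expected-improvement acquisition over random candidates, and a toy stand-in for the RAN simulator; the reuse-all-past-observations strategy and the parameter bounds are illustrative assumptions, not the paper's exact method:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator_loss(x):
    # Stand-in for the RAN simulator: mismatch between simulated and
    # measured KPIs for calibration parameters x (hypothetical).
    return np.sum((x - 0.3) ** 2)

def expected_improvement(gp, X_cand, y_best):
    # EI for minimization: expected reduction below the current best loss.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
dim = 3

# Warm start: (parameters, loss) pairs stored from previous calibrations.
X_hist = rng.uniform(0, 1, size=(20, dim))
y_hist = np.array([simulator_loss(x) for x in X_hist])

X, y = X_hist.copy(), y_hist.copy()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):  # few new simulator calls thanks to the warm start
    gp.fit(X, y)
    X_cand = rng.uniform(0, 1, size=(256, dim))
    ei = expected_improvement(gp, X_cand, y.min())
    x_next = X_cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, simulator_loss(x_next))

print("best parameters:", X[np.argmin(y)], "loss:", y.min())
```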
Adaptive Dynamic Programming for Energy-Efficient Base Station Cell Switching
Junliang Luo
Yi Tian Xu
Di Wu
M. Jenkin
Energy saving in wireless networks is growing in importance due to increasing demand for evolving new-generation cellular networks, environmental and regulatory concerns, and potential energy crises arising from geopolitical tensions. In this work, we propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch the cells of base stations on and off, reducing network power consumption while maintaining adequate Quality of Service (QoS) metrics. We use a multilayer perceptron (MLP) that, given each state-action pair, predicts the power consumption, approximating the value function in ADP for selecting the action with the best expected power saving. To maximize power savings without deteriorating QoS, we include another MLP to predict QoS and a long short-term memory (LSTM) network to predict handovers, both incorporated into an online optimization algorithm that produces an adaptive QoS threshold for filtering cell-switching actions based on the overall QoS history. The performance of the method is evaluated using a practical network simulator with various real-world scenarios with dynamic traffic patterns.
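A minimal sketch of the action-selection step described above, assuming small PyTorch MLPs for the power and QoS predictors and a simple mean-minus-margin rule for the adaptive threshold; the LSTM handover predictor is omitted and all inputs are synthetic:

```python
import torch
import torch.nn as nn

N_CELLS, STATE_DIM = 4, 12

def mlp(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

power_net = mlp(STATE_DIM + N_CELLS)  # predicts power for a (state, action) pair
qos_net = mlp(STATE_DIM + N_CELLS)    # predicts QoS for a (state, action) pair

def select_action(state, qos_history, margin=0.05):
    # Adaptive QoS threshold from recent history (assumed rule: mean minus margin).
    threshold = torch.tensor(qos_history).mean() - margin
    # Enumerate all on/off configurations of the cells (2**N_CELLS candidates).
    actions = torch.cartesian_prod(*[torch.tensor([0.0, 1.0])] * N_CELLS)
    inputs = torch.cat([state.expand(len(actions), -1), actions], dim=1)
    with torch.no_grad():
        power = power_net(inputs).squeeze(-1)
        qos = qos_net(inputs).squeeze(-1)
    feasible = qos >= threshold
    if not feasible.any():              # fall back to keeping every cell on
        return torch.ones(N_CELLS)
    power[~feasible] = float("inf")     # filter out actions predicted to hurt QoS
    return actions[power.argmin()]

state = torch.randn(STATE_DIM)
print("chosen on/off pattern:", select_action(state, qos_history=[0.90, 0.92, 0.88]))
```

Exhaustively enumerating the on/off configurations is only tractable for small cell counts; a deployed system would restrict the candidate set.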
Anomaly Detection for Scalable Task Grouping in Reinforcement Learning-based RAN Optimization
Jimmy Li
Igor Kozlov
Di Wu
The use of learning-based methods for optimizing cellular radio access networks (RAN) has received increasing attention in recent years. This coincides with a rapid increase in the number of cell sites worldwide, driven largely by dramatic growth in cellular network traffic. Training and maintaining learned models that work well across a large number of cell sites has thus become a pertinent problem. This paper proposes a scalable framework for constructing a reinforcement learning policy bank that can perform RAN optimization across a large number of cell sites with varying traffic patterns. Central to our framework is a novel application of anomaly detection techniques to assess the compatibility between sites (tasks) and the policy bank. This allows our framework to intelligently identify when a policy can be reused for a task, and when a new policy needs to be trained and added to the policy bank. Our results show that our approach to compatibility assessment leads to an efficient use of computational resources, by allowing us to construct a performant policy bank without exhaustively training on all tasks, which makes it applicable under real-world constraints.
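A minimal sketch of the compatibility check, assuming scikit-learn's IsolationForest over per-site traffic features; the feature choice, detector, and data are illustrative stand-ins for the paper's anomaly detection technique:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Traffic-pattern features (e.g., hourly load statistics) per site.
sites_served = rng.normal(loc=0.0, scale=1.0, size=(50, 8))  # sites the policy serves
new_sites = np.vstack([
    rng.normal(0.0, 1.0, size=(3, 8)),   # similar traffic -> reuse the policy
    rng.normal(5.0, 1.0, size=(3, 8)),   # very different traffic -> train a new one
])

# Fit the detector on the sites where the existing policy performs well.
detector = IsolationForest(random_state=0).fit(sites_served)
compatible = detector.predict(new_sites) == 1  # +1 = inlier, -1 = anomaly

for i, ok in enumerate(compatible):
    print(f"site {i}: {'reuse existing policy' if ok else 'train new policy'}")
```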
Optimizing Energy Saving for Wireless Networks Via Offline Decision Transformer
Yi Tian Xu
Di Wu
M. Jenkin
Seowoo Jang
With the global aim of reducing carbon emissions, energy saving for communication systems has gained tremendous attention. Efficient energy-saving solutions must not only accommodate the fast growth in communication demand but also contend with the complex nature of load dynamics. Recent reinforcement learning (RL)-based methods have shown promising performance for network optimization problems, such as base station energy saving. However, a major limitation of these methods is the requirement of online exploration of potential solutions using a high-fidelity simulator, or the need to perform exploration in a real-world environment. We circumvent this issue by proposing an offline reinforcement learning energy saving (ORES) framework that allows us to learn an efficient control policy using previously collected data. We first deploy a behavior energy-saving policy on base stations and generate a set of interaction experiences. Then, using a robust deep offline reinforcement learning algorithm, we learn an energy-saving control policy based on the collected experiences. Results from experiments conducted on a diverse collection of communication scenarios with different behavior policies showcase the effectiveness of the proposed energy-saving algorithms.
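A minimal sketch of the offline phase, using return-conditioned behavior cloning as a simplified stand-in for the paper's offline RL algorithm; the shapes, the synthetic log, and the deployment-time target return are illustrative assumptions:

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 3  # e.g., three base-station sleep-mode levels
policy = nn.Sequential(
    nn.Linear(STATE_DIM + 1, 64), nn.ReLU(),  # +1 input for the target return
    nn.Linear(64, N_ACTIONS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Logged dataset from the deployed behavior policy (synthetic here).
states = torch.randn(1024, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (1024,))
returns = torch.randn(1024, 1)  # observed returns of the logged trajectories

# Learn to imitate the logged actions, conditioned on their returns,
# without any further interaction with the environment.
for epoch in range(5):
    logits = policy(torch.cat([states, returns], dim=1))
    loss = nn.functional.cross_entropy(logits, actions)
    opt.zero_grad(); loss.backward(); opt.step()

# At deployment, condition on a high target return to request strong savings.
target = returns.max().reshape(1, 1)
state_new = torch.randn(1, STATE_DIM)
action = policy(torch.cat([state_new, target], dim=1)).argmax(dim=-1)
print("selected energy-saving action:", action.item())
```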
PEOPLEx: PEdestrian Opportunistic Positioning LEveraging IMU, UWB, BLE and WiFi
Pierre-Yves Lajoie
Bobak H. Baghi
Sachini Herath
Francois Hogan
This paper advances the field of pedestrian localization by introducing a unifying framework for opportunistic positioning based on nonlinear factor graph optimization. While many existing approaches assume constant availability of one or multiple sensing signals, our methodology employs IMU-based pedestrian inertial navigation as the backbone for sensor fusion, opportunistically integrating Ultra-Wideband (UWB), Bluetooth Low Energy (BLE), and WiFi signals when they are available in the environment. The proposed PEOPLEx framework is designed to incorporate sensing data as it becomes available, operating without any prior knowledge about the environment (e.g., anchor locations, radio frequency maps, etc.). Our contributions are twofold: 1) we introduce an opportunistic multi-sensor and real-time pedestrian positioning framework fusing the available sensor measurements; 2) we develop novel factors for adaptive scaling and coarse loop closures, significantly improving the precision of indoor positioning. Experimental validation confirms that our approach achieves accurate localization estimates in real indoor scenarios using commercial smartphones.
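A minimal sketch of opportunistic fusion posed as nonlinear least squares: IMU odometry factors link consecutive 2-D poses, and UWB range factors are added only at the steps where a measurement happened to arrive. The anchor position is assumed known here purely for illustration (PEOPLEx itself operates without prior anchor knowledge), and the noise weights and measurements are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

T = 10
odom = np.tile([1.0, 0.0], (T - 1, 1))  # IMU-derived per-step displacement estimates
anchor = np.array([5.0, 3.0])           # UWB anchor (assumed known for this sketch)
uwb = {3: 4.2, 7: 2.5}                  # ranges observed only at steps 3 and 7

def residuals(flat):
    poses = flat.reshape(T, 2)
    res = [poses[0]]                                 # prior pinning the first pose
    for t in range(T - 1):                           # odometry factors
        res.append((poses[t + 1] - poses[t] - odom[t]) / 0.1)
    for t, r in uwb.items():                         # opportunistic range factors
        res.append([(np.linalg.norm(poses[t] - anchor) - r) / 0.3])
    return np.concatenate([np.atleast_1d(x) for x in res])

sol = least_squares(residuals, np.zeros(T * 2))
print(sol.x.reshape(T, 2))
```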
Probabilistic Mobility Load Balancing for Multi-band 5G and Beyond Networks
Saria Al Lahham
Di Wu
Ekram Hossain
Promoting Exploration in Memory-Augmented Adam using Critical Momenta
Pranshu Malviya
Goncalo Mordido
Aristide Baratin
Reza Babanezhad Harikandeh
Jerry Huang
Razvan Pascanu
Adaptive gradient-based optimizers, particularly Adam, have left their mark in training large-scale deep learning models. The strength of such optimizers is that they exhibit fast convergence while being more robust to hyperparameter choice. However, they often generalize worse than non-adaptive methods. Recent studies have tied this performance gap to the sharpness of the selected minima: adaptive methods tend to find solutions in sharper basins of the loss landscape, which in turn hurts generalization. To overcome this issue, we propose a new memory-augmented version of Adam that promotes exploration towards flatter minima by using a buffer of critical momentum terms during training. Intuitively, the use of the buffer makes the optimizer overshoot outside the basin of attraction if it is not wide enough. We empirically show that our method improves the performance of several variants of Adam on standard supervised language modelling and image classification tasks.
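A minimal sketch of the idea, assuming a largest-norm rule as a stand-in for the paper's criticality criterion and a fixed mixing coefficient; neither is the paper's exact formulation:

```python
import torch

def madam_step(param, grad, state, lr=0.1, betas=(0.9, 0.999),
               eps=1e-8, buffer_size=5, mix=0.5):
    m, v, buf, t = state["m"], state["v"], state["buffer"], state["t"] + 1
    m = betas[0] * m + (1 - betas[0]) * grad
    v = betas[1] * v + (1 - betas[1]) * grad ** 2
    # Keep a buffer of "critical" momenta; here simply the largest-norm ones.
    buf.append(m.clone())
    buf.sort(key=lambda x: x.norm().item(), reverse=True)
    del buf[buffer_size:]
    # Mix buffered momenta into the update so the step can overshoot
    # out of narrow basins of attraction.
    m_agg = (1 - mix) * m + mix * torch.stack(buf).mean(0)
    m_hat = m_agg / (1 - betas[0] ** t)
    v_hat = v / (1 - betas[1] ** t)
    param -= lr * m_hat / (v_hat.sqrt() + eps)
    state.update(m=m, v=v, t=t)

w = torch.tensor([2.0, -1.5])
state = dict(m=torch.zeros(2), v=torch.zeros(2), buffer=[], t=0)
for _ in range(100):
    madam_step(w, grad=2 * w, state=state)  # gradient of the toy loss ||w||^2
print("w after 100 steps:", w)
```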
Tensor-based Space Debris Detection for Satellite Mega-constellations
Olivier Daoust
Hasan Nayir
Irfan Azam
Gunes Karabulut Kurt
Why Don't Prompt-Based Fairness Metrics Correlate?
Abdelrahman Zayed
Goncalo Mordido
Ioana Baldini
The widespread use of large language models has brought up essential questions about the potential biases these models might learn. This led to the development of several metrics aimed at evaluating and mitigating these biases. In this paper, we first demonstrate that prompt-based fairness metrics exhibit poor agreement, as measured by correlation, raising important questions about the reliability of fairness assessment using prompts. Then, we outline six relevant reasons why such a low correlation is observed across existing metrics. Based on these insights, we propose a method called Correlated Fairness Output (CAIRO) to enhance the correlation between fairness metrics. CAIRO augments the original prompts of a given fairness metric by using several pre-trained language models and then selects the combination of the augmented prompts that achieves the highest correlation across metrics. We show a significant improvement in Pearson correlation from 0.3 and 0.18 to 0.90 and 0.98 across metrics for gender and religion biases, respectively. Our code is available at https://github.com/chandar-lab/CAIRO.
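A minimal sketch of the selection step, with synthetic bias scores standing in for real metric evaluations over a pool of models; the prompt augmentation itself (paraphrasing with pre-trained language models) is omitted:

```python
import itertools
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_candidates = 6  # candidate augmented-prompt sets

# scores[m][c] = bias scores metric m assigns when using candidate set c,
# evaluated over a pool of 10 models (synthetic here).
scores = {m: rng.uniform(size=(n_candidates, 10)) for m in ("metric_a", "metric_b")}

# Search for the combination of prompt sets whose scores agree best
# across the two metrics, measured by Pearson correlation.
best = None
for combo in itertools.combinations(range(n_candidates), 3):
    a = scores["metric_a"][list(combo)].mean(axis=0)
    b = scores["metric_b"][list(combo)].mean(axis=0)
    r, _ = pearsonr(a, b)
    if best is None or r > best[0]:
        best = (r, combo)

print(f"best combination {best[1]} with Pearson r = {best[0]:.2f}")
```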
A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques
Megh Thakkar
Quentin Fournier
Matthew D Riemer
Pin-Yu Chen
Payel Das
Online Continual Learning of Video Diffusion Models From a Single Video Stream
Jinsoo Yoo
Dylan Green
Geoff Pleiss
Recurrent Policies Are Not Enough for Continual Reinforcement Learning
Nathan Samuel de Lara
Veronica Chelu
Continual Reinforcement Learning (CRL) aims to develop algorithms that adapt to non-stationary sequences of tasks. A promising recent approach utilizes Recurrent Neural Networks (RNNs) to learn contextual Markov Decision Process (MDP) embeddings. This enables a reinforcement learning (RL) agent to discern the optimality of actions across diverse tasks. In this study, we examine two critical failure modes in the learning of these contextual MDP embeddings. Specifically, we find that RNNs are prone to catastrophic forgetting, manifesting in two distinct ways: (i) embedding collapse---where agents initially learn a contextual task structure that later collapses to a single task, and (ii) embedding drift---where learning embeddings for new MDPs interferes with embeddings the RNN outputs for previous MDPs in the sequence, leading to suboptimal performance of downstream policy networks conditioned on stale embeddings. We explore the effects of various objective functions and network architectures on these failure modes, revealing that one of the modes consistently emerges across different setups.
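A minimal sketch for observing embedding drift, assuming a GRU encoder and a fixed probe batch from an earlier task; the training objective and data are synthetic stand-ins:

```python
import torch
import torch.nn as nn

encoder = nn.GRU(input_size=4, hidden_size=16, batch_first=True)

def embed(probe):
    # Use the final hidden state as the contextual MDP embedding.
    with torch.no_grad():
        _, h = encoder(probe)
    return h.squeeze(0)

probe_task0 = torch.randn(8, 20, 4)  # fixed transition sequences from task 0
reference = embed(probe_task0)       # embedding before training moves on

opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
for step in range(50):               # continued training on a later task's data
    out, _ = encoder(torch.randn(8, 20, 4))
    loss = out.pow(2).mean()         # stand-in training objective
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 10 == 0:
        # Drift: how far the task-0 embedding has moved from its reference.
        drift = 1 - nn.functional.cosine_similarity(
            embed(probe_task0), reference, dim=-1).mean()
        print(f"step {step}: task-0 embedding drift = {drift:.3f}")
```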