Publications

Deep Neural Networks pruning via the Structured Perspective Regularization
Matteo Cacciola
Antonio Frangioni
Xinlin Li
Towards Causal Representations of Climate Model Data
Julien Boussard
Chandni Nagda
Julia Kaltenborn
Charlotte Emilie Elektra Lange
Philippe Brouillard
Yaniv Gurwicz
Peer Nowack
Climate models, such as Earth system models (ESMs), are crucial for simulating future climate change based on projected Shared Socioeconomic Pathways (SSP) greenhouse gas emissions scenarios. While ESMs are sophisticated and invaluable, machine learning-based emulators trained on existing simulation data can project additional climate scenarios much faster and are computationally efficient. However, they often lack generalizability and interpretability. This work delves into the potential of causal representation learning, specifically the Causal Discovery with Single-parent Decoding (CDSD) method, which could render climate model emulation efficient and interpretable. We evaluate CDSD on multiple climate datasets, focusing on emissions, temperature, and precipitation. Our findings shed light on the challenges, limitations, and promise of using CDSD as a stepping stone towards more interpretable and robust climate model emulation.
AdaTeacher: Adaptive Multi-Teacher Weighting for Communication Load Forecasting
Chengming Hu
Ju Wang
Di Wu
Yan Xin
Charlie Zhang
To deal with notorious delays in communication systems, it is crucial to forecast key system characteristics, such as the communication load. Most existing studies aggregate data from multiple edge nodes to improve forecasting accuracy. However, the bandwidth cost of such data aggregation can be unacceptably high from the perspective of system operators. To achieve both high forecasting accuracy and bandwidth efficiency, this paper proposes an Adaptive Multi-Teacher Weighting in Teacher-Student Learning approach, namely AdaTeacher, for communication load forecasting of multiple edge nodes. Each edge node trains a local model on its own data. A target node collects multiple models from its neighbor nodes and treats these models as teachers. Then, the target node trains a student model from the teachers via Teacher-Student (T-S) learning. Unlike most existing T-S learning approaches, which treat all teachers equally and thus achieve limited performance, AdaTeacher introduces a bilevel optimization algorithm to dynamically learn an importance weight for each teacher, leading to a more effective and accurate T-S learning process. Compared to state-of-the-art methods, AdaTeacher not only reduces the bandwidth cost by 53.85%, but also improves the load forecasting accuracy by 21.56% and 24.24% on two real-world datasets.
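The bilevel teacher-weighting idea described in this abstract can be pictured with a short sketch. The following is a minimal, simplified alternating approximation in PyTorch; the model interfaces, the MSE-based distillation loss, and the alternating update schedule are assumptions for exposition, not the authors' implementation. The inner step fits the student to a weighted mixture of teacher predictions, and the outer step adjusts the teacher weights on held-out data.

    # Illustrative sketch only: an alternating approximation of the bilevel
    # teacher-weighting idea. Model interfaces, losses, and hyperparameters are
    # assumed for exposition and are not the authors' implementation.
    import torch
    import torch.nn as nn

    def adaptive_ts_training(student, teachers, train_loader, val_loader,
                             epochs=10, lr=1e-3, weight_lr=1e-2):
        w_logits = torch.zeros(len(teachers), requires_grad=True)  # teacher importance logits
        opt_student = torch.optim.Adam(student.parameters(), lr=lr)
        opt_weights = torch.optim.Adam([w_logits], lr=weight_lr)
        mse = nn.MSELoss()

        for _ in range(epochs):
            # Inner step: fit the student to ground truth plus the weighted teacher ensemble.
            for x, y in train_loader:
                weights = torch.softmax(w_logits, dim=0).detach()
                with torch.no_grad():
                    teacher_preds = torch.stack([t(x) for t in teachers])  # (T, B, 1)
                soft_target = (weights.view(-1, 1, 1) * teacher_preds).sum(dim=0)
                pred = student(x)
                loss = mse(pred, y) + mse(pred, soft_target)
                opt_student.zero_grad()
                loss.backward()
                opt_student.step()

            # Outer step: adjust teacher weights so the weighted ensemble fits held-out data.
            for x, y in val_loader:
                weights = torch.softmax(w_logits, dim=0)
                with torch.no_grad():
                    teacher_preds = torch.stack([t(x) for t in teachers])
                val_loss = mse((weights.view(-1, 1, 1) * teacher_preds).sum(dim=0), y)
                opt_weights.zero_grad()
                val_loss.backward()
                opt_weights.step()

        return student, torch.softmax(w_logits, dim=0).detach()

The intent, as the abstract describes, is that teachers whose data poorly match the target node end up with small weights instead of being averaged in evenly.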
Energy Saving in Cellular Wireless Networks via Transfer Deep Reinforcement Learning
Di Wu
Yi Tian Xu
M. Jenkin
Seowoo Jang
Ekram Hossain
With the increasing use of data-intensive mobile applications and the growing number of mobile users, the demand for wireless data services has been increasing exponentially in recent years. In order to address this demand, a large number of new cellular base stations are being deployed around the world, leading to a significant increase in energy consumption and greenhouse gas emissions. Consequently, energy consumption has emerged as a key concern in the fifth-generation (5G) network era and beyond. Reinforcement learning (RL), which aims to learn a control policy by interacting with the environment, has been shown to be effective in addressing network optimization problems. However, reinforcement learning, and especially deep reinforcement learning, requires a large number of interactions with the environment, which often limits its applicability in the real world. In this work, to better deal with dynamic traffic scenarios and improve real-world applicability, we propose a transfer deep reinforcement learning framework for energy optimization in cellular communication networks. Specifically, we first pre-train a set of RL-based energy-saving policies on source base stations and then transfer the most suitable policy to a given target base station in an unsupervised manner. Experimental results demonstrate that base station energy consumption can be reduced significantly using this approach.
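As a rough illustration of the transfer step described above, one plausible realization is to pick the source policy whose traffic profile is closest to the target base station and then fine-tune it on the target environment. The Euclidean similarity metric, the policy.act/policy.update interface, and the environment API below are hypothetical placeholders, not the paper's method.

    # Illustrative sketch only: select the pre-trained source policy whose traffic
    # profile is most similar to the target base station, then fine-tune it there.
    import numpy as np

    def select_source_policy(source_policies, source_profiles, target_profile):
        """Each profile is a feature vector of traffic statistics for one base station."""
        distances = [np.linalg.norm(np.asarray(p) - np.asarray(target_profile))
                     for p in source_profiles]
        best = int(np.argmin(distances))
        return source_policies[best], best

    def transfer_and_finetune(policy, target_env, finetune_steps=10_000):
        obs = target_env.reset()
        for _ in range(finetune_steps):
            action = policy.act(obs)
            obs, reward, done, info = target_env.step(action)
            policy.update(obs, action, reward)  # stand-in for any off-the-shelf RL update
            if done:
                obs = target_env.reset()
        return policy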
Exhaustive Evaluation of Dynamic Link Prediction
Farimah Poursafaei
Dynamic link prediction is a crucial task in the study of evolving graphs, which serve as abstract models for various real-world applications. Recent dynamic graph representation learning models have claimed near-perfect performance on this task. However, we argue that the standard evaluation strategy for dynamic link prediction overlooks the sparsity and recurrence patterns inherent in dynamic networks. Specifically, the current strategy suffers from issues such as evaluating models on a balanced set of positive and negative edges, neglecting the reassessment of frequently recurring positive edges, and lacking a comprehensive evaluation of both recurring and new edges. To address these limitations, we propose a novel evaluation strategy called EXHAUSTIVE, which takes into account all relevant negative edges and separately assesses the performance on recurring and new edges. Using our proposed evaluation strategy, we compare the performance of five state-of-the-art dynamic graph learning models on seven benchmark datasets. Compared to the previous common evaluation strategy, we observe an average drop of 62% in Average Precision for dynamic link prediction. Additionally, the ranking of the models also changes under the new evaluation setting. Furthermore, we demonstrate that while all models perform considerably worse when predicting new edges compared to recurring ones, the best-performing models differ between the two scenarios. This highlights the importance of employing the proposed evaluation strategy for both the assessment and design of dynamic link prediction models. By adopting our novel evaluation strategy, researchers can obtain a more accurate understanding of model performance in dynamic link prediction, leading to improved evaluation and design of such models.
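The core of the evaluation strategy summarized above, ranking every positive edge against all relevant negatives and reporting recurring and new edges separately, can be sketched as follows. This is a minimal illustration under assumed edge and scoring interfaces, not the paper's reference implementation.

    # Illustrative sketch only: compute Average Precision against all relevant
    # negative edges, reported separately for recurring and new positive edges.
    import numpy as np
    from sklearn.metrics import average_precision_score

    def split_positives(positive_edges, historical_edges):
        seen = set(historical_edges)
        recurring = [e for e in positive_edges if e in seen]
        new = [e for e in positive_edges if e not in seen]
        return recurring, new

    def exhaustive_ap(score_fn, positives, all_negatives):
        # Rank each positive edge against the full set of relevant negative edges.
        if not positives:
            return float("nan")
        y_true = np.concatenate([np.ones(len(positives)), np.zeros(len(all_negatives))])
        y_score = np.array([score_fn(u, v) for u, v in positives] +
                           [score_fn(u, v) for u, v in all_negatives])
        return average_precision_score(y_true, y_score)

    def evaluate(score_fn, positive_edges, historical_edges, all_negatives):
        recurring, new = split_positives(positive_edges, historical_edges)
        return {"AP_recurring": exhaustive_ap(score_fn, recurring, all_negatives),
                "AP_new": exhaustive_ap(score_fn, new, all_negatives)}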
Learning to Adapt: Communication Load Balancing via Adaptive Deep Reinforcement Learning
Di Wu
Yi Tian Xu
Jimmy Li
M. Jenkin
Ekram Hossain
Seowoo Jang
Yan Xin
Charlie Zhang
The association of mobile devices with network resources (e.g., base stations, frequency bands/channels), known as load balancing, is critical to reducing communication traffic congestion and improving network performance. Reinforcement learning (RL) has been shown to be effective for communication load balancing and achieves better performance than currently used rule-based methods, especially when the traffic load changes quickly. However, RL-based methods usually need to interact with the environment for a large number of time steps to learn an effective policy and can be difficult to tune. In this work, we aim to improve the data efficiency of RL-based solutions to make them more suitable and applicable for real-world deployment. Specifically, we propose a simple, yet efficient and effective deep RL-based wireless network load balancing framework. In this solution, a set of good initialization values for the control actions is first selected with a cost-efficient approach to center the training of the RL agent. Then, a deep RL-based agent is trained to find offsets from these initialization values that optimize the load balancing objective. Experimental evaluation on a set of dynamic traffic scenarios demonstrates the effectiveness and efficiency of the proposed method.
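The offset-from-initialization idea described above can be expressed as a thin action wrapper around the environment, so that the agent only explores a bounded neighborhood of a known-good operating point. The sketch below assumes a gymnasium-style environment with a continuous (Box) action space; the wrapper class and parameter names are illustrative, not the paper's implementation.

    # Illustrative sketch only: the agent outputs a bounded offset that is added to
    # a pre-selected baseline action (e.g., handover parameters found by a cheap
    # heuristic). Assumes a gymnasium-style environment with a Box action space.
    import numpy as np
    import gymnasium as gym

    class OffsetActionWrapper(gym.ActionWrapper):
        def __init__(self, env, baseline_action, max_offset):
            super().__init__(env)
            self.baseline = np.asarray(baseline_action, dtype=np.float32)
            self.max_offset = float(max_offset)
            # The agent now explores offsets in [-max_offset, +max_offset] per dimension.
            self.action_space = gym.spaces.Box(low=-self.max_offset, high=self.max_offset,
                                               shape=self.baseline.shape, dtype=np.float32)

        def action(self, offset):
            # Map the agent's offset back into the environment's native action range.
            low, high = self.env.action_space.low, self.env.action_space.high
            return np.clip(self.baseline + offset, low, high)

Centering exploration on the baseline in this way shrinks the effective search space, which is the data-efficiency benefit the abstract describes.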
A Machine Learning Based Approach to Detect Machine Learning Design Patterns
Weitao Pan
Hironori Washizaki
Nobukazu Yoshioka
Yoshiaki Fukazawa
Yann‐Gaël Guéhéneuc
As machine learning expands to various domains, the demand for reusable solutions to similar problems increases. Machine learning design patterns are reusable solutions to recurring design problems in machine learning applications, and they can significantly enhance programmers' productivity when developing software that relies on machine learning algorithms. Given their critical role, automated detection of these patterns becomes equally vital, since identifying them manually is time-consuming and error-prone. We propose an approach to detect their occurrences in Python files. Our approach uses the Abstract Syntax Trees (ASTs) of Python files to build a corpus of data and trains a refined Text-CNN model to automatically identify machine learning design patterns. We empirically validate our approach in an exploratory study detecting four common machine learning design patterns: Embedding, Multilabel, Feature Cross, and Hashed Feature. We manually label 450 Python code files containing these design patterns from GitHub project repositories. Our approach achieves accuracy values ranging from 80% to 92% across the four patterns.
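To make the corpus-building step concrete, the sketch below shows one plausible way to flatten a Python file's AST into a token sequence for a text classifier such as a Text-CNN. The tokenization scheme is an assumption for illustration; the paper's exact encoding may differ.

    # Illustrative sketch only: flatten a Python file's AST into a token sequence
    # (node types plus called function names) that a Text-CNN could consume.
    import ast

    def ast_tokens(source_code: str) -> list[str]:
        tree = ast.parse(source_code)
        tokens = []
        for node in ast.walk(tree):
            tokens.append(type(node).__name__)        # e.g., FunctionDef, Call, Assign
            if isinstance(node, ast.Call):            # keep the callee's name as a token
                func = node.func
                if isinstance(func, ast.Attribute):
                    tokens.append(func.attr)          # e.g., "Embedding"
                elif isinstance(func, ast.Name):
                    tokens.append(func.id)
        return tokens

For example, a file calling keras.layers.Embedding would yield tokens such as Call and Embedding, which a classifier could learn to associate with the Embedding pattern named in the abstract.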
Step-GRAND: A Low Latency Universal Soft-Input Decoder
Syed Mohsin Abbas
Marwan Jalaleddine
Chi-Ying Tsui
GRAND features both soft-input and hard-input variants that are well suited to efficient hardware implementations, which can be characterized by their achievable average and worst-case decoding latency. This paper introduces step-GRAND, a soft-input variant of GRAND that, in addition to achieving an appealing average decoding latency, also reduces the worst-case decoding latency of the corresponding hardware implementation. The hardware implementation results demonstrate that the proposed step-GRAND can decode CA-polar code (128,105+11) with an average information throughput of 47.7 Gbps at the target FER of
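For background on why decoding latency varies, the sketch below shows the basic hard-input GRAND guessing loop: candidate noise patterns are tested in decreasing likelihood order (approximated here by increasing Hamming weight) until the corrected word passes the code's parity checks. This is generic background only; step-GRAND's soft-input ordering and latency-bounding mechanism are not reproduced here.

    # Generic background sketch of hard-input GRAND, not step-GRAND itself.
    import itertools
    import numpy as np

    def grand_decode(y_hard, H, max_weight=3):
        """y_hard: received hard decisions as a 0/1 numpy array; H: binary parity-check matrix."""
        n = len(y_hard)
        for w in range(max_weight + 1):               # the weight-0 pattern checks y itself
            for flips in itertools.combinations(range(n), w):
                e = np.zeros(n, dtype=np.uint8)
                e[list(flips)] = 1
                candidate = y_hard ^ e
                if not np.any(H.dot(candidate) % 2):  # all parity checks satisfied
                    return candidate, w               # decoded after guessing a weight-w pattern
        return None, None                             # give up (declare an erasure)

Because the number of guesses depends on the realized noise, the worst-case query count, and hence the worst-case latency, can far exceed the average, which is the gap the step-GRAND hardware described above aims to narrow.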
Working Backwards: Learning to Place by Picking
Oliver Limoyo
Abhisek Konar
Trevor Ablett
Jonathan Kelly
Francois Hogan
Decision Diagrams in Space!
Isaac Rudich
Manuel López-Ibáñez
Michael Romer
Louis-Martin Rousseau
Can We Learn Communication-Efficient Optimizers?
Charles-Étienne Joseph
Benjamin Thérien
Abhinav Moudgil
Boris Knyazev
Advancing Clinical Psychiatry: Integration of Clinical and Omics Data Using Machine Learning
Bill Qi