Publications

Learning to Adapt: Communication Load Balancing via Adaptive Deep Reinforcement Learning
Di Wu
Yi Tian Xu
Jimmy Li
M. Jenkin
Ekram Hossain
Seowoo Jang
Yan Xin
Charlie Zhang
The association of mobile devices with network resources (e.g., base stations, frequency bands/channels), known as load balancing, is critical to reducing communication traffic congestion and improving network performance. Reinforcement learning (RL) has been shown to be effective for communication load balancing and achieves better performance than currently used rule-based methods, especially when the traffic load changes quickly. However, RL-based methods usually need to interact with the environment for a large number of time steps to learn an effective policy, and they can be difficult to tune. In this work, we aim to improve the data efficiency of RL-based solutions to make them more suitable and applicable to real-world applications. Specifically, we propose a simple yet efficient and effective deep RL-based wireless network load balancing framework. In this solution, a set of good initialization values for the control actions is first selected with a cost-efficient approach to center the training of the RL agent. A deep RL-based agent is then trained to find offsets from these initialization values that optimize the load balancing objective. Experimental evaluation on a set of dynamic traffic scenarios demonstrates the effectiveness and efficiency of the proposed method.
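The two-stage structure the abstract describes (cheaply pick a good fixed operating point, then let an RL agent learn bounded offsets around it) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the classic Gym-style environment interface and every name below (`select_init_action`, `apply_offset`) are hypothetical.

```python
import numpy as np

def select_init_action(env, candidates, n_episodes=3):
    """Cost-efficient search: pick the fixed action with the best average return."""
    def avg_return(action):
        total = 0.0
        for _ in range(n_episodes):
            env.reset()
            done, ep_ret = False, 0.0
            while not done:
                _, reward, done, _ = env.step(action)  # classic Gym 4-tuple (assumed)
                ep_ret += reward
            total += ep_ret
        return total / n_episodes
    return max(candidates, key=avg_return)

def apply_offset(init_action, offset, delta=0.5):
    """During RL training, the agent outputs a bounded offset rather than a raw
    action, so learning is centered on a known-good operating point."""
    return init_action + np.clip(offset, -delta, delta)
```

Restricting the agent to a small offset range around a vetted initialization is what makes the scheme data-efficient: the policy never has to explore wildly bad regions of the action space.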
A Machine Learning Based Approach to Detect Machine Learning Design Patterns
Weitao Pan
Hironori Washizaki
Nobukazu Yoshioka
Yoshiaki Fukazawa
Yann‐Gaël Guéhéneuc
As machine learning expands to various domains, the demand for reusable solutions to similar problems increases. Machine learning design patterns are reusable solutions to recurring design problems in machine learning applications, and they can significantly enhance the productivity of programmers who work with machine learning algorithms. Given this critical role, their automated detection becomes equally vital, since identifying design patterns manually is time-consuming and error-prone. We propose an approach to detect their occurrences in Python files. Our approach uses the Abstract Syntax Tree (AST) of Python files to build a corpus of data and trains a refined Text-CNN model to automatically identify machine learning design patterns. We empirically validate our approach in an exploratory study detecting four common machine learning design patterns: Embedding, Multilabel, Feature Cross, and Hashed Feature. We manually label 450 Python code files containing these design patterns from GitHub project repositories. Our approach achieves accuracy values ranging from 80% to 92% across the four patterns.
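To make the AST-to-corpus step concrete, here is a minimal sketch using Python's standard `ast` module. The tokenization scheme (linearized node-type and identifier tokens) is an assumption for illustration; the refined Text-CNN classifier itself, which would consume these token sequences, is omitted.

```python
import ast

def ast_tokens(source: str) -> list[str]:
    """Linearize a Python AST into node-type and identifier tokens."""
    tokens = []
    for node in ast.walk(ast.parse(source)):
        tokens.append(type(node).__name__)   # e.g. "Assign", "Call", "Attribute"
        if isinstance(node, ast.Name):
            tokens.append(node.id)           # keep variable names as features
        elif isinstance(node, ast.Attribute):
            tokens.append(node.attr)         # keep attribute names, e.g. "Embedding"
    return tokens

code = "embedding = tf.keras.layers.Embedding(vocab_size, 64)"
print(ast_tokens(code))
# ['Module', 'Assign', ..., 'Embedding', ...] -- API names like "Embedding"
# are exactly the kind of signal a pattern classifier could pick up on.
```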
Step-GRAND: A Low Latency Universal Soft-Input Decoder
Syed Mohsin Abbas
Marwan Jalaleddine
Chi-Ying Tsui
GRAND features both soft-input and hard-input variants that are well suited to efficient hardware implementations, which can be characterized by their achievable average and worst-case decoding latencies. This paper introduces step-GRAND, a soft-input variant of GRAND that, in addition to achieving appealing average decoding latency, also reduces the worst-case decoding latency of the corresponding hardware implementation. The hardware implementation results demonstrate that the proposed step-GRAND can decode the CA-polar code (128,105+11) with an average information throughput of 47.7 Gbps at the target FER of …
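For readers unfamiliar with GRAND (Guessing Random Additive Noise Decoding), the following is a minimal software sketch of the soft-input guessing loop: test putative noise patterns in roughly decreasing likelihood and stop at the first guess that yields a valid codeword. The pattern schedule here (flip the least-reliable bits first, by increasing weight) is a simplification of the ordered schedules used in practice, and this sketch says nothing about the paper's hardware architecture or latency results.

```python
import numpy as np
from itertools import combinations

def grand_decode(llr, H, max_weight=3):
    """llr: per-bit log-likelihood ratios; H: parity-check matrix over GF(2)."""
    y = (llr < 0).astype(int)          # hard-decision word
    order = np.argsort(np.abs(llr))    # least-reliable bit positions first
    for w in range(max_weight + 1):    # try noise patterns by increasing weight
        for idx in combinations(order, w):
            e = np.zeros_like(y)
            e[list(idx)] = 1
            c = (y + e) % 2            # remove the guessed noise
            if not (H @ c % 2).any():  # zero syndrome => c is a codeword
                return c
    return y                           # abandon guessing (fallback)

# Example with the (7,4) Hamming code parity-check matrix:
H = np.array([[1,0,1,0,1,0,1], [0,1,1,0,0,1,1], [0,0,0,1,1,1,1]])
llr = np.array([2.1, -1.8, 0.3, 1.5, -2.2, 0.9, 1.1])
print(grand_decode(llr, H))            # corrects the channel error: [0 1 0 0 1 0 1]
```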
Working Backwards: Learning to Place by Picking
Oliver Limoyo
Abhisek Konar
Trevor Ablett
Jonathan Kelly
Francois Hogan
Decision Diagrams in Space!
Isaac Rudich
Manuel López-Ibáñez
Michael Romer
Louis-Martin Rousseau
Can We Learn Communication-Efficient Optimizers?
Charles-Étienne Joseph
Benjamin Thérien
Abhinav Moudgil
Boris Knyazev
Advancing Clinical Psychiatry: Integration of Clinical and Omics Data Using Machine Learning
Bill Qi
Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge
Vincent Andrearczyk
Valentin Oreiller
Sarah Boughdad
Catherine Cheze Le Rest
Olena Tankyevych
Hesham M. Elhalawani
Mario Jreige
John O. Prior
Dimitris Visvikis
Mathieu Hatt
Adrien Depeursinge
Balaur: Language Model Pretraining with Lexical Semantic Relations
Andrei Mircea
Brain decoding of the Human Connectome Project tasks in a dense individual fMRI dataset
Shima Rastegarnia
Marie St-Laurent
Elizabeth DuPre
Basile Pinsard
Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model
Parishad BehnamGhader
Santiago Miret
Augmenting pretrained language models with retrievers to select the supporting documents has shown promise in effectively solving common NLP problems, including language modeling and question answering, in an interpretable way. In this paper, we first study the strengths and weaknesses of different retriever-augmented language models (REALM, …
Cross-lingual Open-Retrieval Question Answering for African Languages
Odunayo Ogundepo
Tajuddeen Gwadabe
Clara E. Rivera
Jonathan H. Clark
Sebastian Ruder
Bonaventure F. P. Dossou
Abdou Aziz DIOP
Claytone Sikasote
Gilles HACHEME
Happy Buzaaba
Ignatius Ezeani
Rooweither Mabuya
Salomey Osei
Chris Emezue
Albert Kahira
Shamsuddeen Hassan Muhammad
Akintunde Oladipo
Abraham Toluwase Owodunni
Atnafu Lambebo Tonja
Iyanuoluwa Shode
Akari Asai
Tunde Oluwaseyi Ajayi
Clemencia Siro
Stephen Arthur
Mofetoluwa Adeyemi
Orevaoghene Ahia
Aremu Anuoluwapo
Oyinkansola Awosan
Chiamaka Ijeoma Chukwuneke
Bernard Opoku
Ayodele Awokoya
Verrah Akinyi Otiende
Christine Mwase
Boyd Sinkala
Andre Niyongabo Rubungo
Daniel Ajisafe
Emeka Felix Onwuegbuzia
Habib Mbow
Emile Niyomutabazi
Eunice Mukonde
Falalu Lawan
Ibrahim Ahmad
Jesujoba Oluwadara Alabi
Martin Namukombo
Chinedu Emmanuel Mbonu
Mofya Phiri
Neo Putini
Ndumiso Mngoma
Priscilla A. Amuok
Ruqayya Nasir Iro
Sonia Adhiambo