Publications

Dynamic Routing and Wavelength Assignment with Reinforcement Learning
Peyman Kafaei
Hamed Pouya
Louis-Martin Rousseau
With the rapid developments in communication systems, and considering their dynamic nature, all-optical networks are becoming increasingly complex. This study proposes a novel method based on deep reinforcement learning for the routing and wavelength assignment problem in all-optical wavelength-division-multiplexing (WDM) networks. We consider dynamic incoming requests whose arrival and holding times are not known in advance. The objective is to devise a strategy that minimizes the long-term number of rejected requests due to the lack of resources. We use graph neural networks to capture crucial latent information from the graph-structured input and develop the optimal strategy. The proposed deep reinforcement learning algorithm selects a route and a wavelength simultaneously for each incoming traffic connection as it arrives. The results demonstrate that the learned agent outperforms methods used in practice and generalizes to network topologies that were not seen during training.
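To make the agent's decision concrete, below is a minimal sketch of simultaneous route-and-wavelength selection for one incoming request. The candidate routes, wavelength count, and link occupancy are invented for illustration, and a random placeholder stands in for the trained GNN policy; this is a simplification, not the paper's method.

    import itertools, random

    NUM_WAVELENGTHS = 4                                      # assumed WDM channel count
    candidate_routes = [("A", "B", "C"), ("A", "D", "C")]    # assumed precomputed candidate paths

    # occupied[(u, v)] = wavelengths already in use on link (u, v); toy network state
    occupied = {("A", "B"): {0}, ("B", "C"): set(),
                ("A", "D"): {1}, ("D", "C"): {1}}

    def feasible(route, wl):
        # wavelength-continuity constraint: wl must be free on every link of the route
        return all(wl not in occupied[link] for link in zip(route, route[1:]))

    def policy_score(route, wl):
        # placeholder for the trained GNN policy's action value
        return random.random()

    def select_action(routes, num_wavelengths):
        actions = [(r, w)
                   for r, w in itertools.product(routes, range(num_wavelengths))
                   if feasible(r, w)]
        if not actions:
            return None          # no free (route, wavelength) pair: the request is rejected
        return max(actions, key=lambda a: policy_score(*a))

    print(select_action(candidate_routes, NUM_WAVELENGTHS))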
Invariant Causal Set Covering Machines
Thibaud Godon
Baptiste Bauvin
Jacques Corbeil
Beyond Gaussian Noise: A Generalized Approach to Likelihood Analysis with Non-Gaussian Noise
Ronan Legin
Alexandre Adam
A Functional Data Perspective and Baseline On Multi-Layer Out-of-Distribution Detection
Eduardo Dadalto Câmara Gomes
Pierre Colombo
Guillaume Staerman
Nathan Noiry
The Stack: 3 TB of permissively licensed source code
Denis Kocetkov
Raymond Li
Loubna Ben Allal
Jia Li
Chenghao Mou
Carlos Muñoz Ferrandis
Yacine Jernite
Margaret Mitchell
Sean Hughes
Thomas Wolf
Leandro von Werra
Harm de Vries
Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI), not only for natural language processing but also for code understanding and generation. To stimulate open and responsible research on LLMs for code, we introduce The Stack, a 3.1 TB dataset consisting of permissively licensed source code in 30 programming languages. We describe how we collect the full dataset, construct a permissively licensed subset, present a data governance plan, discuss limitations, and show promising results on text2code benchmarks by training 350M-parameter decoders on different Python subsets. We find that (1) near-deduplicating the data significantly boosts performance across all experiments, and (2) it is possible to match previously reported HumanEval and MBPP performance using only permissively licensed data. We make the dataset available at https://hf.co/BigCode, provide a tool called "Am I in The Stack" (https://hf.co/spaces/bigcode/in-the-stack) for developers to search The Stack for copies of their code, and provide a process for code to be removed from the dataset by following the instructions at https://www.bigcode-project.org/docs/about/the-stack/.
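As a concrete illustration of the near-deduplication the authors credit with the performance boost, here is a generic MinHash/LSH sketch using the datasketch library. The 0.85 threshold, word-level shingles, and toy files are assumptions for illustration, not The Stack's actual pipeline.

    from datasketch import MinHash, MinHashLSH

    def minhash(text, num_perm=128):
        m = MinHash(num_perm=num_perm)
        for token in set(text.split()):      # word-level shingles (an assumption)
            m.update(token.encode("utf8"))
        return m

    files = {
        "a.py": "import os\nfor f in os.listdir('.'):\n    print(f)",
        "b.py": "import os\n\nfor f in os.listdir('.'):\n    print(f)",  # whitespace-only variant of a.py
        "c.py": "def add(x, y):\n    return x + y",
    }

    lsh = MinHashLSH(threshold=0.85, num_perm=128)
    kept = []
    for name, code in files.items():
        sig = minhash(code)
        if lsh.query(sig):                   # a near-duplicate was already kept
            continue
        lsh.insert(name, sig)
        kept.append(name)

    print(kept)   # ['a.py', 'c.py']: b.py is dropped as a near-duplicate of a.py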
AHA!: Facilitating AI Impact Assessment by Generating Examples of Harms
Zana Buçinca
Chau Minh Pham
Maurice Jakesch
Marco Túlio Ribeiro
Saleema Amershi
While demands for change and accountability for harmful AI consequences mount, foreseeing the downstream effects of deploying AI systems remains a challenging task. We developed AHA! (Anticipating Harms of AI), a generative framework to assist AI practitioners and decision-makers in anticipating potential harms and unintended consequences of AI systems prior to development or deployment. Given an AI deployment scenario, AHA! generates descriptions of possible harms for different stakeholders. To do so, AHA! systematically considers the interplay between common problematic AI behaviors and their potential impacts on different stakeholders, and narrates these conditions through vignettes. These vignettes are then filled in with descriptions of possible harms by prompting crowd workers and large language models. By examining 4113 harms surfaced by AHA! for five different AI deployment scenarios, we found that AHA! generates meaningful examples of harms, with different problematic AI behaviors resulting in different types of harms. Prompting both crowds and a large language model with the vignettes resulted in more diverse examples of harms than those generated by either the crowd or the model alone. To gauge AHA!'s potential practical utility, we also conducted semi-structured interviews with responsible AI professionals (N=9). Participants found AHA!'s systematic approach to surfacing harms important for ethical reflection and discovered meaningful stakeholders and harms they believed they would not have thought of otherwise. Participants, however, differed in their opinions about whether AHA! should be used upfront or as a secondary check, and noted that AHA! may shift harm anticipation from an ideation problem to a potentially demanding review problem. Drawing on our results, we discuss design implications of building tools to help practitioners envision possible harms.
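As a rough illustration of the vignette idea, the sketch below crosses problematic AI behaviors with stakeholders for one deployment scenario and emits fill-in-the-blank prompts for a crowd worker or an LLM to complete. The behavior and stakeholder lists and the template wording are invented assumptions, not AHA!'s actual prompts.

    from itertools import product

    scenario = "A city deploys an AI system to screen rental applications."
    behaviors = ["makes more errors for one demographic group",
                 "leaks applicants' personal data"]
    stakeholders = ["applicants", "landlords", "neighboring tenants"]

    TEMPLATE = ("{scenario} Suppose the system {behavior}. "
                "Describe a harm that {stakeholder} could experience: ___")

    for behavior, stakeholder in product(behaviors, stakeholders):
        # each prompt would be completed by a crowd worker or a language model
        print(TEMPLATE.format(scenario=scenario, behavior=behavior,
                              stakeholder=stakeholder))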
High-Throughput Edge Inference for BERT Models via Neural Architecture Search and Pipeline
Hung-Yang Chang
Seyyed Hasan Mozafari
James J. Clark
Brett Meyer
There has been growing interest in improving BERT inference throughput on resource-constrained edge devices for a satisfactory user experience. One methodology is heterogeneous computing, which utilizes multiple processing elements to accelerate inference. Another is to deploy Neural Architecture Search (NAS) to find optimal solutions in the accuracy-throughput design space. In this paper, for the first time, we incorporate NAS with pipelining for BERT models. We show that performing NAS with pipelining achieves on average 53% higher throughput than NAS with a homogeneous system. Additionally, we propose a NAS algorithm that incorporates hardware performance feedback to accelerate the NAS process. Our proposed NAS algorithm speeds up the search process by approximately 4x and 5.5x on the design spaces of BERT and CNNs, respectively. Also, by exploring the accuracy-throughput design space of BERT models, we demonstrate that performing pipelining then NAS (Pipeline-then-NAS) can lead to solutions with up to 9x higher inference throughput than homogeneous inference on BERT-base, with only a 1.3% decrease in accuracy.
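A back-of-envelope model of why pipelining raises throughput: a single device finishes one inference per total latency, while a pipeline across heterogeneous processing elements is bounded by its slowest stage. The stage latencies below are made-up assumptions, not measurements from the paper.

    stage_latency_ms = [4.0, 6.0, 5.0]   # assumed latencies of BERT layer groups on 3 devices

    total = sum(stage_latency_ms)
    homogeneous = 1000.0 / total                 # inferences/s when one device runs all stages
    pipelined = 1000.0 / max(stage_latency_ms)   # steady-state rate is set by the slowest stage

    print(f"homogeneous: {homogeneous:.0f}/s, pipelined: {pipelined:.0f}/s, "
          f"speedup: {pipelined / homogeneous:.2f}x")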
Integer Programming Games: A Gentle Computational Overview
Gabriele Dragotto
Sriram Sankaranarayanan
PAC-Bayesian Learning of Aggregated Binary Activated Neural Networks with Probabilities over Representations
Louis Fortier-Dubois
Gaël Letarte
Benjamin Leblanc
François Laviolette
Spatial variations in aromatic hydrocarbon emission in a dust-rich galaxy
Justin Spilker
Kedar A. Phadke
Manuel Aravena
Melanie Archipley
Matthew Bayliss
Jack E. Birkin
Matthieu Béthermin
James R. Burgoyne
Jared Cathey
Scott Chapman
Håkon Dahle
Anthony H. Gonzalez
Gayathri Gururajan
Christopher C Hayward
Ryley Hill
Taylor A. Hutchison
Keunho J. Kim
Seonwoo Kim
D. Law
Ronan Legin
M. Malkan
Daniel P. Marrone
E. Murphy
Desika Narayanan
Alexander Navarre
Grace M. Olivier
J. Rich
Jane R Rigby
Cassie Reuter
J. Rhoads
Keren Sharon
J. Smith
Manuel Solimano
Nikolaus Sulzenauer
Joaquin Vieira
David Vizgan
Axel Weiß
K. Whitaker
conn2res: A toolbox for connectome-based reservoir computing
Laura E. Suárez
Agoston Mihalik
Filip Milisav
Kenji Marshall
Mingze Li
Petra E. Vértes
Bratislav Mišić
Dialogue System with Missing Observation
Djallel Bouneffouf
Mayank Agarwal
Within the domain of dialogue, the ability to orchestrate multiple independently trained dialogue agents into a unified system is of particular importance. We define orchestration as the task of selecting the subset of skills that most appropriately answers a user input, using features extracted from both the user input and the individual skills. In this work, we study online dialogue orchestration in which the user feedback associated with the dialogue agent may not always be observed. To address this missing-feedback setting, we propose to combine an attentive contextual bandit approach with an unsupervised learning mechanism such as clustering. By leveraging clustering to estimate missing rewards, we are able to learn from each incoming event, even those with missing rewards. Promising empirical results are obtained on proprietary conversational datasets.
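For intuition, here is a minimal sketch of the missing-feedback idea: an epsilon-greedy contextual bandit over skills that, when feedback is unobserved, imputes the reward from the nearest previously observed context, a simple stand-in for the paper's attentive bandit with clustering. The dimensions, epsilon, reward model, and imputation rule are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_skills, d, eps = 3, 5, 0.1
    weights = np.zeros((n_skills, d))                 # one linear scorer per skill
    history = {a: ([], []) for a in range(n_skills)}  # observed (context, reward) per skill

    def choose(x):
        if rng.random() < eps:                        # explore
            return int(rng.integers(n_skills))
        return int(np.argmax(weights @ x))            # exploit current estimates

    def impute(a, x):
        xs, rs = history[a]
        if not xs:
            return 0.0                                # no evidence yet for this skill
        dists = [np.linalg.norm(x - xi) for xi in xs]
        return rs[int(np.argmin(dists))]              # reward of the nearest past context

    for t in range(200):
        x = rng.normal(size=d)
        a = choose(x)
        feedback = float(x[a] > 0) if rng.random() < 0.7 else None  # missing 30% of the time
        if feedback is None:
            r = impute(a, x)                          # clustering-style estimate of the reward
        else:
            r = feedback
            history[a][0].append(x)
            history[a][1].append(r)
        weights[a] += 0.05 * (r - weights[a] @ x) * x # SGD step toward the (possibly imputed) reward

    print(weights.round(2))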