Publications

Canadarm, Canadarm2, and Canadarm3: The Evolution of Canada's Iconic Robotic System and Its Impacts from Space Down to Earth
Yianni Hudon-Castillo
Jean-Christophe Lamanque
Marion Thénault
Katherine Zamudio-Turcotte
Sri Venkata Vathsala Musunuri
Auriane Thilloy
Olivier Leclair
Mohamed Amine Elforaici
Rafael Daigneault
Rachad Chazbek
CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization
Wenzheng Hu
Ning Liu
Zhengping Che
Mingyang Li
Changshui Zhang
Jianqiang Wang
Deep convolutional neural networks are shown to be overkill with high parametric and computational redundancy in many application scenarios, and an increasing number of works have explored model pruning to obtain lightweight and efficient networks. However, most existing pruning approaches are driven by empirical heuristics and rarely consider the joint impact of channels, leading to unguaranteed and suboptimal performance. In this article, we propose a novel channel pruning method via class-aware trace ratio optimization (CATRO) to reduce the computational burden and accelerate the model inference. Utilizing class information from a few samples, CATRO measures the joint impact of multiple channels by feature space discriminations and consolidates the layerwise impact of preserved channels. By formulating channel pruning as a submodular set function maximization problem, CATRO solves it efficiently via a two-stage greedy iterative optimization procedure. More importantly, we present theoretical justifications on convergence of CATRO and performance of pruned networks. Experimental results demonstrate that CATRO achieves higher accuracy with similar computation cost or lower computation cost with similar accuracy than other state-of-the-art channel pruning algorithms. In addition, because of its class-aware property, CATRO is suitable to prune efficient networks adaptively for various classification subtasks, enhancing handy deployment and usage of deep networks in real-world applications.
Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces
Ahmad-reza Ehyaei
Amir-Hossein Karimi
Samira Samadi
As responsible AI gains importance in machine learning algorithms, properties such as fairness, adversarial robustness, and causality have received considerable attention in recent years. However, despite their individual significance, there remains a critical gap in simultaneously exploring and integrating these properties. In this paper, we propose a novel approach that examines the relationship between individual fairness, adversarial robustness, and structural causal models in heterogeneous data spaces, particularly when dealing with discrete sensitive attributes. We use causal structural models and sensitive attributes to create a fair metric and apply it to measure semantic similarity among individuals. By introducing a novel causal adversarial perturbation and applying adversarial training, we create a new regularizer that combines individual fairness, causality, and robustness in the classifier. Our method is evaluated on both real-world and synthetic datasets, demonstrating its effectiveness in achieving an accurate classifier that simultaneously exhibits fairness, adversarial robustness, and causal awareness.
Caustics: A Python Package for Accelerated Strong Gravitational Lensing Simulations
M. J. Yantovski-Barth
Landung Setiawan
Cordero Core
Charles Wilson
Gabriel Missael Barco
ChainBuddy: An AI-assisted Agent System for Helping Users Set up LLM Pipelines
CL-MASR: A Continual Learning Benchmark for Multilingual ASR
Yusuf Cem Sübakan
Mirco Ravanelli
Combine and Conquer: A Meta-Analysis on Data Shift and Out-of-Distribution Detection
Eduardo Dadalto Câmara Gomes
Florence Alberge
Pierre Duhamel
Common Challenges of Deep Reinforcement Learning Applications Development: An Empirical Study
Mohammad Mehdi Morovati
Florian Tambon
Mina Taraghi
Amin Nikanjam
Machine Learning (ML) is increasingly being adopted in different industries. Deep Reinforcement Learning (DRL) is a subdomain of ML used to produce intelligent agents. Despite recent developments in DRL technology, the main challenges that developers face in the development of DRL applications are still unknown. To fill this gap, in this paper, we conduct a large-scale empirical study of 927 DRL-related posts extracted from Stack Overflow, the most popular Q&A platform in the software community. Through the process of labeling and categorizing extracted posts, we created a taxonomy of common challenges encountered in the development of DRL applications, along with their corresponding popularity levels. This taxonomy has been validated through a survey involving 65 DRL developers. Results show that at least 45% of developers experienced 18 of the 21 challenges identified in the taxonomy. The most frequent sources of difficulty during the development of DRL applications are Comprehension, API usage, and Design problems, while Parallel processing and DRL libraries/frameworks are classified as the most difficult challenges to address, with respect to the time required to receive an accepted answer. We hope that the research community will leverage this taxonomy to develop efficient strategies to address the identified challenges and improve the quality of DRL applications.
Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning
In this paper, we present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks, which encompass a set of optimization techniques for high-order tensors used in quantum physics and numerical analysis. We first present an intrinsic relation between WFA and the tensor train decomposition, a particular form of tensor network. This relation allows us to exhibit a novel low rank structure of the Hankel matrix of a function computed by a WFA and to design an efficient spectral learning algorithm leveraging this structure to scale the algorithm up to very large Hankel matrices. We then unravel a fundamental connection between WFA and second-order recurrent neural networks (2-RNN): in the case of sequences of discrete symbols, WFA and 2-RNN with linear activation functions are expressively equivalent. Leveraging this equivalence result combined with the classical spectral learning algorithm for weighted automata, we introduce the first provable learning algorithm for linear 2-RNN defined over sequences of continuous input vectors. This algorithm relies on estimating low rank sub-blocks of the Hankel tensor, from which the parameters of a linear 2-RNN can be provably recovered. The performances of the proposed learning algorithm are assessed in a simulation study on both synthetic and real-world data.
Consolidating Separate Degradations Model via Weights Fusion and Distillation
Dinesh Daultani
Real-world images prevalently contain different varieties of degradation, such as motion blur and luminance noise. Computer vision recognition models trained on clean images perform poorly on degraded images. Previously, several works have explored how to perform image classification of degraded images while training a single model for each degradation. Nevertheless, it becomes challenging to host several degradation models for each degradation on limited hardware applications and to estimate degradation parameters correctly at the run-time. This work proposes a method for effectively combining several models trained separately on different degradations into a single model to classify images with different types of degradations. Our proposed method is four-fold: (1) train a base model on clean images, (2) fine-tune the base model individually for all given image degradations, (3) perform a fusion of weights given the fine-tuned models for individual degradations, (4) perform fine-tuning on given task using distillation and cross-entropy loss. Our proposed method can outperform previous state-of-the-art methods of pretraining in out-of-distribution generalization based on degradations such as JPEG compression, salt-and-pepper noise, Gaussian blur, and additive white Gaussian noise by 2.5% on CIFAR-100 dataset and by 1.3% on CIFAR-10 dataset. Moreover, our proposed method can handle degradation used for training without any explicit information about degradation at the inference time. Code will be available at https://github.com/dineshdaultani/FusionDistill.
Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
Mehdi Rezagholizadeh
A. Chandar
Corticosteroids induce an early but limited decrease in IL-6 dependent pro-inflammatory responses in critically ill COVID-19 patients
Tomas URBINA
Paul GABARRE
Vincent BONNY
Jean-Rémi Lavillegrand
Marc GARNIER
Jérémie JOFFRE
Nathalie MARIO
Geoffroy HARIRI
Matthieu TURPIN
Emmanuel PARDO
Muriel FARTOUKH
Bertrand GUIDET
Eric Maury
Yannick CHANTRAN
Pierre-Yves BOELLE
Guillaume VOIRIOT
Hafid AIT-OUFELLA