Publications

Advanced MRI metrics improve the prediction of baseline disease severity for individuals with degenerative cervical myelopathy
Abdul Al-Shawwa
Kalum Ost
David Anderson
Newton Cho
Nathan Evaniew
W. Bradley Jacobs
Allan R. Martin
Ranjeet Gaekwad
Saswati Tripathy
Jacques Bouchard
Steven Casha
Roger Cho
Stephen duPlessis
Peter Lewkonia
Fred Nicholls
Paul T. Salo
Alex Soroceanu
Ganesh Swamy
Kenneth C. Thomas
Michael M.H. Yang … (see 2 more)
David W. Cadotte
Co-developing longitudinal patient registries for phenylketonuria and mucopolysaccharidoses in Canada
John Adams
Kim Angel
John J. Mitchell
Pranesh Chakraborty
Beth K. Potter
Michal Inbar-Feigenberg
Sylvia Stockler
Monica Lamoureux
Alison H. Howie
Alex Pace
Nancy J. Butcher
Cheryl Rockman-Greenberg
Robin Hayeems
Anne-Marie Laberge
Thierry Lacaze-Masmonteil
Jeff Round
Martin Offringa
Maryam Oskoui
Andreas Schulze
Kathy N. Speechley … (see 3 more)
Kednapa Thavorn
Kumanan Wilson
Increasing schedule reliability in the multiple depot vehicle scheduling problem with stochastic travel time
Léa Ricard
Guy Desaulniers
Andrea Lodi
Louis-Martin Rousseau
Machine Learning Robustness: A Primer
Houssem Ben Braiek
This chapter explores the foundational concept of robustness in Machine Learning (ML) and its integral role in establishing trustworthiness in Artificial Intelligence (AI) systems. The discussion begins with a detailed definition of robustness, portraying it as the ability of ML models to maintain stable performance across varied and unexpected environmental conditions. ML robustness is dissected through several lenses: its complementarity with generalizability; its status as a requirement for trustworthy AI; its adversarial vs non-adversarial aspects; its quantitative metrics; and its indicators such as reproducibility and explainability. The chapter delves into the factors that impede robustness, such as data bias, model complexity, and the pitfalls of underspecified ML pipelines. It surveys key techniques for robustness assessment from a broad perspective, including adversarial attacks, encompassing both digital and physical realms. It covers non-adversarial data shifts and nuances of Deep Learning (DL) software testing methodologies. The discussion progresses to explore amelioration strategies for bolstering robustness, starting with data-centric approaches like debiasing and augmentation. Further examination includes a variety of model-centric methods such as transfer learning, adversarial training, and randomized smoothing. Lastly, post-training methods are discussed, including ensemble techniques, pruning, and model repairs, emerging as cost-effective strategies to make models more resilient against the unpredictable. This chapter underscores the ongoing challenges and limitations in estimating and achieving ML robustness by existing approaches. It offers insights and directions for future research on this crucial concept, as a prerequisite for trustworthy AI systems.
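The adversarial attacks surveyed in the chapter can be illustrated with a minimal FGSM-style sketch. This is a standard technique, not code from the chapter; the model weights and input values below are invented, and the analytic gradient stands in for autodiff.

```python
import numpy as np

# Minimal FGSM-style perturbation on a fixed logistic-regression model.
# w, b, and the input x are illustrative values only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """Perturb x in the direction that increases the cross-entropy loss.

    For logistic regression the input gradient of the loss is
    (p - y) * w, so no autodiff is needed.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([1.0, 0.5, -0.2])   # clean input, true label 1
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # confidence on the clean input
print(predict(w, b, x_adv))  # lower confidence on the perturbed input
```

Adversarial training, one of the model-centric defenses the chapter covers, amounts to generating such perturbed inputs during training and fitting the model on them as well.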
Self-supervised anomaly detection in computer vision and beyond: A survey and outlook
Hadi Hojjati
Thi Kieu Khanh Ho
Scaling up ridge regression for brain encoding in a massive individual fMRI dataset
Sana Ahmadi
Tristan Glatard
Fast burst fraction transients convey information independent of the firing rate
Richard Naud
Xingyun Wang
Zachary Friedenberger
Alexandre Payeur
Jiyun N. Shin
Jean-Claude Béïque
Moritz Drüke
Matthew E. Larkum
Guy Doron
Theories of attention and learning have hypothesized a central role for high-frequency bursting in cognitive functions, but experimental reports of burst-mediated representations in vivo have been limited. Here we used a novel demultiplexing approach by considering a conjunctive burst code. We studied this code in vivo while animals learned to report direct electrical stimulation of the somatosensory cortex and found two acquired yet independent representations. One code, the event rate, showed a sparse and succinct stimulus representation and a small modulation upon detection errors. The other code, the burst fraction, correlated more globally with stimulation and responded more promptly to detection errors. Bursting modulation was potent and its time course evolved, even in cells that were considered unresponsive based on the firing rate. During the later stages of training, this modulation in bursting happened earlier, gradually aligning temporally with the representation in event rate. The alignment of bursting and event rate modulation sharpened the firing rate response and was strongly associated with behavioral accuracy. Thus, a fine-grained separation of spike timing patterns reveals two signals that accompany stimulus representations: an error signal that can be essential to guide learning and a sharpening signal that could implement attention mechanisms.
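The event-rate vs burst-fraction demultiplexing described above can be sketched on a toy spike train. The ISI threshold and spike times below are invented for illustration, not the paper's data or parameters.

```python
import numpy as np

def events_and_bursts(spike_times, isi_thresh=0.016):
    """Split a spike train into events, flagging burst events.

    An event starts at a spike preceded by a gap > isi_thresh (in s);
    it counts as a burst if at least one further spike follows within
    isi_thresh. The threshold value is illustrative.
    """
    spike_times = np.asarray(spike_times)
    if spike_times.size == 0:
        return 0, 0
    isis = np.diff(spike_times)
    starts = np.concatenate(([True], isis > isi_thresh))   # event onsets
    n_events = int(starts.sum())
    # an event is a burst if the next spike (if any) follows closely
    followed = np.concatenate((isis <= isi_thresh, [False]))
    n_bursts = int((starts & followed).sum())
    return n_events, n_bursts

spikes = [0.010, 0.014, 0.120, 0.300, 0.304, 0.309, 0.800]  # seconds
n_events, n_bursts = events_and_bursts(spikes)
event_rate = n_events / 1.0          # events/s over a 1 s window
burst_fraction = n_bursts / n_events
print(n_events, n_bursts, burst_fraction)
```

The two quantities separate cleanly: the event rate ignores how many spikes each event contains, while the burst fraction reports what share of events were multi-spike.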
Improving Text-to-Image Consistency via Automatic Prompt Optimization
Oscar Mañas
Pietro Astolfi
Melissa Hall
Candace Ross
Jack Urbanek
Adina Williams
Michal Drozdzal
Predicting Species Occurrence Patterns from Partial Observations
Hager Radi
Mélisande Teng
To address the interlinked biodiversity and climate crises, we need an understanding of where species occur and how these patterns are changing. However, observational data on most species remains very limited, and the amount of data available varies greatly between taxonomic groups. We introduce the problem of predicting species occurrence patterns given (a) satellite imagery, and (b) known information on the occurrence of other species. To evaluate algorithms on this task, we introduce SatButterfly, a dataset of satellite images, environmental data and observational data for butterflies, which is designed to pair with the existing SatBird dataset of bird observational data. To address this task, we propose a general model, R-Tran, for predicting species occurrence patterns that enables the use of partial observational data wherever found. We find that R-Tran outperforms other methods in predicting species encounter rates with partial information both within a taxon (birds) and across taxa (birds and butterflies). Our approach opens new perspectives to leveraging insights from species with abundant data to other species with scarce data, by modelling the ecosystems in which they co-occur.
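The partial-observation setup described above can be sketched with synthetic data: predict a target species' encounter rate from image features plus a masked vector of other species' rates. R-Tran's actual architecture is not reproduced here; a linear least-squares model stands in, and all dimensions and data are invented.

```python
import numpy as np

# Sketch of learning from partial observations: missing entries in the
# other-species vector are zero-filled and a mask channel is appended.

rng = np.random.default_rng(0)
n, d_img, d_sp = 200, 16, 10

X_img = rng.normal(size=(n, d_img))            # satellite-image features
sp = rng.uniform(size=(n, d_sp))               # other-species encounter rates
mask = rng.random((n, d_sp)) < 0.5             # which entries were observed
sp_obs = np.where(mask, sp, 0.0)               # zero-fill the missing ones

# input = [image features, masked rates, mask indicator]
X = np.concatenate([X_img, sp_obs, mask.astype(float)], axis=1)
w_true = rng.normal(size=X.shape[1])
y = X @ w_true + 0.1 * rng.normal(size=n)      # synthetic target rate

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
print(np.mean((X @ w_hat - y) ** 2))           # training MSE
```

The mask channel lets the model distinguish "rate observed as zero" from "rate unknown", which is the crux of using partial observational data wherever it exists.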
Synthetic Data Generation and Joint Learning for Robust Code-Mixed Translation
Ramakrishna Appicharla
Kamal Kumar Gupta
Asif Ekbal
The widespread online communication in a modern multilingual world has provided opportunities to blend more than one language (aka code-mixed language) in a single utterance. This has resulted in a formidable challenge for computational models due to the scarcity of annotated data and the presence of noise. A potential solution to mitigate the data scarcity problem in a low-resource setup is to leverage existing data in a resource-rich language through translation. In this paper, we tackle the problem of code-mixed (Hinglish and Bengalish) to English machine translation. First, we synthetically develop HINMIX, a parallel corpus of Hinglish to English, with ~4.2M sentence pairs. Subsequently, we propose RCMT, a robust perturbation-based joint-training model that learns to handle noise in real-world code-mixed text by parameter sharing across clean and noisy words. Further, we show the adaptability of RCMT in a zero-shot setup for Bengalish to English translation. Our evaluation and comprehensive analyses qualitatively and quantitatively demonstrate the superiority of RCMT over state-of-the-art code-mixed and robust translation methods.
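The noisy-word side of the clean/noisy joint training described above can be sketched with a simple character-level noiser. RCMT's actual perturbation scheme is not specified here; random adjacent-swap and character-drop operations are a common generic stand-in, and the example sentence is invented.

```python
import random

# Generic character-noising sketch for building noisy training pairs.

def noisy(word, rng, p=0.3):
    chars = list(word)
    if len(chars) > 1 and rng.random() < p:      # swap two adjacent chars
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    if len(chars) > 2 and rng.random() < p:      # drop one char
        del chars[rng.randrange(len(chars))]
    return "".join(chars)

def noisy_sentence(sentence, seed=0, p=0.3):
    rng = random.Random(seed)                    # deterministic per seed
    return " ".join(noisy(w, rng, p) for w in sentence.split())

clean = "main office jaa raha hoon"              # invented Hinglish example
print(noisy_sentence(clean))
```

Pairing each clean sentence with such noised variants, while sharing parameters across the two views, is the kind of perturbation-based joint training the abstract refers to.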
Adversarial Attacks on the Interpretation of Neuron Activation Maximization
Géraldin Nanfack
Alexander Fulleringer
Jonathan Marty
Michael Eickenberg
Feature visualization is one of the most popular techniques used to interpret the internal behavior of individual units of trained deep neural networks. Based on activation maximization, it consists of finding synthetic or natural inputs that maximize neuron activations. This paper introduces an optimization framework that aims to deceive feature visualization through adversarial model manipulation. It consists of finetuning a pre-trained model with a specifically introduced loss that aims to maintain model performance, while also significantly changing feature visualization. We provide evidence of the success of this manipulation on several pre-trained models for the classification task with ImageNet.
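Activation maximization itself, the technique being attacked, can be sketched on a toy network: gradient ascent on the input to maximize one hidden unit. The weights, step size, and input bounds below are invented; real feature visualization runs the same loop on a trained deep network with autodiff and image regularizers.

```python
import numpy as np

# Toy activation maximization: gradient ascent on the input to maximize
# one ReLU unit of a small fixed network.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))        # 8-dim input -> 4 hidden units

def unit_activation(x, unit):
    return max(W1[unit] @ x, 0.0)   # ReLU unit

def maximize(unit, steps=100, lr=0.1):
    x = 0.01 * W1[unit]             # start where the ReLU is active
    for _ in range(steps):
        pre = W1[unit] @ x
        grad = W1[unit] if pre > 0 else np.zeros_like(x)  # ReLU gradient
        x = x + lr * grad
        x = np.clip(x, -1.0, 1.0)   # keep the input in a bounded domain
    return x

x_opt = maximize(unit=0)
print(unit_activation(x_opt, 0))    # much larger than at initialization
```

The paper's manipulation works one level up: it finetunes the network so that this loop converges to misleading inputs while the model's predictions stay largely unchanged.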
Generalizing across Temporal Domains with Koopman Operators
Qiuhao Zeng
Wei Wang
Fan Zhou
Gezheng Xu
Ruizhi Pu
Changjian Shui
Shichun Yang
Boyu Wang
Charles Ling