Publications

Jointly-Learned Exit and Inference for a Dynamic Neural Network: JEI-DNN
Florence Regol
Joud Chataoui
Posterior Sampling of the Initial Conditions of the Universe from Non-linear Large Scale Structures using Score-Based Generative Models
Ronan Legin
Matthew Ho
Pablo Lemos
Shirley Ho
Benjamin Wandelt
Predicting Solar PV Output Based on Hybrid Deep Learning and Physical Models: Case Study of Morocco
Samira Abousaid
Loubna Benabbou
Ismail Belhaj
Abdelaziz Berrado
Hicham Bouzekri
Summary of the Fourth International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest 2023)
Matteo Biagiola
Nicolás Cardozo
Donghwan Shin
Andrea Stocco
Vincenzo Riccio
A cry for help: Early detection of brain injury in newborns
Charles Onu
Samantha Latremouille
Arsenii Gorin
Junhao Wang
Uchenna Ekwochi
P. Ubuane
O. Kehinde
Muhammad A. Salisu
Datonye Briggs
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Kashif Rasul
Arjun Ashok
Andrew Robert Williams
Arian Khorasani
George Adamopoulos
Rishika Bhagwatkar
Marin Biloš
Hena Ghonia
Nadhir Hassen
Anderson Schneider
Sahil Garg
Yuriy Nevmyvaka
Over the past years, foundation models have caused a paradigm shift in machine learning due to their unprecedented capabilities for zero-shot and few-shot generalization. However, despite the success of foundation models in modalities such as natural language processing and computer vision, the development of foundation models for time series forecasting has lagged behind. We present Lag-Llama, a general-purpose foundation model for univariate probabilistic time series forecasting based on a decoder-only transformer architecture that uses lags as covariates. Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities compared to a wide range of forecasting models on downstream datasets across domains. Moreover, when fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance, outperforming prior deep learning approaches, emerging as the best general-purpose model on average. Lag-Llama serves as a strong contender to the current state of the art in time series forecasting and paves the way for future advancements in foundation models tailored to time series data.
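The abstract's key modelling choice is using lagged values of the series as covariates. The paper's actual lag set and tokenization are not reproduced here; the sketch below only illustrates the general lags-as-covariates construction, with a hypothetical lag set chosen for daily data:

```python
import numpy as np

def lag_features(series, lags):
    """Build a covariate matrix of lagged values for a univariate series.

    For each target position t, the row holds series[t - lag] for each
    lag in `lags`; early positions without enough history are dropped.
    Returns (X, y) where y is the aligned target values.
    """
    max_lag = max(lags)
    rows = [[series[t - lag] for lag in lags]
            for t in range(max_lag, len(series))]
    return np.array(rows), np.array(series[max_lag:])

# Hypothetical lag set: previous step, one week back, 30 days back.
series = list(range(100))
X, y = lag_features(series, lags=[1, 7, 30])
```

A forecaster then conditions on each row of `X` to predict the matching entry of `y`; in Lag-Llama these lagged values feed a decoder-only transformer rather than a classical regression model.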
AAPM Medical Physics Practice Guideline 14.a: Yttrium‐90 microsphere radioembolization
Nathan C. Busse
Muthana S. A. L. Al‐Ghazi
Nadine Abi‐Jaoudeh
Diane Alvarez
Ahmet S. Ayan
Erli Chen
Michael D. Chuong
William A. Dezarn
Stephen A. Graves
Robert F. Hobbs
Mary Ellen Jafari
S. Peter Kim
Nichole M. Maughan
Andrew M. Polemi
Jennifer R. Stickel
Explainable Attention for Few-shot Learning and Beyond
Bahareh Nikpour
A general framework for the practical disintegration of PAC-Bayesian bounds
Paul Viallard
Amaury Habrard
Emilie Morvant
Deep Learning Benchmark for First Break Detection from Hardrock Seismic Reflection Data
Pierre-Luc St-Charles
Bruno Rousseau
Joumana Ghosn
Gilles Bellefleur
E. Schetselaar
Deep learning techniques are used to tackle a variety of tasks related to seismic data processing and interpretation. While many works have shown the benefits of deep learning, assessing the generalization capabilities of proposed methods to data acquired in different conditions and geological environments remains challenging. This is especially true for applications in hardrock environments where seismic surveys are still relatively rare. The primary factors that impede the adoption of machine learning in geosciences include the lack of publicly available and labeled datasets, and the use of inadequate evaluation methodologies. Since machine learning models are prone to overfit and underperform when the data used to train them is site-specific, the applicability of these models on new survey data that could be considered "out-of-distribution" is rarely addressed. This is unfortunate, as evaluating predictive models in out-of-distribution settings can provide a good insight into their usefulness in real-world use cases. To tackle these issues, we propose a simple benchmarking methodology for first break picking to evaluate the transferability of deep learning models that are trained across different environments and acquisition conditions. For this, we consider a reflection seismic survey dataset acquired at five distinct hardrock mining sites combined with annotations for first break picking. We train and evaluate a baseline deep learning solution based on a U-Net for future comparisons, and discuss potential improvements to this approach.
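The abstract describes evaluating transferability across five distinct survey sites. The paper's exact protocol is not given here; one simple way to realize this kind of out-of-distribution evaluation is a leave-one-site-out split, sketched below with hypothetical site names:

```python
def leave_one_site_out(site_ids):
    """Yield (train_sites, test_site) pairs in which each site is held
    out exactly once, so the model is always evaluated on a survey it
    never saw during training."""
    for held_out in site_ids:
        train = [s for s in site_ids if s != held_out]
        yield train, held_out

# Hypothetical site identifiers standing in for the five mining sites.
sites = ["site_A", "site_B", "site_C", "site_D", "site_E"]
splits = list(leave_one_site_out(sites))
```

Averaging a picking metric over the five held-out sites then measures cross-site generalization rather than within-site fit.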
Debiasing Counterfactuals in the Presence of Spurious Correlations
Amar Kumar
Nima Fathi
Raghav Mehta
Brennan Nichyporuk
Jean-Pierre R. Falet
Sotirios A. Tsaftaris
Deep learning models can perform well in complex medical imaging classification tasks, even when basing their conclusions on spurious correlations (i.e. confounders), should they be prevalent in the training dataset, rather than on the causal image markers of interest. This would thereby limit their ability to generalize across the population. Explainability based on counterfactual image generation can be used to expose the confounders but does not provide a strategy to mitigate the bias. In this work, we introduce the first end-to-end training framework that integrates both (i) popular debiasing classifiers (e.g. distributionally robust optimization (DRO)) to avoid latching onto the spurious correlations and (ii) counterfactual image generation to unveil generalizable imaging markers of relevance to the task. Additionally, we propose a novel metric, Spurious Correlation Latching Score (SCLS), to quantify the extent of the classifier reliance on the spurious correlation as exposed by the counterfactual images. Through comprehensive experiments on two public datasets (with the simulated and real visual artifacts), we demonstrate that the debiasing method: (i) learns generalizable markers across the population, and (ii) successfully ignores spurious correlations and focuses on the underlying disease pathology.