Publications

Predicting Solar PV Output Based on Hybrid Deep Learning and Physical
Models: Case Study of Morocco
Samira Abousaid
Loubna Benabbou
Ismail Belhaj
Abdelaziz Berrado
Hicham Bouzekri
Summary of the Fourth International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest 2023)
Matteo Biagiola
Nicolás Cardozo
Donghwan Shin
Andrea Stocco
Vincenzo Riccio
A cry for help: Early detection of brain injury in newborns
Charles Onu
Samantha Latremouille
Arsenii Gorin
Junhao Wang
Uchenna Ekwochi
P. Ubuane
O. Kehinde
Muhammad A. Salisu
Datonye Briggs
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Kashif Rasul
Arjun Ashok
Andrew Robert Williams
Arian Khorasani
George Adamopoulos
Rishika Bhagwatkar
Marin Biloš
Hena Ghonia
Nadhir Hassen
Anderson Schneider
Sahil Garg
Yuriy Nevmyvaka
Over the past years, foundation models have caused a paradigm shift in machine learning due to their unprecedented capabilities for zero-shot and few-shot generalization. However, despite the success of foundation models in modalities such as natural language processing and computer vision, the development of foundation models for time series forecasting has lagged behind. We present Lag-Llama, a general-purpose foundation model for univariate probabilistic time series forecasting based on a decoder-only transformer architecture that uses lags as covariates. Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities compared to a wide range of forecasting models on downstream datasets across domains. Moreover, when fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance, outperforming prior deep learning approaches, emerging as the best general-purpose model on average. Lag-Llama serves as a strong contender to the current state-of-the-art in time series forecasting and paves the way for future advancements in foundation models tailored to time series data.
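The core idea of using lags as covariates can be sketched in a few lines: each target value is paired with the values of the series at a fixed set of earlier offsets. This is an illustrative sketch with a hypothetical `make_lag_features` helper, not the authors' implementation.

```python
def make_lag_features(series, lags):
    """Build lagged copies of a univariate series to use as covariates.

    Returns a list of (target, lagged_values) pairs, one per time step
    for which every requested lag is available.
    """
    max_lag = max(lags)
    samples = []
    for t in range(max_lag, len(series)):
        covariates = [series[t - lag] for lag in lags]
        samples.append((series[t], covariates))
    return samples

# Example: a short series with near-term and weekly-style lags.
series = [float(i) for i in range(10)]
samples = make_lag_features(series, lags=[1, 2, 7])
# First usable step is t=7: target 7.0 with covariates [6.0, 5.0, 0.0].
```

In the paper's setting, such lagged values feed a decoder-only transformer that outputs a predictive distribution rather than a point forecast.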
AAPM Medical Physics Practice Guideline 14.a: Yttrium‐90 microsphere radioembolization
Nathan C. Busse
Muthana S. A. L. Al‐Ghazi
Nadine Abi‐Jaoudeh
Diane Alvarez
Ahmet S. Ayan
Erli Chen
Michael D. Chuong
William A. Dezarn
Stephen A. Graves
Robert F. Hobbs
Mary Ellen Jafari
S. Peter Kim
Nichole M. Maughan
Andrew M. Polemi
Jennifer R. Stickel
Explainable Attention for Few-shot Learning and Beyond
Bahareh Nikpour
A general framework for the practical disintegration of PAC-Bayesian bounds
Paul Viallard
Amaury Habrard
Emilie Morvant
Quality of Service-Constrained Online Routing in High Throughput Satellites
Olivier Bélanger
Olfa Ben Yahia
Stéphane Martel
Gunes Karabulut-Kurt
High throughput satellites (HTSs) outpace traditional satellites due to their multi-beam transmission. The rise of low Earth orbit mega constellations amplifies HTS data rate demands to terabits/second with acceptable latency. This surge in data rate necessitates multiple modems, often exceeding single device capabilities. Consequently, satellites employ several processors, forming a complex packet-switch network. This can lead to potential internal congestion and challenges in adhering to strict quality of service (QoS) constraints. While significant research exists on constellation-level routing, a literature gap remains on the internal routing within a single HTS. The intricacy of this internal network architecture presents a significant challenge to achieve high data rates. This paper introduces an online optimal flow allocation and scheduling method for HTSs. The problem is presented as a multi-commodity flow instance with different priority data streams. An initial full time horizon model is proposed as a benchmark. We apply a model predictive control (MPC) approach to enable adaptive routing based on current information and the forecast within the prediction time horizon while allowing for deviation of the latter. Importantly, MPC is inherently suited to handle uncertainty in incoming flows. Our approach minimizes the packet loss by optimally and adaptively managing the priority queue schedulers and flow exchanges between satellite processing modules. Central to our method is a routing model focusing on optimal priority scheduling to enhance data rates and maintain QoS. The model's stages are critically evaluated, and results are compared to traditional methods via numerical simulations. Through simulations, our method demonstrates performance nearly on par with the hindsight optimum, showcasing its efficiency and adaptability in addressing satellite communication challenges.
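The priority-queue scheduling at the heart of this formulation can be illustrated with a toy strict-priority drain. This is an illustrative sketch only; the paper's MPC-based model optimizes flows over a prediction horizon rather than serving one slot greedily.

```python
from collections import deque

def strict_priority_schedule(queues, capacity):
    """Serve up to `capacity` packets in one slot, highest priority first.

    queues: dict mapping a priority level (lower = more urgent) to a
    deque of packets. Served packets are removed from their queues.
    """
    served = []
    for prio in sorted(queues):
        q = queues[prio]
        while q and len(served) < capacity:
            served.append(q.popleft())
    return served

# Two traffic classes: priority-0 video and priority-2 best-effort data.
queues = {0: deque(["v1", "v2"]), 2: deque(["d1", "d2", "d3"])}
slot1 = strict_priority_schedule(queues, capacity=3)
# Video packets drain first; one data packet fills the remaining capacity.
```

An MPC layer, as described in the abstract, would instead decide how much of each class to serve per slot based on forecast arrivals, trading off near-term loss against future congestion.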
Deep Learning Benchmark for First Break Detection from Hardrock Seismic Reflection Data
Pierre-Luc St-Charles
Bruno Rousseau
Joumana Ghosn
Gilles Bellefleur
E. Schetselaar
Deep learning techniques are used to tackle a variety of tasks related to seismic data processing and interpretation. While many works have shown the benefits of deep learning, assessing the generalization capabilities of proposed methods to data acquired in different conditions and geological environments remains challenging. This is especially true for applications in hardrock environments where seismic surveys are still relatively rare. The primary factors that impede the adoption of machine learning in geosciences include the lack of publicly available and labeled datasets, and the use of inadequate evaluation methodologies. Since machine learning models are prone to overfit and underperform when the data used to train them is site-specific, the applicability of these models on new survey data that could be considered “out-of-distribution” is rarely addressed. This is unfortunate, as evaluating predictive models in out-of-distribution settings can provide a good insight into their usefulness in real-world use cases. To tackle these issues, we propose a simple benchmarking methodology for first break picking to evaluate the transferability of deep learning models that are trained across different environments and acquisition conditions. For this, we consider a reflection seismic survey dataset acquired at five distinct hardrock mining sites combined with annotations for first break picking. We train and evaluate a baseline deep learning solution based on a U-Net for future comparisons, and discuss potential improvements to this approach.
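The cross-site evaluation protocol described here amounts to holding each acquisition site out in turn and treating it as the out-of-distribution test set. A minimal sketch with a hypothetical `leave_one_site_out` helper, not the benchmark's actual code:

```python
def leave_one_site_out(datasets):
    """Yield (train_data, test_site) pairs, holding one site out per split.

    datasets: dict mapping site name -> that site's survey data. The
    held-out site plays the role of the out-of-distribution test set.
    """
    for test_site in sorted(datasets):
        train = {site: data for site, data in datasets.items()
                 if site != test_site}
        yield train, test_site

# Toy stand-ins for the five mining-site surveys (three shown here).
surveys = {"site1": ["shot_a"], "site2": ["shot_b"], "site3": ["shot_c"]}
splits = list(leave_one_site_out(surveys))
```

Each split trains a picker on the remaining sites and scores it on the held-out one, which is what makes the reported transferability numbers out-of-distribution rather than in-sample.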
Debiasing Counterfactuals in the Presence of Spurious Correlations
Amar Kumar
Nima Fathi
Raghav Mehta
Brennan Nichyporuk
Jean-Pierre R. Falet
Sotirios A. Tsaftaris
Deep learning models can perform well in complex medical imaging classification tasks, even when basing their conclusions on spurious correlations (i.e. confounders), should they be prevalent in the training dataset, rather than on the causal image markers of interest. This would thereby limit their ability to generalize across the population. Explainability based on counterfactual image generation can be used to expose the confounders but does not provide a strategy to mitigate the bias. In this work, we introduce the first end-to-end training framework that integrates both (i) popular debiasing classifiers (e.g. distributionally robust optimization (DRO)) to avoid latching onto the spurious correlations and (ii) counterfactual image generation to unveil generalizable imaging markers of relevance to the task. Additionally, we propose a novel metric, Spurious Correlation Latching Score (SCLS), to quantify the extent of the classifier reliance on the spurious correlation as exposed by the counterfactual images. Through comprehensive experiments on two public datasets (with the simulated and real visual artifacts), we demonstrate that the debiasing method: (i) learns generalizable markers across the population, and (ii) successfully ignores spurious correlations and focuses on the underlying disease pathology.
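The worst-group objective behind group-style distributionally robust optimization, one family of the debiasing classifiers mentioned above, can be reduced to a one-liner: minimize the mean loss of the worst-performing group rather than the overall mean. A minimal sketch under that simplification (not the paper's training framework):

```python
def worst_group_loss(losses, groups):
    """Group-DRO-style objective: the mean loss of the worst group.

    losses: per-sample loss values; groups: parallel list of group labels
    (e.g. presence/absence of a confounding visual artifact).
    """
    by_group = {}
    for loss, group in zip(losses, groups):
        by_group.setdefault(group, []).append(loss)
    return max(sum(v) / len(v) for v in by_group.values())

# Group "b" (mean loss 0.6) is worse than "a" (0.2), so it sets the objective.
loss = worst_group_loss([0.1, 0.3, 0.5, 0.7], ["a", "a", "b", "b"])
```

Optimizing this objective discourages a classifier from trading accuracy on the minority group for accuracy on samples where the spurious cue happens to hold.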
Guiding Language Model Math Reasoning with Planning Tokens
Xinyi Wang
Lucas Caccia
Oleksiy Ostapenko
Xingdi Yuan
William Yang Wang
Large language models (LLMs) have recently attracted considerable interest for their ability to perform complex reasoning tasks, such as chain-of-thought reasoning. However, most of the existing approaches to enhance this ability rely heavily on data-driven methods, while neglecting the structural aspects of the model's reasoning capacity. We find that while LLMs can manage individual reasoning steps well, they struggle with maintaining consistency across an entire reasoning chain. To solve this, we introduce planning tokens at the start of each reasoning step, serving as a guide for the model, and add their embeddings to the model parameters. Our approach requires a negligible increase in trainable parameters (just 0.001%) and can be applied through either full fine-tuning or a more parameter-efficient scheme. We demonstrate our method's effectiveness by applying it to three different LLMs, showing notable accuracy improvements across three math word problem datasets w.r.t. standard fine-tuning baselines.
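Conceptually, planning tokens prefix each reasoning step with a discrete hint whose embedding is trained alongside the model. A toy sketch of the data side of that idea, with a hypothetical `toy_planner` standing in for however the token per step is chosen (not the paper's procedure):

```python
def add_planning_tokens(steps, planner):
    """Prefix each reasoning step with a planning token chosen by `planner`.

    In the full method, these tokens are added to the vocabulary and their
    embeddings are the (tiny) set of newly trainable parameters.
    """
    return [f"{planner(step)} {step}" for step in steps]

def toy_planner(step):
    # Crude stand-in: pick a token from the arithmetic operator in the step.
    return "<multiply>" if "*" in step else "<add>"

steps = ["3 + 4 = 7", "7 * 2 = 14"]
annotated = add_planning_tokens(steps, toy_planner)
# Each step now opens with a token the model can condition on.
```

Fine-tuning on such annotated chains lets the model learn, at each step boundary, what kind of operation comes next before generating the step itself.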