Publications

RelationalUNet for Image Segmentation
Ivaxi Sheth
Pedro H. M. Braga
Shiva Kanth Sujit
Sahar Dastani
Defining Feasibility as a Criterion for Essential Surgery: A Qualitative Study with Global Children’s Surgery Experts
Alizeh Abbas
Henry E. Rice
Lubna Samad
Jointly-Learned Exit and Inference for a Dynamic Neural Network: JEI-DNN
Florence Regol
Joud Chataoui
Posterior Sampling of the Initial Conditions of the Universe from Non-linear Large Scale Structures using Score-Based Generative Models
Ronan Legin
Matthew Ho
Pablo Lemos
Shirley Ho
Benjamin Wandelt
Predicting Solar PV Output Based on Hybrid Deep Learning and Physical Models: Case Study of Morocco
Samira Abousaid
Loubna Benabbou
Ismail Belhaj
Abdelaziz Berrado
Hicham Bouzekri
Summary of the Fourth International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest 2023)
Matteo Biagiola
Nicolás Cardozo
Donghwan Shin
Andrea Stocco
Vincenzo Riccio
A cry for help: Early detection of brain injury in newborns
Charles Onu
Samantha Latremouille
Arsenii Gorin
Junhao Wang
Uchenna Ekwochi
P. Ubuane
O. Kehinde
Muhammad A. Salisu
Datonye Briggs
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Kashif Rasul
Arjun Ashok
Andrew Robert Williams
Arian Khorasani
George Adamopoulos
Rishika Bhagwatkar
Marin Biloš
Hena Ghonia
Nadhir Hassen
Anderson Schneider
Sahil Garg
Yuriy Nevmyvaka
In recent years, foundation models have caused a paradigm shift in machine learning due to their unprecedented capabilities for zero-shot and few-shot generalization. However, despite the success of foundation models in modalities such as natural language processing and computer vision, the development of foundation models for time series forecasting has lagged behind. We present Lag-Llama, a general-purpose foundation model for univariate probabilistic time series forecasting based on a decoder-only transformer architecture that uses lags as covariates. Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities compared to a wide range of forecasting models on downstream datasets across domains. Moreover, when fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance, outperforming prior deep learning approaches and emerging as the best general-purpose model on average. Lag-Llama serves as a strong contender to the current state of the art in time series forecasting and paves the way for future advancements in foundation models tailored to time series data.
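As a rough illustration of the "lags as covariates" idea described in the abstract, the sketch below (not the authors' implementation) builds lag-feature tokens from a univariate series and passes them through a causally masked, decoder-only-style transformer with a simple Gaussian output head. The lag set, model sizes, and output distribution are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

LAGS = [1, 2, 3, 7, 14, 28]  # assumed lag indices; the paper's lag set differs

def lag_features(series: torch.Tensor) -> torch.Tensor:
    # series: (batch, time) -> tokens: (batch, time - max_lag, len(LAGS)),
    # where token t holds the series values at the chosen lags before t.
    max_lag = max(LAGS)
    cols = [series[:, max_lag - l : series.shape[1] - l] for l in LAGS]
    return torch.stack(cols, dim=-1)

class TinyLagTransformer(nn.Module):
    def __init__(self, n_lags=len(LAGS), d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(n_lags, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # An encoder stack with a causal mask behaves as a decoder-only model.
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        # Probabilistic head: mean and scale of a Gaussian over the next value
        # (a stand-in for the paper's distributional output head).
        self.head = nn.Linear(d_model, 2)

    def forward(self, tokens: torch.Tensor):
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        h = self.backbone(self.proj(tokens), mask=causal)
        mean, log_scale = self.head(h).unbind(-1)
        return mean, log_scale.exp()

# Usage: next-step predictive distribution for a toy batch of random walks.
series = torch.randn(8, 64).cumsum(dim=1)
mean, scale = TinyLagTransformer()(lag_features(series))
print(mean.shape, scale.shape)  # both (8, 36): one distribution per position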
AAPM Medical Physics Practice Guideline 14.a: Yttrium‐90 microsphere radioembolization
Nathan C. Busse
Muthana S. A. L. Al‐Ghazi
Nadine Abi‐Jaoudeh
Diane Alvarez
Ahmet S. Ayan
Erli Chen
Michael D. Chuong
William A. Dezarn
Stephen A. Graves
Robert F. Hobbs
Mary Ellen Jafari
S. Peter Kim
Nichole M. Maughan
Andrew M. Polemi
Jennifer R. Stickel
Explainable Attention for Few-shot Learning and Beyond
Bahareh Nikpour
A general framework for the practical disintegration of PAC-Bayesian bounds
Paul Viallard
Amaury Habrard
Emilie Morvant