Canada's provincial COVID-19 pandemic modelling efforts: A review of mathematical models and their impacts on the responses
Yiqing Xia
Jorge Luis Flores Anato
Caroline Colijn
Naveed Janjua
Michael Otterstatter
Mike Irvine
Tyler Williamson
Marie B. Varughese
Michael Li
Nathaniel D. Osgood
David J. D. Earn
Beate Sander
Lauren E. Cipriano
Kumar Murty
Fanyu Xiu
Arnaud Godin
Amy Hurford
Sharmistha Mishra
Mathieu Maheu-Giroux
SETTING Mathematical modelling played an important role in the public health response to COVID-19 in Canada. Variability in epidemic trajectories, modelling approaches, and data infrastructure across provinces provides a unique opportunity to understand the factors that shaped modelling strategies. INTERVENTION Provinces implemented stringent pandemic interventions to mitigate SARS-CoV-2 transmission, considering evidence from epidemic models. This study aimed to summarize provincial COVID-19 modelling efforts. We identified modelling teams working with provincial decision-makers through referrals and membership in Canadian modelling networks. Information on models, data sources, and knowledge translation was abstracted using standardized instruments. OUTCOMES We obtained information from six provinces. For provinces with sustained community transmission, initial modelling efforts focused on projecting epidemic trajectories and healthcare demands, and on evaluating the impacts of proposed interventions. In provinces with low community transmission, models emphasized quantifying importation risks. Most of the models were compartmental and deterministic, with projection horizons of a few weeks. Models were updated regularly or replaced by new ones, adapting to changing local epidemic dynamics, pathogen characteristics, vaccines, and requests from public health. Surveillance datasets for cases, hospitalizations, and deaths, as well as serological studies, were the main data sources for model calibration. Access to data for modelling and the structure for knowledge translation differed markedly between provinces. IMPLICATIONS Provincial modelling efforts during the COVID-19 pandemic were tailored to local contexts and modulated by available resources. Strengthening Canadian modelling capacity, developing and sustaining collaborations between modellers and governments, and ensuring earlier access to linked and timely surveillance data could help improve pandemic preparedness.
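Most of the provincial models were compartmental and deterministic. As a minimal sketch of that model class (not any province's actual implementation, and with placeholder parameter values), a basic SEIR system can be integrated in a few lines:

```python
# Minimal deterministic SEIR model -- an illustrative sketch of the
# compartmental model class described above, NOT any province's actual model.
# All parameter values below are placeholder assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N              # new infections leave S
    dE = beta * S * I / N - sigma * E   # incubation: E -> I at rate sigma
    dI = sigma * E - gamma * I          # recovery: I -> R at rate gamma
    dR = gamma * I
    return [dS, dE, dI, dR]

N = 5_000_000                           # hypothetical provincial population
y0 = [N - 10, 0, 10, 0]                 # 10 initial infectious cases
params = (0.4, 1 / 4, 1 / 7, N)         # beta, 1/incubation, 1/infectious period
sol = solve_ivp(seir, (0, 60), y0, args=params, dense_output=True)

t = np.linspace(0, 60, 61)
S, E, I, R = sol.sol(t)
print(f"Projected infectious peak over a 60-day horizon: {I.max():,.0f}")
```

The short projection horizons mentioned above correspond to the integration window here; in practice such models were recalibrated to surveillance data before each new projection.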
Effects of Scale on Language Model Robustness
Nikolaus H. R. Howe
Ian R. McKenzie
Oskar John Hollinsworth
Michał Zając
Tom Tseng
Aaron David Tucker
Adam Gleave
Language models exhibit scaling laws, whereby increasing model and dataset size yields predictable decreases in negative log likelihood, unlocking a dazzling array of capabilities. This phenomenon spurs many companies to train ever larger models in pursuit of ever improved performance. Yet, these models are vulnerable to adversarial inputs such as ``jailbreaks'' and prompt injections that induce models to perform undesired behaviors, posing a growing risk as models become more capable. Prior work indicates that computer vision models become more robust with model and data scaling, raising the question: does language model robustness also improve with scale? We study this question empirically in the classification setting, finding that without explicit defense training, larger models tend to be modestly more robust on most tasks, though the effect is not reliable. Even with the advantage conferred by scale, undefended models remain easy to attack in absolute terms, and we thus turn our attention to explicitly training models for adversarial robustness, which we show to be a much more compute-efficient defense than scaling model size alone. In this setting, we also observe that adversarially trained larger models generalize faster and better to modified attacks not seen during training when compared with smaller models. Finally, we analyze the offense/defense balance of increasing compute, finding parity in some settings and an advantage for offense in others, suggesting that adversarial training alone is not sufficient to solve robustness, even at greater model scales.
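For readers unfamiliar with the scaling laws referenced above: loss is commonly modelled as an irreducible term plus a power law in model size. The snippet below fits that common functional form; both the form and the data points are illustrative assumptions, not results from this paper.

```python
# Illustrative power-law fit of loss vs. model size, in the spirit of the
# scaling laws mentioned above. The form L(N) = E + A / N**alpha follows
# common practice in the scaling-law literature; the data points are
# fabricated purely for illustration and are NOT results from this paper.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, E, A, alpha):
    return E + A / n_params**alpha   # irreducible loss + power-law term

# Hypothetical (model size, eval loss) pairs.
n = np.array([1e7, 1e8, 1e9, 1e10])
loss = np.array([4.2, 3.4, 2.9, 2.6])

(E, A, alpha), _ = curve_fit(scaling_law, n, loss, p0=[2.0, 50.0, 0.2])
print(f"fitted irreducible loss ~= {E:.2f}, exponent alpha ~= {alpha:.2f}")
```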
Development of Error Passing Network for Optimizing the Prediction of VO$_2$ peak in Childhood Acute Leukemia Survivors
Nicolas Raymond
Hakima Laribi
Maxime Caru
Mehdi Mitiche
Valerie Marcil
Maja Krajinovic
Daniel Curnier
Daniel Sinnett
Approximately two-thirds of survivors of childhood acute lymphoblastic leukemia (ALL) develop late adverse effects post-treatment. Prior studies explored prediction models for personalized follow-up, but none to date have integrated neural networks. In this work, we propose the Error Passing Network (EPN), a graph-based method that leverages relationships between samples to propagate residuals and adjust the predictions of any machine learning model. We tested our approach to estimate patients' VO$_2$ peak, a reliable indicator of their cardiac health. We used the EPN in conjunction with several baseline models and observed up to a 12.16% improvement in mean absolute percentage error compared to the last established equation for predicting VO$_2$ peak in childhood ALL survivors. Along with this performance improvement, our final model is more efficient, as it relies only on clinical variables that patients can self-report, removing the previous need for a resource-consuming physical test.
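A toy sketch of the underlying idea, passing a base model's training errors to similar test samples over a nearest-neighbour graph, is shown below. The published EPN is a learned graph network; the k-NN averaging here is our own simplification for illustration.

```python
# Toy sketch of residual propagation on a sample-similarity graph:
# neighbours' training residuals are averaged to correct a base model's
# test predictions. The published EPN is a learned graph network; this
# k-NN simplification is our own illustrative assumption.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_residual_correction(X_train, y_train, X_test, base_model, k=5):
    residuals = y_train - base_model.predict(X_train)  # base model's errors
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_test)                     # k most similar samples
    correction = residuals[idx].mean(axis=1)           # pass neighbours' errors
    return base_model.predict(X_test) + correction

# Usage with any fitted scikit-learn regressor, e.g.:
#   from sklearn.linear_model import LinearRegression
#   model = LinearRegression().fit(X_train, y_train)
#   y_hat = knn_residual_correction(X_train, y_train, X_test, model)
```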
SCIsegV2: A Universal Tool for Segmentation of Intramedullary Lesions in Spinal Cord Injury
Enamundram Naga Karthik
Jan Valošek
Lynn Farner
Dario Pfyffer
Simon Schading-Sassenhausen
A. Lebret
Gergely David
Andrew Smith
Kenneth A. Weber
Maryam Seif
RHSCIR Network Imaging Group
Patrick Freund
Spinal cord injury (SCI) is a devastating event that can lead to permanent paralysis and loss of sensory-motor function, potentially resulting in the formation of lesions within the spinal cord. Imaging biomarkers obtained from magnetic resonance imaging (MRI) scans can predict the functional recovery of individuals with SCI and help choose the optimal treatment strategy. Currently, most studies employ manual quantification of these MRI-derived biomarkers, which is a subjective and tedious task. In this work, we propose (i) a universal tool for the automatic segmentation of intramedullary SCI lesions, dubbed \texttt{SCIsegV2}, and (ii) a method to automatically compute the width of the tissue bridges from the segmented lesion. Tissue bridges represent the spared spinal tissue adjacent to the lesion, which is associated with functional recovery in SCI patients. The tool was trained and validated on a heterogeneous dataset from 7 sites comprising patients from different SCI phases (acute, sub-acute, and chronic) and etiologies (traumatic SCI, ischemic SCI, and degenerative cervical myelopathy). Tissue bridges quantified automatically did not differ significantly from those computed manually, suggesting that the proposed automatic tool can be used to derive relevant MRI biomarkers. \texttt{SCIsegV2} and the automatic tissue-bridge computation are open source and available in the Spinal Cord Toolbox (v6.4 and above) via the \texttt{sct\_deepseg -task seg\_sc\_lesion\_t2w\_sci} and \texttt{sct\_analyze\_lesion} functions, respectively.
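A hedged usage sketch of the two entry points named above follows. Only the task name is taken from the abstract; the file names and the -i/-m flags are assumptions to be checked against the Spinal Cord Toolbox documentation.

```python
# Hedged usage sketch for the two Spinal Cord Toolbox entry points named
# above. Only the task name comes from the abstract; the file names and the
# -i / -m flags are assumptions about the CLI and should be verified against
# the SCT documentation.
import subprocess

# Segment intramedullary SCI lesions on a T2w scan with SCIsegV2.
subprocess.run(
    ["sct_deepseg", "-i", "sub-01_T2w.nii.gz",
     "-task", "seg_sc_lesion_t2w_sci"],
    check=True,
)

# Quantify the segmented lesion (including tissue-bridge width).
subprocess.run(
    ["sct_analyze_lesion", "-m", "sub-01_T2w_lesion.nii.gz"],
    check=True,
)
```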
Machine Translation Hallucination Detection for Low and High Resource Languages using Large Language Models
Kenza Benkirane
Laura Gongas
Shahar Pelles
Naomi Fuchs
Joshua Darmon
Pontus Stenetorp
Eduardo Sánchez
Recent advancements in massively multilingual machine translation systems have significantly enhanced translation accuracy; however, even the best-performing systems still generate hallucinations, severely impacting user trust. Detecting hallucinations in Machine Translation (MT) remains a critical challenge, particularly since existing methods excel with High-Resource Languages (HRLs) but exhibit substantial limitations when applied to Low-Resource Languages (LRLs). This paper evaluates sentence-level hallucination detection approaches using Large Language Models (LLMs) and semantic similarity within massively multilingual embeddings. Our study spans 16 language directions, covering HRLs and LRLs with diverse scripts. We find that the choice of model is essential for performance. On average, for HRLs, Llama3-70B outperforms the previous state of the art by as much as 0.16 MCC (Matthews Correlation Coefficient). However, for LRLs we observe that Claude Sonnet outperforms other LLMs on average by 0.03 MCC. The key takeaway from our study is that LLMs can achieve performance comparable to, or even better than, previously proposed models, despite not being explicitly trained for any machine translation task. However, their advantage is less significant for LRLs.
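As an illustration of the semantic-similarity detector family evaluated here, one can embed each source-translation pair with a multilingual sentence encoder, flag low-similarity pairs, and score the detector with MCC. The encoder choice (LaBSE) and the threshold below are our assumptions, not the paper's exact configuration.

```python
# Sketch of a semantic-similarity hallucination detector of the kind the
# paper evaluates: embed source and translation with a multilingual encoder
# and flag low-similarity pairs. The encoder (LaBSE) and the 0.5 threshold
# are illustrative assumptions, not the paper's exact configuration.
from sentence_transformers import SentenceTransformer, util
from sklearn.metrics import matthews_corrcoef

model = SentenceTransformer("sentence-transformers/LaBSE")

def flag_hallucinations(sources, translations, threshold=0.5):
    src = model.encode(sources, convert_to_tensor=True)
    tgt = model.encode(translations, convert_to_tensor=True)
    sim = util.cos_sim(src, tgt).diagonal()      # per-pair similarity
    return (sim < threshold).long().tolist()     # 1 = flagged as hallucination

# Evaluation with MCC, the metric reported in the abstract:
#   mcc = matthews_corrcoef(gold_labels, flag_hallucinations(srcs, hyps))
```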
Variable Star Light Curves in Koopman Space
Nicolas Mekhaël
Mario Pasquato
Gaia Carenini
V. Braga
Piero Trevisan
Giuseppe Bono
We present the first application of data-driven techniques for dynamical system analysis based on Koopman theory to variable stars. We focus on light curves of RR Lyrae-type variables in the Galactic globular cluster
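For context, dynamic mode decomposition (DMD) is the standard data-driven approximation of the Koopman operator; the sketch below applies it to a delay-embedded one-dimensional time series such as a light curve. This is a generic illustration, not necessarily the authors' exact pipeline.

```python
# Generic dynamic mode decomposition (DMD), the standard data-driven
# approximation of the Koopman operator, applied to a delay-embedded
# 1-D time series such as a light curve. Illustrative only -- not
# necessarily the authors' exact pipeline.
import numpy as np

def dmd_eigs(x, delay=50, rank=10):
    # Hankel (delay) embedding of the scalar light curve.
    H = np.stack([x[i:i + delay] for i in range(len(x) - delay)], axis=1)
    X, Y = H[:, :-1], H[:, 1:]            # snapshot pairs: Y ~= A @ X
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1 / s)  # reduced operator
    return np.linalg.eigvals(A_tilde)     # DMD/Koopman eigenvalues

# Example: eigenvalues near the unit circle encode oscillation frequencies.
t = np.linspace(0, 20, 2000)
x = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
print(dmd_eigs(x))
```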
VisMin: Visual Minimal-Change Understanding
Rabiul Awal
Saba Ahmadi
Le Zhang
Fine-grained understanding of objects, attributes, and relationships between objects is crucial for visual-language models (VLMs). Existing benchmarks primarily focus on evaluating VLMs' capability to distinguish between two very similar captions given an image. In this paper, we introduce a new, challenging benchmark termed Visual Minimal-Change Understanding (VisMin), which requires models to predict the correct image-caption match given two images and two captions. The image pair and caption pair contain minimal changes, i.e., only one aspect changes at a time from among the following: object, attribute, count, and spatial relation. These changes test the models' understanding of objects, attributes (such as color, material, shape), counts, and spatial relationships between objects. We built an automatic framework using large language models and diffusion models, followed by a rigorous 4-step verification process by human annotators. Empirical experiments reveal that current VLMs exhibit notable deficiencies in understanding spatial relationships and counting abilities. We also generate a large-scale training dataset to finetune CLIP and Idefics2, showing significant improvements in fine-grained understanding across benchmarks and in CLIP's general image-text alignment. We release all resources, including the benchmark, training data, and finetuned model checkpoints, at https://vismin.net/.
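As a concrete illustration of the evaluation protocol, the sketch below scores the two-image/two-caption matching task with off-the-shelf CLIP via Hugging Face transformers; the scoring rule (each image must prefer its own caption) is our reading of the task description.

```python
# Sketch of the two-image / two-caption matching evaluation the benchmark
# describes, scored with off-the-shelf CLIP via Hugging Face transformers.
# The "both pairs must be ranked correctly" rule is our reading of the task.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def vismin_correct(image_a, image_b, caption_a, caption_b):
    inputs = processor(text=[caption_a, caption_b],
                       images=[image_a, image_b],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        sim = model(**inputs).logits_per_image  # (2 images, 2 captions)
    # Each image must prefer its own caption over the minimally changed one.
    return bool(sim[0, 0] > sim[0, 1] and sim[1, 1] > sim[1, 0])

# Usage: vismin_correct(Image.open("a.jpg"), Image.open("b.jpg"),
#                       "a red mug left of a book", "a red mug right of a book")
```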
Wasserstein Distributionally Robust Shallow Convex Neural Networks
Julien Pallage
In this work, we propose Wasserstein distributionally robust shallow convex neural networks (WaDiRo-SCNNs) to provide reliable nonlinear predictions when subject to adverse and corrupted datasets. Our approach is based on a new convex training program for
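Although the abstract is cut off, the generic Wasserstein distributionally robust objective that WaDiRo-style training builds on is standard; the paper's exact convex reformulation may differ from this form:

```latex
% Generic Wasserstein distributionally robust learning objective that
% WaDiRo-style training builds on; the paper's exact convex training
% program may differ from this standard form.
\min_{\theta} \;
\sup_{\mathbb{Q} \,:\, W_1(\mathbb{Q}, \hat{\mathbb{P}}_n) \le \varepsilon} \;
\mathbb{E}_{(x, y) \sim \mathbb{Q}}
\big[ \ell\big(f_\theta(x), y\big) \big]
```

Here $\hat{\mathbb{P}}_n$ is the empirical training distribution, $W_1$ the order-1 Wasserstein distance, $\varepsilon$ the radius of the ambiguity ball, and $\ell$ the prediction loss; the supremum guards against worst-case perturbations of the training data within that ball.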