Publications

On Dynamic Program Decompositions of Static Risk Measures
Jia Lin Hau
Mohammad Ghavamzadeh
Marek Petrik
Optimizing static risk-averse objectives in Markov decision processes is challenging because they do not readily admit dynamic programming decompositions. Prior work has proposed dynamic decompositions of risk measures that help to formulate dynamic programs on an augmented state space. This paper shows that several existing decompositions are inherently inexact, contradicting several claims in the literature. In particular, we give examples showing that popular decompositions for the CVaR and EVaR risk measures are strict overestimates of the true risk values. However, an exact decomposition is possible for VaR, and we give a simple proof that illustrates the fundamental difference between the VaR and CVaR dynamic programming properties.
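For context, the standard definitions of the two risk measures contrasted here (the Rockafellar-Uryasev formulation; this is background, not the paper's augmented-state decompositions) are, for a random cost X and risk level alpha in (0,1):

    \mathrm{VaR}_\alpha(X) = \inf\{\, x \in \mathbb{R} \;:\; \mathbb{P}(X \le x) \ge \alpha \,\}
    \mathrm{CVaR}_\alpha(X) = \min_{z \in \mathbb{R}} \Big\{\, z + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - z)_+\big] \Big\}

Loosely, VaR is a plain quantile, while CVaR averages the cost over the tail beyond that quantile; the abstract's claim is that only the former admits an exact dynamic programming decomposition.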
Effective test generation using pre-trained Large Language Models and mutation testing
Amin Nikanjam
Vahid Majdinasab
Michel C. Desmarais
One of the critical phases in software development is software testing. Testing helps with identifying potential bugs and reducing maintenance costs. The goal of automated test generation tools is to ease the development of tests by suggesting efficient bug-revealing tests. Recently, researchers have leveraged Large Language Models (LLMs) of code to generate unit tests. While the code coverage of generated tests is usually assessed, the literature has acknowledged that coverage is weakly correlated with the efficiency of tests in bug detection. To overcome this limitation, in this paper, we introduce MuTAP, which leverages mutation testing to improve the effectiveness of test cases generated by LLMs at revealing bugs. It does so by augmenting prompts with surviving mutants, since those mutants highlight the limitations of test cases in detecting bugs. MuTAP is capable of generating effective test cases in the absence of natural language descriptions of the Programs Under Test (PUTs). We employ different LLMs within MuTAP and evaluate their performance on different benchmarks. Our results show that our proposed method is able to detect up to 28% more faulty human-written code snippets. Among these, 17% remained undetected by both the current state-of-the-art fully automated test generation tool (i.e., Pynguin) and zero-shot/few-shot learning approaches on LLMs. Furthermore, MuTAP achieves a Mutation Score (MS) of 93.57% on synthetic buggy code, outperforming all other approaches in our evaluation. Our findings suggest that although LLMs can serve as a useful tool for generating test cases, the generated tests require specific post-processing steps to enhance their effectiveness, as they may suffer from syntactic or functional errors and may be ineffective at detecting certain types of bugs or testing corner cases of PUTs.
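As a rough illustration of the loop the abstract describes, the sketch below feeds surviving mutants back into the prompt until none survive. Here `call_llm`, `gen_mutants`, and `run_tests` are hypothetical stand-ins for an LLM client, a mutation tool, and a test runner, and the prompt wording is illustrative only.

    from typing import Callable, List

    def refine_tests(put_source: str,
                     call_llm: Callable[[str], str],
                     gen_mutants: Callable[[str], List[str]],
                     run_tests: Callable[[str, str], bool],
                     max_rounds: int = 3) -> str:
        # Initial zero-shot prompt for the Program Under Test (PUT).
        prompt = f"Write unit tests for this program:\n{put_source}"
        tests = call_llm(prompt)
        for _ in range(max_rounds):
            # A mutant "survives" if the current tests still pass on it,
            # i.e. the suite cannot tell it apart from the original program.
            survivors = [m for m in gen_mutants(put_source) if run_tests(m, tests)]
            if not survivors:
                break  # every mutant killed: mutation score is 100%
            # Augment the prompt with the undetected mutants and regenerate.
            prompt = ("These buggy variants are not detected by the tests:\n"
                      + "\n---\n".join(survivors)
                      + f"\nImprove these tests:\n{tests}")
            tests = call_llm(prompt)
        return tests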
An Empirical Investigation of the Role of Pre-training in Lifelong Learning
Sanket Vaibhav Mehta
Emma Strubell
The lifelong learning paradigm in machine learning is an attractive alternative to the more prominent isolated learning scheme not only due to its resemblance to biological learning but also its potential to reduce energy waste by obviating excessive model re-training. A key challenge to this paradigm is the phenomenon of catastrophic forgetting. With the increasing popularity and success of pre-trained models in machine learning, we pose the question: What role does pre-training play in lifelong learning, specifically with respect to catastrophic forgetting? We investigate existing methods in the context of large, pre-trained models and evaluate their performance on a variety of text and image classification tasks, including a large-scale study using a novel data set of 15 diverse NLP tasks. Across all settings, we observe that generic pre-training implicitly alleviates the effects of catastrophic forgetting when learning multiple tasks sequentially compared to randomly initialized models. We then further investigate why pre-training alleviates forgetting in this setting. We study this phenomenon by analyzing the loss landscape, finding that pre-trained weights appear to ease forgetting by leading to wider minima. Based on this insight, we propose jointly optimizing for current task loss and loss basin sharpness to explicitly encourage wider basins during sequential fine-tuning. We show that this optimization approach outperforms several state-of-the-art task-sequential continual learning algorithms across multiple settings, occasionally even without retaining a memory that scales in size with the number of tasks.
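One common way to realize a joint "task loss plus basin sharpness" objective is a sharpness-aware (SAM-style) update; the paper's exact formulation may differ, so the PyTorch sketch below is only a plausible reading of the idea.

    import torch

    def sam_step(model, loss_fn, batch, opt, rho=0.05):
        x, y = batch
        # 1) Gradient of the task loss at the current weights.
        loss_fn(model(x), y).backward()
        grad_sq = sum((p.grad ** 2).sum()
                      for p in model.parameters() if p.grad is not None)
        grad_norm = grad_sq ** 0.5
        # 2) Ascend to the worst-case nearby weights (sharpness probe).
        eps = []
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    eps.append(None)
                    continue
                e = rho * p.grad / (grad_norm + 1e-12)
                p.add_(e)
                eps.append(e)
        opt.zero_grad()
        # 3) The gradient at the perturbed point drives the actual update,
        #    which favors wide minima over sharp ones.
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                if e is not None:
                    p.sub_(e)  # restore the original weights
        opt.step()
        opt.zero_grad()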
Enjeux de l’adaptation à la chaleur en ville et action publique : apports de l’interdisciplinarité et de la recherche-action - Cas de la métropole toulousaine
G. Bretagne
Julia Hidalgo
Sinda Haouès-Jouve
Lise Debrye
Aurélie Hanna
Valéry Masson
The national legislative context, together with citizens' expressed expectations for more information and action on climate issues, has progressively encouraged the territorialization of local climate and energy policies, as well as the emergence of climate adaptation as a territorial issue. This spatialization of climate issues has been playing out at the scale of the Toulouse metropolitan area for more than 10 years, driven by the territory's multiple geographic, climatic, and urban challenges. Research conducted locally on the themes of City, Environment, and Climate has benefited from a favorable context of interdisciplinarity and collaboration with urban stakeholders, supported by several national and European research calls. Two major objectives are stated: co-constructing knowledge to characterize the climate and energy issues specific to the Toulouse territory, and offering dedicated support to urban stakeholders to better explain and objectify local issues, so that these can be integrated into local public policies and actions. This article revisits the synergy enabled by this collaboration, presenting, on the one hand, the interdisciplinary working process that was put in place and showing, on the other, the data and expertise it produced.
Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness
Jackie CK Cheung
The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest. To maintain the knowledge acquired by LLMs, we need to ensure that the editing of learned facts respects internal logical constraints, a property known as the dependency of knowledge. Existing work on editing LLMs has partially addressed the issue of dependency, namely the case in which the editing of a fact should apply to its lexical variations without disrupting irrelevant ones. However, it neglects the dependency between a fact and its logical implications. We propose an evaluation protocol with an accompanying question-answering dataset, StandUp, that provides a comprehensive assessment of the editing process, considering the above notions of dependency. Our protocol involves setting up a controlled environment in which we edit facts and monitor their impact on LLMs, along with their implications based on If-Then rules. Extensive experiments on StandUp show that existing knowledge editing methods are sensitive to the surface form of knowledge and that they have limited performance in inferring the implications of edited facts.
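A minimal sketch of the kind of dependency-aware evaluation such a protocol implies is shown below; `edit_model` and `query` are hypothetical stand-ins for a knowledge editor and a QA interface, and the probe categories are illustrative, not the StandUp format.

    def evaluate_edit(model, edit_model, query, fact, paraphrases,
                      implications, unrelated):
        # fact: (question, new_answer); paraphrases: questions that must now
        # yield new_answer; implications/unrelated: (question, answer) pairs.
        question, new_answer = fact
        edited = edit_model(model, fact)
        return {
            "efficacy":    query(edited, question) == new_answer,
            "paraphrase":  all(query(edited, q) == new_answer for q in paraphrases),
            # Specificity: facts unrelated to the edit must survive untouched.
            "specificity": all(query(edited, q) == a for q, a in unrelated),
            # Implication: If-Then consequences of the edit must follow.
            "implication": all(query(edited, q) == a for q, a in implications),
        }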
Explaining Graph Neural Networks Using Interpretable Local Surrogates
Exploring Self-Attention Mechanisms for Speech Separation
Samuele Cornell
François Grondin
Mirko Bronzi
Transformers have enabled impressive improvements in deep learning. They often outperform recurrent and convolutional models in many tasks while taking advantage of parallel processing. Recently, we proposed the SepFormer, which obtains state-of-the-art performance in speech separation on the WSJ0-2/3Mix datasets. This paper studies Transformers for speech separation in depth. In particular, we extend our previous findings on the SepFormer by providing results on more challenging noisy and noisy-reverberant datasets, such as LibriMix, WHAM!, and WHAMR!. Moreover, we extend our model to perform speech enhancement and provide experimental evidence on denoising and dereverberation tasks. Finally, we investigate, for the first time in speech separation, the use of efficient self-attention mechanisms such as Linformers, Longformers, and Reformers. We found that they reduce memory requirements significantly. For example, we show that Reformer-based attention outperforms the popular Conv-TasNet model on the WSJ0-2Mix dataset while being faster at inference and comparable in terms of memory consumption.
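To make the memory argument concrete, the sketch below contrasts standard O(n^2) self-attention with a Linformer-style variant that projects keys and values along the sequence axis down to length k_len, shrinking the score matrix from n x n to n x k_len. Shapes are illustrative; this is not the SepFormer implementation.

    import torch

    def attention(q, k, v):
        # q, k, v: (batch, n, d); the score matrix is (batch, n, n).
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ v

    def linformer_attention(q, k, v, proj):
        # proj: (k_len, n) learned projection over the sequence axis, so the
        # score matrix shrinks to (batch, n, k_len) with k_len << n.
        k, v = proj @ k, proj @ v                  # (batch, k_len, d)
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ v

For a sequence of n = 8000 frames and k_len = 256, the score matrix drops from 64M to about 2M entries per head, which is the kind of memory saving the abstract reports.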
Exploring trust development in families of children towards surgical and emergency care providers: A scoping review of the literature.
Olivia Serhan
Alexander Moise
Elena Guadagno
Amalia M. Issa
Family risk communication preferences in pediatric surgery: A scoping review.
Arthega Selvarajan
Brandon Arulanandam
Elena Guadagno
Feature Likelihood Divergence: Evaluating the Generalization of Generative Models Using Samples
Avishek (Joey) Bose
Ian Gemp
Chongli Qin
Yoram Bachrach
The past few years have seen impressive progress in the development of deep generative models capable of producing high-dimensional, complex, and photo-realistic data. However, current methods for evaluating such models remain incomplete: standard likelihood-based metrics do not always apply and rarely correlate with perceptual fidelity, while sample-based metrics, such as FID, are insensitive to overfitting, i.e., the inability to generalize beyond the training set. To address these limitations, we propose a new metric called the Feature Likelihood Divergence (FLD), a parametric sample-based metric that uses density estimation to provide a comprehensive trichotomic evaluation accounting for the novelty (i.e., difference from the training samples), fidelity, and diversity of generated samples. We empirically demonstrate the ability of FLD to identify overfitting problem cases, even when previously proposed metrics fail. We also extensively evaluate FLD on various image datasets and model classes, demonstrating its ability to match the intuitions of previous metrics like FID while offering a more comprehensive evaluation of generative models. Code is available at https://github.com/marcojira/fld.
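The density-estimation idea can be sketched as follows: fit a density to generated-sample features and compare held-out test likelihood against train likelihood to expose memorization. The actual metric differs in detail (see the linked repository); the mixture model and feature inputs below are placeholders.

    from sklearn.mixture import GaussianMixture

    def fld_like_score(gen_feats, train_feats, test_feats, n_components=32):
        # Fit a density to features of *generated* samples.
        density = GaussianMixture(n_components=n_components).fit(gen_feats)
        test_ll = density.score(test_feats)    # mean log-likelihood of held-out data
        train_ll = density.score(train_feats)  # inflated if samples copy the train set
        # A large train/test gap signals memorization rather than generalization.
        return test_ll, train_ll - test_ll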
Filtering Pixel Latent Variables for Unmixing Volumetric Images
Measurements of different overlapping components require robust unmixing algorithms to convert the raw multi-dimensional measurements to useful unmixed images. Such algorithms perform reliable separation of the components when the raw signal is fully resolved and contains enough information to fit curves on the raw distributions. In experimental physics, measurements are often noisy, undersampled, or unresolved spatially or spectrally. We propose a novel method where bandpass filters are applied to the latent space of a multi-dimensional convolutional neural network to separate the overlapping signal components and extract each of their relative contributions. Simultaneously processing all dimensions with multi-dimensional convolution kernels empowers the network to combine the information from adjacent pixels and time- or spectral-bins, facilitating component separation in instances where individual pixels lack well-resolved information. We demonstrate the applicability of the method to real experimental physics problems using fluorescence lifetime microscopy and mode decomposition in optical fibers as test cases. The successful application of our approach to these two distinct experimental cases, characterized by different measured distributions, highlights the versatility of our approach in addressing a wide array of imaging tasks.
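A minimal sketch of the filtering idea, assuming the latent space is a (batch, channels, time) tensor and each component occupies a known range of frequency bins; the real architecture and filter design are not reproduced here.

    import torch

    def bandpass_contributions(latent, bands):
        # latent: (batch, channels, time); bands: list of (lo, hi) FFT-bin
        # ranges, one per signal component.
        spec = torch.fft.rfft(latent, dim=-1)
        energies = []
        for lo, hi in bands:
            mask = torch.zeros_like(spec)
            mask[..., lo:hi] = 1
            comp = torch.fft.irfft(spec * mask, n=latent.shape[-1], dim=-1)
            energies.append(comp.pow(2).mean(dim=(-2, -1)))  # per-sample energy
        energy = torch.stack(energies, dim=-1)               # (batch, n_bands)
        return energy / energy.sum(dim=-1, keepdim=True)     # relative contributions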
Findings of the 1st Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2023
Francesco Tinner
Mammad Hajili
Omer Goldman
Muhammad Farid Adilazuarda
Muhammad Dehan Al Kautsar
Aziza Mirsaidova
Müge Kural
Dylan Massey
Chiamaka Ijeoma Chukwuneke
CHINEDU EMMANUEL MBONU
Damilola Oluwaseun Oloyede
Kayode Olaleye
Jonathan Atala
Benjamin A. Ajibade
Saksham Bassi
Najoung Kim
Duygu Ataman
Large language models (LLMs) excel in language understanding and generation, especially in English, which has ample public benchmarks for various natural language processing (NLP) tasks. Nevertheless, their reliability across different languages and domains remains uncertain. Our new shared task introduces a novel benchmark to assess the ability of multilingual LLMs to comprehend and produce language under sparse settings, particularly in scenarios with under-resourced languages, with an emphasis on the ability to capture logical, factual, or causal relationships within lengthy text contexts. The shared task consists of two sub-tasks crucial to information retrieval: Named Entity Recognition (NER) and Reading Comprehension (RC), in 7 data-scarce languages: Azerbaijani, Igbo, Indonesian, Swiss German, Turkish, Uzbek and Yorùbá, which previously lacked annotated resources in information retrieval tasks. Our evaluation of leading LLMs reveals that, despite their competitive performance, they still have notable weaknesses such as producing output in the non-target language or providing counterfactual information that cannot be inferred from the context. As more advanced models emerge, the benchmark will remain essential for supporting fairness and applicability in information retrieval systems.