Caustics: A Python Package for Accelerated Strong Gravitational Lensing Simulations
Connor Stone
Alexandre Adam
Adam Coogan
M. J. Yantovski-Barth
Andreas Filipp
Landung Setiawan
Cordero Core
Ronan Legin
Charles Wilson
Gabriel Missael Barco
ChainBuddy: An AI-assisted Agent System for Helping Users Set up LLM Pipelines
Jingyue Zhang
CL-MASR: A Continual Learning Benchmark for Multilingual ASR
Luca Della Libera
Pooneh Mousavi
Salah Zaiem
Common Challenges of Deep Reinforcement Learning Applications Development: An Empirical Study
Mohammad Mehdi Morovati
Florian Tambon
Mina Taraghi
Amin Nikanjam
Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning
Consolidating Separate Degradations Model via Weights Fusion and Distillation
Dinesh Daultani
Real-world images commonly contain various types of degradation, such as motion blur and luminance noise. Computer vision recognition models trained on clean images perform poorly on degraded images. Several prior works have explored image classification of degraded images by training a separate model for each degradation. However, hosting a separate model per degradation on hardware-constrained applications, and correctly estimating degradation parameters at run time, is challenging. This work proposes a method for effectively combining several models trained separately on different degradations into a single model that classifies images with different types of degradation. Our proposed method is four-fold: (1) train a base model on clean images, (2) fine-tune the base model individually for each given image degradation, (3) fuse the weights of the fine-tuned degradation-specific models, (4) fine-tune on the given task using distillation and cross-entropy loss. Our proposed method outperforms previous state-of-the-art pretraining methods in out-of-distribution generalization on degradations such as JPEG compression, salt-and-pepper noise, Gaussian blur, and additive white Gaussian noise by 2.5% on the CIFAR-100 dataset and by 1.3% on the CIFAR-10 dataset. Moreover, our method handles the degradations used for training without any explicit information about the degradation at inference time. Code will be available at https://github.com/dineshdaultani/FusionDistill.
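The four-step recipe above can be sketched roughly as follows. This is a minimal, illustrative sketch and not the authors' released code: the choice of teacher for distillation, the temperature, and the loss weighting are assumptions; see the linked repository for the actual implementation.

```python
# Illustrative sketch of steps (3) weight fusion and (4) distillation fine-tuning.
# All hyperparameters and the teacher choice are assumptions, not the paper's values.
import copy
import torch
import torch.nn.functional as F

def fuse_weights(models):
    """Step 3: average the parameters of the degradation-specific fine-tuned models."""
    fused = copy.deepcopy(models[0])
    state = fused.state_dict()
    for key, value in state.items():
        stacked = torch.stack([m.state_dict()[key].float() for m in models])
        state[key] = stacked.mean(dim=0).to(value.dtype)
    fused.load_state_dict(state)
    return fused

def distill_loss(student, teacher, images, labels, T=4.0, alpha=0.5):
    """Step 4: distillation (KL on softened logits) plus cross-entropy."""
    with torch.no_grad():
        teacher_logits = teacher(images)  # which model acts as teacher is an assumption
    student_logits = student(images)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```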
Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
Jerry Huang
Prasanna Parthasarathi
Mehdi Rezagholizadeh
Corticosteroids induce an early but limited decrease in IL-6 dependent pro-inflammatory responses in critically ill COVID-19 patients
Tomas URBINA
Paul GABARRE
Vincent BONNY
Jean-Rémi Lavillegrand
Marc GARNIER
Jérémie JOFFRE
Nathalie MARIO
Geoffroy HARIRI
Matthieu TURPIN
Emmanuel PARDO
Muriel FARTOUKH
Bertrand GUIDET
Eric Maury
Yannick CHANTRAN
Pierre-Yves BOELLE
Guillaume VOIRIOT
Hafid AIT-OUFELLA
Dance of the Neurons: Unraveling Sex from Brain Signals (short paper).
Mohammad-Javad Darvishi Bayazi
Mohammad S. Ghaemi
Jocelyn Faubert
Data-access performance anti-patterns in data-intensive systems
Biruk Asmare Muse
Kawser Wazed Nafi
Giuliano Antoniol
Data-intensive systems handle variable, high-volume, and high-velocity data generated by humans and digital devices. Like traditional software, data-intensive systems are prone to technical debts introduced to cope with the pressure of time and resource constraints on developers. Data access is a critical component of data-intensive systems, as it determines their overall performance and functionality. While data-access technical debts are receiving attention from the research community, technical debts affecting performance are not well investigated. Objective: Identify, categorize, and validate data-access performance issues in the context of NoSQL-based and polyglot-persistence data-intensive systems using a qualitative study. Method: We collect issues from NoSQL-based and polyglot-persistence open-source data-intensive systems, identify data-access performance issues using inductive coding, and build a taxonomy of their root causes. Then, we validate the perceived relevance of the newly identified performance issues through a developer survey.
Deciphering lineage-relevant gene regulatory networks during endoderm formation by InPheRNo-ChIP.
Chen Su
William A Pastor
Deciphering the underlying gene regulatory networks (GRNs) that govern early human embryogenesis is critical for understanding developmental mechanisms yet remains challenging due to limited sample availability and the inherent complexity of the biological processes involved. To address this, we developed InPheRNo-ChIP, a computational framework that integrates multimodal data, including RNA-seq, transcription factor (TF)-specific ChIP-seq, and phenotypic labels, to reconstruct phenotype-relevant GRNs associated with endoderm development. The core of this method is a probabilistic graphical model that captures the simultaneous effect of TFs on their putative target genes to influence a particular phenotypic outcome. Unlike the majority of existing GRN inference methods, which are agnostic to phenotypic outcomes, InPheRNo-ChIP directly incorporates phenotypic information during GRN inference, enabling the distinction between lineage-specific and general regulatory interactions. We integrated data from three experimental studies and applied InPheRNo-ChIP to infer the GRN governing the differentiation of human embryonic stem cells into definitive endoderm. Benchmarking against a scRNA-seq CRISPRi study demonstrated InPheRNo-ChIP's ability to identify regulatory interactions involving the endoderm markers FOXA2, SMAD2, and SOX17, outperforming other methods. This highlights the importance of incorporating the phenotypic context during network inference. Furthermore, an ablation study confirms the synergistic contribution of ChIP-seq, RNA-seq, and phenotypic data, highlighting the value of multimodal integration for accurate phenotype-relevant GRN reconstruction.
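As a purely illustrative sketch of the kind of multimodal evidence integration described above (and not the authors' probabilistic graphical model), one could score candidate TF-gene edges by combining TF-gene co-expression, ChIP-seq binding, and gene-phenotype association; all function and variable names below are hypothetical.

```python
# Illustrative only: combine expression, ChIP-seq binding, and phenotype evidence
# into a single edge score. This is NOT the InPheRNo-ChIP graphical model.
import numpy as np
from scipy import stats

def edge_scores(expr_tf, expr_genes, chip_binding, phenotype):
    """
    expr_tf:      (samples, n_tfs)   TF expression
    expr_genes:   (samples, n_genes) target-gene expression
    chip_binding: (n_tfs, n_genes)   0/1 ChIP-seq binding indicator
    phenotype:    (samples,)         binary phenotype label (e.g. endoderm vs. not)
    """
    n_tfs, n_genes = chip_binding.shape
    scores = np.zeros((n_tfs, n_genes))
    for g in range(n_genes):
        # gene-phenotype association (point-biserial correlation)
        r_pheno, _ = stats.pointbiserialr(phenotype, expr_genes[:, g])
        for t in range(n_tfs):
            # TF-gene co-expression, gated by ChIP-seq binding evidence
            r_tg, _ = stats.pearsonr(expr_tf[:, t], expr_genes[:, g])
            scores[t, g] = abs(r_tg) * abs(r_pheno) * chip_binding[t, g]
    return scores
```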
DeCoDEx: Confounder Detector Guidance for Improved Diffusion-based Counterfactual Explanations
Nima Fathi
Amar Kumar
Brennan Nichyporuk
Mohammad Havaei
Deep learning classifiers are prone to latching onto dominant confounders present in a dataset rather than onto the causal markers associated with the target class, leading to poor generalization and biased predictions. Although explainability via counterfactual image generation has been successful at exposing the problem, bias mitigation strategies that permit accurate explainability in the presence of dominant and diverse artifacts remain unsolved. In this work, we propose the DeCoDEx framework and show how an external, pre-trained binary artifact detector can be leveraged during inference to guide a diffusion-based counterfactual image generator towards accurate explainability. Experiments on the CheXpert dataset, using both synthetic artifacts and real visual artifacts (support devices), show that the proposed method successfully synthesizes counterfactual images that change the causal pathology markers associated with Pleural Effusion while preserving or ignoring the visual artifacts. Augmenting ERM and Group-DRO classifiers with the DeCoDEx-generated images substantially improves results across underrepresented groups that are out of distribution for each class. The code is made publicly available at https://github.com/NimaFathi/DeCoDEx.
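A rough sketch of what detector-guided counterfactual generation can look like in a classifier-guidance style sampler is given below. The diffusion model interface (p_mean_variance), the guidance weights, and the exact way the detector signal is combined with the classifier gradient are assumptions for illustration, not the authors' exact rule; see the linked repository for the real implementation.

```python
# Illustrative classifier-guidance style step with an extra artifact-detector term.
# `diffusion`, `classifier`, and `detector` are placeholders for pre-trained models.
import torch

@torch.no_grad()
def guided_denoise_step(diffusion, x_t, t, classifier, detector,
                        target_class, w_cls=1.0, w_det=1.0):
    # Predicted posterior mean/variance from the (placeholder) diffusion model.
    mean, var = diffusion.p_mean_variance(x_t, t)

    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        # Push the sample toward the counterfactual (target) class...
        log_p_cls = torch.log_softmax(classifier(x_in), dim=1)[:, target_class].sum()
        # ...while keeping the artifact detector's output unchanged, so the
        # counterfactual does not "explain" the prediction by editing the artifact.
        det_consistency = -torch.abs(detector(x_in) - detector(x_t)).sum()
        grad = torch.autograd.grad(w_cls * log_p_cls + w_det * det_consistency, x_in)[0]

    # Shift the denoising mean by the combined guidance gradient.
    return mean + var * grad + var.sqrt() * torch.randn_like(x_t)
```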