Publications

DIG In: Evaluating Disparities in Image Generations with Indicators for Geographic Diversity
Melissa Hall
Candace Ross
Adina Williams
Nicolas Carion
Michal Drozdzal
The unprecedented photorealistic results achieved by recent text-to-image generative systems and their increasing use as plug-and-play content creation solutions make it crucial to understand their potential biases. In this work, we introduce three indicators to evaluate the realism, diversity and prompt-generation consistency of text-to-image generative systems when prompted to generate objects from across the world. Our indicators complement qualitative analysis of the broader impact of such systems by enabling automatic and efficient benchmarking of geographic disparities, an important step towards building responsible visual content creation systems. We use our proposed indicators to analyze potential geographic biases in state-of-the-art visual content creation systems and find that: (1) models have less realism and diversity of generations when prompting for Africa and West Asia than Europe, (2) prompting with geographic information comes at a cost to prompt-consistency and diversity of generated images, and (3) models exhibit more region-level disparities for some objects than others. Perhaps most interestingly, our indicators suggest that progress in image generation quality has come at the cost of real-world geographic representation. Our comprehensive evaluation constitutes a crucial step towards ensuring a positive experience of visual content creation for everyone. Code is available at https://github.com/facebookresearch/DIG-In/.
Influence of scanning plane on Human Spinal Cord functional Magnetic Resonance echo planar imaging
Marta Moraschi
Silvia Tommasin
Laura Maugeri
Mauro Dinuzzo
Marco Masullo
Fabio Mangini
Lorenzo Giovannelli
Daniele Mascali
Tommaso Gili
Valerio Pisani
Ugo Md Nocentini
Federico Giove
Michela Fratini
BACKGROUND: Functional Magnetic Resonance Imaging (fMRI) is based on the Blood Oxygenation Level Dependent contrast and has been exploited for the indirect study of the neuronal activity within both the brain and the spinal cord. However, the interpretation of spinal cord fMRI (scfMRI) is still controversial and its diffusion is rather limited because of technical limitations. Overcoming these limitations would have a beneficial effect for the assessment and follow-up of spinal injuries and neurodegenerative diseases. PURPOSE: This study was aimed at systematically verifying whether sagittal scanning in scfMRI using EPI readout is a viable alternative to the more common axial scanning, and at optimizing a pipeline for EPI-based scfMRI data analysis, based on the Spinal Cord Toolbox (SCT). METHODS: Forty-five healthy subjects underwent MRI acquisition in a Philips Achieva 3T MRI scanner. T2*-weighted fMRI data were acquired using a GE-EPI sequence along sagittal and axial planes during an isometric motor task. Differences in benchmarks were assessed via paired two-sample t-test at p=0.05. RESULTS: We investigated the impact of the acquisition strategy by means of various metrics such as Temporal Signal to Noise Ratio (tSNR), Dice Coefficient to assess geometric distortions, Reproducibility and Sensitivity. tSNR was higher in axial than in sagittal scans, as was reproducibility within the whole cord mask (t=7.4, p<0.01) and within the GM mask (t=4.2, p<0.01). The other benchmarks, associated with distortion and functional response, showed no differences.
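The tSNR benchmark mentioned above can be sketched as a simple voxel-wise computation (an illustrative sketch only; the study's actual pipeline is based on the Spinal Cord Toolbox and operates on registered image volumes):

```python
import numpy as np

def temporal_snr(timeseries):
    """Voxel-wise temporal SNR: temporal mean divided by temporal std.

    timeseries: (n_voxels, n_timepoints) array of fMRI signal values.
    Voxels with zero temporal variance are assigned a tSNR of 0.
    """
    mean = timeseries.mean(axis=1)
    std = timeseries.std(axis=1)
    return np.where(std > 0, mean / std, 0.0)

# Toy data: a baseline signal of 100 with noise of std 2 yields tSNR near 50.
rng = np.random.default_rng(0)
data = 100.0 + rng.normal(0.0, 2.0, size=(4, 300))
print(temporal_snr(data))
```

Higher tSNR means the signal is more stable over time relative to its noise, which is why it serves as a basic quality benchmark for comparing acquisition planes.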
More than one way to skin a dose volume: the impact of dose-surface map calculation approach on study reproducibility.
Haley Patrick
Uncertainty Resolution in Misinformation Detection
Yury Orlovskiy
Camille Thibault
Anne Imouza
Jean-François Godbout
Kellin Pelrine
An Analysis of Quantile Temporal-Difference Learning
Mark Rowland
Remi Munos
Mohammad Gheshlaghi Azar
Yunhao Tang
Georg Ostrovski
Anna Harutyunyan
K. Tuyls
Will Dabney
We analyse quantile temporal-difference learning (QTD), a distributional reinforcement learning algorithm that has proven to be a key component in several successful large-scale applications of reinforcement learning. Despite these empirical successes, a theoretical understanding of QTD has proven elusive until now. Unlike classical TD learning, which can be analysed with standard stochastic approximation tools, QTD updates do not approximate contraction mappings, are highly non-linear, and may have multiple fixed points. The core result of this paper is a proof of convergence to the fixed points of a related family of dynamic programming procedures with probability 1, putting QTD on firm theoretical footing. The proof establishes connections between QTD and non-linear differential inclusions through stochastic approximation theory and non-smooth analysis.
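The QTD update analysed in the paper can be sketched in tabular form (an illustrative sketch under a fixed policy, not the authors' code; the step-size schedule and environment here are toy choices):

```python
import numpy as np

def qtd_update(theta, x, r, x_next, alpha, gamma, taus):
    """One tabular QTD update at state x (policy evaluation setting).

    theta: (n_states, m) array of quantile estimates; taus: (m,) quantile
    levels. Each quantile moves by the averaged subgradient of the
    quantile (pinball) loss against the m bootstrapped sample targets.
    """
    targets = r + gamma * theta[x_next]              # m sample targets
    for i, tau in enumerate(taus):
        frac_below = (targets < theta[x, i]).mean()  # empirical CDF at theta
        theta[x, i] += alpha * (tau - frac_below)
    return theta

# Toy single-state chain with reward 1 and gamma = 0.5: the return is
# deterministic (1 / (1 - 0.5) = 2), so every quantile should hover near 2.
m = 5
taus = (2 * np.arange(m) + 1) / (2 * m)
theta = np.zeros((1, m))
for _ in range(5000):
    theta = qtd_update(theta, 0, 1.0, 0, alpha=0.05, gamma=0.5, taus=taus)
print(theta[0])  # all entries near 2.0
```

Note the update's discontinuous dependence on theta through the indicator term; this is exactly the non-smoothness that rules out classical contraction-based analysis and motivates the differential-inclusion machinery.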
Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Danqi Liao
Chen Liu
Benjamin W Christensen
Alexander Tong
Guillaume Huguet
Maximilian Nickel
Ian Adelstein
Smita Krishnaswamy
Entropy and mutual information in neural networks provide rich information on the learning process, but they have proven difficult to compute reliably in high dimensions. Indeed, in noisy and high-dimensional data, traditional estimates in ambient dimensions approach a fixed entropy and are prohibitively hard to compute. To address these issues, we leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures. Specifically, we define diffusion spectral entropy (DSE) in neural representations of a dataset as well as diffusion spectral mutual information (DSMI) between different variables representing data. First, we show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data that outperform classic Shannon entropy, nonparametric estimation, and mutual information neural estimation (MINE). We then study the evolution of representations in classification networks with supervised learning, self-supervision, or overfitting. We observe that (1) DSE of neural representations increases during training; (2) DSMI with the class label increases during generalizable learning but stays stagnant during overfitting; (3) DSMI with the input signal shows differing trends: on MNIST it increases, while on CIFAR-10 and STL-10 it decreases. Finally, we show that DSE can be used to guide better network initialization and that DSMI can be used to predict downstream classification accuracy across 962 models on ImageNet.
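The diffusion-spectral idea can be sketched as follows (a minimal sketch assuming a Gaussian affinity with simple row normalization; the paper's diffusion-operator construction and normalization differ in detail):

```python
import numpy as np

def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    """Minimal DSE-style sketch: entropy of a diffusion operator's spectrum.

    X: (n_samples, n_features). Builds a Gaussian affinity matrix,
    row-normalizes it into a Markov (diffusion) matrix, and returns the
    Shannon entropy of the normalized eigenvalue magnitudes raised to
    the diffusion time t.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic diffusion matrix
    lam = np.abs(np.linalg.eigvals(P)) ** t
    p = lam / lam.sum()
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
h = diffusion_spectral_entropy(X)
print(0.0 <= h <= np.log(50))  # entropy is bounded by log(n_samples)
```

Intuitively, data concentrated on a low-dimensional manifold yields a faster-decaying diffusion spectrum, hence lower spectral entropy, which is why such a quantity can act as a noise-resistant proxy for intrinsic dimensionality.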
BAND: Biomedical Alert News Dataset
Zihao Fu
Meiru Zhang
Zaiqiao Meng
Yannan Shen
Anya Okhmatovskaia
Nigel Collier
Carbon capture, utilization and sequestration systems design and operation optimization: Assessment and perspectives of artificial intelligence opportunities
Eslam G. Al-Sakkari
Ahmed Ragab
Daria Camilla Boffito
Mouloud Amazouz
Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces
Ahmad-reza Ehyaei
Kiarash Mohammadi
Amir-Hossein Karimi
S. Samadi
Common Challenges of Deep Reinforcement Learning Applications Development: An Empirical Study
Mohammad Mehdi Morovati
Florian Tambon
Mina Taraghi
Amin Nikanjam
Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning
Consolidating Separate Degradations Model via Weights Fusion and Distillation
Dinesh Daultani
Real-world images prevalently contain different varieties of degradation, such as motion blur and luminance noise. Computer vision recognition models trained on clean images perform poorly on degraded images. Previously, several works have explored how to perform image classification of degraded images while training a single model for each degradation. Nevertheless, it becomes challenging to host several degradation models for each degradation on limited hardware applications and to estimate degradation parameters correctly at run-time. This work proposes a method for effectively combining several models trained separately on different degradations into a single model to classify images with different types of degradations. Our proposed method is four-fold: (1) train a base model on clean images, (2) fine-tune the base model individually for all given image degradations, (3) perform a fusion of weights given the fine-tuned models for individual degradations, (4) perform fine-tuning on the given task using distillation and cross-entropy loss. Our proposed method can outperform previous state-of-the-art pretraining methods in out-of-distribution generalization on degradations such as JPEG compression, salt-and-pepper noise, Gaussian blur, and additive white Gaussian noise by 2.5% on the CIFAR-100 dataset and by 1.3% on the CIFAR-10 dataset. Moreover, our proposed method can handle degradation used for training without any explicit information about degradation at inference time. Code will be available at https://github.com/dineshdaultani/FusionDistill.
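Step (3) of the four-fold method, the weight fusion, can be sketched as a uniform average of the fine-tuned parameters (a minimal sketch with toy parameter names; the paper's exact fusion scheme may weight the models differently):

```python
import numpy as np

def fuse_weights(state_dicts):
    """Average parameter arrays across several fine-tuned models.

    state_dicts: list of {param_name: ndarray} dictionaries from models
    sharing one architecture; returns the element-wise mean per parameter.
    """
    return {name: np.mean([sd[name] for sd in state_dicts], axis=0)
            for name in state_dicts[0]}

# Two toy "models" fine-tuned on different degradations: fusion is the
# element-wise mean of their weights (parameter names are hypothetical).
blur_model = {"conv.w": np.array([1.0, 3.0]), "fc.b": np.array([0.0])}
noise_model = {"conv.w": np.array([3.0, 5.0]), "fc.b": np.array([2.0])}
fused = fuse_weights([blur_model, noise_model])
print(fused["conv.w"], fused["fc.b"])  # [2. 4.] [1.]
```

The fused model is then fine-tuned with distillation and cross-entropy loss (step 4), which is what recovers per-degradation accuracy without needing degradation labels at inference time.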