Publications

Are LLMs Breaking MT Metrics? Results of the WMT24 Metrics Shared Task
Markus Freitag
Nitika Mathur
Daniel Deutsch
Chi-kiu Lo
Eleftherios Avramidis
Ricardo Rei
Brian Thompson
Frédéric Blain
Tom Kocmi
Jiayi Wang
Marianna Buchicchio
Chrysoula Zerva
Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Danqi Liao
Chen Liu
Benjamin W Christensen
Alexander Tong
Maximilian Nickel
Ian Adelstein
Entropy and mutual information in neural networks provide rich information on the learning process, but they have proven difficult to compute reliably in high dimensions. Indeed, in noisy and high-dimensional data, traditional estimates in ambient dimensions approach a fixed entropy and are prohibitively hard to compute. To address these issues, we leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures. Specifically, we define diffusion spectral entropy (DSE) in neural representations of a dataset as well as diffusion spectral mutual information (DSMI) between different variables representing data. First, we show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data that outperform classic Shannon entropy, nonparametric estimation, and mutual information neural estimation (MINE). We then study the evolution of representations in classification networks with supervised learning, self-supervision, or overfitting. We observe that (1) DSE of neural representations increases during training; (2) DSMI with the class label increases during generalizable learning but stays stagnant during overfitting; (3) DSMI with the input signal shows differing trends: on MNIST it increases, while on CIFAR-10 and STL-10 it decreases. Finally, we show that DSE can be used to guide better network initialization and that DSMI can be used to predict downstream classification accuracy across 962 models on ImageNet.
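The abstract's core construction, entropy over the spectrum of a diffusion operator built from the data, can be sketched roughly as follows. This is a minimal illustration assuming a Gaussian affinity kernel and a plain Shannon entropy over the normalized eigenvalue spectrum; the paper's exact construction (kernel choice, symmetrization, diffusion time) may differ.

```python
import numpy as np

def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    """Sketch of diffusion spectral entropy (DSE) for data/representations X.

    X: (n_samples, n_features) array. sigma and t are illustrative defaults,
    not the paper's settings.
    """
    # Pairwise squared Euclidean distances
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)          # row-normalized diffusion matrix
    eigvals = np.abs(np.linalg.eigvals(P)) ** t   # spectrum of the t-step diffusion
    p = eigvals / eigvals.sum()                   # normalize spectrum to a distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p))                 # Shannon entropy of that distribution
```

Intuitively, representations spread over many diffusion directions yield a flatter spectrum and higher entropy, which is the quantity the paper tracks over training.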
Attention-based Class-Conditioned Alignment for Multi-Source Domain Adaptive Object Detection
Atif Belal
Akhil Meethal
Francisco Perdigon Romero
Eric Granger
BAND: Biomedical Alert News Dataset
Zihao Fu
Meiru Zhang
Zaiqiao Meng
Anya Okhmatovskaia
Nigel Collier
Beyond Model Collapse: Scaling Up with Synthesized Data Requires Reinforcement
Yunzhen Feng
Pu Yang
Francois Charton
Julia Kempe
Bidirectional Generative Pre-training for Improving Healthcare Time-series Representation Learning
Qincheng Lu
Mike He Zhu
Bio-Mechanical Poet: An Immersive Audiovisual Playground for Brain Signals and Generative AI
Philipp Thölke
Antoine Bellemare‐Pepin
Yann Harel
François Lespinasse
Building on Efficient Foundations: Effective Training of LLMs with Structured Feedforward Layers
Xiuying Wei
Skander Moalla
Caglar Gulcehre
Carbon capture, utilization and sequestration systems design and operation optimization: Assessment and perspectives of artificial intelligence opportunities
Eslam G. Al-Sakkari
Ahmed Ragab
Daria Camilla Boffito
Mouloud Amazouz