Publications

Rare Variant Genetic Architecture of the Human Cortical MRI Phenotypes in General Population
Kuldeep Kumar
Sayeh Kazem
Zhijie Liao
Jakub Kopal
Guillaume Huguet
Thomas Renne
Martineau Jean-Louis
Zhe Xie
Zohra Saci
Laura Almasy
David C. Glahn
Tomas Paus
Carrie Bearden
Paul Thompson
Richard A.I. Bethlehem
Varun Warrier
Sébastien Jacquemont
Beyond the Norms: Detecting Prediction Errors in Regression Models
Andres Altieri
Marco Romanelli
Georg Pichler
Florence Alberge
This paper tackles the challenge of detecting unreliable behavior in regression algorithms, which may arise from intrinsic variability (e.g., aleatoric uncertainty) or modeling errors (e.g., model uncertainty). First, we formally introduce the notion of unreliability in regression, i.e., when the output of the regressor exceeds a specified discrepancy (or error). Then, using powerful tools for probabilistic modeling, we estimate the discrepancy density and measure its statistical diversity using our proposed metric for statistical dissimilarity. In turn, this allows us to derive a data-driven score that expresses the uncertainty of the regression outcome. We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches, and contributing to the broader field of uncertainty quantification and safe machine learning systems.
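
To make the idea concrete, here is a minimal, hypothetical sketch of this style of error detection: fit a simple density model to the regressor's discrepancies on training data, then score each input by the probability that its error exceeds a tolerance. The modeling choices below (polynomial least squares, a Gaussian residual model, the names `unreliability_score` and `eps`) are our own illustration, not the paper's method.

```python
# Sketch: flag regression inputs whose estimated probability of a large
# error exceeds a tolerance, via a fitted discrepancy (residual) density.
from math import erf
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic data: noise grows with |x|.
x = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(x[:, 0]) + rng.normal(0, 0.05 + 0.2 * np.abs(x[:, 0]))

# Primary regressor: simple polynomial least squares.
X = np.hstack([x**k for k in range(6)])
w = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ w

# Discrepancy-density model: predict the log-variance of the residual
# from x (a zero-mean Gaussian model, the simplest density estimate).
Z = np.hstack([x**k for k in range(4)])
v = np.linalg.lstsq(Z, np.log(resid**2 + 1e-8), rcond=None)[0]
sigma = np.sqrt(np.exp(Z @ v))

def unreliability_score(sigma, eps=0.5):
    """P(|error| > eps) under the fitted zero-mean Gaussian residual model."""
    return 1.0 - np.array([erf(eps / (s * np.sqrt(2))) for s in sigma])

score = unreliability_score(sigma)
# Scores should concentrate where |x| is large (the noisier region).
print("mean score, |x|<1 :", score[np.abs(x[:, 0]) < 1].mean())
print("mean score, |x|>2 :", score[np.abs(x[:, 0]) > 2].mean())
```

As expected for an error detector, inputs in the noisier region receive higher scores and would be flagged as unreliable.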
Body size interacts with the structure of the central nervous system: A multi-center in vivo neuroimaging study
René Labounek
Monica T. Bondy
Amy L. Paulson
Sandrine Bédard
Mihael Abramovic
Eva Alonso‐Ortiz
Nicole Atcheson
Laura R. Barlow
Robert L. Barry
Markus Barth
Marco Battiston
Christian Büchel
Matthew D. Budde
Virginie Callot
Anna Combes
Benjamin De Leener
Maxime Descoteaux
Paulo Loureiro de Sousa
Marek Dostál
Julien Doyon
Adam Dvorak
Falk Eippert
Karla R. Epperson
Kevin S. Epperson
Patrick Freund
Jürgen Finsterbusch
Alexandru Foias
Michela Fratini
Issei Fukunaga
Claudia A. M. Gandini Wheeler-Kingshott
Giancarlo Germani
Guillaume Gilbert
Federico Giove
Francesco Grussu
Akifumi Hagiwara
Pierre-Gilles Henry
Tomáš Horák
Masaaki Hori
James M. Joers
Kouhei Kamiya
Haleh Karbasforoushan
Miloš Keřkovský
Ali Khatibi
Joo‐Won Kim
Nawal Kinany
Hagen H. Kitzler
Shannon Kolind
Yazhuo Kong
Petr Kudlička
Paul Kuntke
Nyoman D. Kurniawan
Slawomir Kusmia
Maria Marcella Lagana
Cornelia Laule
Christine S. W. Law
Tobias Leutritz
Yaou Liu
Sara Llufriu
Sean Mackey
Allan R. Martin
Eloy Martinez-Heras
Loan Mattera
Kristin P. O’Grady
Nico Papinutto
Daniel Papp
Deborah Pareto
Todd B. Parrish
Anna Pichiecchio
Ferran Prados
Àlex Rovira
Marc J. Ruitenberg
Rebecca S. Samson
Giovanni Savini
Maryam Seif
Alan C. Seifert
Alex K. Smith
Seth A. Smith
Zachary A. Smith
Elisabeth Solana
Yuichi Suzuki
George Tackley
Alexandra Tinnermann
Jan Valošek
Dimitri Van De Ville
Marios C. Yiannakas
Kenneth A. Weber
Nikolaus Weiskopf
Richard G. Wise
Patrik O. Wyss
Junqian Xu
Christophe Lenglet
Igor Nestrašil
Clinical research emphasizes the implementation of rigorous and reproducible study designs that rely on between-group matching or controlling for sources of biological variation such as the subject’s sex and age. However, corrections for body size (i.e., height and weight) are mostly lacking in clinical neuroimaging designs. This study investigates the importance of body size parameters in their relationship with spinal cord (SC) and brain magnetic resonance imaging (MRI) metrics. Data were derived from a cosmopolitan population of 267 healthy human adults (age 30.1±6.6 years, 125 females). We show that body height correlated strongly or moderately with brain gray matter (GM) volume, cortical GM volume, total cerebellar volume, brainstem volume, and cross-sectional area (CSA) of cervical SC white matter (CSA-WM; 0.44≤r≤0.62). In comparison, age correlated weakly with cortical GM volume, precentral GM volume, and cortical thickness (-0.21≥r≥-0.27). Body weight correlated weakly with magnetization transfer ratio in the SC WM, dorsal columns, and lateral corticospinal tracts (-0.20≥r≥-0.23). Body weight further correlated weakly with the mean diffusivity derived from diffusion tensor imaging (DTI) in SC WM (r=-0.20) and dorsal columns (r=-0.21), but only in males. CSA-WM correlated strongly or moderately with brain volumes (0.39≤r≤0.64), and weakly with precentral gyrus thickness and DTI-based fractional anisotropy in SC dorsal columns and SC lateral corticospinal tracts (-0.22≥r≥-0.25). A linear mixture of sex and age explained 26±10% of the data variance in brain volumetry and SC CSA. The amount of explained variance increased to 33±11% when body height was added to the mixture model. Age itself explained only 2±2% of this variance. In conclusion, body size is a significant biological variable. Along with sex and age, body size should therefore be included as a mandatory variable in the design of clinical neuroimaging studies examining SC and brain structure.
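
As a toy illustration of the nested-model comparison reported above (26±10% vs. 33±11% explained variance with and without body height), the following sketch fits two linear models on synthetic data and compares their R². All effect sizes and the variable names are invented for demonstration.

```python
# Sketch: variance explained (R^2) by sex+age vs. sex+age+height,
# mirroring the paper's nested linear-model comparison (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 267
sex = rng.integers(0, 2, n)             # 0 = female, 1 = male
age = rng.normal(30.1, 6.6, n)          # years
height = rng.normal(170 + 12 * sex, 7)  # cm, sex-dependent

# Hypothetical brain volume driven mainly by height (plus noise).
volume = 600 + 2.5 * height + 20 * sex - 0.8 * age + rng.normal(0, 35, n)

def r2(predictors, y):
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"sex+age        R^2 = {r2([sex, age], volume):.2f}")
print(f"sex+age+height R^2 = {r2([sex, age, height], volume):.2f}")
```

Because height carries most of the signal here, the second model explains noticeably more variance, which is the qualitative pattern the study reports.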
ChatGPT: What Every Pediatric Surgeon Should Know About Its Potential Uses and Pitfalls
Raquel González
Russell Woo
A Francois Trappey
Stewart Carter
David Darcy
Ellen Encisco
Brian Gulack
Doug Miniati
Edzhem Tombash
Eunice Y. Huang
CKGConv: General Graph Convolution with Continuous Kernels
Liheng Ma
Soumyasundar Pal
Yitian Zhang
Jiaming Zhou
Yingxue Zhang
The existing definitions of graph convolution, either from spatial or spectral perspectives, are inflexible and not unified. Defining a general convolution operator in the graph domain is challenging due to the lack of canonical coordinates, the presence of irregular structures, and the properties of graph symmetries. In this work, we propose a novel graph convolution framework by parameterizing the kernels as continuous functions of pseudo-coordinates derived via graph positional encoding. We name this Continuous Kernel Graph Convolution (CKGConv). Theoretically, we demonstrate that CKGConv is flexible and expressive. CKGConv encompasses many existing graph convolutions and exhibits the same expressiveness as graph transformers in terms of distinguishing non-isomorphic graphs. Empirically, we show that CKGConv-based networks outperform existing graph convolutional networks and perform comparably to the best graph transformers across a variety of graph datasets.
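
A minimal sketch of the core mechanism, assuming a PyTorch setting: an MLP kernel evaluated on relative pseudo-coordinates (here, random-walk positional encodings) produces the convolution weights. The class, shapes, and the softmax normalization are illustrative simplifications, not the paper's exact architecture.

```python
# Sketch: a graph convolution whose kernel is a continuous function
# (an MLP) of relative pseudo-coordinates from a positional encoding.
import torch
import torch.nn as nn

class ContinuousKernelGraphConv(nn.Module):
    def __init__(self, dim, pe_dim, hidden=64):
        super().__init__()
        # Kernel network: relative pseudo-coordinate -> scalar weight.
        self.kernel = nn.Sequential(
            nn.Linear(pe_dim, hidden), nn.GELU(), nn.Linear(hidden, 1)
        )
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, pe, adj):
        # x: (N, dim) features; pe: (N, pe_dim) coords; adj: (N, N) 0/1.
        rel = pe.unsqueeze(1) - pe.unsqueeze(0)   # (N, N, pe_dim)
        w = self.kernel(rel).squeeze(-1)          # (N, N) kernel values
        w = w.masked_fill(adj == 0, float("-inf"))
        w = torch.softmax(w, dim=-1)              # normalize over neighbours
        return self.proj(w @ x)                   # aggregate and project

# Toy usage: random-walk landing probabilities as pseudo-coordinates.
N, dim, K = 6, 8, 4
adj = (torch.rand(N, N) < 0.4).float()
adj = (((adj + adj.T) > 0).float() + torch.eye(N)).clamp(max=1)
P = torch.diag(1.0 / adj.sum(-1)) @ adj
pe = torch.stack(
    [torch.linalg.matrix_power(P, k).diagonal() for k in range(1, K + 1)], dim=-1
)

layer = ContinuousKernelGraphConv(dim, pe_dim=K)
out = layer(torch.randn(N, dim), pe, adj)
print(out.shape)  # torch.Size([6, 8])
```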
Code as Reward: Empowering Reinforcement Learning with VLMs
David Venuto
Mohammad Sami Nur Islam
Martin Klissarov
Sherry Yang
Ankit Anand
A Distributional Analogue to the Successor Representation
Harley Wiltzer
Jesse Farebrother
Arthur Gretton
Yunhao Tang
Andre Barreto
Will Dabney
Mark Rowland
This paper contributes a new approach for distributional reinforcement learning which elucidates a clean separation of transition structure and reward in the learning process. Analogous to how the successor representation (SR) describes the expected consequences of behaving according to a given policy, our distributional successor measure (SM) describes the distributional consequences of this behaviour. We formulate the distributional SM as a distribution over distributions and provide theory connecting it with distributional and model-based reinforcement learning. Moreover, we propose an algorithm that learns the distributional SM from data by minimizing a two-level maximum mean discrepancy. Key to our method are a number of algorithmic techniques that are independently valuable for learning generative models of state. As an illustration of the usefulness of the distributional SM, we show that it enables zero-shot risk-sensitive policy evaluation in a way that was not previously possible.
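
To illustrate the two-level maximum mean discrepancy mentioned above, here is a small numpy sketch, assuming Gaussian kernels at both levels: an inner MMD compares individual distributions from their samples, and an outer kernel built on that inner MMD compares two collections of distributions. The bandwidths and the biased estimator are our simplifications, not the paper's training objective.

```python
# Sketch: two-level MMD — an MMD between collections of distributions,
# where the outer kernel is itself built from an inner (sample-level) MMD.
import numpy as np

def gaussian_gram(a, b, bw=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

def mmd2(p, q, bw=1.0):
    """Biased MMD^2 between two sample sets p, q of shape (n, d)."""
    return (gaussian_gram(p, p, bw).mean()
            - 2 * gaussian_gram(p, q, bw).mean()
            + gaussian_gram(q, q, bw).mean())

def two_level_mmd2(Ps, Qs, inner_bw=1.0, outer_bw=1.0):
    """MMD^2 between two collections of distributions (each a sample set)."""
    k = lambda A, B: np.exp(-mmd2(A, B, inner_bw) / (2 * outer_bw**2))
    kPP = np.mean([k(a, b) for a in Ps for b in Ps])
    kPQ = np.mean([k(a, b) for a in Ps for b in Qs])
    kQQ = np.mean([k(a, b) for a in Qs for b in Qs])
    return kPP - 2 * kPQ + kQQ

rng = np.random.default_rng(0)
# Collection 1: standard normals; collection 2: shifted normals.
Ps = [rng.normal(0, 1, (50, 2)) for _ in range(5)]
Qs = [rng.normal(1, 1, (50, 2)) for _ in range(5)]
print("same   :", two_level_mmd2(Ps, Ps))  # ~0
print("shifted:", two_level_mmd2(Ps, Qs))  # > 0
```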
Fairness-aware data-driven-based model predictive controller: A study on thermal energy storage in a residential building
Ying Sun
Fariborz Haghighat
Faithfulness Measurable Masked Language Models
Andreas Madsen
Generative AI in Software Engineering Must Be Human-Centered: The Copenhagen Manifesto
Daniel Russo
Sebastian Baltes
Niels van Berkel
Paris Avgeriou
Fabio Calefato
Beatriz Cabrero-Daniel
Gemma Catolino
Jürgen Cito
Neil Ernst
Thomas Fritz
Hideaki Hata
Reid Holmes
Maliheh Izadi
Mikkel Baun Kjærgaard
Grischa Liebel
Alberto Lluch Lafuente
Stefano Lambiase
Walid Maalej
Gail Murphy
Nils Brede Moe
Gabrielle O'Brien
Elda Paja
Mauro Pezzè
John Stouby Persson
Rafael Prikladnicki
Paul Ralph
Martin P. Robillard
Thiago Rocha Silva
Klaas-Jan Stol
Margaret-Anne Storey
Viktoria Stray
Paolo Tell
Christoph Treude
Bogdan Vasilescu
Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis
Stefan Horoi
Albert Manuel Orozco Camacho
Ensembling multiple models enhances predictive performance by utilizing the varied learned features of the different models but incurs significant computational and storage costs. Model fusion, which combines parameters from multiple models into one, aims to mitigate these costs but faces practical challenges due to the complex, non-convex nature of neural network loss landscapes, where learned minima are often separated by high loss barriers. Recent works have explored using permutations to align network features, reducing the loss barrier in parameter space. However, permutations are restrictive since they assume that a one-to-one mapping between the different models' neurons exists. We propose a new model merging algorithm, CCA Merge, which is based on Canonical Correlation Analysis and aims to maximize the correlations between linear combinations of the model features. We show that our method of aligning models leads to better performance than past methods when averaging models trained on the same or differing data splits. We also extend this analysis to the harder setting where more than two models are merged, and we find that CCA Merge works significantly better in this setting than past methods.
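
The alignment step at the heart of this approach can be sketched as follows, under our own simplifying assumptions: given activations of the same layer in two models, classical CCA (whitening plus an SVD of the cross-covariance) yields linear maps into a shared space where features can be averaged. Propagating such transforms through a full network, and its inverse into the next layer, is what the actual method handles and is omitted here; all names are ours.

```python
# Sketch: CCA-based alignment of one layer's features across two models.
import numpy as np

def cca_transforms(A, B, reg=1e-6):
    """Linear maps (Wa, Wb) maximizing correlation between A @ Wa and B @ Wb.
    A, B: (n_samples, n_neurons) activations of the same layer in two models."""
    A = A - A.mean(0); B = B - B.mean(0)
    n = len(A)
    Saa = A.T @ A / n + reg * np.eye(A.shape[1])
    Sbb = B.T @ B / n + reg * np.eye(B.shape[1])
    Sab = A.T @ B / n
    # Whiten each view, then take the SVD of the cross-covariance.
    Ea, Va = np.linalg.eigh(Saa); Eb, Vb = np.linalg.eigh(Sbb)
    Wa_white = Va @ np.diag(Ea**-0.5) @ Va.T
    Wb_white = Vb @ np.diag(Eb**-0.5) @ Vb.T
    U, _, Vt = np.linalg.svd(Wa_white @ Sab @ Wb_white)
    return Wa_white @ U, Wb_white @ Vt.T

rng = np.random.default_rng(0)
# Model B's features are a hidden linear mixing of model A's.
A = rng.normal(size=(1000, 16))
B = A @ rng.normal(size=(16, 16)) + 0.05 * rng.normal(size=(1000, 16))

Wa, Wb = cca_transforms(A, B)
# In the aligned space, the two models' features can simply be averaged.
merged = 0.5 * (A @ Wa + B @ Wb)
corr = np.corrcoef((A @ Wa)[:, 0], (B @ Wb)[:, 0])[0, 1]
print(f"top canonical correlation = {corr:.3f}")  # close to 1
```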
Information Complexity of Stochastic Convex Optimization: Applications to Generalization and Memorization
Idan Attias
Mahdi Haghifam
Roi Livni
Daniel M. Roy
In this work, we investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO). We define memorization via the information a learning algorithm reveals about its training data points. We then quantify this information using the framework of conditional mutual information (CMI) proposed by Steinke and Zakynthinou (2020). Our main result is a precise characterization of the tradeoff between the accuracy of a learning algorithm and its CMI, answering an open question posed by Livni (2023). We show that, in the …
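
Although the abstract is cut off above, the CMI quantity it builds on can be stated precisely. The following is the Steinke and Zakynthinou (2020) definition as we recall it, with standard notation.

```latex
% Conditional mutual information (CMI) of a learning algorithm A over a
% data distribution D. \tilde{Z} is a "supersample" of 2n i.i.d. draws,
% arranged as n pairs; U selects one point per pair to form the
% training set S.
\[
\tilde{Z} \in \mathcal{Z}^{n \times 2}, \qquad
U \sim \mathrm{Unif}(\{0,1\}^n), \qquad
S = (\tilde{Z}_{i,\,U_i})_{i=1}^{n},
\]
\[
\mathrm{CMI}_{\mathcal{D}}(A) \;=\; I\big(A(S);\, U \,\big|\, \tilde{Z}\big).
\]
```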