Publications

Learning to Rewrite: Generalized LLM-Generated Text Detection
Wei Hao
Ran Li
Weiliang Zhao
Junfeng Yang
Chengzhi Mao
Large language models (LLMs) can be abused at scale to create non-factual content and spread disinformation. Detecting LLM-generated content is essential to mitigate these risks, but current classifiers often fail to generalize in open-world contexts. Prior work shows that LLMs make fewer edits when rewriting LLM-generated content than when rewriting human-written text, a signal that can be used for detection and naturally generalizes to unforeseen data. However, we find that the rewriting edit distance between human and LLM content can be indistinguishable across domains, leading to detection failures. We propose training an LLM to rewrite input text, producing minimal edits for LLM-generated content and more edits for human-written text, deriving a distinguishable and generalizable edit distance difference across different domains. Experiments on text from 21 independent domains and three popular LLMs (GPT-4o, Gemini, and Llama-3) show that our classifier outperforms the state-of-the-art zero-shot classifier by up to 20.6% in AUROC and the rewriting classifier by 9.2% in F1 score. Our work suggests that LLMs can effectively detect machine-generated text if they are trained properly.
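To make the rewriting signal concrete, here is a minimal sketch of a rewrite-then-threshold detector. The rewrite_fn callable stands in for the paper's trained rewriting LLM, and the threshold value is illustrative, not taken from the paper.

import difflib

def rewrite_edit_ratio(text: str, rewrite_fn) -> float:
    """Fraction of the input changed by the rewriting model.

    rewrite_fn is a placeholder for a fine-tuned rewriting LLM;
    any callable mapping text -> rewritten text works here.
    """
    rewritten = rewrite_fn(text)
    # SequenceMatcher.ratio() is a similarity in [0, 1];
    # 1 - ratio serves as a normalized edit distance proxy.
    similarity = difflib.SequenceMatcher(None, text, rewritten).ratio()
    return 1.0 - similarity

def classify(text: str, rewrite_fn, threshold: float = 0.1) -> str:
    # Few edits mean the rewriter "agrees" with the text,
    # which suggests the text is LLM-generated.
    distance = rewrite_edit_ratio(text, rewrite_fn)
    return "llm-generated" if distance < threshold else "human-written"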
Critical dynamics in spontaneous EEG predict anesthetic-induced loss of consciousness and perturbational complexity
Charlotte Maschke
Jordan O’Byrne
Michele Angelo Colombo
Melanie Boly
Olivia Gosseries
Steven Laureys
Mario Rosanova
Stefanie Blain-Moraes
Neural differential equations for temperature control in buildings under demand response programs
Vincent Taboga
Clement Gehring
Mathieu Le Cam
Noise covariance estimation in multi-task high-dimensional linear models
Kai Tan
Gabriel Romon
Perfectly Accurate Membership Inference by a Dishonest Central Server in Federated Learning
Georg Pichler
Marco Romanelli
Leonardo Rey Vega
The effect of gestational age on short- and long-term complications following primary esophageal atresia repair
Mathias Johansen
Samuel Wasserman
Jean Martin Laberge
Sam J. Daniel
Thomas Engelhardt
scHiCyclePred: a deep learning framework for predicting cell cycle phases from single-cell Hi-C data using multi-scale interaction information
Yingfu Wu
Zhenqi Shi
Xiangfei Zhou
Pengyu Zhang
Xiuhui Yang
Hao Wu
Assessing Programming Task Difficulty for Efficient Evaluation of Large Language Models
Florian Tambon
Amin Nikanjam
Giuliano Antoniol
Strong gravitational lensing as a probe of dark matter
Simona Vegetti
Simon Birrer
Giulia Despali
C. Fassnacht
Daniel A. Gilman
L. Perreault Levasseur
J. McKean
D. Powell
Conor M. O'Riordan
G. Vernardos
Dark matter structures within strong gravitational lens galaxies and along their line of sight leave a gravitational imprint on the multiple images of lensed sources. Strong gravitational lensing therefore provides a key test of different dark matter models, in a way that is independent of the baryonic content of matter structures on subgalactic scales. In this chapter, we describe how galaxy-scale strong gravitational lensing observations are sensitive to the physical nature of dark matter. We provide a historical perspective on the field and review its current status. We discuss the challenges and advances, in terms of data, the treatment of systematic errors, and theoretical predictions, that will enable a stringent and robust test of different dark matter models in the near future. With the advent of the next generation of sky surveys, the number of known strong gravitational lens systems is expected to increase by several orders of magnitude. Coupled with high-resolution follow-up observations, these data will provide a key opportunity to constrain the properties of dark matter with strong gravitational lensing.
AAPM task group report 288: Recommendations for guiding radiotherapy event narratives
Bruce Thomadsen
Ajay Kapur
Bette Blankenship
Barrett Caldwell
Lindsey Claps
Joanne Cunningham
Jennifer Elee
Suzanne Evans
Eric Ford
Debbie Gilley
Sandra Hayden
Kathleen Hintenlang
Rishabh Kapoor
Linda Kroger
Ksenija Kujundzic
Qing Liang
Sasa Mutic
Anita O'Donovan
Michael O'Hara
Zoubir Ouhib
Jatinder Palta
Todd Pawlicki
William Salter
Stacey Schmidt
Sugata Tripathi
Implicitly Bayesian Prediction Rules in Deep Learning
Bruno Mlodozeniec
Richard Turner
The Bayesian approach leads to coherent updates of predictions under new data, which makes adhering to Bayesian principles appealing in decision-making contexts. Traditionally, integrating Bayesian principles into models like deep neural networks involves setting priors on parameters and approximating posteriors. This is done despite the fact that, typically, priors on parameters reflect any prior beliefs only insofar as they dictate function space behaviour. In this paper, we rethink this approach and consider what properties characterise a prediction rule as being Bayesian. Algorithms meeting such criteria can be deemed implicitly Bayesian: they make the same predictions as some Bayesian model, without explicitly manifesting priors and posteriors. We argue this might be a more fruitful approach towards integrating Bayesian principles into deep learning. We further propose a way to measure how close a general prediction rule is to being implicitly Bayesian, and empirically evaluate multiple prediction strategies using it. We also show theoretically that agents relying on non-implicitly-Bayesian prediction rules can be easily exploited in adversarial betting settings.
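As a toy illustration of the "implicitly Bayesian" idea (a sketch under assumed definitions, not the paper's actual measure), the code below compares a sequential prediction rule on binary data against the posterior predictive of a Beta-Bernoulli model; a rule whose forecasts coincide with some such Bayesian model would score a gap of zero.

def beta_bernoulli_predictive(data, a=1.0, b=1.0):
    """Posterior predictive P(next = 1) of a Beta(a, b)-Bernoulli model."""
    return (a + sum(data)) / (a + b + len(data))

def bayes_gap(prediction_rule, data, a=1.0, b=1.0):
    """Mean absolute gap between a rule's sequential forecasts and the
    Beta-Bernoulli posterior predictive. A zero gap means the rule makes
    the same predictions as this Bayesian model on this sequence.
    (An illustrative proxy only, not the measure proposed in the paper.)"""
    gaps = [abs(prediction_rule(data[:t]) - beta_bernoulli_predictive(data[:t], a, b))
            for t in range(len(data) + 1)]
    return sum(gaps) / len(gaps)

def laplace_rule(history):
    # Empirical frequency with add-one (Laplace) smoothing; this is exactly
    # the Beta(1, 1) posterior predictive, hence implicitly Bayesian here.
    return (1 + sum(history)) / (2 + len(history))

print(bayes_gap(laplace_rule, [1, 0, 1, 1, 0]))  # -> 0.0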
Long-term plasticity induces sparse and specific synaptic changes in a biophysically detailed cortical model
András Ecker
Daniela Egas Santander
Marwan Abdellah
Jorge Blanco Alonso
Sirio Bolaños-Puchet
Giuseppe Chindemi
Dhuruva Priyan Gowri Mariyappan
James B. Isbister
James King
Pramod Kumbhar
Ioannis Magkanaris
Michael W. Reimann