The Normative Leadership of the World Health Organization: a quantitative analysis
Gaelle Foucault
Jean-Louis Denis
Pierre Larouche
Miriam Cohen
The role of AI for MRI-analysis in multiple sclerosis—A brief overview
Jean-Pierre R. Falet
Steven Nobile
Aliya Szpindel
Berardino Barile
Amar Kumar
Joshua D. Durso-Finley
Douglas Arnold
The Superposition of Diffusion Models Using the Itô Density Estimator
Marta Skreta
Lazar Atanackovic
Alexander Tong
The Cambrian explosion of easily accessible pre-trained diffusion models suggests a demand for methods that combine multiple different pre-trained diffusion models without incurring the significant computational burden of re-training a larger combined model. In this paper, we cast the problem of combining multiple pre-trained diffusion models at the generation stage under a novel proposed framework termed superposition. Theoretically, we derive superposition from rigorous first principles stemming from the celebrated continuity equation, and we design two novel algorithms, realized in SuperDiff, tailor-made for combining diffusion models. SuperDiff leverages a new scalable Itô density estimator for the log likelihood of the diffusion SDE, which incurs no additional overhead compared to the well-known Hutchinson's estimator needed for divergence calculations. We demonstrate that SuperDiff is scalable to large pre-trained diffusion models, as superposition is performed solely through composition during inference, and that it enjoys painless implementation, as it combines different pre-trained vector fields through an automated re-weighting scheme. Notably, we show that SuperDiff is efficient at inference time and mimics traditional composition operators such as the logical OR and the logical AND. We empirically demonstrate the utility of SuperDiff for generating more diverse images on CIFAR-10, more faithful prompt-conditioned image editing using Stable Diffusion, as well as improved conditional molecule generation and unconditional de novo structure design of proteins. https://github.com/necludov/super-diffusion
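To make the inference-time recipe concrete, here is a minimal Python sketch of superposition under simplifying assumptions. It is not the paper's exact SuperDiff algorithm: `model_a`, `model_b`, and the running log-density estimates are hypothetical placeholders standing in for the frozen pre-trained vector fields and the Itô density estimator.

```python
# Hedged sketch: one reverse-diffusion step that superposes two frozen,
# pre-trained vector fields via a likelihood-derived re-weighting.
# A simplified illustration only, not the authors' exact algorithm.
import torch

def superposed_step(x, t, model_a, model_b, log_q_a, log_q_b, dt):
    # Vector fields of the two pre-trained models (no retraining involved).
    v_a = model_a(x, t)
    v_b = model_b(x, t)
    # OR-like automated re-weighting: softmax over the running log-density
    # estimates (scalar tensors here for simplicity), so samples drift toward
    # whichever model assigns them higher likelihood.
    w = torch.softmax(torch.stack([log_q_a, log_q_b]), dim=0)
    # Convex combination of the two vector fields, applied for one step.
    return x + (w[0] * v_a + w[1] * v_b) * dt
```

In the paper, the log densities are maintained alongside the SDE integration by the scalable Itô estimator; the sketch simply assumes they are available at each step.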
Towards contrast-agnostic soft segmentation of the spinal cord
Sandrine Bédard
Enamundram Naga Karthik
Charidimos Tsagkas
Emanuele Pravatà
Cristina Granziera
Andrew C. Smith
Kenneth Arnold Weber
Spinal cord segmentation is clinically relevant and is notably used to compute spinal cord cross-sectional area (CSA) for the diagnosis and monitoring of cord compression or neurodegenerative diseases such as multiple sclerosis. While several semi-automatic and automatic methods exist, one key limitation remains: the segmentation depends on the MRI contrast, resulting in different CSA across contrasts. This is partly due to the varying appearance of the boundary between the spinal cord and the cerebrospinal fluid, which depends on the sequence and acquisition parameters. This contrast-sensitive CSA adds variability in multi-center studies where protocols can vary, reducing the sensitivity to detect subtle atrophies. Moreover, existing methods compound the CSA variability by training one model per contrast, while also producing binary masks that do not account for partial volume effects. In this work, we present a deep learning-based method that produces soft segmentations of the spinal cord. Using the Spine Generic Public Database of healthy participants (
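The CSA computation that motivates soft masks is easy to illustrate. Below is a hedged Python sketch (the array layout, function name, and spacing convention are assumptions, not the authors' code) showing how a soft mask lets partial-volume pixels contribute fractionally to the area instead of all-or-nothing:

```python
# Hedged sketch: cross-sectional area (CSA) from a soft segmentation,
# so partial-volume pixels contribute fractionally. Names are illustrative.
import numpy as np

def slice_csa(soft_mask_2d: np.ndarray, pixdim_mm: tuple[float, float]) -> float:
    """CSA in mm^2 for one axial slice.

    soft_mask_2d: per-pixel spinal-cord probability in [0, 1].
    pixdim_mm: in-plane pixel spacing (row_mm, col_mm).
    """
    pixel_area = pixdim_mm[0] * pixdim_mm[1]
    # A pixel that is 40% cord adds 0.4 * pixel_area, rather than the
    # 0-or-1 count a binarized mask would give.
    return float(soft_mask_2d.sum() * pixel_area)
```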
Training Language Models to Self-Correct via Reinforcement Learning
Aviral Kumar
Vincent Zhuang
Yi Su
John D Co-Reyes
Avi Singh
Kate Baumli
Shariq Iqbal
Colton Bishop
Rebecca Roelofs
Lei M Zhang
Kay McKinney
Disha Shrivastava
Cosmin Paduraru
George Tucker
Feryal Behbahani
Aleksandra Faust
Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Existing approaches for training self-correction either require multiple models or rely on a more capable model or other forms of supervision. To this end, we develop a multi-turn online reinforcement learning (RL) approach, SCoRe, that significantly improves an LLM's self-correction ability using entirely self-generated data. To build SCoRe, we first show that variants of supervised fine-tuning (SFT) on offline model-generated correction traces are insufficient for instilling self-correction behavior. In particular, we observe that training via SFT either suffers from a distribution mismatch between the training data and the model's own responses or implicitly prefers only a certain mode of correction behavior that is often not effective at test time. SCoRe addresses these challenges by training under the model's own distribution of self-generated correction traces and using appropriate regularization to steer the learning process toward a self-correction strategy that is effective at test time, as opposed to simply fitting high-reward responses for a given prompt. This regularization prescribes running a first phase of RL on a base model to generate a policy initialization that is less susceptible to collapse, and then using a reward bonus to amplify self-correction during training. When applied to Gemini 1.0 Pro and 1.5 Flash models, we find that SCoRe achieves state-of-the-art self-correction performance, improving the base models' self-correction by 15.6% and 9.1% respectively on the MATH and HumanEval benchmarks.
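The reward-bonus idea can be sketched in a few lines. The Python fragment below is an illustrative reading of the two-attempt shaping described above, not the paper's exact specification; `score` and `alpha` are assumed placeholders:

```python
# Hedged sketch of reward shaping that amplifies self-correction: reward the
# *improvement* of the second attempt over the first, so the policy is pushed
# to actually correct itself rather than only produce a good first answer.
# `score` and `alpha` are illustrative assumptions, not the paper's spec.

def shaped_reward(first_attempt: str, second_attempt: str,
                  score, alpha: float = 0.5) -> float:
    """score: callable mapping a response to a correctness reward in [0, 1]."""
    r1 = score(first_attempt)
    r2 = score(second_attempt)
    # Base reward for the final answer, plus a bonus for net improvement;
    # the same term penalizes degrading a correct first attempt.
    return r2 + alpha * (r2 - r1)
```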
Trajectory Balance with Asynchrony: Decoupling Exploration and Learning for Fast, Scalable LLM Post-Training
Brian R. Bartoldson
Siddarth Venkatraman
James Diffenderfer
Moksh J. Jain
Tal Ben-Nun
Seanie Lee
Minsu Kim
Johan Samir Obando Ceron
Bhavya Kailkhura
The Romantic Historicism and The Rise of the Historical Novel in the 19th Century Romanian Literature
Medium-scale flexible integrated circuits based on 2D semiconductors
Yalin Peng
Chenyang Cui
Lu Li
Yuchen Wang
Qinqin Wang
Jinpeng Tian
Zhiheng Huang
Biying Huang
Yangkun Zhang
Xiuzhen Li
Yanbang Chu
Wei Yang
Dongxia Shi
Luojun Du
Na Li
Guangyu Zhang
Sliding ferroelectric memories and synapses based on rhombohedral-stacked bilayer MoS2
Xiuzhen Li
Biao Qin
Yaxian Wang
Yue Xi
Zhiheng Huang
Mengze Zhao
Yalin Peng
Zitao Chen
Zitian Pan
Jundong Zhu
Chenyang Cui
Rong Yang
Wei Yang
Sheng Meng
Dongxia Shi
Xuedong Bai
Can Liu
Na Li
Kaihui Liu
Kai-Wen Liu
Luojun Du
Guangyu Zhang
AfriHG: News headline generation for African Languages
Toyib Ogunremi
Serah Akojenu
Anthony Soronnadi
Olubayo Adekanmbi
This paper introduces AfriHG -- a news headline generation dataset created by combining data from the XLSum and MasakhaNEWS datasets, focusing on 16 languages widely spoken in Africa. We experimented with two seq2seq models (mT5-base and AfriTeVa V2) and the Aya-101 LLM. Our results show that Africa-centric seq2seq models such as AfriTeVa V2 outperform the massively multilingual mT5-base model. Finally, we show that fine-tuning AfriTeVa V2, with 313M parameters, is competitive with prompting the Aya-101 LLM, which has more than 13B parameters.
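For readers who want a starting point, this fine-tuning setup maps onto a standard Hugging Face seq2seq loop. The sketch below uses the public `google/mt5-base` checkpoint as a stand-in (the paper's exact AfriTeVa V2 checkpoint and training configuration are not given here), with placeholder article and headline strings:

```python
# Minimal sketch of headline-generation fine-tuning with Hugging Face
# transformers. Checkpoint name, lengths, and field contents are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/mt5-base")  # or an AfriTeVa V2 checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")

article = "..."   # news article body (placeholder)
headline = "..."  # reference headline (placeholder)

# Encode the article as input and the headline as the target sequence.
batch = tok(article, return_tensors="pt", truncation=True, max_length=512)
labels = tok(text_target=headline, return_tensors="pt",
             truncation=True, max_length=64).input_ids

# Standard seq2seq cross-entropy loss; an optimizer step would follow
# inside a real training loop.
loss = model(**batch, labels=labels).loss
loss.backward()
```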