Selection for immune evasion in SARS-CoV-2 revealed by high-resolution epitope mapping and sequence analysis
Arnaud N’Guessan
Senthilkumar Kailasam
Fatima Mostefai
Raphael Poujol
Jean-Christophe Grenier
Nailya Ismailova
Paola Contini
Raffaele De Palma
Carsten Haber
Volker Stadler
Guillaume Bourque
B. Jesse Shapiro
Jörg H. Fritz
Ciriaco A. Piccirillo
Subcortical Brain Alterations in Carriers of Genomic Copy Number Variants
Kuldeep Kumar
Claudia Modenato
Clara A. Moreau
Christopher R. K. Ching
Annabelle Harvey
Sandra Martin-Brevet
Guillaume Huguet
Martineau Jean-Louis
Elise Douard
Charles-Olivier Martin
Nadine Younis
Petra Tamer
Anne M. Maillard
Borja Rodriguez-Herreros
Aurélie Pain
Sonia Richetin
Leila Kushan
Dmitry Isaev … (26 additional authors not listed)
Kathryn Alpert
Anjani Ragothaman
Jessica A. Turner
Wei Wang
Tiffany C. Ho
Lianne Schmaal
Ana I. Silva
Marianne B.M. van den Bree
David E.J. Linden
M. J. Owen
Jeremy Hall
Sarah Lippé
Bogdan Draganski
Boris A. Gutman
Ida E. Sønderby
Ole A. Andreassen
Laura Schultz
Laura Almasy
David C. Glahn
Carrie E. Bearden
Paul M. Thompson
Sébastien Jacquemont
OBJECTIVE Copy number variants (CNVs) are well-known genetic pleiotropic risk factors for multiple neurodevelopmental and psychiatric disorders (NPDs), including autism (ASD) and schizophrenia. Little is known about how different CNVs conferring risk for the same condition may affect subcortical brain structures and how these alterations relate to the level of disease risk conferred by CNVs. To fill this gap, the authors investigated gross volume, vertex-level thickness, and surface maps of subcortical structures in 11 CNVs and six NPDs. METHODS Subcortical structures were characterized using harmonized ENIGMA protocols in 675 CNV carriers (CNVs at 1q21.1, TAR, 13q12.12, 15q11.2, 16p11.2, 16p13.11, and 22q11.2; age range, 6-80 years; 340 males) and 782 control subjects (age range, 6-80 years; 387 males) as well as ENIGMA summary statistics for ASD, schizophrenia, attention deficit hyperactivity disorder, obsessive-compulsive disorder, bipolar disorder, and major depression. RESULTS All CNVs showed alterations in at least one subcortical measure. Each structure was affected by at least two CNVs, and the hippocampus and amygdala were affected by five. Shape analyses detected subregional alterations that were averaged out in volume analyses. A common latent dimension was identified, characterized by opposing effects on the hippocampus/amygdala and putamen/pallidum, across CNVs and across NPDs. Effect sizes of CNVs on subcortical volume, thickness, and local surface area were correlated with their previously reported effect sizes on cognition and risk for ASD and schizophrenia. CONCLUSIONS The findings demonstrate that subcortical alterations associated with CNVs show varying levels of similarity with those associated with neuropsychiatric conditions, as well as distinct effects, with some CNVs clustering with adult-onset conditions and others with ASD. These findings provide insight into the long-standing questions of why CNVs at different genomic loci increase the risk for the same NPD and why a single CNV increases the risk for a diverse set of NPDs.
Transformers in Reinforcement Learning: A Survey
Pranav Agarwal
Aamer Abdul Rahman
Pierre-Luc St-Charles
Simon J. D. Prince
Transformers have significantly impacted domains like natural language processing, computer vision, and robotics, where they improve performance compared to other neural networks. This survey explores how transformers are used in reinforcement learning (RL), where they are seen as a promising solution for addressing challenges such as unstable training, credit assignment, lack of interpretability, and partial observability. We begin by providing a brief domain overview of RL, followed by a discussion on the challenges of classical RL algorithms. Next, we delve into the properties of the transformer and its variants and discuss the characteristics that make them well-suited to address the challenges inherent in RL. We examine the application of transformers to various aspects of RL, including representation learning, transition and reward function modeling, and policy optimization. We also discuss recent research that aims to enhance the interpretability and efficiency of transformers in RL, using visualization techniques and efficient training strategies. Often, the transformer architecture must be tailored to the specific needs of a given application. We present a broad overview of how transformers have been adapted for several applications, including robotics, medicine, language modeling, cloud computing, and combinatorial optimization. We conclude by discussing the limitations of using transformers in RL and assess their potential for catalyzing future breakthroughs in this field.
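To make the sequence-modeling view of RL mentioned above concrete, the sketch below shows one representative pattern covered by such surveys: a causal transformer consuming interleaved (return-to-go, state, action) tokens and predicting the next action. All layer sizes, names, and the omission of positional/timestep embeddings are simplifying assumptions for illustration, not a reference implementation of any particular method.

```python
# Hedged sketch: return-conditioned sequence modeling with a causal transformer.
import torch
import torch.nn as nn

class TrajectoryTransformerPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, d_model=128, n_layers=3, n_heads=4):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)        # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(action_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, action_dim)

    def forward(self, returns_to_go, states, actions):
        # Shapes: (batch, T, 1), (batch, T, state_dim), (batch, T, action_dim).
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack(
            [self.embed_rtg(returns_to_go),
             self.embed_state(states),
             self.embed_action(actions)], dim=2
        ).flatten(1, 2)                               # (batch, 3*T, d_model)
        T = tokens.size(1)
        causal_mask = torch.triu(torch.ones(T, T), diagonal=1).bool()
        h = self.backbone(tokens, mask=causal_mask)
        # Predict the next action from each state-token representation.
        return self.predict_action(h[:, 1::3])
```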
Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation
Chris Emezue
Tristan Deleu
Stefan Bauer
The practical utility of causality in decision-making is widespread and brought about by the intertwining of causal discovery and causal inference. Nevertheless, a notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference. To address this gap, we evaluate seven established baseline causal discovery methods, including a newly proposed method based on GFlowNets, on the downstream task of treatment effect estimation. Through a distribution-level evaluation, we offer valuable and unique insights into the efficacy of these causal discovery methods for treatment effect estimation, considering both synthetic and real-world scenarios, as well as low-data scenarios. The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes, while others tend to learn many low-probability modes, which impacts the (unrelaxed) recall and precision.
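As a rough illustration of what a distribution-level evaluation of downstream treatment effect estimation could look like, the sketch below maps posterior graph samples from a Bayesian causal discovery method to per-graph ATE estimates, producing the distribution of ATE modes to be compared against the ground truth. It assumes networkx-style directed graphs and a pandas DataFrame, uses a simple linear back-door adjustment, and is not the paper's evaluation code.

```python
# Hypothetical sketch of distribution-level ATE evaluation over posterior graphs.
import numpy as np

def ate_under_graph(graph, data, treatment, outcome):
    """Estimate the ATE of `treatment` on `outcome` by adjusting for the
    treatment's parents in `graph` (linear back-door adjustment; a
    simplification of what a full benchmark would use)."""
    parents = list(graph.predecessors(treatment))
    X = data[[treatment] + parents].to_numpy()
    y = data[outcome].to_numpy()
    X = np.column_stack([np.ones(len(X)), X])        # intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]                                    # coefficient on treatment

def ate_distribution(posterior_graphs, data, treatment, outcome):
    """One ATE estimate per posterior graph sample: the distribution of ATE
    modes that a benchmark would compare to the true effect."""
    return np.array([ate_under_graph(g, data, treatment, outcome)
                     for g in posterior_graphs])
```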
AI For Global Climate Cooperation 2023 Competition Proceedings
Prateek Arun Gupta
Lu Li
Soham R. Phade
Sunil Srinivasa
Andrew Williams
Tianyu Zhang
Yangtian Zhang
Stephan Tao Zheng
The international community must collaborate to mitigate climate change and sustain economic growth. However, collaboration is hard to achieve, partly because no global authority can ensure compliance with international climate agreements. Combining AI with climate-economic simulations offers a promising solution to design international frameworks, including negotiation protocols and climate agreements, that promote and incentivize collaboration. In addition, these frameworks should also support the fulfillment of policy goals and sustained commitment, taking into account climate-economic dynamics and strategic behaviors. These challenges require an interdisciplinary approach across machine learning, economics, climate science, law, policy, ethics, and other fields. Towards this objective, we organized AI for Global Climate Cooperation, a Mila competition in which teams submitted proposals and analyses of international frameworks based on (modifications of) RICE-N, an AI-driven integrated assessment model (IAM). In particular, RICE-N supports modeling regional decision-making using AI agents, and the IAM then models the climate-economic impact of those decisions into the future. Whereas the first track focused only on performance metrics, the proposals submitted to the second track were evaluated both quantitatively and qualitatively. The quantitative evaluation focused on a combination of (i) the degree of mitigation of global temperature rise and (ii) the increase in economic productivity. The qualitative evaluation was carried out by an interdisciplinary panel of human experts in law, policy, sociology, economics, and environmental science, who considered the effectiveness, simplicity, feasibility, ethics, and notions of climate justice of the protocols. In the third track, the participants were asked to critique and improve RICE-N.
International Institutions for Advanced AI
Lewis Ho
Joslyn N. Barnhart
Robert Frederic Trager
Miles Brundage
Allison Sovey Carnegie
Rumman Chowdhury
Allan Dafoe
Gillian K. Hadfield
Margaret Levi
D. Snidal
Robust and Versatile Bipedal Jumping Control through Reinforcement Learning
Zhongyu Li
Xue Bin Peng
Pieter Abbeel
Sergey Levine
Koushil Sreenath
Overcoming the Technical Challenges of Coordinating Distributed Load Resources at Scale
Johanna Mathieu
Ian Hiskens
Ioannis Marios Granitsas
Oluwagbemileke Oyefeso
Gregory Ledva
Sebastian Nugroho
Salman Nazir
Scott Hinson
Suzanne Russo
Steve Mock
Rachel Jenkins
Jill Harlow
Grant Fisher
Drew Geller
Duncan Callaway
Phillippe Phanivong
Capacity Planning in Stable Matching: An Application to School Choice
Federico Bobbio
Andrea Lodi
Ignacio Rios
Alfredo Torrico
Centralized mechanisms are becoming the standard approach to solve several assignment problems. Examples include the allocation of students to schools (school choice), high-school graduates to colleges, residents to hospitals, and refugees to cities. In most of these markets, a desirable property of the assignment is stability, which guarantees that no pair of agents has an incentive to circumvent the matching. Using school choice as our matching market application, we introduce the problem of jointly allocating a school capacity expansion and finding the best stable matching for the students in the expanded market. We analyze the problem theoretically, focusing on the trade-off behind the multiplicity of student-optimal assignments and on the problem's complexity. Since the theoretical intractability of the problem precludes the adaptation of classical approaches to solve it efficiently, we generalize existing mathematical programming formulations of stability constraints to our setting. These generalizations result in integer quadratically constrained programs, which are computationally hard to solve. In addition, we propose a novel mixed-integer linear programming formulation that is exponentially large in the problem size. We show that the stability constraints can be separated in linear time, leading to an effective cutting-plane method. We evaluate the performance of our approaches in a detailed computational study, and we find that our cutting-plane method outperforms mixed-integer programming solvers applied to existing formulations extended to our problem setting. We also propose two heuristics that are effective for large instances of the problem. Finally, we use data from the Chilean school choice system to demonstrate the impact of capacity planning under stability conditions. Our results show that each additional school seat can benefit multiple students. On the one hand, we can focus on access by prioritizing extra seats that benefit previously unassigned students; on the other hand, we can focus on merit by allocating extra seats that benefit several students via chains of improvement. These insights empower the decision-maker to tune the matching algorithm to provide a fair, application-oriented solution.
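To make the role of the separation step in a cutting-plane method concrete, the sketch below scans a candidate assignment for blocking (student, school) pairs, each of which corresponds to a violated stability cut. The data structures and names are illustrative assumptions, not the paper's formulation, and this straightforward scan is not tuned to match the linear-time separation discussed above.

```python
# Hypothetical sketch of blocking-pair separation for stability cuts.

def find_blocking_pairs(assignment, prefs, priorities, capacity):
    """assignment: dict student -> assigned school (or None if unassigned)
    prefs: dict student -> list of schools in preference order
    priorities: dict school -> dict student -> rank (lower is better)
    capacity: dict school -> number of seats
    Returns the (student, school) pairs that block the assignment."""
    # Current enrollment per school.
    enrolled = {s: [] for s in capacity}
    for student, school in assignment.items():
        if school is not None:
            enrolled[school].append(student)

    blocking = []
    for student, pref_list in prefs.items():
        current = assignment.get(student)
        for school in pref_list:
            if school == current:
                break  # the student does not prefer any remaining school
            has_free_seat = len(enrolled[school]) < capacity[school]
            worst = max(enrolled[school],
                        key=lambda t: priorities[school][t], default=None)
            beats_worst = (worst is not None and
                           priorities[school][student] < priorities[school][worst])
            if has_free_seat or beats_worst:
                blocking.append((student, school))  # violated stability cut
    return blocking
```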
Deep Multirepresentation Learning for Data Clustering
Mohammadreza Sadeghi
Deep clustering incorporates embedding into clustering in order to find a lower-dimensional space suitable for clustering tasks. Conventional deep clustering methods aim to obtain a single global embedding subspace (also known as latent space) for all the data clusters. In contrast, in this article, we propose a deep multirepresentation learning (DML) framework for data clustering whereby each difficult-to-cluster data group is associated with its own distinct optimized latent space and all the easy-to-cluster data groups are associated with a general common latent space. Autoencoders (AEs) are employed for generating the cluster-specific and general latent spaces. To specialize each AE in its associated data cluster(s), we propose a novel and effective loss function which consists of weighted reconstruction and clustering losses of the data points, where higher weights are assigned to the samples more likely to belong to the corresponding cluster(s). Experimental results on benchmark datasets demonstrate that the proposed DML framework and loss function outperform state-of-the-art clustering approaches. In addition, the results show that the DML method significantly outperforms the state of the art on imbalanced datasets as a result of assigning an individual latent space to the difficult clusters.
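The sketch below is one plausible reading of the weighted loss described above: for a given autoencoder, per-sample membership weights scale both a reconstruction term and a clustering term, so that points likely to belong to that AE's cluster(s) dominate its training. The function name, the use of a single centroid, and the trade-off parameter are assumptions for illustration, not the authors' exact loss.

```python
# Hedged sketch of a weighted reconstruction + clustering loss for one AE.
import torch

def weighted_ae_loss(x, x_recon, z, centroid, weights, lam=1.0):
    """x, x_recon: (batch, dim) input and reconstruction
    z: (batch, latent_dim) embeddings from this AE's encoder
    centroid: (latent_dim,) center of the cluster(s) this AE specializes in
    weights: (batch,) soft membership probabilities for this AE's cluster(s)
    lam: trade-off between reconstruction and clustering terms."""
    recon = ((x - x_recon) ** 2).mean(dim=1)        # per-sample reconstruction error
    clust = ((z - centroid) ** 2).sum(dim=1)        # distance to the cluster center
    return (weights * (recon + lam * clust)).mean() # membership-weighted combination
```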
Additive Decoders for Latent Variables Identification and Cartesian-Product Extrapolation
Sébastien Lachapelle
Divyat Mahajan
We tackle the problems of latent variables identification and "out-of-support" image generation in representation learning. We show that both are possible for a class of decoders that we call additive, which are reminiscent of decoders used for object-centric representation learning (OCRL) and well suited for images that can be decomposed as a sum of object-specific images. We provide conditions under which exactly solving the reconstruction problem using an additive decoder is guaranteed to identify the blocks of latent variables up to permutation and block-wise invertible transformations. This guarantee relies only on very weak assumptions about the distribution of the latent factors, which might present statistical dependencies and have an almost arbitrarily shaped support. Our result provides a new setting where nonlinear independent component analysis (ICA) is possible and adds to our theoretical understanding of OCRL methods. We also show theoretically that additive decoders can generate novel images by recombining observed factors of variation in novel ways, an ability we refer to as Cartesian-product extrapolation. We show empirically that additivity is crucial for both identifiability and extrapolation on simulated data.
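As a minimal illustration of the "additive" property described above, the sketch below splits the latent vector into blocks, decodes each block independently, and sums the per-block outputs. The MLP architecture, layer sizes, and class name are assumptions made for the sake of a compact example, not the paper's model.

```python
# Hedged sketch of an additive decoder: output = sum of per-block decodings.
import torch
import torch.nn as nn

class AdditiveDecoder(nn.Module):
    def __init__(self, block_dims, image_dim):
        super().__init__()
        self.block_dims = block_dims  # sizes of the latent blocks, e.g. [4, 4, 4]
        # One small decoder per latent block; their outputs are summed.
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, image_dim))
            for d in block_dims
        ])

    def forward(self, z):
        # Split z into blocks and decode each block independently.
        blocks = torch.split(z, self.block_dims, dim=-1)
        parts = [dec(b) for dec, b in zip(self.decoders, blocks)]
        return torch.stack(parts, dim=0).sum(dim=0)  # additive combination
```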