Soup to go: mitigating forgetting during continual learning with model averaging
Anat Kleiman
Jonathan Frankle
Sham M. Kakade
Mansheej Paul
In continual learning, where task data arrives in a sequence, fine-tuning on later tasks often degrades performance on earlier tasks. This is especially pronounced when the tasks come from diverse domains. In this setting, how can we mitigate catastrophic forgetting of earlier tasks and retain what the model has learned at minimal computational expense? Inspired by other merging methods and L2-regression, we propose Sequential Fine-tuning with Averaging (SFA), a method that merges the currently training model with earlier checkpoints during the course of training. State-of-the-art approaches typically maintain a data buffer of past tasks or impose a penalty at each gradient step. In contrast, our method achieves comparable results without storing past data or keeping multiple copies of parameters for each gradient step. Furthermore, our method outperforms common merging techniques such as Task Arithmetic, TIES Merging, and WiSE-FT, as well as penalty methods like L2 and Elastic Weight Consolidation. In turn, our method offers insight into the benefits of merging partially trained models during training, across both image and language domains.
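The core operation the abstract describes can be sketched as a weighted average of the in-training parameters with a stored checkpoint. This is a minimal illustration, not the authors' implementation; the function name, the single mixing weight `beta`, and the merging schedule are all assumptions.

```python
import numpy as np

def sfa_average(current_params, checkpoint_params, beta=0.5):
    """Average in-training parameters with an earlier checkpoint.

    beta is the weight on the stored checkpoint: beta=0 recovers plain
    sequential fine-tuning, beta=1 freezes the model at the checkpoint.
    (Hypothetical sketch -- not the paper's actual merging rule.)
    """
    return {name: (1 - beta) * current_params[name] + beta * checkpoint_params[name]
            for name in current_params}

# Toy illustration: two "models" with a single weight vector each.
current = {"w": np.array([2.0, 4.0])}
checkpoint = {"w": np.array([0.0, 0.0])}
merged = sfa_average(current, checkpoint, beta=0.5)
print(merged["w"])  # [1. 2.]
```

In a training loop, such a merge would be applied periodically (e.g. every k steps) so the model stays anchored to what it learned on earlier tasks.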
GNN-based Decentralized Perception in Multirobot Systems for Predicting Worker Actions
Ali Imran
David St-Onge
In industrial environments, predicting human actions is essential for ensuring safe and effective collaboration between humans and robots. This paper introduces a perception framework that enables mobile robots to understand and share information about human actions in a decentralized way. The framework first allows each robot to build a spatial graph representing its surroundings, which it then shares with other robots. This shared spatial data is combined with temporal information to track human behavior over time. A swarm-inspired decision-making process is used to ensure all robots agree on a unified interpretation of the human's actions. Results show that adding more robots and incorporating longer time sequences improve prediction accuracy. Additionally, the consensus mechanism increases system resilience, making the multi-robot setup more reliable in dynamic industrial settings.
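The simplest form the described consensus step could take is a majority vote over the robots' local action predictions. The paper's swarm-inspired mechanism is not specified here; this sketch only illustrates the general idea, and all names are hypothetical.

```python
from collections import Counter

def consensus(predictions):
    """Majority vote across robots' local action predictions.

    Ties are broken toward the label that appears first in the input,
    which is one simple deterministic choice. (Illustrative only --
    the paper's actual consensus mechanism may differ.)
    """
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

robot_votes = ["lifting", "lifting", "walking"]
print(consensus(robot_votes))  # lifting
```

A richer scheme could weight each robot's vote by its viewpoint quality or prediction confidence before aggregating.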
Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
Mashbayar Tugsbayar
Mingze Li
Artificial neural networks (ANNs) are an important tool for studying neural computation, but many features of the brain are not captured by standard ANN architectures. One notable missing feature in most ANN models is top-down feedback, i.e. projections from higher-order layers to lower-order layers in the network. Top-down feedback is ubiquitous in the brain, and it has a unique modulatory impact on activity in neocortical pyramidal neurons. However, we still do not understand its computational role. Here we develop a deep neural network model that captures the core functional properties of top-down feedback in the neocortex, allowing us to construct hierarchical recurrent ANN models that more closely reflect the architecture of the brain. We use this to explore the impact of different hierarchical recurrent architectures on an audiovisual integration task. We find that certain hierarchies, namely those that mimic the architecture of the human brain, impart ANN models with a light visual bias similar to that seen in humans. This bias does not impair performance on the audiovisual tasks. The results further suggest that different configurations of top-down feedback make otherwise identically connected models functionally distinct from each other, and from traditional feedforward-only models. Altogether our findings demonstrate that modulatory top-down feedback is a computationally relevant feature of biological brains, and that incorporating it into ANNs can affect their behavior and help determine the solutions that the network can discover.
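The key distinction the abstract draws is between additive input and modulatory feedback: a modulatory top-down signal scales the feedforward drive rather than adding to it. The sketch below is an assumption about one way to express that, not the paper's model; the function and variable names are hypothetical.

```python
import numpy as np

def modulated_activity(feedforward_drive, topdown_signal, gain=1.0):
    """Multiplicative (modulatory) top-down feedback.

    The top-down signal scales, rather than adds to, the feedforward
    drive, loosely mirroring the modulatory effect of feedback on
    neocortical pyramidal neurons. (Illustrative sketch only.)
    """
    return feedforward_drive * (1.0 + gain * topdown_signal)

ff = np.array([0.25, 0.5])   # bottom-up drive to two units
td = np.array([0.0, 1.0])    # feedback targets only the second unit
boosted = modulated_activity(ff, td)
```

Note that with zero feedforward drive the output stays zero regardless of the top-down signal, which is the defining property separating modulation from an additive input.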
Adaptive Experiments Under High-Dimensional and Data Sparse Settings: Applications for Educational Platforms
Haochen Song
Ilya Musabirov
Ananya Bhattacharjee
Meredith Franklin
Anna Rafferty
Joseph Jay Williams
In online educational platforms, adaptive experiment designs play a critical role in personalizing learning pathways, instructional sequencing, and content recommendations. Traditional adaptive policies, such as Thompson Sampling, struggle with scalability in high-dimensional and sparse settings, such as when there is a large number of treatments (arms) and limited resources, including funding, time, and classroom-constrained student sample sizes. Furthermore, under-exploration in large-scale educational interventions can lead to suboptimal learning recommendations. To address these challenges, we build upon the concept of lenient regret, which tolerates limited suboptimal selections to enhance exploratory learning, and propose a framework for determining the feasible number of treatments given a sample size. We illustrate these ideas with a case study in online educational learnersourcing, where adaptive algorithms dynamically allocate peer-crafted interventions to other students during active recall exercises. Our proposed Weighted Allocation Probability Adjusted Thompson Sampling (WAPTS) algorithm enhances the efficiency of treatment allocation by adjusting sampling weights to balance exploration and exploitation in data-sparse environments. We present comparative evaluations of WAPTS across various sample sizes (N=50, 300, 1000) and treatment conditions, demonstrating its ability to mitigate under-exploration while optimizing learning outcomes.
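For readers unfamiliar with the baseline, the mechanism WAPTS modifies can be sketched as Thompson Sampling over Beta posteriors with a per-arm weight applied to the sampled draws. The exact WAPTS weighting rule is not reproduced here; the weights, function name, and parameters below are assumptions used only to show where such an adjustment would act.

```python
import random

def weighted_ts_select(successes, failures, weights, rng=random):
    """Thompson Sampling with per-arm weights on the posterior draws.

    Each arm keeps a Beta(successes+1, failures+1) posterior; a weight
    above 1 tilts allocation toward that arm. (Sketch of the general
    mechanism -- not the paper's actual WAPTS weighting rule.)
    """
    draws = [w * rng.betavariate(s + 1, f + 1)
             for s, f, w in zip(successes, failures, weights)]
    return max(range(len(draws)), key=draws.__getitem__)

rng = random.Random(0)
# Arm 0 has 8 successes / 2 failures; arm 1 has 1 success / 9 failures.
arm = weighted_ts_select([8, 1], [2, 9], weights=[1.0, 1.0], rng=rng)
```

In a data-sparse deployment, the weights would be updated between rounds so that under-explored arms are not starved of allocations.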
Galaxy cluster characterization with machine learning techniques
Maria Sadikov
Julie Hlavacek-larrondo
C. Rhea
Michael McDonald
Michelle Ntampaka
John ZuHone
We present an analysis of the X-ray properties of the galaxy cluster population in the z=0 snapshot of the IllustrisTNG simulations, utilizing machine learning techniques to perform clustering and regression tasks. We examine five properties of the hot gas (the central cooling time, the central electron density, the central entropy excess, the concentration parameter, and the cuspiness) which are commonly used as classification metrics to identify cool core (CC), weak cool core (WCC) and non cool core (NCC) clusters of galaxies. Using mock Chandra X-ray images as inputs, we first explore an unsupervised clustering scheme to see how the resulting groups correlate with the CC/WCC/NCC classification based on the different criteria. We observe that the groups replicate almost exactly the separation of the galaxy cluster images when classifying them based on the concentration parameter. We then move on to a regression task, utilizing a ResNet model to predict the value of all five properties. The network is able to achieve a mean percentage error of 1.8% for the central cooling time, and a balanced accuracy of 0.83 on the concentration parameter, making them the best-performing metrics. Finally, we use simulation-based inference (SBI) to extract posterior distributions for the network predictions. Our neural network simultaneously predicts all five classification metrics using only mock Chandra X-ray images. This study demonstrates that machine learning is a viable approach for analyzing and classifying the large galaxy cluster datasets that will soon become available through current and upcoming X-ray surveys, such as eROSITA.
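The CC/WCC/NCC labeling the abstract refers to is typically a threshold cut on one of the five hot-gas metrics, e.g. the central cooling time. The cutoff values below are placeholders, not the thresholds used in the paper.

```python
def classify_cluster(central_cooling_time_gyr, cc_max=1.0, wcc_max=7.7):
    """Threshold-based CC/WCC/NCC label from the central cooling time.

    The cutoff values are illustrative placeholders, not the paper's
    adopted thresholds.
    """
    if central_cooling_time_gyr < cc_max:
        return "CC"
    if central_cooling_time_gyr < wcc_max:
        return "WCC"
    return "NCC"

print(classify_cluster(0.5))   # CC
print(classify_cluster(12.0))  # NCC
```

The regression network then replaces this hand-crafted pipeline: it predicts the metric values directly from the mock X-ray image, and the labels follow from cuts like the one above.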
Empirically apprehending the normative leadership of an international organization: the example of the World Health Organization
Gaelle Foucault
Pierre Larouche
Jean-Louis Denis
Miriam Cohen
Empirical legal research, now flourishing, contributes to the creation of new knowledge and opens other avenues for jurists to study a question or a phenomenon. Venturing into empiricism is no easy task, but the authors of this article took that turn and offer an account of it here. By constructing two distinct methods (for two projects), they were able to test the possibilities that empirical research offers for apprehending the issue of the normative leadership of the World Health Organization (WHO). Intended to guide, from lived experience, those who wish to venture into empiricism, this article highlights the challenges encountered, but above all the strengths of such research. The wealth of information obtained has greatly enriched the understanding of the trajectory of WHO norms and of their impact on States.
Mirror effect of genomic deletions and duplications on cognitive ability across the human cerebral cortex
Kuldeep Kumar
Sayeh Kazem
Guillaume Huguet
Thomas Renne
Worrawat Engchuan
Martineau Jean-Louis
Jakub Kopal
Zohra Saci
Omar Shanta
Bhooma Thiruvahindrapuram
Jeffrey R. MacDonald
Josephine Mollon
Laura Schultz
Emma E M Knowles
David Porteous
Gail Davies
Paul Redmond
Sarah E. Harris
Simon R. Cox
Gunter Schumann (and 9 more authors)
Zdenka Pausova
Celia M. T. Greenwood
Tomas Paus
Stephen W Scherer
Laura Almasy
Jonathan Sebat
David C. Glahn
Sébastien Jacquemont
Regulation of gene expression shapes the interaction between brain networks, which in turn supports psychological processes such as cognitive ability. How changes in the level of gene expression across the cerebral cortex influence cognitive ability remains unknown. Here, we tackle this by leveraging genomic deletions and duplications - copy number variants (CNVs) that fully encompass one or more genes expressed in the human cortex - which lead to large effects on gene-expression levels. We assigned genes to 180 regions of the human cerebral cortex based on their preferential expression across the cortex, computed using data from the Allen Human Brain Atlas. We aggregated CNVs in cortical regions and ran a burden association analysis to compute the mean effect size of genes on general cognitive ability for each of the 180 regions. When affected by CNVs, most of the regional gene-sets were associated with lower cognitive ability. The spatial patterns of effect sizes across the cortex were negatively correlated between deletions and duplications. The largest effect sizes for deletions and duplications were observed for gene-sets with high expression in sensorimotor and association regions, respectively. These two opposing patterns of effect sizes were not influenced by intolerance to loss of function, demonstrating orthogonality to dosage-sensitivity scores. The same mirror patterns were also observed after stratifying genes based on cell-type and developmental-epoch markers. These results suggest that the effect size of gene dosage on cognitive ability follows a cortical gradient. The same brain region and corresponding gene-set may show different effects on cognition depending on whether variants increase or decrease transcription. The latter has major implications for the association of brain networks with phenotypes.
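The regional aggregation step described above reduces to computing, for each cortical region, the mean per-gene effect over the genes assigned to it. The sketch below illustrates only that bookkeeping; the gene names, region labels, and effect values are invented for the example, and the real analysis involves a full burden association model rather than a plain mean.

```python
def regional_burden(effect_by_gene, genes_by_region):
    """Mean per-gene effect on cognitive ability for each cortical region,
    aggregating only the genes assigned to that region.

    (Illustrative bookkeeping only -- the study's burden association
    analysis estimates these effects with a statistical model.)
    """
    return {region: sum(effect_by_gene[g] for g in genes) / len(genes)
            for region, genes in genes_by_region.items() if genes}

# Invented toy data: two regions, three genes.
effects = {"GENE_A": -0.30, "GENE_B": -0.10, "GENE_C": 0.05}
regions = {"V1": ["GENE_A", "GENE_B"], "M1": ["GENE_C"]}
print(regional_burden(effects, regions))  # {'V1': -0.2, 'M1': 0.05}
```

Running this separately for deletion carriers and duplication carriers would yield the two regional effect-size maps whose negative correlation the abstract reports.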