Continuing professional education of Iranian healthcare professionals in shared decision-making: lessons learned
Charo Rodriguez
Jordie Croteau
Alireza Sadeghpour
Amir-Mohammad Navali
France Légaré
Staying Ahead of the Epidemiologic Curve: Evaluation of the British Columbia Asthma Prediction System (BCAPS) During the Unprecedented 2018 Wildfire Season
Sarah B. Henderson
Kathryn T. Morrison
Kathleen E. McLean
Yue Ding
Jiayun Yao
Gavin Shaddick
Parallel inference of hierarchical latent dynamics in two-photon calcium imaging of neuronal populations
Luke Y. Prince
Colleen J Gillon
Dynamic latent variable modelling has provided a powerful tool for understanding how populations of neurons compute. For spiking data, such latent variable modelling can treat the data as a set of point processes, because spiking dynamics occur on a much faster timescale than the computational dynamics being inferred. In contrast, for other experimental techniques, the slow dynamics governing the observed data are similar in timescale to the computational dynamics that researchers want to infer. An example of this is in calcium imaging data, where calcium dynamics can have timescales on the order of hundreds of milliseconds. As such, the successful application of dynamic latent variable modelling to modalities like calcium imaging data will rest on the ability to disentangle the deeper- and shallower-level dynamical systems’ contributions to the data. To date, no techniques have been developed to directly achieve this. Here we solve this problem by extending recent advances using sequential variational autoencoders for dynamic latent variable modelling of neural data. Our system VaLPACa (Variational Ladders for Parallel Autoencoding of Calcium imaging data) solves the problem of disentangling deeper- and shallower-level dynamics by incorporating a ladder architecture that can infer a hierarchy of dynamical systems. Using some built-in inductive biases for calcium dynamics, we show that we can disentangle calcium flux from the underlying dynamics of neural computation. First, we demonstrate with synthetic calcium data that we can correctly disentangle an underlying Lorenz attractor from calcium dynamics. Next, we show that we can infer appropriate rotational dynamics in spiking data from macaque motor cortex after it has been converted into calcium fluorescence data via a calcium dynamics model. Finally, we show that our method applied to real calcium imaging data from primary visual cortex in mice allows us to infer latent factors that carry salient sensory information about unexpected stimuli. These results demonstrate that variational ladder autoencoders are a promising approach for inferring hierarchical dynamics in experimental settings where the measured variable has its own slow dynamics, such as calcium imaging data. Our new, open-source tool thereby provides the neuroscience community with the ability to apply dynamic latent variable modelling to a wider array of data modalities.
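For readers unfamiliar with the setup, the following is a minimal sketch (not the paper's implementation) of the two-level generative hierarchy that a VaLPACa-style model must invert: slow latent "computational" dynamics drive spiking, and calcium dynamics low-pass filter the spikes into the observed fluorescence. All parameter values here are illustrative assumptions.

```python
# Toy generative hierarchy: latent Lorenz dynamics -> Poisson spikes -> calcium fluorescence.
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_neurons = 0.01, 2000, 50          # 10 ms bins, 20 s, 50 cells

# --- deeper level: latent Lorenz dynamics (3-D chaotic attractor), Euler-integrated ---
def lorenz_step(z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, w = z
    return z + dt * np.array([sigma * (y - x), x * (rho - w) - y, x * y - beta * w])

z = np.zeros((T, 3))
z[0] = rng.normal(size=3)
for t in range(1, T):
    z[t] = lorenz_step(z[t - 1])

# --- map latents to firing rates and draw Poisson spikes ---
W = rng.normal(scale=0.5, size=(3, n_neurons))
rates = np.exp(0.1 * z @ W - 1.0)          # log-linear rates (Hz scale)
spikes = rng.poisson(rates * dt)

# --- shallower level: calcium dynamics (exponential decay, tau ~ 300 ms) plus noise ---
tau = 0.3
calcium = np.zeros_like(spikes, dtype=float)
for t in range(1, T):
    calcium[t] = calcium[t - 1] * np.exp(-dt / tau) + spikes[t]
fluorescence = calcium + rng.normal(scale=0.05, size=calcium.shape)  # noisy dF/F

print(fluorescence.shape)  # (2000, 50): what the ladder VAE actually observes
```

The point of the sketch is only to show why the shallower (calcium) and deeper (Lorenz-like) dynamical systems must be disentangled: the model sees only the fluorescence at the bottom of the hierarchy.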
Enabling Technologies for Energy Cloud
Thar Intisar Baker
Zehua Guo
Ali Ismail Ali Awad
Shangguang Wang
Training a First-Order Theorem Prover from Synthetic Data
Vlad Firoiu
Eser Aygün
Zafarali Ahmed
Xavier Glorot
Laurent Orseau
Lei Zhang
Shibl Mourad
Comment on Starke et al.: “Computing schizophrenia: ethical challenges for machine learning in psychiatry”: From machine learning to student learning: pedagogical challenges for psychiatry – Corrigendum
Christophe Gauld
Jean‐Arthur Micoulaud‐Franchi
A Two-Stream Continual Learning System With Variational Domain-Agnostic Feature Replay
Qicheng Lao
Xiang Jiang
Mohammad Havaei
Learning in nonstationary environments is one of the biggest challenges in machine learning. Nonstationarity can be caused by either task drift, i.e., the drift in the conditional distribution of labels given the input data, or domain drift, i.e., the drift in the marginal distribution of the input data. This article aims to tackle this challenge with a modularized two-stream continual learning (CL) system, where the model is required to learn new tasks from a support stream and adapt to new domains in the query stream while maintaining previously learned knowledge. To deal with both drifts within and across the two streams, we propose a variational domain-agnostic feature replay-based approach that decouples the system into three modules: an inference module that filters the input data from the two streams into domain-agnostic representations, a generative module that facilitates high-level knowledge transfer, and a solver module that applies the filtered and transferable knowledge to solve the queries. We demonstrate the effectiveness of our proposed approach in addressing the two fundamental scenarios as well as more complex scenarios in two-stream CL.
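Below is a minimal PyTorch sketch of the three-module decomposition described in the abstract (inference / generative / solver). Module sizes, the task-conditioned replay generator, and the usage snippet are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InferenceModule(nn.Module):
    """Filters raw inputs into domain-agnostic latent representations."""
    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

class GenerativeModule(nn.Module):
    """Replays domain-agnostic features of earlier tasks to curb forgetting."""
    def __init__(self, z_dim=32, n_tasks=10):
        super().__init__()
        self.task_embed = nn.Embedding(n_tasks, z_dim)
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, z_dim))

    def forward(self, task_id, n_samples=64):
        eps = torch.randn(n_samples, self.task_embed.embedding_dim)
        return self.decoder(eps + self.task_embed(task_id))

class SolverModule(nn.Module):
    """Applies the filtered, transferable knowledge to answer queries."""
    def __init__(self, z_dim=32, n_classes=10):
        super().__init__()
        self.head = nn.Linear(z_dim, n_classes)

    def forward(self, z):
        return self.head(z)

# Usage: encode a support batch, replay features for an earlier task,
# then feed both real and replayed representations to the solver.
inference, generative, solver = InferenceModule(), GenerativeModule(), SolverModule()
x = torch.randn(16, 784)
z, mu, logvar = inference(x)
replayed = generative(torch.tensor([0]))          # replay features for task 0
logits = solver(torch.cat([z, replayed], dim=0))
print(logits.shape)  # torch.Size([80, 10])
```

The design choice the sketch highlights is that replay happens in the domain-agnostic feature space rather than in raw input space, so the generative module never needs to model domain-specific appearance.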
Functional specialization within the inferior parietal lobes across cognitive domains
Ole Numssen
Gesa Hartwigsen
QBSUM: a Large-Scale Query-Based Document Summarization Dataset from Real-world Applications
Mingjun Zhao
Shengli Yan
Xinwang Zhong
Qian Hao
Haolan Chen
Di Niu
Bo Long
Wei-dong Guo
Towards robust and replicable sex differences in the intrinsic brain function of autism
Dorothea L. Floris
José O. A. Filho
Meng-Chuan Lai
Steve Giavasis
Marianne Oldehinkel
Maarten Mennes
Tony Charman
Julian Tillmann
Christine Ecker
Flavio Dell’Acqua
Tobias Banaschewski
Carolin Moessnang
Simon Baron-Cohen
Sarah Durston
Eva Loth
Declan Murphy
Jan K. Buitelaar
Christian Beckmann
Michael P. Milham
Adriana Di Martino
From Generative Models to Generative Passages: A Computational Approach to (Neuro)Phenomenology
Maxwell J. D. Ramstead
Anil K. Seth
Casper Hesp
Lars Sandved-Smith
Jonas Mago
Michael Lifshitz
Giuseppe Pagnoni
Ryan Smith
Andrew E. Lutz
Antoine Lutz
Karl Friston
Axel Constant
Towards Causal Representation Learning
Bernhard Schölkopf
Francesco Locatello
Stefan Bauer
Nan Rosemary Ke
Nal Kalchbrenner
Anirudh Goyal
The two fields of machine learning and graphical causality arose and developed separately. However, there is now cross-pollination and increasing interest in both fields to benefit from the advances of the other. In the present paper, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction: we note that most work in causality starts from the premise that the causal variables are given. A central problem for AI and causality is, thus, causal representation learning, the discovery of high-level causal variables from low-level observations. Finally, we delineate some implications of causality for machine learning and propose key research areas at the intersection of both communities.
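As a rough illustration of the "fundamental concepts of causal inference" the review draws on, the following toy structural causal model (SCM) shows how an observational association can differ from an interventional effect. The graph, mechanisms, and coefficients are illustrative assumptions chosen for this sketch.

```python
# Toy SCM: Z -> X -> Y with Z -> Y (Z confounds the X-Y relationship).
import numpy as np

rng = np.random.default_rng(1)

def scm(n, do_x=None):
    """Sample n draws from the SCM; do_x simulates the intervention do(X = do_x)."""
    z = rng.normal(size=n)
    x = 0.8 * z + rng.normal(scale=0.5, size=n) if do_x is None else np.full(n, do_x)
    y = 1.5 * x + 0.7 * z + rng.normal(scale=0.5, size=n)
    return x, y

# Observational regression slope of Y on X is biased by the confounder Z,
# while the interventional contrast recovers the true causal effect (1.5).
x_obs, y_obs = scm(100_000)
obs_slope = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)
_, y_do1 = scm(100_000, do_x=1.0)
_, y_do0 = scm(100_000, do_x=0.0)
print(round(float(obs_slope), 2), round(float(y_do1.mean() - y_do0.mean()), 2))  # ~2.0 vs ~1.5
```

In causal representation learning, the variables X, Y, Z are not given in advance; the challenge the review highlights is to discover such high-level causal variables from low-level observations before reasoning like this can be applied.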