Class imbalance should not throw you off balance: Choosing the right classifiers and performance metrics for brain decoding with imbalanced data
Using rare genetic mutations to revisit structural brain asymmetry
Jakub Kopal
Kuldeep Kumar
Kimia Shafighi
Karin Saltoun
Claudia Modenato
Clara A. Moreau
Guillaume Huguet
Martineau Jean-Louis
Charles-Olivier Martin
Zohra Saci
Nadine Younis
Elise Douard
Khadije Jizi
Alexis Beauchamp-Chatel
Leila Kushan
Ana I. Silva
Marianne B.M. van den Bree
David E.J. Linden
M. J. Owen … (see 11 more)
Jeremy Hall
Sarah Lippé
Bogdan Draganski
Ida E. Sønderby
Ole A. Andreassen
David C. Glahn
Paul M. Thompson
Carrie E. Bearden
Robert Zatorre
Sébastien Jacquemont
Fast D_{M,M} calculation in LDR brachytherapy using deep learning methods
Francisco Berumen
Luc Beaulieu
Meta Pseudo Labels for Anomaly Detection via Partially Observed Anomalies
Sinong Zhao
Zhaoyang Yu
Xiaofei Wang
T. Marbach
Gang Wang
Xiaoguang Liu
VulANalyzeR: Explainable Binary Vulnerability Detection with Multi-task Learning and Attentional Graph Convolution
Litao Li
Steven H. H. Ding
Yuan Tian
Philippe Charland
Weihan Ou
Leo Song
Congwei Chen
SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)
Shamsuddeen Hassan Muhammad
Idris Abdulmumin
Seid Muhie Yimam
Ibrahim Ahmad
Nedjma Ousidhoum
Abinew Ayele
Saif Mohammad
Meriem Beloucif
Structure-aware protein self-supervised learning
Can Chen
Jingbo Zhou
Fan Wang
Dejing Dou
Adaptive patch foraging in deep reinforcement learning agents
Nathan Wispinski
Andrew Butcher
Craig S Chapman
Matthew Botvinick
Patrick M. Pilarski
Patch foraging is one of the most heavily studied behavioral optimization challenges in biology. However, despite its importance to biological intelligence, this behavioral optimization problem is understudied in artificial intelligence research. Patch foraging is especially amenable to study given that it has a known optimal solution, which may be difficult to discover given current techniques in deep reinforcement learning. Here, we investigate deep reinforcement learning agents in an ecological patch foraging task. For the first time, we show that machine learning agents can learn to patch forage adaptively in patterns similar to biological foragers, and approach optimal patch foraging behavior when accounting for temporal discounting. Finally, we show emergent internal dynamics in these agents that resemble single-cell recordings from foraging non-human primates, which complements experimental and theoretical work on the neural mechanisms of biological foraging. This work suggests that agents interacting in complex environments with ecologically valid pressures arrive at common solutions, pointing to the emergence of foundational computations behind adaptive, intelligent behavior in both biological and artificial agents.
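The "known optimal solution" the abstract refers to is the marginal value theorem (MVT): a forager should leave a patch when its instantaneous reward rate drops to the environment's long-run average rate. A minimal numerical sketch, assuming an exponentially depleting patch and a fixed travel time (both illustrative choices, not the paper's task parameters):

```python
import numpy as np

# Sketch of the marginal value theorem (MVT). The exponential depletion
# model and the parameter values below are assumptions for illustration.
r0, decay, travel = 10.0, 0.5, 2.0  # initial rate, depletion rate, travel time

def gain(t):
    # Cumulative reward from staying in a depleting patch for time t:
    # integral of r0 * exp(-decay * s) from 0 to t.
    return (r0 / decay) * (1.0 - np.exp(-decay * t))

def overall_rate(t):
    # Long-run reward rate when every patch is abandoned after time t.
    return gain(t) / (t + travel)

ts = np.linspace(0.01, 20.0, 2000)
t_opt = ts[np.argmax(overall_rate(ts))]
print(f"MVT-optimal patch residence time: {t_opt:.2f}")
```

At the maximizer, the instantaneous rate r0 * exp(-decay * t) equals the overall rate, which is the classic MVT leaving rule an adaptive agent should approximate.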
Autonomous optimization of neuroprosthetic stimulation parameters that drive the motor cortex and spinal cord outputs in rats and monkeys
Rose Guay Hottin
Sandrine L. Côté
Elena Massai
Léo Choinière
Uzay Macar
Samuel Laferrière
Parikshat Sirpal
Stephan Quessy
Marina Martinez
Numa Dancause
Finite time analysis of temporal difference learning with linear function approximation: Tail averaging and regularisation
Gandharv Patil
Prashanth L.A.
Dheeraj M. Nagaraj
We study the finite-time behaviour of the popular temporal difference (TD) learning algorithm when combined with tail-averaging. We derive finite-time bounds on the parameter error of the tail-averaged TD iterate under a step-size choice that does not require information about the eigenvalues of the matrix underlying the projected TD fixed point. Our analysis shows that tail-averaged TD converges at the optimal O(1/t) rate, both in expectation and with high probability. In addition, our bounds exhibit a sharper rate of decay for the initial error (bias), which is an improvement over averaging all iterates. We also propose and analyse a variant of TD that incorporates regularisation, and show that this variant fares favourably in problems with ill-conditioned features.
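A minimal sketch of TD(0) with linear function approximation and tail-averaging, the setting the bounds apply to. The random MDP, feature map, and hyperparameters below are illustrative assumptions, not the authors' experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, dim, gamma, alpha, T = 20, 5, 0.9, 0.05, 20_000
phi = rng.normal(size=(n_states, dim)) / np.sqrt(dim)  # state features
P = rng.dirichlet(np.ones(n_states), size=n_states)    # transition matrix
r = rng.uniform(size=n_states)                         # per-state rewards

theta = np.zeros(dim)
iterates = []
s = 0
for t in range(T):
    s_next = rng.choice(n_states, p=P[s])
    # TD(0) update with a constant step size chosen without any
    # knowledge of the eigenvalues of the underlying matrix.
    delta = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta = theta + alpha * delta * phi[s]
    iterates.append(theta.copy())
    s = s_next

# Tail averaging: average only the last half of the iterates, which
# damps the initial error (bias) faster than averaging all iterates.
theta_tail = np.mean(iterates[T // 2:], axis=0)
print(theta_tail)
```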
A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces
Charline Le Lan
Joshua Greaves
Jesse Farebrother
Mark Rowland
Fabian Pedregosa
In this paper, we derive an algorithm that learns a principal subspace from sample entries, can be applied when the approximate subspace is represented by a neural network, and hence can be scaled to datasets with an effectively infinite number of rows and columns. Our method consists in defining a loss function whose minimizer is the desired principal subspace, and constructing a gradient estimate of this loss whose bias can be controlled.
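A hedged sketch of the general idea: pick a loss whose minimizer is the top-k principal subspace and follow stochastic gradients of it. This uses a plain reconstruction loss E||x - W W^T x||^2 as a stand-in; the paper's actual loss, entry-wise sampling scheme, and bias-controlled gradient estimate differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, lr, steps = 10, 3, 5e-3, 10_000

# Synthetic data with a clear top-3 principal subspace (assumption):
# diagonal covariance with three dominant variances.
stds = np.sqrt(np.array([5.0, 4.0, 3.0] + [0.1] * (d - 3)))
W = rng.normal(size=(d, k)) * 0.1

for _ in range(steps):
    x = rng.normal(size=d) * stds
    resid = x - W @ (W.T @ x)  # reconstruction residual
    # Exact stochastic gradient of ||x - W W^T x||^2 with respect to W.
    grad = -2.0 * (np.outer(resid, W.T @ x) + np.outer(x, W.T @ resid))
    W -= lr * grad

# Columns of Q approximately span the top-k principal subspace.
Q, _ = np.linalg.qr(W)
print(Q)
```

Because the loss is defined on individual samples, the same recipe scales to subspaces parameterized by a neural network, which is the regime the abstract targets.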
A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation
Florian Bordes
Samuel Lavoie
Randall Balestriero
Nicolas Ballas