RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning
Mingqi Yuan
Roger Creus Castanyer
Bo Li
Xin Jin
Wenjun Zeng
What makes a good public EV charging station? A revealed preference study
Steven Lamontagne
Ribal Atallah
To determine the optimal locations for electric vehicle charging stations, optimisation models need to predict which charging stations users will select. We estimate discrete choice models to predict the usage of charging stations using only readily available information for charging network operators. Our parameter values are estimated from a unique, revealed preferences dataset of charging sessions in Montreal, Quebec. We find that user distance to stations, proximity to home areas, and the number of outlets at each station are significant factors for predicting station usage. Additionally, amenities near charging stations have a neutral effect overall, with some users demonstrating strong preference or aversion for these locations. High variability among the preferences of users highlights the importance of models that incorporate panel effects. Moreover, integrating mixed logit models within the optimization of charging station network design yields high-quality solutions, even when evaluated under other model specifications.
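The discrete choice framework described in this abstract can be illustrated with a minimal multinomial logit sketch (not the authors' code): each station gets a utility from its attributes, and choice probabilities follow a softmax over utilities. The feature names (distance, proximity to home, outlet count) come from the abstract; the coefficient values below are hypothetical placeholders, not estimated parameters.

```python
import numpy as np

def choice_probabilities(X, beta):
    """Multinomial logit choice probabilities.

    X: (n_stations, n_features) station attribute matrix.
    beta: (n_features,) taste parameters.
    Returns the probability that a user selects each station.
    """
    utility = X @ beta                      # deterministic utility V_j
    expv = np.exp(utility - utility.max())  # numerically stabilised softmax
    return expv / expv.sum()

# Hypothetical example: 3 candidate stations described by
# [distance_km, near_home (0/1), n_outlets]
X = np.array([[1.0, 1, 4],
              [3.5, 0, 8],
              [0.5, 1, 2]], dtype=float)
beta = np.array([-0.8, 0.6, 0.15])  # illustrative, not estimated, coefficients
p = choice_probabilities(X, beta)
```

A mixed logit model, as used in the paper, would additionally draw `beta` per user from a population distribution to capture the preference heterogeneity the abstract highlights.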
Distilling semantically aware orders for autoregressive image generation
Rishav Pramanik
Antoine Poupon
Juan A. Rodriguez
Masih Aminbeidokhti
David Vazquez
Zhaozheng Yin
A flaw in using pre-trained pLLMs in protein-protein interaction inference models
Joseph Szymborski
With the growing pervasiveness of pre-trained protein large language models (pLLMs), pLLM-based methods are increasingly being put forward for the protein-protein interaction (PPI) inference task. Here, we identify and confirm that existing pre-trained pLLMs are a source of data leakage for the downstream PPI task. We characterize the extent of the data leakage problem by training and comparing small and efficient pLLMs on a dataset that controls for data leakage (“strict”) with one that does not (“non-strict”). While data leakage from pre-trained pLLMs causes measurable inflation of testing scores, we find that this does not necessarily extend to other, non-paired biological tasks such as protein keyword annotation. Further, we find no connection between the context lengths of pLLMs and the performance of pLLM-based PPI inference methods on proteins whose sequence lengths exceed those context lengths. Furthermore, we show that pLLM-based and non-pLLM-based models fail to generalize in tasks such as prediction of human-SARS-CoV-2 PPIs or the effect of point mutations on binding affinities. This study demonstrates the importance of extending existing protocols for the evaluation of pLLM-based models applied to paired biological datasets and identifies areas of weakness of current pLLMs.
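The "strict" versus "non-strict" distinction in this abstract can be sketched as a split construction for paired data. The following is an illustrative sketch (not the paper's protocol): in a strict split, no protein appearing in any test pair may appear in any training pair, so sequence-level memorization cannot inflate test scores. Protein identifiers below are hypothetical.

```python
import random

def strict_split(pairs, test_frac=0.2, seed=0):
    """Build a leakage-controlled ("strict") split of a paired PPI dataset.

    pairs: list of (protein_a, protein_b) interaction pairs.
    Returns (train_pairs, test_pairs) whose protein sets are disjoint.
    """
    rng = random.Random(seed)
    proteins = sorted({p for pair in pairs for p in pair})
    rng.shuffle(proteins)
    n_test = max(1, int(len(proteins) * test_frac))
    test_proteins = set(proteins[:n_test])
    train, test = [], []
    for a, b in pairs:
        if a in test_proteins and b in test_proteins:
            test.append((a, b))
        elif a not in test_proteins and b not in test_proteins:
            train.append((a, b))
        # pairs mixing train and test proteins are discarded to prevent leakage
    return train, test

# Hypothetical toy dataset of interacting protein pairs
pairs = [("P1", "P2"), ("P1", "P3"), ("P2", "P4"), ("P3", "P5"), ("P4", "P5")]
train, test = strict_split(pairs, test_frac=0.4, seed=0)
```

A "non-strict" split, by contrast, would partition only the pairs, allowing the same protein to appear on both sides.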
Representation Learning via Non-Contrastive Mutual Information
Zhaohan Daniel Guo
Bernardo Avila Pires
Dale Schuurmans
Bo Dai
LLMs are Greedy Agents: Effects of RL Fine-tuning on Decision-Making Abilities
Thomas Schmied
Jorg Bornschein
Jordi Grau-Moya
Markus Wulfmeier
Neural Kinematic Bases for Fluids
Yibo Liu
Paul Kry
Kenny Erleben
Sune Darkner
Teseo Schneider
Refining sequence-to-expression modelling with chromatin accessibility
Orsolya Lapohos
Gregory J. Fonseca
Cortical differences across psychiatric disorders and associated common and rare genetic variants
Kuldeep Kumar
Zhijie Liao
Jakub Kopal
Clara Moreau
Christopher R. K. Ching
Claudia Modenato
Will Snyder
Sayeh Kazem
Charles-Olivier Martin
Anne-Marie Bélanger
Valérie K. Fontaine
Khadije Jizi
Rune Boen
Guillaume Huguet
Zohra Saci
Leila Kushan
Ana I. Silva
Marianne B.M. van den Bree
David E.J. Linden
Michael J. Owen
Jeremy Hall
Sarah Lippé
Bogdan Draganski
Laura Almasy
Sophia I. Thomopoulos
Neda Jahanshad
Ida E. Sønderby
Ole A. Andreassen
David C. Glahn
Armin Raznahan
Carrie Bearden
Tomas Paus
Paul M. Thompson
Sébastien Jacquemont