Publications

Recommandations pratiques pour une utilisation responsable de l’intelligence artificielle en santé mentale en contexte de pandémie
Carl-Maria Mörch
Pascale Lehoux
Xavier Dionne
The current pandemic has sent out a shockwave whose consequences are being felt in every aspect of our lives. While physical health has generally been at the centre of scientific and political attention, it has become clear that the COVID-19 pandemic has significantly affected the mental health of many individuals, and it appears to have deepened pre-existing weaknesses in our mental health systems. Often less funded and less supported than physical health, could the field of mental health benefit from innovations in artificial intelligence during a pandemic? And if so, how? Whether you are an AI developer, a researcher, or an entrepreneur, this document aims to provide a synthesis of possible courses of action and resources to prevent the main ethical risks involved in developing AI applications in the field of mental health. To illustrate these principles, the document presents four fictional but realistic cases, for each of which you are invited to consider the potential ethical issues at stake, the responsible-innovation questions to anticipate, possible courses of action inspired by the checklist (Canadian Protocol designed to promote the responsible use of AI in mental health and suicide prevention, Mörch et al., 2020), practical resources, and certain relevant legal issues. This document was prepared by Carl-Maria Mörch, PhD, Algora Lab, Université de Montréal, Observatoire International sur les impacts sociétaux de l'Intelligence Artificielle et du Numérique (OBVIA), Mila – Institut Québécois d'Intelligence Artificielle, with contributions from Pascale Lehoux, Marc-Antoine Dilhac, Catherine Régis and Xavier Dionne.
Inductive biases for deep learning of higher-level cognition
Anirudh Goyal
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopaedic list of heuristics). If that hypothesis were correct, we could more easily both understand our own intelligence and build intelligent machines. Just like in physics, the principles themselves would not be sufficient to predict the behaviour of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis would suggest that studying the kind of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those which mostly concern higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities in terms of flexible out-of-distribution and systematic generalization, which is currently an area where a large gap exists between state-of-the-art machine learning and human intelligence.
#EEGManyLabs: Investigating the replicability of influential EEG experiments
Y. Pavlov
N. Adamian
Stefan Appelhoff
Mahnaz Arvaneh
C. Benwell
Christian Beste
A. Bland
Daniel E. Bradford
Florian Bublatzky
N. Busch
P. Clayson
Damian Cruse
Artur Czeszumski
Anna Dreber
Benedikt V. Ehinger
Giorgio Ganis
Xun He
J. Hinojosa
Christoph Huber-Huber
Michael Inzlicht
B. Jack
Magnus Johannesson
Rhiannon Jones
Evgenii Kalenkovich
Laura Kaltwasser
Hamid Karimi-rouzbahani
Andreas Keil
P. König
Layla Kouara
Louisa V. Kulke
C. Ladouceur
Nicolas Langer
Heinrich R Liesefeld
David Luque
Annmarie E Macnamara
Liad Mudrik
Muthuraman Muthuraman
Lauren Browning Neal
Gustav Nilsonne
Guiomar Niso
Sebastian Ocklenburg
Robert Oostenveld
Cyril R. Pernet
G. Pourtois
Manuela Ruzzoli
S. Sass
Alexandre Schaefer
Magdalena Senderecka
Joel S. Snyder
Christian Krog Tamnes
E Tognoli
M. V. Vugt
Edelyn Verona
Robin Vloeberghs
Dominik Welke
J. Wessel
Ilya V Zakharov
Faisal Mushtaq
Human attachments shape interbrain synchrony toward efficient performance of social goals
Amir Djalovski
Sivan Kinreich
Ruth Pinkenson Feldman
Interactive Psychometrics for Autism With the Human Dynamic Clamp: Interpersonal Synchrony From Sensorimotor to Sociocognitive Domains
Florence Baillin
Aline Lefebvre
Amandine Pedoux
Yann Beauxis
Denis-Alexander Engemann
Anna Maruani
Frederique Amsellem
J. A. Scott Kelso
Thomas Bourgeron
Richard Delorme
The human dynamic clamp (HDC) is a human–machine interface designed on the basis of coordination dynamics for studying realistic social interaction under controlled and reproducible conditions. Here, we propose to probe the validity of the HDC as a psychometric instrument for quantifying social abilities in children with autism spectrum disorder (ASD) and neurotypical development. To study interpersonal synchrony with the HDC, we derived five standardized scores following a gradient from sensorimotor and motor to higher sociocognitive skills in a sample of 155 individuals (113 participants with ASD, 42 typically developing participants; aged 5 to 25 years; IQ > 70). Regression analyses were performed using normative modeling on global scores according to four subconditions (HDC behavior “cooperative/competitive,” human task “in-phase/anti-phase,” diagnosis, and age at inclusion). Children with ASD had lower scores than controls for motor skills. HDC motor coordination scores were the best candidates for stratification and diagnostic biomarkers according to exploratory analyses of hierarchical clustering and multivariate classification. Independently of phenotype, sociocognitive skills increased with developmental age while being affected by the ongoing task and HDC behavior. Weaker performance in ASD for motor skills suggests the convergent validity of the HDC for evaluating social interaction. Results provided additional evidence of a relationship between sensorimotor and sociocognitive skills. HDC may also be used as a marker of maturation of sociocognitive skills during real-time social interaction. Through its standardized and objective evaluation, the HDC not only represents a valid paradigm for the study of interpersonal synchrony but also offers a promising, clinically relevant psychometric instrument for the evaluation and stratification of sociomotor dysfunctions.
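To make the normative-modeling step described above concrete, the sketch below fits a score-versus-age norm on the typically developing group and expresses each participant's score as a deviation (z-score) from that norm. The linear age trend, the variable names, and the toy data are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def normative_z_scores(age_td, score_td, age_all, score_all):
    """Fit a linear score-vs-age norm on typically developing (TD)
    participants, then z-score every participant against that norm."""
    slope, intercept = np.polyfit(age_td, score_td, deg=1)
    residuals = score_td - (slope * age_td + intercept)
    sigma = residuals.std(ddof=2)                  # spread around the TD norm
    expected = slope * np.asarray(age_all) + intercept
    return (np.asarray(score_all) - expected) / sigma

# Toy usage mirroring the sample sizes above (42 TD, 113 ASD);
# strongly negative z-scores flag weaker-than-expected skills for age.
rng = np.random.default_rng(0)
age_td = rng.uniform(5, 25, 42)
score_td = 0.5 * age_td + rng.normal(0, 1, 42)
age_asd = rng.uniform(5, 25, 113)
score_asd = 0.4 * age_asd + rng.normal(0, 1, 113)
z = normative_z_scores(age_td, score_td, age_asd, score_asd)
```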
Linear Lower Bounds and Conditioning of Differentiable Games
Adam Ibrahim
Waiss Azizian
Recent successes of game-theoretic formulations in ML have caused a resurgence of research interest in differentiable games. Overwhelmingly, that research focuses on methods and upper bounds on their speed of convergence. In this work, we approach the question of fundamental iteration complexity by providing lower bounds to complement the linear (i.e. geometric) upper bounds observed in the literature on a wide class of problems. We cast saddle-point and min-max problems as 2-player games. We leverage tools from single-objective convex optimisation to propose new linear lower bounds for convex-concave games. Notably, we give a linear lower bound for
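For readers unfamiliar with the terminology, the block below spells out what casting a min-max problem as a 2-player game means and what linear (geometric) rates look like. The constants C, c and the rates are generic placeholders for illustration, not the paper's precise statement.

```latex
% Saddle-point / min-max problems cast as a 2-player zero-sum game:
\min_{x \in \mathbb{R}^n} \max_{y \in \mathbb{R}^m} f(x, y),
\quad \text{with } f \text{ convex in } x \text{ and concave in } y.

% A linear (geometric) upper bound on the joint iterate z_k = (x_k, y_k):
\| z_k - z^\ast \| \le C \, \rho^k \, \| z_0 - z^\ast \|,
\qquad \rho \in (0, 1).

% A matching linear lower bound exhibits a game in the class on which no
% method from a given family can beat geometric decay:
\| z_k - z^\ast \| \ge c \, \tilde{\rho}^{\,k} \, \| z_0 - z^\ast \|.
```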
Revisiting Fundamentals of Experience Replay
William Fedus
Prajit Ramachandran
Rishabh Agarwal
Mark Rowland
Will Dabney
Experience replay is central to off-policy algorithms in deep reinforcement learning (RL), but there remain significant gaps in our understanding. We therefore present a systematic and extensive analysis of experience replay in Q-learning methods, focusing on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected (replay ratio). Our additive and ablative studies upend conventional wisdom around experience replay -- greater capacity is found to substantially increase the performance of certain algorithms, while leaving others unaffected. Counterintuitively, we show that theoretically ungrounded, uncorrected n-step returns are uniquely beneficial, while other techniques confer limited benefit for sifting through larger memory. Separately, by directly controlling the replay ratio we contextualize previous observations in the literature and empirically measure its importance across a variety of deep RL algorithms. Finally, we conclude by testing a set of hypotheses on the nature of these performance benefits.
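As a rough illustration of the two properties the study varies, here is a minimal replay buffer exposing a capacity parameter and building uncorrected n-step returns, with the replay ratio shown in a usage comment. The class and parameter names are assumptions for illustration, not the authors' code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal FIFO replay buffer (an illustrative sketch)."""

    def __init__(self, capacity, n_step=3, gamma=0.99):
        # Replay capacity: once full, the oldest transitions are evicted.
        self.buffer = deque(maxlen=capacity)
        self.n_step = n_step
        self.gamma = gamma

    def add(self, transition):
        # transition = (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        """Uniformly sample start indices and build *uncorrected* n-step
        returns R = r_t + g*r_{t+1} + ... + g^{n-1}*r_{t+n-1}, i.e. with
        no off-policy correction (the variant the paper finds beneficial).
        Assumes the buffer already holds more than n_step transitions."""
        batch = []
        for _ in range(batch_size):
            i = random.randrange(len(self.buffer) - self.n_step)
            ret, done = 0.0, False
            for k in range(self.n_step):
                _, _, r, next_state, done = self.buffer[i + k]
                ret += (self.gamma ** k) * r
                if done:
                    break
            state, action = self.buffer[i][0], self.buffer[i][1]
            batch.append((state, action, ret, next_state, done))
        return batch

# Replay ratio = learning updates per environment transition collected,
# e.g. one gradient step every 4 environment steps is a ratio of 0.25:
#   if env_step_count % 4 == 0:
#       update(q_network, buffer.sample(batch_size=32))
```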
What can I do here? A Theory of Affordances in Reinforcement Learning
Zafarali Ahmed
Gheorghe Comanici
David Abel
An Effective Anti-Aliasing Approach for Residual Networks
Cristina Vasconcelos
Vincent Dumoulin
Ross Goroshin
Image pre-processing in the frequency domain has traditionally played a vital role in computer vision and was even part of the standard pipeline in the early days of deep learning. However, with the advent of large datasets, many practitioners concluded that this was unnecessary due to the belief that these priors can be learned from the data itself. Frequency aliasing is a phenomenon that may occur when sub-sampling any signal, such as an image or feature map, causing distortion in the sub-sampled output. We show that we can mitigate this effect by placing non-trainable blur filters and using smooth activation functions at key locations, particularly where networks lack the capacity to learn them. These simple architectural changes lead to substantial improvements in out-of-distribution generalization on both image classification under natural corruptions on ImageNet-C [10] and few-shot learning on Meta-Dataset [17], without introducing additional trainable parameters and using the default hyper-parameters of open source codebases.
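The core architectural change can be sketched in a few lines: convolve with a fixed (non-trainable) low-pass filter before sub-sampling so that high frequencies are attenuated rather than aliased. The 3x3 binomial kernel and the NumPy/SciPy setup below are illustrative assumptions; the paper's exact filters and their placement in the network may differ.

```python
import numpy as np
from scipy.signal import convolve2d

def blur_downsample(x, stride=2):
    """Anti-aliased down-sampling for one 2-D feature-map channel:
    apply a fixed (non-trainable) low-pass blur, then sub-sample."""
    k = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k, k)
    kernel /= kernel.sum()                        # normalized binomial blur
    blurred = convolve2d(x, kernel, mode="same", boundary="symm")
    return blurred[::stride, ::stride]            # strided sub-sampling

# Usage: compare with naive strided sub-sampling, which folds
# high-frequency content into the output instead of attenuating it.
x = np.random.rand(32, 32)
naive = x[::2, ::2]
antialiased = blur_downsample(x)
```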
Multiscale PHATE Exploration of SARS-CoV-2 Data Reveals Multimodal Signatures of Disease
Manik Kuchroo
Jessie Huang
Patrick Wong
Jean-Christophe Grenier
Dennis Shung
Alexander Tong
Carolina Lucas
Jon Klein
Daniel B. Burkhardt
Scott Gigante
Abhinav Godavarthi
Benjamin Israelow
Tianyang Mao
Ji Eun Oh
Julio Silva
Takehiro Takahashi
Camila D. Odio
Arnau Casanovas-Massana
John Fournier
Shelli Farhadian … (see 7 more)
Charles S. Dela Cruz
Albert I. Ko
F. Perry Wilson
Akiko Iwasaki
Smita Krishnaswamy
Learning Inter-Modal Correspondence and Phenotypes From Multi-Modal Electronic Health Records
Kejing Yin
William K. Cheung
Jonathan Poon
Non-negative tensor factorization has been shown to be a practical solution for automatically discovering phenotypes from electronic health records (EHR) with minimal human supervision. Such methods generally require an input tensor describing the inter-modal interactions to be pre-established; however, the correspondence between different modalities (e.g., between medications and diagnoses) is often missing in practice. Although heuristic methods can be applied to estimate it, they inevitably introduce errors and lead to sub-optimal phenotype quality. This is particularly important for patients with complex health conditions (e.g., in critical care), as multiple diagnoses and medications are simultaneously present in the records. To alleviate this problem and discover phenotypes from EHR with unobserved inter-modal correspondence, we propose the collective hidden interaction tensor factorization (cHITF) to infer the correspondence between multiple modalities jointly with the phenotype discovery. We assume that the observed matrix for each modality is a marginalization of the unobserved inter-modal correspondence, which is reconstructed by maximizing the likelihood of the observed matrices. Extensive experiments conducted on the real-world MIMIC-III dataset demonstrate that cHITF effectively infers clinically meaningful inter-modal correspondence, discovers phenotypes that are more clinically relevant and diverse, and achieves better predictive performance compared with a number of state-of-the-art computational phenotyping models.
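For context, the sketch below runs the standard non-negative tensor factorization baseline that cHITF builds on, using an assumed pre-established patients x diagnoses x medications count tensor. It illustrates phenotype discovery via non-negative CP decomposition (here with the tensorly library), not the cHITF model itself.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Toy patients x diagnoses x medications count tensor, i.e. the kind of
# pre-established inter-modal input the paper notes is often unavailable.
rng = np.random.default_rng(0)
tensor = tl.tensor(rng.poisson(1.0, size=(50, 20, 30)).astype(float))

# Rank-R non-negative CP decomposition: each of the R components couples a
# patient factor, a diagnosis factor, and a medication factor, and can be
# read as a candidate phenotype.
weights, factors = non_negative_parafac(tensor, rank=5, n_iter_max=200)
patient_f, diagnosis_f, medication_f = factors
print(diagnosis_f.shape)   # (20, 5): diagnosis loadings per phenotype
```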
Using Open Source Licensing to Regulate the Assembly of LAWS: A Preliminary Analysis
Cheng Lin
Lethal autonomous weapons (LAWS) are an emerging technology capable of automatically targeting and exercising lethal force. Many scholars and advocates have petitioned to ban the technology internationally for a myriad of reasons. However, there are practical challenges to implementing a ban. One such challenge is posed by the “intangible” nature of the software that LAWS depend on, which is incompatible with implementation mechanisms such as export control. Given the dual-use nature of software, and the fact that software is developed by teams of individuals, a number of soft governance mechanisms have been proposed to regulate this technology. In this paper, we investigate the feasibility of one particular approach: leveraging open source licenses as a means to prohibit the use of certain software in LAWS. This approach is largely motivated by the fact that open source software underpins all of technology, especially AI. Through a review of recent tech activism and open source activism, we evaluate whether open source licenses can feasibly limit the use of open source software to non-LAWS applications only. We distill the current challenges facing “ethics-driven” open source licensing efforts into three main obstacles: the need for clarity in licensing language, the lack of enforceability of licenses, and the lack of cohesiveness of the open source community. We propose that addressing these factors also constitutes the success criteria for future anti-LAWS open source initiatives. We find that open source licenses provide more theoretical than practical promise in regulating LAWS, and conclude that cohesion in the open source community is the key to their potential practical success in the future.