Mila's AI for Climate Studio aims to bridge the gap between technology and impact to unlock the potential of AI in tackling the climate crisis quickly and at scale.
The program recently published its first policy brief, "Policy Considerations at the Intersection of Quantum Technologies and Artificial Intelligence," authored by Padmapriya Mohan.
Hugo Larochelle Appointed Scientific Director of Mila
An adjunct professor at Université de Montréal and former head of Google's AI research lab in Montréal, Hugo Larochelle is a pioneer of deep learning and one of the most respected researchers in Canada.
We consider a model for optimal decision referrals in human-automation teams performing binary classification tasks. The automation observes a batch of independent tasks, analyzes them, and has the option to refer a subset of them to a human operator. The human operator performs fresh analysis of the tasks referred to them. Our key modeling assumption is that human performance degrades with workload (i.e., the number of tasks referred to the human). We model the problem as a stochastic optimization problem. We first consider the special case where the workload of the human is pre-specified. We show that in this setting it is optimal to myopically refer the tasks that lead to the largest reduction in the conditional expected cost until the desired workload target is met. We next consider the general setting where there is no constraint on the workload. We leverage the solution of the previous step and provide a search algorithm to efficiently find the optimal set of tasks to refer. Finally, we present a numerical study comparing the performance of our algorithm with several baseline allocation policies.
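As a hedged illustration of the fixed-workload result, the sketch below implements the myopic rule, assuming per-task expected costs with and without referral are already available; the function name and interface are hypothetical, not the paper's code.

```python
import numpy as np

def myopic_referral(auto_cost, human_cost, k):
    """Greedily select k tasks to refer, each giving the largest
    reduction in conditional expected cost (hypothetical interface).

    auto_cost[i]  : expected cost if the automation decides task i itself
    human_cost[i] : expected cost if task i is referred, evaluated at the
                    fixed workload target k (so degradation is baked in)
    """
    reduction = np.asarray(auto_cost) - np.asarray(human_cost)
    order = np.argsort(-reduction)      # largest cost reduction first
    return set(order[:k])               # indices of tasks to refer

# Toy usage: 5 tasks, workload target k = 2
auto = [0.30, 0.10, 0.45, 0.05, 0.25]
human = [0.20, 0.08, 0.15, 0.04, 0.22]
print(myopic_referral(auto, human, 2))  # refers the two most-improved tasks
```

In the unconstrained setting, a search over workload targets k would wrap this rule, re-evaluating the human costs at each workload to account for the degradation model.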
We study a class of Keynesian beauty contest games where a large number of heterogeneous players attempt to estimate a common parameter based on their own observations. The players are rewarded for producing an estimate close to a certain multiplicative factor of the average decision, this factor being specific to each player. This model is motivated by scenarios arising in commodity or financial markets, where investment decisions are sometimes partly based on following a trend. We provide a method to compute Nash equilibria within the class of affine strategies. We then develop a mean-field approximation, in the limit of an infinite number of players, which has the advantage that computing the best-response strategies only requires the knowledge of the parameter distribution of the players, rather than their actual parameters. We show that the mean-field strategies lead to an ε-Nash equilibrium for a system with a finite number of players. We conclude by analyzing the impact on individual behavior of changes in aggregate population behavior.
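To make the setup concrete, one possible formalization under a quadratic loss (a sketch; the paper's exact cost function may differ) is that player $i$ observes $y_i = \theta + w_i$ and chooses an action $a_i$ to minimize

$$J_i(a_i) = \mathbb{E}\!\left[\big(a_i - r_i \bar{a}\big)^2 \,\middle|\, y_i\right], \qquad \bar{a} = \frac{1}{n}\sum_{j=1}^{n} a_j,$$

where $r_i$ is the player-specific multiplicative factor. Restricting attention to affine strategies $a_i = \alpha_i y_i + \beta_i$ makes the equilibrium fixed-point equations tractable, and in the mean-field limit $\bar{a}$ is replaced by its population average, so a best response depends only on the distribution of the $r_j$ rather than on each player's actual parameter.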
We consider optimal control of an unknown multi-agent linear quadratic (LQ) system where the dynamics and the cost are coupled across the agents through the mean-field (i.e., empirical mean) of the states and controls. Directly using single-agent LQ learning algorithms in such models results in regret which increases polynomially with the number of agents. We propose a new Thompson sampling based learning algorithm which exploits the structure of the system model and show that the expected Bayesian regret of our proposed algorithm for a system with agents of $|M|$ different types at time horizon $T$ is $\tilde{\mathcal{O}}\big(|M|^{1.5}\sqrt{T}\big)$, irrespective of the total number of agents.
2021-12-14
2021 60th IEEE Conference on Decision and Control (CDC) (published)
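A minimal single-agent, scalar sketch of the Thompson sampling loop (not the paper's mean-field algorithm, which maintains a posterior per agent type): sample dynamics parameters from a Gaussian posterior, act with the certainty-equivalent Riccati gain, and update the posterior by recursive least squares. All names, the noise scale, and per-step resampling are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar LQ toy: x_{t+1} = a x_t + b u_t + w_t, stage cost x^2 + u^2.
# True (unknown) parameters; the learner keeps a Gaussian posterior on (a, b).
a_true, b_true = 0.8, 0.5
mean, cov = np.zeros(2), np.eye(2)          # prior over theta = (a, b)
x, T = 0.0, 200

def riccati_gain(a, b, q=1.0, r=1.0, iters=200):
    # Fixed-point iteration on the scalar discrete-time Riccati equation.
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return -(a * b * p) / (r + b * b * p)   # control law u = K x

for t in range(T):
    a_s, b_s = rng.multivariate_normal(mean, cov)   # Thompson sample
    u = riccati_gain(a_s, b_s) * x
    x_next = a_true * x + b_true * u + rng.normal(scale=0.1)
    # Recursive least-squares (Kalman) update of the posterior over (a, b).
    z = np.array([x, u])
    s = z @ cov @ z + 0.1 ** 2
    g = cov @ z / s
    mean = mean + g * (x_next - z @ mean)
    cov = cov - np.outer(g, z @ cov)
    x = x_next

print("posterior mean (a, b):", mean)       # should approach (0.8, 0.5)
```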
The adoption of recently developed methods from machine learning has given rise to the creation of drug-discovery knowledge graphs (KGs) that utilize the interconnected nature of the domain. Graph-based modelling of the data, combined with KG embedding (KGE) methods, is promising as it provides a more intuitive representation and is suitable for inference tasks such as predicting missing links. One common application is to produce ranked lists of genes for a given disease, where the rank is based on the perceived likelihood of association between the gene and the disease. It is thus critical that these predictions are not only pertinent but also biologically meaningful. However, KGs can be biased either directly, due to the underlying data sources that are integrated, or due to modelling choices in the construction of the graph, one consequence of which is that certain entities can become topologically overrepresented. We demonstrate the effect of these inherent structural imbalances, resulting in densely connected entities being highly ranked no matter the context. We provide support for this observation across different datasets, models, and predictive tasks. Further, we present various graph perturbation experiments which lend further support to the observation that KGE models can be more influenced by the frequency of entities than by any biological information encoded within the relations. Our results highlight the importance of data modelling choices and emphasize the need for practitioners to be mindful of these issues when interpreting model outputs and during KG composition.
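One hedged way to probe for the degree bias described above is to correlate entity degree with predicted rank; the helper below uses illustrative toy numbers, not data from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical diagnostic: does the KGE model rank high-degree genes well
# regardless of biology? degree[g] is gene g's node degree in the KG;
# mean_rank[g] is its average predicted rank over a set of disease queries.
degree = np.array([512, 230, 87, 41, 12, 5])        # illustrative values
mean_rank = np.array([3.1, 5.4, 20.8, 35.2, 88.0, 120.5])

rho, p = spearmanr(degree, mean_rank)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A strong negative correlation (high degree -> low, i.e. good, rank)
# is consistent with the topological overrepresentation effect above.
```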
Artificial intelligence (AI) based devices are currently being used in the delivery of surgical care in a variety of settings. 1,2 However, AI-enabled systems can trigger a variety of opinions and emotions, which reveals the different lenses that shape views on AI. Nonetheless, work within surgical education may necessitate a more balanced view, with an acknowledgment of the participation of AI-enhanced devices in the delivery of surgical care and education.
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples. Recent studies show that task distribution plays a vital role in the performance of the model. Conventional wisdom is that task diversity should improve the performance of meta-learning. In this work, we find evidence to the contrary; we study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms. For this experiment, we train on two datasets, Omniglot and miniImageNet, with three broad classes of meta-learning models: metric-based (i.e., ProtoNet, Matching Networks), optimization-based (i.e., MAML, Reptile, and MetaOptNet), and Bayesian meta-learning models (i.e., CNAPs). Our experiments demonstrate that the effect of task diversity on all these algorithms follows a similar trend, and task diversity does not seem to offer any benefit to the learning of the model. Furthermore, we demonstrate that even a handful of tasks, repeated over multiple batches, is sufficient to achieve performance similar to uniform sampling, which calls into question the need for additional tasks to create better models.
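A minimal sketch of the two sampling conditions being compared, assuming a class-level episodic sampler; the helper name and interface are hypothetical.

```python
import random

def sample_episodes(classes, n_way, n_episodes, pool_size=None, seed=0):
    """Draw episodic tasks either uniformly over all classes, or from a
    small fixed pool reused across batches (the low-diversity condition
    described above)."""
    rng = random.Random(seed)
    if pool_size is not None:
        # Low diversity: fix a handful of tasks up front and repeat them.
        pool = [rng.sample(classes, n_way) for _ in range(pool_size)]
        return [rng.choice(pool) for _ in range(n_episodes)]
    # Uniform sampling: a fresh n-way task every episode.
    return [rng.sample(classes, n_way) for _ in range(n_episodes)]

classes = list(range(64))   # e.g. the miniImageNet training classes
uniform = sample_episodes(classes, n_way=5, n_episodes=1000)
limited = sample_episodes(classes, n_way=5, n_episodes=1000, pool_size=8)
```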
Recent generative models such as generative adversarial networks have achieved remarkable success in generating realistic images, but they require large training datasets and computational resources. The goal of few-shot image generation is to learn the distribution of a new dataset from only a handful of examples by transferring knowledge learned from structurally similar datasets. Towards achieving this goal, we propose the "Implicit Support Set Autoencoder" (ISSA) that adversarially learns the relationship across datasets using an unsupervised dataset representation, while the distribution of each individual dataset is learned using implicit distributions. Given a few examples from a new dataset, ISSA can generate new samples by inferring the representation of the underlying distribution using a single forward pass. We showcase significant gains from our method on generating high quality and diverse images for unseen classes in the Omniglot and CelebA datasets in few-shot image generation settings.
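A hedged sketch of the single-forward-pass idea, not the paper's architecture: a permutation-invariant encoder pools the support set into a dataset representation, and a generator conditions on that code to draw new samples. All module names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Permutation-invariant encoder: embeds each support example, then
    mean-pools into a single dataset representation c (a sketch of the
    idea, not ISSA's actual network)."""
    def __init__(self, in_dim, c_dim):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                   nn.Linear(256, c_dim))

    def forward(self, support):                 # support: (k, in_dim)
        return self.embed(support).mean(dim=0)  # c: (c_dim,)

class Generator(nn.Module):
    """Implicit distribution: maps noise z plus the dataset code c to a
    sample, so new images need only one forward pass after encoding."""
    def __init__(self, z_dim, c_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, z, c):
        c = c.expand(z.size(0), -1)             # broadcast code over batch
        return self.net(torch.cat([z, c], dim=1))

# Toy usage: infer c from 5 support examples, then draw 16 new samples.
enc = SetEncoder(in_dim=784, c_dim=64)
gen = Generator(z_dim=32, c_dim=64, out_dim=784)
c = enc(torch.randn(5, 784))
samples = gen(torch.randn(16, 32), c)           # (16, 784)
```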