
Inspiring the development of artificial intelligence for the benefit of all

A professor talks with students in a café/lounge.

Located at the heart of Quebec's artificial intelligence (AI) ecosystem, Mila brings together a community of more than 1,200 machine learning specialists dedicated to scientific excellence and innovation.

About

Featured

Faculty

Founded in 1993 by Professor Yoshua Bengio, Mila today brings together more than 140 professors affiliated with Université de Montréal, McGill University, Polytechnique Montréal and HEC Montréal. The institute also welcomes professors from Université Laval, Université de Sherbrooke, École de technologie supérieure (ÉTS) and Concordia University.

Browse the online directory

Photo of Yoshua Bengio

Recent publications

3D Foundation Model-Based Loop Closing for Decentralized Collaborative SLAM
Pierre-Yves Lajoie
Benjamin Ramtoula
Daniele De Martini
Decentralized Collaborative Simultaneous Localization and Mapping (C-SLAM) techniques often struggle to identify map overlaps due to significant viewpoint variations among robots. Motivated by recent advancements in 3D foundation models, which can register images despite large viewpoint differences, we propose a robust loop closing approach that leverages these models to establish inter-robot measurements. In contrast to resource-intensive methods requiring full 3D reconstruction within a centralized map, our approach integrates foundation models into existing SLAM pipelines, yielding scalable and robust multi-robot mapping. Our contributions include: 1) integrating 3D foundation models to reliably estimate relative poses from monocular image pairs within decentralized C-SLAM; 2) introducing robust outlier mitigation techniques critical to the use of these relative poses; and 3) developing specialized pose graph optimization formulations that efficiently resolve scale ambiguities. We evaluate our method against state-of-the-art approaches, demonstrating improvements in localization and mapping accuracy, alongside significant gains in computational and memory efficiency. These results highlight the potential of our approach for deployment in large-scale multi-robot scenarios.
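
The outlier-mitigation idea (contribution 2) can be illustrated with a toy example. The sketch below is a pairwise-consistency check over candidate inter-robot loop closures, a standard outlier-rejection technique in collaborative SLAM (in the spirit of pairwise consistency maximization); it is not the paper's exact formulation, it works in 2D rather than 3D for brevity, and all names and thresholds are illustrative.

import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D pose (a stand-in for the 3D poses in the paper)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def residual(T):
    """Size of a residual pose: translation norm plus rotation angle."""
    return np.hypot(T[0, 2], T[1, 2]) + abs(np.arctan2(T[1, 0], T[0, 0]))

def filter_closures(z, A, B, thresh=0.3):
    """z[k]: measured pose of robot B's frame b_k in robot A's frame a_k
    (e.g. from a 3D foundation model). A[k], B[k]: each robot's own
    odometry estimate of the matched frame. Two genuine closures k, l
    make the cycle a_k -> b_k -> b_l -> a_l -> a_k compose to identity."""
    n, inv = len(z), np.linalg.inv
    ok = np.zeros((n, n), dtype=bool)
    for k in range(n):
        for l in range(n):
            if k != l:
                cycle = (z[k] @ (inv(B[k]) @ B[l]) @ inv(z[l])
                         @ inv(inv(A[k]) @ A[l]))
                ok[k, l] = residual(cycle) < thresh
    # Greedy stand-in for the max-clique search of full PCM: keep the
    # closure agreeing with the most others, plus the closures it agrees with.
    best = int(ok.sum(axis=1).argmax())
    return [k for k in range(n) if k == best or ok[best, k]]

# Two mutually consistent closures and one spurious one.
A = [se2(0, 0, 0), se2(1, 0, 0), se2(2, 0, 0)]
B = [se2(5, 0, 0), se2(6, 0, 0), se2(7, 0, 0)]
z = [se2(5, 0, 0), se2(5, 0, 0), se2(0, 3, 1.0)]
print(filter_closures(z, A, B))  # -> [0, 1]
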
The role of Large Language Models in IoT security: A systematic review of advances, challenges, and opportunities
Saeid Jamshidi
Negar Shahabi
Amin Nikanjam
Kawser Wazed Nafi
Carol Fung
Differentially Private Clustered Federated Learning
Federated learning (FL), which is a decentralized machine learning (ML) approach, often incorporates differential privacy (DP) to provide rigorous data privacy guarantees to clients. Previous works attempted to address high structured data heterogeneity in vanilla FL settings through clustering clients (a.k.a. clustered FL), but these methods remain sensitive and prone to errors, further exacerbated by the DP noise. This vulnerability makes the previous methods inappropriate for differentially private FL (DPFL) under structured data heterogeneity. To address this gap, we propose an algorithm for differentially private clustered FL, which is robust to the DP noise in the system and identifies the underlying clients' clusters correctly. To this end, we propose to cluster clients based on both their model updates and training loss values. Furthermore, for clustering clients' model updates at the end of the first round, our proposed approach addresses the server's uncertainties by employing large batch sizes as well as Gaussian Mixture Models (GMM) to reduce the impact of DP and stochastic noise and avoid potential clustering errors. We provide theoretical analysis to justify our approach and evaluate it across diverse data distributions and privacy budgets. Our experimental results show the approach's effectiveness in addressing high structured data heterogeneity in DPFL.
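
As a rough illustration of the first-round clustering idea, here is a minimal sketch: clients release clipped, Gaussian-noised updates (the standard Gaussian-mechanism DP release), and the server fits a Gaussian Mixture Model to recover the latent clusters. The 2-D "updates", the clipping bound and the noise scale are all illustrative stand-ins, not the paper's settings or privacy calibration.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def dp_release(update, clip=1.0, sigma=0.3):
    """Clip the update's norm and add Gaussian noise before sending it
    to the server (clip and sigma here are illustrative, not calibrated
    to any specific privacy budget)."""
    update = update * min(1.0, clip / np.linalg.norm(update))
    return update + rng.normal(0.0, sigma * clip, size=update.shape)

# Two latent client clusters with distinct "true" update directions
# (2-D stand-ins for high-dimensional model updates).
centers = np.array([[1.0, 0.0], [-1.0, 0.0]])
labels = rng.integers(0, 2, size=40)
noisy = np.stack([dp_release(centers[c] + rng.normal(0, 0.1, 2))
                  for c in labels])

# Server side: a GMM soaks up the DP and stochastic noise better than a
# hard nearest-centroid assignment when deciding cluster membership.
gmm = GaussianMixture(n_components=2, random_state=0).fit(noisy)
pred = gmm.predict(noisy)
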
A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
Thomas Schmied
Thomas Adler
Vihang Prakash Patil
Maximilian Beck
Korbinian Poppel
Johannes Brandstetter
Günter Klambauer
Sepp Hochreiter
In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which results in powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications, such as robotics. Recently, modern recurrent architectures, such as xLSTM and Mamba, have been proposed that exhibit parallelization benefits during training similar to the Transformer architecture while offering fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Consequently, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core that comes with linear-time inference complexity and natural sequence length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of performance and speed.
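
The inference-cost argument is easy to see in code. The toy rollout below uses a plain LSTM cell as a stand-in for xLSTM (so the sketch runs with stock PyTorch); what it shares with LRAM is the relevant property: a fixed-size recurrent state, so each control step costs O(1) regardless of episode length, whereas a Transformer attends over an ever-growing history. All dimensions and names are illustrative, not the paper's architecture.

import torch
import torch.nn as nn

class TinyRecurrentActionModel(nn.Module):
    """Illustrative stand-in for an LRAM-style agent (an ordinary LSTM
    cell rather than xLSTM, to stay self-contained)."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.encode = nn.Linear(obs_dim, hidden)
        self.cell = nn.LSTMCell(hidden, hidden)
        self.policy = nn.Linear(hidden, act_dim)

    @torch.no_grad()
    def act(self, obs, state):
        # One step consumes only the current observation and the
        # fixed-size recurrent state: constant time and memory per step.
        h, c = self.cell(torch.relu(self.encode(obs)), state)
        return self.policy(h), (h, c)

model = TinyRecurrentActionModel(obs_dim=17, act_dim=6)
state = (torch.zeros(1, 128), torch.zeros(1, 128))
for _ in range(1000):                        # long rollout, flat per-step cost
    action, state = model.act(torch.randn(1, 17), state)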

AI for Humanity

The socially responsible and beneficial development of AI is a fundamental part of Mila's mission. As a leader in the field, we aim to contribute to the social dialogue and to the development of applications that will benefit society.

Learn more

A person looks at a starry sky.