Home

Inspiring the development of artificial intelligence for the benefit of everyone


Located at the heart of Québec's artificial intelligence (AI) ecosystem, Mila brings together a community of more than 1,200 people specialized in machine learning and dedicated to scientific excellence and innovation.

About

Featured

Faculty

Founded in 1993 by Professor Yoshua Bengio, Mila today brings together more than 140 professors affiliated with Université de Montréal, McGill University, Polytechnique Montréal and HEC Montréal. The institute also hosts professors from Université Laval, Université de Sherbrooke, École de technologie supérieure (ÉTS) and Concordia University.

Browse the online directory


Recent publications

A stochastic integer programming approach to reserve staff scheduling with preferences
Carl Perreault‐Lafleur
Guy Desaulniers
Ex Post Conditions for the Exactness of Optimal Power Flow Conic Relaxations
Jean-Luc Lupien
Convex relaxations of the optimal power flow (OPF) problem provide an efficient alternative to solving the intractable alternating current (AC) optimal power flow. The conic subset of OPF convex relaxations, in particular, greatly accelerates resolution while leading to high-quality approximations that are exact in several scenarios. However, the sufficient conditions guaranteeing exactness are stringent, e.g., requiring radial topologies. In this short communication, we present two equivalent ex post conditions for the exactness of any conic relaxation of the OPF. These rely on obtaining either a rank-1 voltage matrix or self-coherent cycles. Instead of relying on sufficient conditions a priori, satisfying one of the presented ex post conditions acts as an exactness certificate for the computed solution. The operator can therefore obtain an optimality guarantee when solving a conic relaxation even when a priori exactness requirements are not met. Finally, we present numerical examples from the MATPOWER library where the ex post conditions hold even though the sufficient conditions for exactness do not, thereby illustrating the use of the conditions.
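The rank-1 voltage matrix condition mentioned in the abstract can be sketched numerically: if the matrix returned by a conic relaxation is (numerically) rank 1, the relaxation is exact and a voltage vector can be recovered from its dominant eigenvector. The function name and tolerance below are illustrative, not from the paper.

```python
import numpy as np

def is_rank_one(W: np.ndarray, tol: float = 1e-6) -> bool:
    """Certify that a Hermitian PSD matrix is numerically rank 1.

    A rank-1 voltage product matrix W acts as an ex post exactness
    certificate for the conic relaxation that produced it.
    """
    eigvals = np.linalg.eigvalsh(W)  # ascending, real for Hermitian W
    largest, second = eigvals[-1], eigvals[-2]
    return largest > 0 and second / largest < tol

# Exact case: W built from a single (complex) voltage vector v, W = v v^H.
v = np.array([1.0 + 0.1j, 0.98 - 0.05j, 1.02 + 0.0j])
W_exact = np.outer(v, v.conj())

# Inexact case: a full-rank perturbation of W.
W_inexact = W_exact + 0.1 * np.eye(3)
```

Checking the eigenvalue ratio rather than calling a rank routine makes the numerical tolerance explicit, which matters for solver output that is only rank 1 up to floating-point noise.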
Combining supervised learning and local search for the multicommodity capacitated fixed-charge network design problem
Charly Robinson La Rocca
Jean-François Cordeau
The multicommodity capacitated fixed-charge network design problem has been extensively studied in the literature due to its wide range of applications. Despite the fact that many sophisticated solution methods exist today, finding high-quality solutions to large-scale instances remains challenging. In this paper, we explore how a data-driven approach can help improve upon the state of the art. By leveraging machine learning models, we attempt to reveal patterns hidden in the data that might be difficult to capture with traditional optimization methods. For scalability, we propose a prediction method where the machine learning model is called at the level of each arc of the graph. We take advantage of off-the-shelf models trained via supervised learning to predict near-optimal solutions. Our experimental results include an algorithm design analysis that compares various integration strategies of predictions within local search algorithms. We benchmark the ML-based approach with respect to the state-of-the-art heuristic for this problem. The findings indicate that our method can outperform the leading heuristic on sets of instances sampled from a uniform distribution.
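The per-arc prediction idea can be illustrated with a toy sketch: a trained classifier scores each arc, confidently-predicted arcs are fixed, and only uncertain arcs are left for local search. The features, the stub logistic scorer, and the thresholds below are hypothetical stand-ins, not the paper's model or integration strategy.

```python
import numpy as np

def predict_open_probability(features: np.ndarray,
                             weights: np.ndarray,
                             bias: float) -> np.ndarray:
    """Stub logistic scorer standing in for an off-the-shelf trained model.

    Each row of `features` describes one arc, e.g. (fixed cost, capacity,
    flow on the arc in the linear relaxation); the model is conceptually
    called once per arc.
    """
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

def fix_arcs(probs: np.ndarray, lo: float = 0.1, hi: float = 0.9) -> dict:
    """Fix confidently-predicted arcs; leave uncertain arcs free."""
    decisions = {}
    for arc, p in enumerate(probs):
        if p >= hi:
            decisions[arc] = 1   # force arc open
        elif p <= lo:
            decisions[arc] = 0   # force arc closed
        # otherwise the arc stays free for the local search to decide
    return decisions

features = np.array([[10.0, 5.0, 4.8],   # arc 0: cheap, nearly saturated
                     [80.0, 5.0, 0.0],   # arc 1: expensive, unused
                     [30.0, 5.0, 2.1]])  # arc 2: ambiguous
weights = np.array([-0.05, 0.1, 1.0])
probs = predict_open_probability(features, weights, bias=-1.0)
fixed = fix_arcs(probs)  # arcs 0 and 1 are fixed; arc 2 stays free
```

Fixing only high-confidence arcs shrinks the search space while leaving the ambiguous design decisions to the local search, which is one plausible reading of how predictions and heuristics can be combined.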
Unlearning in- vs. out-of-distribution data in LLMs under gradient-based method
Teodora Băluță
Pascal Lamblin
Daniel Tarlow
Fabian Pedregosa
Machine unlearning aims to solve the problem of removing the influence of selected training examples from a learned model. Despite the increasing attention to this problem, it remains an open research question how to evaluate unlearning in large language models (LLMs), and which properties of the data to be unlearned critically affect the quality and efficiency of unlearning. This work formalizes a metric to evaluate unlearning quality in generative models, and uses it to assess the trade-offs between unlearning quality and performance. We demonstrate that unlearning out-of-distribution examples requires more unlearning steps but overall presents a better trade-off. For in-distribution examples, however, we observe a rapid decay in performance as unlearning progresses. We further evaluate how an example's memorization and difficulty affect unlearning under a classical gradient ascent-based approach.
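The gradient-ascent mechanism the abstract refers to can be sketched on a toy linear model rather than an LLM: after ordinary training, take ascent steps on the loss of the forget set so its influence is driven out of the weights. Everything below (model, data, step sizes) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

def loss_grad(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Gradient of mean squared error for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def unlearn(w: np.ndarray, X_forget: np.ndarray, y_forget: np.ndarray,
            lr: float = 0.01, steps: int = 50) -> np.ndarray:
    """Gradient *ascent* on the forget set: move against the training signal."""
    for _ in range(steps):
        w = w + lr * loss_grad(w, X_forget, y_forget)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

# Ordinary training: gradient descent on all data.
w = np.zeros(3)
for _ in range(100):
    w -= 0.05 * loss_grad(w, X, y)

# Unlearn the first five examples; their loss should increase.
X_forget, y_forget = X[:5], y[:5]
mse_before = np.mean((X_forget @ w - y_forget) ** 2)
w_unlearned = unlearn(w, X_forget, y_forget)
mse_after = np.mean((X_forget @ w_unlearned - y_forget) ** 2)
```

The sketch shows only the mechanism; the paper's contribution lies in measuring how unlearning quality trades off against model performance, and how that trade-off differs for in- versus out-of-distribution forget sets.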

AI for Humanity

The socially responsible and beneficial development of AI is a core dimension of Mila's mission. As a leader in the field, we aim to contribute to the social dialogue and to the development of applications that will benefit society.

Learn more
