Portrait of Andrea Lodi

Andrea Lodi

Associate Academic Member
Adjunct Professor, Polytechnique Montréal, Department of Mathematics and Industrial Engineering (MAGI)
Founder and Scientific Director, IVADO Labs
Research Topics
Optimization

Biography

Andrea Lodi is an adjunct professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montréal. He is also the founder and scientific director of IVADO Labs.

Since 2014, he has held the Canada Excellence Research Chair in Data Science for Real-Time Decision-Making (Polytechnique Montréal), the most important research chair in operations research in the country. Internationally recognized for his work on mixed-integer linear and nonlinear programming, Professor Lodi focuses on developing new models and algorithms to process large volumes of data from multiple sources quickly and efficiently. These algorithms and models are expected to lead to optimized real-time decision-making strategies. The Chair aims to apply its expertise in a variety of sectors, including energy, transportation, healthcare, manufacturing, and supply chain management.

Andrea Lodi holds a PhD in systems engineering (2000) and was a full professor of operations research in the Department of Electrical, Electronic, and Information Engineering at the University of Bologna. He coordinates large-scale European operations research projects and has worked since 2006 as a consultant for IBM's CPLEX research and development team. He has published more than 70 articles in leading mathematical programming journals and has served as an associate editor for several of them.

Professor Lodi received the 2010 Google Faculty Research Award and the 2011 IBM Faculty Award. He was also a member of the prestigious Herman Goldstine program at the IBM Thomas J. Watson Research Center in 2005-2006.

Publications

Structured Pruning of Neural Networks for Constraints Learning
Matteo Cacciola
Antonio Frangioni
In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity across diverse applications, including cancer treatment, algorithmic configuration, and chemical process optimization. In this domain, the combination of ML and OR often relies on representing the ML model output using Mixed Integer Programming (MIP) formulations. Numerous studies in the literature have developed such formulations for many ML predictors, with a particular emphasis on Artificial Neural Networks (ANNs) due to the significant interest they attract in many applications. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve, thereby impeding scalability. In fact, the ML community has already introduced several techniques to reduce the parameter count of ANNs without compromising their performance, since the substantial size of modern ANNs presents challenges for ML applications: it significantly impacts computational effort during training and requires substantial memory for storage. In this paper, we showcase the effectiveness of pruning, one of these techniques, when applied to ANNs prior to their integration into MIPs. By pruning the ANN, we achieve significant improvements in the speed of the solution process. We discuss why pruning is more suitable in this context than other ML compression techniques, and we identify the most appropriate pruning strategies. To highlight the potential of this approach, we conduct experiments using feed-forward neural networks with multiple layers to construct adversarial examples. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances.
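Purely as an illustrative sketch of the underlying idea (the layer sizes, random weights, and 80% sparsity level below are assumptions, not the paper's setup), magnitude pruning can be applied to a small feed-forward ReLU network before counting how many weight terms would survive in a big-M MIP encoding of the network:

import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [20, 64, 64, 10]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def prune_by_magnitude(W, sparsity=0.8):
    """Zero out all but the largest-magnitude entries of W."""
    cutoff = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= cutoff, W, 0.0)

pruned = [prune_by_magnitude(W) for W in weights]

# In a big-M MIP encoding of a ReLU network, every nonzero weight contributes a
# term to the linear constraints of its neuron, so fewer nonzeros means a
# smaller and typically easier optimization model.
dense_terms = sum(W.size for W in weights)
sparse_terms = sum(int(np.count_nonzero(W)) for W in pruned)
print(f"constraint terms: {dense_terms} dense vs. {sparse_terms} after pruning")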
Cardinality Minimization, Constraints, and Regularization: A Survey
Andreas M. Tillmann
Daniel Bienstock
Alexandra Schwartz
We survey optimization problems that involve the cardinality of variable vectors in constraints or the objective function. We provide a unified viewpoint on the general problem classes and models, and give concrete examples from diverse application fields such as signal and image processing, portfolio selection, or machine learning. The paper discusses general-purpose modeling techniques and broadly applicable as well as problem-specific exact and heuristic solution approaches. While our perspective is that of mathematical optimization, a main goal of this work is to reach out to and build bridges between the different communities in which cardinality optimization problems are frequently encountered. In particular, we highlight that modern mixed-integer programming, which is often regarded as impractical due to commonly unsatisfactory behavior of black-box solvers applied to generic problem formulations, can in fact produce provably high-quality or even optimal solutions for cardinality optimization problems, even in large-scale real-world settings. Achieving such performance typically draws on the merits of problem-specific knowledge that may stem from different fields of application and, e.g., shed light on structural properties of a model or its solutions, or lead to the development of efficient heuristics; we also provide some illustrative examples.
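For concreteness, the classical big-M mixed-integer model of a cardinality constraint \|x\|_0 \le k, one of the building blocks this line of work surveys, links each variable x_i to a binary indicator z_i (with M an assumed valid bound on |x_i|):

    -M z_i \le x_i \le M z_i, \qquad z_i \in \{0,1\}, \qquad i = 1, \dots, n, \qquad \sum_{i=1}^{n} z_i \le k,

so that x_i can be nonzero only when z_i = 1, and at most k entries of x are nonzero.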
The Madness of Multiple Entries in March Madness
Jeff Decary
David Bergman
Carlos Henrique Cardonha
Jason Imbrogno
This paper explores multi-entry strategies for betting pools related to single-elimination tournaments. In such betting pools, participants select winners of games, and their respective score is a weighted sum of the number of correct selections. Most betting pools have a top-heavy payoff structure, so the paper focuses on strategies that maximize the expected score of the best-performing entry. There is no known closed-form expression for the estimation of this metric, so the paper investigates the challenges associated with the estimation and the optimization of multi-entry solutions. We present an exact dynamic programming approach for calculating the maximum expected score of any given fixed solution, which is exponential in the number of entries. We explore the structural properties of the problem to develop several solution techniques. In particular, by extracting insights from the solutions produced by one of our algorithms, we design a simple yet effective problem-specific heuristic that was the best-performing technique in our experiments, which were based on real-world data extracted from recent March Madness tournaments. In particular, our results show that the best 100-entry solution identified by our heuristic had a 2.2% likelihood of winning a
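The quantity at the heart of the paper, the expected score of the best-performing entry, can also be approximated by plain Monte Carlo simulation. The sketch below only illustrates that metric under an assumed Bradley-Terry win model and random team strengths; it is not the paper's exact dynamic-programming method, its heuristic, or its data.

import numpy as np

rng = np.random.default_rng(0)
N_TEAMS = 8                                  # small 3-round single-elimination bracket
ROUND_WEIGHTS = [1.0, 2.0, 4.0]              # a common doubling payoff per round
strengths = rng.uniform(1.0, 10.0, N_TEAMS)  # assumed team strengths

def simulate_bracket(noisy_pick=False):
    """Return the winners of every game, grouped by round. noisy_pick=True mimics
    how an entry might fill its bracket; False draws the 'real' outcome."""
    alive, rounds = list(range(N_TEAMS)), []
    while len(alive) > 1:
        winners = []
        for a, b in zip(alive[0::2], alive[1::2]):
            p = strengths[a] / (strengths[a] + strengths[b])   # Bradley-Terry win probability
            if noisy_pick:
                p = 0.5 + 0.8 * (p - 0.5)    # shrink toward 0.5 so the entries differ
            winners.append(a if rng.random() < p else b)
        rounds.append(winners)
        alive = winners
    return rounds

def score(entry, outcome):
    """Weighted count of bracket slots where the entry's pick matches reality."""
    return sum(
        w * sum(e == o for e, o in zip(er, outr))
        for w, er, outr in zip(ROUND_WEIGHTS, entry, outcome)
    )

entries = [simulate_bracket(noisy_pick=True) for _ in range(100)]  # a 100-entry portfolio

def best_entry_score():
    outcome = simulate_bracket()             # one simulated tournament
    return max(score(e, outcome) for e in entries)

estimate = np.mean([best_entry_score() for _ in range(2000)])
print("estimated expected score of the best-performing entry:", estimate)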
Assortment Optimization with Visibility Constraints
Théo Barré
Omar El Housni
Implementing a Hierarchical Deep Learning Approach for Simulating Multilevel Auction Data
Marcelin Joanis
Igor Sadoune
Increasing schedule reliability in the multiple depot vehicle scheduling problem with stochastic travel time
Léa Ricard
Guy Desaulniers
Louis-Martin Rousseau
Reinforcement learning for freight booking control problems
Justin Dumouchelle
Recovering Dantzig–Wolfe Bounds by Cutting Planes
Rui Chen
Oktay Günlük
Leveraging Dantzig–Wolfe Decomposition in the Original Variable Space for Mixed-Integer Programming
Dantzig–Wolfe decomposition has been extensively applied to solve large-scale mixed-integer programs with decomposable structures, leading to exact solution approaches such as branch and price. However, these approaches require solving the problem in an extended variable space and are not readily available in off-the-shelf solvers. In "Recovering Dantzig–Wolfe Bounds by Cutting Planes," Chen, Günlük, and Lodi propose a computationally effective approach for generating cutting planes from Dantzig–Wolfe decomposition to enhance branch and cut in the space of the original variables. The proposed approach requires a relatively small number of cutting planes to recover the strength of the Dantzig–Wolfe dual bound and should be easy to implement in general-purpose mixed-integer programming solvers. The authors show that these cutting planes typically lead to a formulation with lower dual degeneracy and, hence, better computational performance than naïve approaches such as the objective function cut.
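As a brief recap of the terminology above (standard Dantzig–Wolfe background, not the paper's new cuts): for a mixed-integer program min { c^T x : Ax >= b, x in X } with decomposable set X = { x in Z^n_+ : Dx >= d }, the Dantzig–Wolfe dual bound is

    z_{DW} = \min \{\, c^\top x \;:\; A x \ge b,\; x \in \operatorname{conv}(X) \,\},

and the naive way to transfer it back to the original variable space is the single objective function cut c^\top x \ge z_{DW}, the baseline against which the proposed cutting planes are compared.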
An Exact Method for (Constrained) Assortment Optimization Problems with Product Costs
Markus Leitner
Roberto Roberti
Claudio Sole
A framework for fair decision-making over time with time-invariant utilities
Sriram Sankaranarayanan
Guanyi Wang
An improved column-generation-based matheuristic for learning classification trees
Krunal Kishor Patel
Guy Desaulniers
Learning to repeatedly solve routing problems
Mouad Morabit
Guy Desaulniers
In recent years, there has been great interest in machine-learning-based heuristics for solving NP-hard combinatorial optimization problems. The developed methods have shown potential on many optimization problems. In this paper, we present a learned heuristic for the reoptimization of a problem after a minor change in its data. We focus on the case of the capacitated vehicle routing problem with static clients (i.e., same client locations) and changed demands. Given the edges of an original solution, the goal is to predict and fix the ones that have a high chance of remaining in an optimal solution after a change of client demands. This partial prediction of the solution reduces the complexity of the problem and speeds up its resolution while yielding a good-quality solution. The proposed approach resulted in solutions with an optimality gap ranging from 0% to 1.7% on different benchmark instances within a reasonable computing time.
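A minimal sketch of the fix-and-reoptimize idea described above, assuming synthetic edge features and a simple logistic model in place of the paper's actual learned predictor and CVRP solver:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: one row per edge of past solutions, with illustrative
# features (edge length, |demand change| at each endpoint). These are assumptions.
X_train = rng.uniform(size=(500, 3))
# Synthetic label: short edges touching stable demands tend to stay in the optimum.
y_train = (X_train.sum(axis=1) < 1.2).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def edges_to_fix(edge_features, threshold=0.9):
    """Indices of edges whose predicted probability of remaining in an optimal
    solution is high enough to be fixed before re-solving the perturbed instance."""
    keep_prob = model.predict_proba(edge_features)[:, 1]
    return np.where(keep_prob >= threshold)[0]

prev_solution_edges = rng.uniform(size=(40, 3))   # edges of the previous solution
fixed = edges_to_fix(prev_solution_edges)
print(f"fixing {len(fixed)} of {len(prev_solution_edges)} edges before re-solving")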