Portrait of Andrea Lodi

Andrea Lodi

Associate Academic Member
Adjunct Professor, Polytechnique Montréal, Department of Mathematics and Industrial Engineering
Founder and Scientific Director, IVADO Labs

Biography

Andrea Lodi is an adjunct professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montréal. He is also the founder and scientific director of IVADO Labs.

Since 2014, he has held the Canada Excellence Research Chair in Data Science for Real-Time Decision-Making (Polytechnique Montréal), the largest research chair in operations research in the country. Internationally recognized for his work on mixed-integer linear and nonlinear programming, Professor Lodi focuses on developing new models and algorithms that can process large volumes of data from multiple sources quickly and efficiently. These algorithms and models are expected to yield optimized real-time decision-making strategies. The Chair aims to apply this expertise in a variety of sectors, including energy, transportation, healthcare, production and supply chain management.

Andrea Lodi holds a PhD in systems engineering (2000) and was a full professor of operations research in the Department of Electrical, Electronic and Information Engineering at the University of Bologna. He coordinates large-scale European operations research projects and has worked as a consultant for IBM's CPLEX research and development team since 2006. He has published more than 70 articles in major mathematical programming journals and has served as an associate editor for several of them.

Professor Lodi received the 2010 Google Faculty Research Award and the 2011 IBM Faculty Award. He was also a fellow of the prestigious Herman Goldstine program at the IBM Thomas J. Watson Research Center in 2005-2006.

Publications

An improved column-generation-based matheuristic for learning classification trees
Krunal Kishor Patel
Guy Desaulniers
Increasing schedule reliability in the multiple depot vehicle scheduling problem with stochastic travel time
Léa Ricard
Guy Desaulniers
Louis-Martin Rousseau
Reinforcement learning for freight booking control problems
Justin Dumouchelle
An Exact Method for (Constrained) Assortment Optimization Problems with Product Costs
Markus Leitner
Roberto Roberti
Claudio Sole
Recovering Dantzig–Wolfe Bounds by Cutting Planes
Rui Chen
Oktay Günlük
Leveraging Dantzig–Wolfe Decomposition in the Original Variable Space for Mixed-Integer Programming
Dantzig–Wolfe decomposition has been extensively applied to solve large-scale mixed-integer programs with decomposable structures, leading to exact solution approaches such as branch and price. However, these approaches require solving the problem in an extended variable space and are not readily available in off-the-shelf solvers. In "Recovering Dantzig–Wolfe Bounds by Cutting Planes," Chen, Günlük, and Lodi propose a computationally effective approach for generating cutting planes from Dantzig–Wolfe decomposition to enhance branch and cut in the space of the original variables. The proposed approach requires a relatively small number of cutting planes to recover the strength of the Dantzig–Wolfe dual bound and should be easy to implement in general-purpose mixed-integer programming solvers. The authors show that these cutting planes typically lead to a formulation with lower dual degeneracy and, hence, better computational performance than naïve approaches such as the objective function cut.
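For context on what "recovering the Dantzig–Wolfe bound" means, here is a minimal sketch of the standard setup in generic notation; the symbols below are ours, not the paper's.

```latex
% A mixed-integer program with a "complicating" block Ax >= b and a
% decomposable block X (e.g., X = {x in Z^p x R^{n-p} : Dx >= d}):
%   z_{MIP} = \min \{ c^\top x : Ax \ge b,\ x \in X \}.
% The LP relaxation and the Dantzig–Wolfe (DW) bound differ in how X is relaxed:
\[
  z_{\mathrm{LP}} = \min \{ c^\top x : Ax \ge b,\ Dx \ge d \}
  \;\le\;
  z_{\mathrm{DW}} = \min \{ c^\top x : Ax \ge b,\ x \in \operatorname{conv}(X) \}
  \;\le\; z_{\mathrm{MIP}}.
\]
% The naive way to carry z_DW back to the original variable space is the
% single objective cut c^\top x \ge z_{DW}, which tends to create heavy
% dual degeneracy; the paper instead generates a small set of cutting
% planes valid for conv(X) whose combined strength recovers z_DW.
```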
Operational Research: methods and applications
Fotios Petropoulos
Gilbert Laporte
Emel Aktas
Sibel A. Alumur
Claudia Archetti
Hayriye Ayhan
Maria Battarra
Julia A. Bennell
Jean-Marie Bourjolly
John E. Boylan
Michèle Breton
David Canca
Bo Chen
Cihan Tugrul Cicek
Louis Anthony Cox
Christine S.M. Currie
Erik Demeulemeester
Li Ding
Stephen M. Disney
Matthias Ehrgott
Martin J. Eppler
Güneş Erdoğan
Bernard Fortz
L. Alberto Franco
Jens Frische
Salvatore Greco
Amanda J. Gregory
Raimo P. Hämäläinen
Willy Herroelen
Mike Hewitt
Jan Holmström
John N. Hooker
Tuğçe Işık
Jill Johnes
Bahar Y. Kara
Özlem Karsu
Katherine Kent
Charlotte Köhler
Martin Kunc
Yong-Hong Kuo
Judit Lienert
Adam N. Letchford
Janny Leung
Dong Li
Haitao Li
Ivana Ljubić
Sebastián Lozano
Virginie Lurkin
Silvano Martello
Ian G. McHale
Gerald Midgley
John D.W. Morecroft
Akshay Mutha
Ceyda Oğuz
Sanja Petrovic
Ulrich Pferschy
Harilaos N. Psaraftis
Sam Rose
Lauri Saarinen
Said Salhi
Jing-Sheng Song
Dimitrios Sotiros
Kathryn E. Stecke
Arne K. Strauss
İstenç Tarhan
Clemens Thielen
Paolo Toth
Greet Vanden Berghe
Christos Vasilakis
Vikrant Vaze
Daniele Vigo
Kai Virtanen
Xun Wang
Rafał Weron
Leroy White
Tom Van Woensel
Mike Yearworth
E. Alper Yıldırım
Georges Zaccour
Xuying Zhao
Throughout its history, Operational Research has evolved to include a variety of methods, models and algorithms that have been applied to a diverse and wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise the up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion. It should be used as a point of reference or first-port-of-call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the 2023 Turkey/Syria earthquake victims. We sincerely hope that advances in OR will play a role towards minimising the pain and suffering caused by this and future catastrophes.
When Nash Meets Stackelberg
Gabriele Dragotto
Felipe Feijoo
Sriram Sankaranarayanan
Deep Neural Networks pruning via the Structured Perspective Regularization
Matteo Cacciola
Antonio Frangioni
Xinlin Li
A framework for fair decision-making over time with time-invariant utilities
Sriram Sankaranarayanan
Guanyi Wang
A machine learning framework for neighbor generation in metaheuristic search
De-You Liu
Defeng Liu
Vincent Perreault
Alain Hertz
This paper presents a methodology for integrating machine learning techniques into metaheuristics for solving combinatorial optimization problems. Namely, we propose a general machine learning framework for neighbor generation in metaheuristic search. We first define an efficient neighborhood structure constructed by applying a transformation to a selected subset of variables from the current solution. The key to the proposed methodology is then to generate promising neighbors by selecting a proper subset of variables that contains a descent of the objective in the solution space. To learn a good variable selection strategy, we formulate the problem as a classification task that exploits structural information from the characteristics of the problem and from high-quality solutions. We validate our methodology on two metaheuristic applications: a Tabu Search scheme for solving a Wireless Network Optimization problem and a Large Neighborhood Search heuristic for solving Mixed-Integer Programs. The experimental results show that our approach achieves a satisfactory trade-off between the exploration of a larger solution space and the exploitation of high-quality solution regions on both applications.
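As a rough, self-contained illustration of the idea (our toy example, not the authors' code or benchmarks), the Python sketch below uses a stand-in "classifier" to score the variables of a 0-1 knapsack incumbent, frees the top-k, and repairs by enumeration, in the spirit of a learned Large Neighborhood Search:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
values = rng.uniform(1, 10, n)       # objective coefficients (maximize)
weights = rng.uniform(1, 10, n)
capacity = 0.4 * weights.sum()       # simple knapsack constraint

def objective(x):
    # Knapsack objective; infeasible solutions get -inf.
    return values @ x if weights @ x <= capacity else -np.inf

def fake_classifier(x):
    # Stand-in for the trained model: scores each variable with how
    # promising flipping it looks (here, a value-density heuristic).
    return np.where(x == 0, values / weights, weights / values)

def learned_neighbor(x, k=5):
    # Select the k variables the "model" deems most promising...
    free = np.argsort(-fake_classifier(x))[:k]
    # ...then repair by enumerating the freed subset (2^k candidates).
    best, best_val = x, objective(x)
    for mask in range(2 ** k):
        cand = x.copy()
        cand[free] = [(mask >> i) & 1 for i in range(k)]
        val = objective(cand)
        if val > best_val:
            best, best_val = cand, val
    return best

x = np.zeros(n, dtype=int)
for _ in range(10):                  # a few learned-LNS iterations
    x = learned_neighbor(x)
print("objective value:", objective(x))
```

In the paper the scoring model is an actual trained classifier and the repair step is a solver-based re-optimization; the enumeration above merely stands in for that step on a toy instance.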
Structured Pruning of Neural Networks for Constraints Learning
Matteo Cacciola
Antonio Frangioni
In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity across diverse applications, including cancer treatment, algorithmic configuration, and chemical process optimization. In this domain, the combination of ML and OR often relies on representing the ML model output using Mixed Integer Programming (MIP) formulations. Numerous studies in the literature have developed such formulations for many ML predictors, with a particular emphasis on Artificial Neural Networks (ANNs) due to their significant interest in many applications. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve, thereby impeding scalability. In fact, the ML community has already introduced several techniques to reduce the parameter count of ANNs without compromising their performance, since the substantial size of modern ANNs presents challenges for ML applications as it significantly impacts computational efforts during training and necessitates significant memory resources for storage. In this paper, we showcase the effectiveness of pruning, one of these techniques, when applied to ANNs prior to their integration into MIPs. By pruning the ANN, we achieve significant improvements in the speed of the solution process. We discuss why pruning is more suitable in this context compared to other ML compression techniques, and we identify the most appropriate pruning strategies. To highlight the potential of this approach, we conduct experiments using feed-forward neural networks with multiple layers to construct adversarial examples. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances.
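To make the pruning-before-embedding pipeline concrete, here is a small sketch (our assumptions, using PuLP for illustration; not the paper's code) of the standard big-M MIP encoding of a ReLU layer, where zero weights left by magnitude pruning are simply skipped, yielding sparser constraints:

```python
import numpy as np
import pulp

def encode_relu_layer(model, x_vars, W, b, big_m=100.0, name="l1"):
    # Standard big-M encoding of y_j = max(0, w_j . x + b_j):
    #   y >= pre,  y <= pre + M(1 - z),  y <= M z,  y >= 0,  z binary,
    # where big_m must upper-bound |pre| for the encoding to be valid.
    y_vars = []
    for j, (w_row, bj) in enumerate(zip(W, b)):
        y = pulp.LpVariable(f"{name}_y{j}", lowBound=0)
        z = pulp.LpVariable(f"{name}_z{j}", cat="Binary")
        # Pruned (zero) weights are skipped, so constraints get sparser.
        pre = pulp.lpSum(float(w) * x
                         for w, x in zip(w_row, x_vars) if w != 0.0) + float(bj)
        model += y >= pre
        model += y <= pre + big_m * (1 - z)
        model += y <= big_m * z
        y_vars.append(y)
    return y_vars

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W[np.abs(W) < 0.8] = 0.0  # magnitude pruning: drop small weights
b = rng.normal(size=8)

model = pulp.LpProblem("pruned_relu", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(8)]
y = encode_relu_layer(model, x, W, b)
print("weights kept after pruning:", int(np.count_nonzero(W)), "of", W.size)
```

Each surviving nonzero weight contributes one term to the constraints, so pruning directly shrinks the formulation the solver must handle; neurons whose entire row is pruned away could additionally drop their binary variable.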