Andrea Lodi

Associate Academic Member
Adjunct Professor, Polytechnique Montréal, Mathematics and Industrial Engineering Department
Founder and Scientific Director, IVADO Labs

Biography

Andrea Lodi is an adjunct professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montréal, and founder and scientific director of IVADO Labs.

Since 2014, Lodi has held the Canada Excellence Research Chair in Data Science for Real-Time Decision-Making at Polytechnique Montréal, Canada's leading research chair in the field of operations research.

Internationally recognized for his work on mixed-integer linear and nonlinear programming, Lodi focuses on developing new models and algorithms for processing massive amounts of data from multiple sources quickly and efficiently. These models and algorithms are intended to support optimized real-time decision-making strategies. As Chair, his goal is to apply this expertise in a range of sectors, including energy, transport, health, production and supply chain logistics management.

Lodi holds a PhD in systems engineering (2000) and is a full professor of operations research in the Department of Electrical, Electronic and Information Engineering at the University of Bologna. He coordinates large-scale European operations research projects and has worked as a consultant for the CPLEX R&D team at IBM since 2006. Lodi has published over seventy articles in major mathematical programming journals and has served as an associate editor for many of them.

His many honours include a 2010 Google Faculty Research Award and a 2011 IBM Faculty Award, and he held a prestigious Herman Goldstine Postdoctoral Fellowship at the IBM Thomas J. Watson Research Center in 2005–2006.

Publications

An improved column-generation-based matheuristic for learning classification trees
Krunal Kishor Patel
Guy Desaulniers
Increasing schedule reliability in the multiple depot vehicle scheduling problem with stochastic travel time
Léa Ricard
Guy Desaulniers
Louis-Martin Rousseau
Reinforcement learning for freight booking control problems
Justin Dumouchelle
An Exact Method for (Constrained) Assortment Optimization Problems with Product Costs
Markus Leitner
Roberto Roberti
Claudio Sole
Recovering Dantzig–Wolfe Bounds by Cutting Planes
Rui Chen
Oktay Günlük
Leveraging Dantzig–Wolfe Decomposition in the Original Variable Space for Mixed-Integer Programming
Dantzig–Wolfe decomposition has been extensively applied to solve large-scale mixed-integer programs with decomposable structures, leading to exact solution approaches such as branch and price. However, these approaches require solving the problem in an extended variable space and are not readily available in off-the-shelf solvers. In “Recovering Dantzig–Wolfe Bounds by Cutting Planes,” Chen, Günlük, and Lodi propose a computationally effective approach for generating cutting planes from Dantzig–Wolfe decomposition to enhance branch and cut in the space of the original variables. The proposed approach requires a relatively small number of cutting planes to recover the strength of the Dantzig–Wolfe dual bound and should be easy to implement in general-purpose mixed-integer programming solvers. The authors show that these cutting planes typically lead to a formulation with lower dual degeneracy and, hence, better computational performance than naïve approaches such as the objective function cut.
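For context, the sketch below (not the authors' code; the toy instance and the brute-force enumeration of the block's integer points are illustrative assumptions) computes the Dantzig–Wolfe dual bound for a small knapsack-structured problem and compares it with the plain LP relaxation bound; the gap between the two is what the paper's cutting planes close while staying in the original variable space.
```python
# Toy sketch of the Dantzig-Wolfe (DW) dual bound, assuming the problem
#   min c.x   s.t.   A x >= b   (linking constraints),   x in X,
#   X = {x in {0,1}^n : w.x <= W}   (a knapsack-type block).
# The DW bound optimizes over conv(X) intersected with the linking
# constraints, so it is at least as strong as the plain LP relaxation.

import itertools
import numpy as np
from scipy.optimize import linprog

# --- toy data (illustrative, not from the paper) -----------------------------
c = np.array([3.0, 5.0, 4.0, 6.0])            # objective coefficients
A = np.array([[1.0, 2.0, 1.0, 3.0]])          # linking constraints A x >= b
b = np.array([4.0])
w = np.array([2.0, 3.0, 2.0, 4.0]); W = 6.0   # block constraint w.x <= W
n = len(c)

# --- enumerate the block's integer points (the columns of the DW master) ----
points = [np.array(p) for p in itertools.product([0, 1], repeat=n)
          if w @ np.array(p) <= W]

# --- DW master: min sum_k (c.x_k) l_k  s.t.  sum_k (A x_k) l_k >= b, sum l = 1
costs = np.array([c @ x for x in points])
Acols = np.column_stack([A @ x for x in points])    # one column per point
master = linprog(costs,
                 A_ub=-Acols, b_ub=-b,              # A x >= b  ->  -A x <= -b
                 A_eq=np.ones((1, len(points))), b_eq=[1.0],
                 bounds=[(0, None)] * len(points), method="highs")

# --- plain LP relaxation, for comparison ------------------------------------
lp = linprog(c, A_ub=np.vstack([-A, w[None, :]]), b_ub=np.hstack([-b, [W]]),
             bounds=[(0, 1)] * n, method="highs")

print(f"LP relaxation bound: {lp.fun:.2f}")      # 8.50 on this instance
print(f"Dantzig-Wolfe bound: {master.fun:.2f}")  # 9.00 -- strictly tighter
```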
Operational Research: methods and applications
Fotios Petropoulos
Gilbert Laporte
Emel Aktas
Sibel A. Alumur
Claudia Archetti
Hayriye Ayhan
Maria Battarra
Julia A. Bennell
Jean-Marie Bourjolly
John E. Boylan
Michèle Breton
David Canca
Bo Chen
Cihan Tugrul Cicek
Louis Anthony Cox
Christine S.M. Currie
Erik Demeulemeester
Li Ding
Stephen M. Disney
Matthias Ehrgott
Martin J. Eppler
Güneş Erdoğan
Bernard Fortz
L. Alberto Franco
Jens Frische
Salvatore Greco
Amanda J. Gregory
Raimo P. Hämäläinen
Willy Herroelen
Mike Hewitt
Jan Holmström
John N. Hooker
Tuğçe Işık
Jill Johnes
Bahar Y. Kara
Özlem Karsu
Katherine Kent
Charlotte Köhler
Martin Kunc
Yong-Hong Kuo
Judit Lienert
Adam N. Letchford
Janny Leung
Dong Li
Haitao Li
Ivana Ljubić
Sebastián Lozano
Virginie Lurkin
Silvano Martello
Ian G. McHale
Gerald Midgley
John D.W. Morecroft
Akshay Mutha
Ceyda Oğuz
Sanja Petrovic
Ulrich Pferschy
Harilaos N. Psaraftis
Sam Rose
Lauri Saarinen
Said Salhi
Jing-Sheng Song
Dimitrios Sotiros
Kathryn E. Stecke
Arne K. Strauss
İstenç Tarhan
Clemens Thielen
Paolo Toth
Greet Vanden Berghe
Christos Vasilakis
Vikrant Vaze
Daniele Vigo
Kai Virtanen
Xun Wang
Rafał Weron
Leroy White
Tom Van Woensel
Mike Yearworth
E. Alper Yıldırım
Georges Zaccour
Xuying Zhao
Throughout its history, Operational Research has evolved to include a variety of methods, models and algorithms that have been applied to a diverse and wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise the up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion. It should be used as a point of reference or first-port-of-call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the 2023 Turkey/Syria earthquake victims. We sincerely hope that advances in OR will play a role towards minimising the pain and suffering caused by this and future catastrophes.
When Nash Meets Stackelberg
Gabriele Dragotto
Felipe Feijoo
Sriram Sankaranarayanan
Deep Neural Networks pruning via the Structured Perspective Regularization
Matteo Cacciola
Antonio Frangioni
Xinlin Li
A framework for fair decision-making over time with time-invariant utilities
Sriram Sankaranarayanan
Guanyi Wang
A machine learning framework for neighbor generation in metaheuristic search
De-You Liu
Defeng Liu
Vincent Perreault
Alain Hertz
This paper presents a methodology for integrating machine learning techniques into metaheuristics for solving combinatorial optimization problems. Namely, we propose a general machine learning framework for neighbor generation in metaheuristic search. We first define an efficient neighborhood structure constructed by applying a transformation to a selected subset of variables from the current solution. The key to the proposed methodology is then to generate promising neighbors by selecting a proper subset of variables that contains a descent direction for the objective in the solution space. To learn a good variable selection strategy, we formulate the problem as a classification task that exploits structural information from the characteristics of the problem and from high-quality solutions. We validate our methodology on two metaheuristic applications: a Tabu Search scheme for solving a Wireless Network Optimization problem and a Large Neighborhood Search heuristic for solving Mixed-Integer Programs. The experimental results show that our approach achieves a satisfactory trade-off between the exploration of a larger solution space and the exploitation of high-quality solution regions in both applications.
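As a deliberately simplified illustration of framing variable selection as classification, the sketch below labels a variable as promising when flipping it improves a toy soft-constrained max-weight objective, trains a classifier on simple hand-crafted features, and then uses it to generate neighbors on a fresh instance; the problem, features, and model choice are assumptions for illustration, not the paper's setup.
```python
# Toy sketch of learned variable selection for neighbor generation; the
# instance, features, and classifier below are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
PEN, CAP = 2.0, 10  # soft capacity penalty and capacity (toy values)

def objective(x, w):
    # toy objective: maximize w.x, with a soft penalty above the capacity
    return w @ x - PEN * max(0.0, x.sum() - CAP)

def features(x, w):
    # per-variable features: weight, current value, normalized weight rank
    ranks = np.argsort(np.argsort(-w)) / len(w)
    return np.column_stack([w, x, ranks])

# --- training data: label a variable 1 if flipping it improves the objective
X_train, y_train = [], []
for _ in range(200):
    w = rng.uniform(0, 1, 20)
    x = rng.integers(0, 2, 20).astype(float)
    base, feats = objective(x, w), features(x, w)
    for i in range(20):
        x[i] = 1 - x[i]                              # try the flip...
        y_train.append(int(objective(x, w) > base))  # ...does it improve?
        x[i] = 1 - x[i]                              # undo the flip
        X_train.append(feats[i])

clf = LogisticRegression(max_iter=1000).fit(np.array(X_train),
                                            np.array(y_train))

def learned_neighbor(x, w, k=3):
    """Flip the k variables the classifier deems most likely to improve."""
    scores = clf.predict_proba(features(x, w))[:, 1]
    idx = np.argsort(-scores)[:k]
    nbr = x.copy()
    nbr[idx] = 1 - nbr[idx]
    return nbr

# --- simple descent using learned neighbor generation on a fresh instance ---
w, x = rng.uniform(0, 1, 20), np.zeros(20)
for _ in range(30):
    cand = learned_neighbor(x, w)
    if objective(cand, w) > objective(x, w):
        x = cand
print("final objective:", round(objective(x, w), 3))
```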
Structured Pruning of Neural Networks for Constraints Learning
Matteo Cacciola
Antonio Frangioni
In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity across diverse applications, including cancer treatment, algorithmic configuration, and chemical process optimization. In this domain, the combination of ML and OR often relies on representing the ML model output using Mixed Integer Programming (MIP) formulations. Numerous studies in the literature have developed such formulations for many ML predictors, with a particular emphasis on Artificial Neural Networks (ANNs) due to their significant interest in many applications. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve, thereby impeding scalability. The ML community has already introduced several techniques to reduce the parameter count of ANNs without compromising their performance, since the substantial size of modern ANNs significantly increases the computational effort of training and the memory required for storage. In this paper, we showcase the effectiveness of pruning, one of these techniques, when applied to ANNs prior to their integration into MIPs. By pruning the ANN, we achieve significant improvements in the speed of the solution process. We discuss why pruning is more suitable in this context than other ML compression techniques, and we identify the most appropriate pruning strategies. To highlight the potential of this approach, we conduct experiments using feed-forward neural networks with multiple layers to construct adversarial examples. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances.
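To make the mechanism concrete, here is a minimal numpy sketch of neuron-level magnitude pruning applied before a MIP embedding; the random toy network and the L1 saliency rule are illustrative assumptions (a real pipeline would prune a trained network and fine-tune it to preserve accuracy).
```python
# Toy sketch of structured (neuron-level) pruning before a MIP embedding.
# The network and saliency rule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# toy 1-hidden-layer ReLU network: y = W2 @ relu(W1 @ x + b1) + b2
n_in, n_hidden, n_out = 4, 16, 1
W1 = rng.normal(size=(n_hidden, n_in)); b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_out, n_hidden)); b2 = rng.normal(size=n_out)

def forward(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Structured pruning: drop whole hidden neurons with the smallest combined
# L1 weight magnitude. In a standard big-M MIP encoding of a ReLU network,
# each hidden neuron contributes one binary variable, so removing neurons
# directly shrinks the formulation the MIP solver must handle.
saliency = np.abs(W1).sum(axis=1) + np.abs(W2).sum(axis=0)
keep = np.sort(np.argsort(-saliency)[: n_hidden // 2])  # keep the top half

W1p, b1p, W2p = W1[keep], b1[keep], W2[:, keep]

x = rng.normal(size=n_in)
print("dense output :", forward(x, W1, b1, W2, b2))
print("pruned output:", forward(x, W1p, b1p, W2p, b2))
print(f"binary variables in a big-M ReLU MIP: {n_hidden} -> {len(keep)}")
```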