
Foutse Khomh

Associate Academic Member
Canada CIFAR AI Chair
Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Research Topics
Learning to Program
Reinforcement Learning
Deep Learning
Data Mining
Generative Models
Distributed Systems
Natural Language Processing

Biography

Foutse Khomh is a Full Professor of Software Engineering at Polytechnique Montréal, holder of a Canada CIFAR AI Chair in Trustworthy Machine Learning Software Systems, and holder of an FRQ-IVADO Research Chair in Software Quality Assurance for Machine Learning Applications.

He received a PhD in Software Engineering from Université de Montréal in 2011, with an excellence award. He also received the CS-Can/Info-Can Outstanding Young Computer Science Researcher Prize in 2019. His research focuses on software maintenance and evolution, machine learning systems engineering, cloud engineering, and dependable and trustworthy AI/ML.

His work has earned four ten-year Most Influential Paper awards and six Best/Distinguished Paper awards. He has served on the steering committees of several conferences: SANER (as chair), MSR, PROMISE, ICPC (as chair), and ICSME (as vice-chair). He initiated and co-organized the Software Engineering for Machine Learning Applications (SEMLA) symposium and the Release Engineering (RELENG) workshop series.

He co-founded the NSERC CREATE SE4AI project (A Training Program on the Development, Deployment, and Servicing of Artificial Intelligence-based Software Systems) and is a principal investigator of the Dependable Explainable Learning (DEEL) project. He is also a co-founder of Confiance IA Québec, Quebec's initiative on trustworthy AI. He serves on the editorial boards of several international software engineering journals (including IEEE Software, EMSE, and JSEP) and is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE).

Current Students

Research Master's - Polytechnique
PhD - Polytechnique
PhD - Polytechnique
Postdoctorate - Polytechnique
Co-supervisor:
Postdoctorate - Polytechnique
Research Master's - Polytechnique
PhD - Polytechnique
Research Master's - Polytechnique

Publications

Fault Localization in Deep Learning-based Software: A System-level Approach
Mohammad Mehdi Morovati
Amin Nikanjam
Over the past decade, Deep Learning (DL) has become an integral part of our daily lives. This surge in DL usage has heightened the need for developing reliable DL software systems. Given that fault localization is a critical task in reliability assessment, researchers have proposed several fault localization techniques for DL-based software, primarily focusing on faults within the DL model. While the DL model is central to DL components, other elements also significantly affect a component's performance. As a result, fault localization methods that concentrate solely on the DL model overlook a large portion of the system. To address this, we introduce FL4Deep, a system-level fault localization approach that considers the entire DL development pipeline to effectively localize faults across DL-based systems. In an evaluation using 100 faulty DL scripts, FL4Deep outperformed four previous approaches in accuracy for three of six DL-related fault types: data-related issues (84%), mismatched libraries between training and deployment (100%), and loss-function faults (69%). FL4Deep also achieved superior precision and recall in fault localization for five fault categories: the three above, plus insufficient training iterations and activation-function faults.
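To make the system-level idea concrete, here is a minimal sketch (not FL4Deep's implementation) of one pipeline-level check the abstract mentions: flagging library-version mismatches between the training and deployment environments. The snapshot format and all names are assumptions for illustration.

```python
# Minimal sketch, assuming library versions are snapshotted per environment
# (e.g., exported via `pip freeze`). Not FL4Deep itself; one example of a
# system-level check outside the DL model.
import json

def load_environment(path: str) -> dict:
    """Load a {library: version} snapshot from a JSON file (hypothetical format)."""
    with open(path) as f:
        return json.load(f)

def diff_environments(train_env: dict, deploy_env: dict) -> list[str]:
    """Report libraries whose versions differ or that are missing on one side."""
    issues = []
    for lib in sorted(set(train_env) | set(deploy_env)):
        t, d = train_env.get(lib), deploy_env.get(lib)
        if t != d:
            issues.append(f"{lib}: training={t!r} vs deployment={d!r}")
    return issues

if __name__ == "__main__":
    train = {"torch": "2.1.0", "numpy": "1.26.0"}
    deploy = {"torch": "2.0.1", "numpy": "1.26.0"}
    for issue in diff_environments(train, deploy):
        print("Potential fault source:", issue)
```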
Impact of LLM-based Review Comment Generation in Practice: A Mixed Open-/Closed-source User Study
Doriane Olewicki
Léuson M. P. Da Silva
Suhaib Mujahid
Arezou Amini
Benjamin Mah
Marco Castelluccio
Sarra Habchi
Bram Adams
We conduct a large-scale empirical user study in a live setup to evaluate the acceptance of LLM-generated comments and their impact on the review process. This user study was performed in two organizations, Mozilla (which has its codebase available as open source) and Ubisoft (fully closed-source). Inside their usual review environment, participants were given access to RevMate, an LLM-based assistive tool that suggests generated review comments, using an off-the-shelf LLM with Retrieval-Augmented Generation to provide extra code and review context, combined with an LLM-as-a-Judge to auto-evaluate the generated comments and discard irrelevant ones. Based on more than 587 patch reviews provided by RevMate, we observed that 8.1% and 7.2% of LLM-generated comments, respectively, were accepted by reviewers in each organization, while another 14.6% and 20.5% were still marked as valuable review or development tips. Refactoring-related comments are more likely to be accepted than functional comments (18.2% and 18.6% compared to 4.8% and 5.2%). The extra time reviewers spent inspecting generated comments or editing accepted ones (36/119), an overall median of 43s per patch, is reasonable. Accepted generated comments are as likely as human-written comments to yield future revisions of the revised patch (74% vs. 73% at chunk level).
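The generate-then-filter pattern the study describes (draft comments with a RAG-augmented LLM, then use an LLM-as-a-Judge to discard irrelevant ones) can be sketched as follows. This is illustrative only, not RevMate's code; `call_llm` is a hypothetical stand-in for whatever chat-completion client is available, and the prompts and score threshold are assumptions.

```python
# Illustrative sketch of the generate-then-judge filtering pattern.
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in an actual LLM client here.
    raise NotImplementedError("connect an LLM client")

def generate_review_comments(diff: str, context: str) -> list[str]:
    # RAG step: `context` would hold retrieved code and past-review examples.
    prompt = (f"Context:\n{context}\n\nDiff:\n{diff}\n\n"
              "Write review comments, one per line.")
    return [c for c in call_llm(prompt).splitlines() if c.strip()]

def judge_comment(diff: str, comment: str, threshold: int = 7) -> bool:
    # LLM-as-a-Judge step: score relevance and keep only high-scoring drafts.
    prompt = (f"Diff:\n{diff}\n\nComment: {comment}\n\n"
              "Rate this comment's relevance from 1 to 10. Reply with a number only.")
    try:
        return int(call_llm(prompt).strip()) >= threshold
    except ValueError:
        return False  # unparseable judgment: discard conservatively

def review(diff: str, context: str) -> list[str]:
    """Return only the generated comments the judge considers relevant."""
    return [c for c in generate_review_comments(diff, context)
            if judge_comment(diff, c)]
```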
Towards Enhancing the Reproducibility of Deep Learning Bugs: An Empirical Study
Mehil B. Shah
Mohammad Masudur Rahman
Towards Optimizing SQL Generation via LLM Routing
Mohammadhossein Malekpour
Nour Shaheen
Amine Mhedhbi
Text-to-SQL enables users to interact with databases through natural language, simplifying access to structured data. Although highly capable large language models (LLMs) achieve strong accuracy for complex queries, they incur unnecessary latency and dollar cost for simpler ones. In this paper, we introduce the first LLM routing approach for Text-to-SQL, which dynamically selects the most cost-effective LLM capable of generating accurate SQL for each query. We present two routing strategies (score- and classification-based) that achieve accuracy comparable to the most capable LLM while reducing costs. We design the routers for ease of training and efficient inference. In our experiments, we highlight a practical and explainable accuracy-cost trade-off on the BIRD dataset.
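A minimal sketch of the classification-based routing idea, assuming a binary training signal per question (whether the cheap model's SQL was judged correct): a lightweight classifier decides when a query can be served by the cheaper model and when it should be escalated. Model names, features, and data are illustrative, not the paper's implementation.

```python
# Sketch of classification-based LLM routing for Text-to-SQL.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: questions labeled 1 if the cheap model's SQL was correct.
questions = ["list all users", "total sales per region in 2023",
             "top 5 customers by lifetime value with churn flag", "count orders"]
cheap_model_ok = [1, 1, 0, 1]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(questions, cheap_model_ok)

def route(question: str) -> str:
    """Return the cheapest model expected to generate accurate SQL."""
    return "cheap-llm" if router.predict([question])[0] else "strong-llm"

print(route("count orders"))                      # likely served cheaply
print(route("top customers by lifetime value"))   # likely escalated
```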
Trained Without My Consent: Detecting Code Inclusion In Language Models Trained on Code
Vahid Majdinasab
Amin Nikanjam
Code auditing ensures that the developed code adheres to standards, regulations, and copyright protection by verifying that it does not contain code from protected sources. The recent advent of Large Language Models (LLMs) as coding assistants in the software development process poses new challenges for code auditing. The dataset for training these models is mainly collected from publicly available sources. This raises the issue of intellectual property infringement, as developers' code is already included in the dataset. Auditing code developed using LLMs is therefore challenging: it is difficult to reliably assert whether an LLM used during development was trained on specific copyrighted code, given that we do not have access to the training datasets of these models. Given this non-disclosure, traditional approaches such as code clone detection are insufficient for asserting copyright infringement. To address this challenge, we propose a new approach, TraWiC: a model-agnostic and interpretable method based on membership inference for detecting code inclusion in an LLM's training dataset. We extract syntactic and semantic identifiers unique to each program to train a classifier for detecting code inclusion. In our experiments, we observe that TraWiC detects 83.87% of the code used to train an LLM, whereas the prevalent clone detection tool NiCad detects only 47.64%. In addition to its strong performance, TraWiC has low resource overhead, in contrast to the pair-wise clone detection that tools like the CodeWhisperer reference tracker conduct across thousands of code snippets during auditing.
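A rough sketch of the kind of signal such a membership-inference audit can build on (details differ from TraWiC itself): extract identifiers from a program, mask them one at a time, and measure how often the audited model reconstructs them. A high reconstruction rate would feed a classifier as evidence of training-set inclusion. `query_model` is a hypothetical stand-in for the LLM under audit.

```python
# Sketch of an identifier-reconstruction signal for membership inference.
import ast

def extract_identifiers(source: str) -> set[str]:
    """Collect variable and function names as candidate fingerprints."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            names.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            names.add(node.name)
    return names

def query_model(masked_source: str) -> str:
    # Hypothetical stand-in: plug in the LLM being audited.
    raise NotImplementedError("connect the model under audit")

def reconstruction_rate(source: str) -> float:
    """Fraction of masked identifiers the audited model fills back in."""
    idents = extract_identifiers(source)
    hits = 0
    for name in idents:
        completion = query_model(source.replace(name, "<MASK>", 1))
        hits += name in completion
    return hits / len(idents) if idents else 0.0
```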
Tracing Optimization for Performance Modeling and Regression Detection
Kaveh Shahedi
Heng Li
Maxime Lamothe
Software performance modeling plays a crucial role in developing and maintaining software systems. A performance model analytically describes the relationship between the performance of a system and its runtime activities. This process typically examines various aspects of a system's runtime behavior, such as the execution frequency of functions or methods, to forecast performance metrics like program execution time. By using performance models, developers can predict expected performance and thereby effectively identify and address unexpected performance regressions when actual performance deviates from the model's predictions. One common and precise method for capturing performance behavior is software tracing, which involves instrumenting the execution of a program, either at the kernel level (e.g., system calls) or application level (e.g., function calls). However, due to the nature of tracing, it can be highly resource-intensive, making it impractical for production environments where resources are limited. In this work, we propose statistical approaches to reduce tracing overhead by identifying and excluding performance-insensitive code regions, particularly application-level functions, from tracing while still building accurate performance models that can capture performance degradations. By selecting an optimal set of functions to be traced, we can construct optimized performance models that achieve an R² score of up to 99% and sometimes outperform full tracing models (models using non-optimized tracing data), while significantly reducing the tracing overhead, by more than 80% in most cases. Our optimized performance models can also effectively capture performance regressions in our studied programs, demonstrating their usefulness in real-world scenarios. Our approach is fully automated, making it ready to be used in production environments with minimal human effort.
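The core idea, selecting a small set of performance-sensitive functions to trace and validating that a model fit on their call counts still predicts runtime, can be sketched as follows on synthetic data. This illustrates the general approach only, not the authors' tooling; the statistical test and cutoff are assumptions.

```python
# Sketch: pick performance-sensitive functions statistically, then check
# that a reduced performance model still predicts runtime (held-out R^2).
import numpy as np
from sklearn.feature_selection import f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_runs, n_funcs = 200, 50
X = rng.poisson(20, size=(n_runs, n_funcs)).astype(float)  # call counts per run
# Synthetic ground truth: only a few functions actually drive execution time.
y = 3.0 * X[:, 0] + 1.5 * X[:, 7] + rng.normal(0, 2, n_runs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank functions by their statistical association with runtime...
scores, _ = f_regression(X_tr, y_tr)
keep = np.argsort(scores)[-5:]  # trace only the top-5 sensitive functions

# ...and verify the reduced model still predicts performance well.
r2 = LinearRegression().fit(X_tr[:, keep], y_tr).score(X_te[:, keep], y_te)
print(f"functions traced: {len(keep)}/{n_funcs}, held-out R^2 = {r2:.3f}")
```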
Doctoral Symposium Committee
Anthony Cleve
Christian Lange
Silvia Breu
Manar H. Alalfi
Mario Luca Bernardi
Cornelia Boldyreff
Marco D'Ambros
Simon Denier
Natalia Dragan
Ekwa Duala-Ekoko
Fausto Fasano
Adnane Ghannem
Carmine Gravino
Maen Hammad
Imed Hammouda
Salima Hassaine
Yue Jia
Zhen Ming (Jack) Jiang
Adam Kiezun
Jay Kothari
Jonathan Memaitre
Naouel Moha
Rocco Oliveto
Denys Poshyvanyk
Michele Risi
Giuseppe Scanniello
Bonita Sharif
Andrew Sutton
Anis Yousefi
Eugenio Zimeo
In-Simulation Testing of Deep Learning Vision Models in Autonomous Robotic Manipulators
Dmytro Humeniuk
Houssem Ben Braiek
Thomas Reid
LIBS-Raman Multimodal Architecture for Automated Lunar Prospecting
Jérôme Pigeon
Richard Boudreault
Ahmed Ashraf
P. Maghoul