
Foutse Khomh

Associate Academic Member
Canada CIFAR AI Chair
Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Research Topics
Learning to Program
Reinforcement Learning
Deep Learning
Data Mining
Generative Models
Distributed Systems
Natural Language Processing

Biography

Foutse Khomh is a full professor of software engineering at Polytechnique Montréal, holder of a Canada CIFAR AI Chair in trustworthy machine learning software systems, and holder of an FRQ-IVADO research chair on software quality assurance for machine learning applications.

He received a Ph.D. in software engineering from Université de Montréal in 2011, along with an award of excellence. He was also awarded the CS-Can/Info-Can Outstanding Young Computer Science Researcher Prize in 2019. His research interests include software maintenance and evolution, machine learning systems engineering, cloud engineering, and dependable and trustworthy AI/ML.

His work has earned four Ten-Year Most Influential Paper awards and six Best/Distinguished Paper awards. He has served on the steering committees of several conferences and venues: SANER (as chair), MSR, PROMISE, ICPC (as chair), and ICSME (as vice-chair). He initiated and co-organized the Software Engineering for Machine Learning Applications (SEMLA) symposium and the Release Engineering (RELENG) workshop series.

He is co-founder of the NSERC CREATE SE4AI project (A Training Program on the Development, Deployment, and Servicing of Artificial Intelligence-based Software Systems) and one of the principal investigators of the Dependable Explainable Learning (DEEL) project. He is also a co-founder of the Quebec initiative on trustworthy AI (Confiance IA Québec). He serves on the editorial boards of several international software engineering journals (including IEEE Software, EMSE, and JSEP) and is a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE).

Current Students

Alumni Collaborator - Polytechnique
PhD - Polytechnique
PhD - Polytechnique
Postdoctorate - Polytechnique
Co-supervisor:
Master's Research - Polytechnique
PhD - Polytechnique
Master's Research - Polytechnique

Publications

Towards Assessing Deep Learning Test Input Generators
Seif Mzoughi
Ahmed Haj Yahmed
Mohamed Elshafei
Diego Elias Costa
Representation Improvement in Latent Space for Search-Based Testing of Autonomous Robotic Systems
Dmytro Humeniuk
Understanding the impact of IoT security patterns on CPU usage and energy consumption: a dynamic approach for selecting patterns with deep reinforcement learning
Saeid Jamshidi
Amin Nikanjam
Kawser Wazed Nafi
Unveiling Inefficiencies in LLM-Generated Code: Toward a Comprehensive Taxonomy
Altaf Allah Abbassi
Leuson Da Silva
Amin Nikanjam
Self-adaptive cyber defense for sustainable IoT: A DRL-based IDS optimizing security and energy efficiency
Saeid Jamshidi
Ashkan Amirnia
Amin Nikanjam
Kawser Wazed Nafi
Samira Keivanpour
Assessing the adoption of security policies by developers in terraform across different cloud providers
Alexandre Verdet
Mohammad Hamdaqa
Leuson Da Silva
AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons
Shaona Ghosh
Heather Frase
Adina Williams
Sarah Luger
Paul Rottger
Fazl Barez
Sean McGregor
Kenneth Fricklas
Mala Kumar
Quentin Feuillade-Montixi
Kurt Bollacker
Felix Friedrich
Ryan Tsang
Bertie Vidgen
Alicia Parrish
Chris Knotz
Eleonora Presani
Jonathan Bennion
Marisa Ferrara Boston
Mike Kuniavsky
Wiebke Hutiri
James Ezick
Malek Ben Salem
Rajat Sahay
Sujata Goswami
Usman Gohar
Ben Huang
Supheakmungkol Sarin
Elie Alhajjar
Canyu Chen
Roman Eng
K. Manjusha
Virendra Mehta
Eileen Peters Long
Murali Krishna Emani
Natan Vidra
Benjamin Rukundo
Abolfazl Shahbazi
Kongtao Chen
Rajat Ghosh
Vithursan Thangarasa
Pierre Peigné
Abhinav Singh
Max Bartolo
Satyapriya Krishna
Mubashara Akhtar
Rafael Gold
Cody Coleman
Luis Oala
Vassil Tashev
Joseph Marvin Imperial
Amy Russ
Sasidhar Kunapuli
Nicolas Miailhe
Julien Delaunay
Bhaktipriya Radharapu
Rajat Shinde
Tuesday
Debojyoti Dutta
Declan Grabb
Ananya Gangavarapu
Saurav Sahay
Agasthya Gangavarapu
Patrick Schramowski
Stephen Singam
Tom David
Xudong Han
Priyanka Mary Mammen
Tarunima Prabhakar
Venelin Kovatchev
Ahmed M. Ahmed
Kelvin Manyeki
Sandeep Madireddy
Fedor Zhdanov
Joachim Baumann
N. Vasan
Xianjun Yang
Carlos Mougan
Jibin Rajan Varghese
Hussain Chinoy
Seshakrishna Jitendar
Manil Maskey
Claire V. Hardgrove
Tianhao Li
Aakash Gupta
Emil Joswin
Yifan Mai
Shachi H. Kumar
Çigdem Patlak
Kevin Lu
Vincent Alessi
Sree Bhargavi Balija
Chenhe Gu
Robert Sullivan
James Gealy
Matt Lavrisa
James Goel
Peter Mattson
Percy Liang
Joaquin Vanschoren
Bugs in Large Language Models Generated Code: An Empirical Study
Florian Tambon
Arghavan Moradi Dakhel
Amin Nikanjam
Michel C. Desmarais
Giuliano Antoniol
Mock Deep Testing: Toward Separate Development of Data and Models for Deep Learning
Ruchira Manke
Mohammad Wardat
Hridesh Rajan
While deep learning (DL) has permeated and become an integral component of many critical software systems, software engineering research has not yet explored how to separately test the data and the models that DL approaches need to work effectively. The main challenge in independently testing these components arises from the tight dependency between data and models. This research explores this gap, introducing our methodology of mock deep testing for unit testing of DL applications. To enable unit testing, we introduce a design paradigm that decomposes the workflow into distinct, manageable components, minimizes sequential dependencies, and modularizes key stages of the DL pipeline. For unit testing these components, we propose modeling their dependencies using mocks. This modular approach facilitates independent development and testing of the components, ensuring comprehensive quality assurance throughout the development process. We have developed KUnit, a framework that enables mock deep testing for the Keras library. We empirically evaluated KUnit to determine the effectiveness of mocks. Our assessment of 50 DL programs obtained from Stack Overflow and GitHub shows that mocks effectively identified 10 issues in the data preparation stage and 53 issues in the model design stage. We also conducted a user study with 36 participants using KUnit to gauge the effectiveness of our approach. Participants using KUnit successfully resolved 25 issues in the data preparation stage and 38 issues in the model design stage. Our findings highlight that mock objects provide a lightweight emulation of the dependencies for unit testing, facilitating early bug detection. Lastly, to evaluate the usability of KUnit, we conducted a post-study survey. The results reveal that KUnit is helpful to DL application developers, enabling them to independently test each component effectively at different stages.
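The core idea, mocking the data-preparation stage so that the model-design stage can be unit-tested in isolation, can be illustrated with a minimal sketch. The example below uses Python's standard unittest.mock with Keras rather than KUnit itself, and the component names (build_model, the data_loader contract) are hypothetical illustrations, not the paper's actual API:

```python
# Illustrative sketch only: NOT KUnit's API. build_model and the
# data_loader contract are hypothetical, showing how a mock can stand
# in for the data-preparation stage while the model-design stage is
# unit-tested in isolation.
import unittest
from unittest import mock

import numpy as np
from tensorflow import keras


def build_model(input_dim: int) -> keras.Model:
    """Model-design stage under test (hypothetical example component)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(input_dim,)),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model


class TestModelDesign(unittest.TestCase):
    def test_model_trains_on_mocked_data(self):
        # Mock the data-preparation stage: emulate its output contract
        # (shapes and dtypes) without running the real pipeline.
        data_loader = mock.Mock()
        data_loader.load.return_value = (
            np.random.rand(8, 4).astype("float32"),   # features
            np.random.randint(0, 2, size=(8, 1)),     # labels
        )

        x, y = data_loader.load()
        model = build_model(input_dim=x.shape[1])

        # A single training step is enough to surface shape or
        # compilation bugs in the model-design stage early.
        history = model.fit(x, y, epochs=1, verbose=0)
        self.assertIn("loss", history.history)


if __name__ == "__main__":
    unittest.main()
```

Because the mock only emulates the data contract, the model-design test stays fast and deterministic, which is what makes this kind of early, stage-by-stage bug detection practical.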