
Foutse Khomh

Associate Academic Member
Canada CIFAR AI Chair
Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Research Topics
Data Mining
Deep Learning
Distributed Systems
Generative Models
Learning to Program
Natural Language Processing
Reinforcement Learning

Biography

Foutse Khomh is a full professor of software engineering at Polytechnique Montréal, a Canada CIFAR AI Chair – Trustworthy Machine Learning Software Systems, and an FRQ-IVADO Research Chair in Software Quality Assurance for Machine Learning Applications. Khomh completed a PhD in software engineering at Université de Montréal in 2011, for which he received an Award of Excellence. He was also awarded a CS-Can/Info-Can Outstanding Young Computer Science Researcher Prize in 2019.

His research interests include software maintenance and evolution, machine learning systems engineering, cloud engineering, and dependable and trustworthy ML/AI. His work has received four Ten-year Most Influential Paper (MIP) awards and six Best/Distinguished Paper awards. He has served on the steering committees of numerous software engineering conferences, including SANER (chair), MSR, PROMISE, ICPC (chair), and ICSME (vice-chair). He initiated and co-organized Polytechnique Montréal's Software Engineering for Machine Learning Applications (SEMLA) symposium and the RELENG (release engineering) workshop series.

Khomh co-founded the NSERC CREATE SE4AI: A Training Program on the Development, Deployment and Servicing of Artificial Intelligence-based Software Systems, and is a principal investigator for the DEpendable Explainable Learning (DEEL) project.

He also co-founded Confiance IA, a Quebec consortium focused on building trustworthy AI, and serves on the editorial boards of several international software engineering journals, including IEEE Software, EMSE, and JSEP. He is a Senior Member of the IEEE.

Current Students

Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
PhD - Polytechnique Montréal
PhD - Polytechnique Montréal
Postdoctorate - Polytechnique Montréal
Master's Research - Polytechnique Montréal
PhD - Polytechnique Montréal

Publications

Common Challenges of Deep Reinforcement Learning Applications Development: An Empirical Study
Mohammad Mehdi Morovati
Florian Tambon
Mina Taraghi
Amin Nikanjam
Data-access performance anti-patterns in data-intensive systems
Biruk Asmare Muse
Kawser Wazed Nafi
Giuliano Antoniol
Data-intensive systems handle variable, high-volume, and high-velocity data generated by humans and digital devices. Like traditional software, data-intensive systems are prone to technical debt introduced to cope with time and resource pressure on developers. Data access is a critical component of data-intensive systems, as it determines their overall performance and functionality. While data-access technical debt is receiving attention from the research community, technical debt affecting performance is not well investigated. Objective: Identify, categorize, and validate data-access performance issues in the context of NoSQL-based and polyglot-persistence data-intensive systems through a qualitative study. Method: We collect issues from NoSQL-based and polyglot-persistence open-source data-intensive systems, identify data-access performance issues using inductive coding, and build a taxonomy of their root causes. We then validate the perceived relevance of the newly identified performance issues through a developer survey.
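To make the kind of issue studied here concrete, the sketch below shows a common data-access performance anti-pattern, a per-item query loop, and its batched refactoring. This is a hypothetical illustration, not an example from the paper: the `orders` collection, the field names, and the local MongoDB connection are all invented for the sketch.

```python
# Hypothetical illustration of a data-access performance anti-pattern
# (an "N+1"-style per-item query loop), assuming a local MongoDB
# instance accessed through pymongo. All names here are invented.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]
customer_ids = ["c1", "c2", "c3"]

# Anti-pattern: one database round trip per customer.
slow = [orders.find_one({"customer_id": cid}) for cid in customer_ids]

# Refactoring: a single batched query with $in fetches the same data
# in one round trip, avoiding the per-item network latency.
fast = list(orders.find({"customer_id": {"$in": customer_ids}}))
```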
Enhancing Security and Energy Efficiency of Cyber-Physical Systems using Deep Reinforcement Learning
Saeid Jamshidi
Ashkan Amirnia
Amin Nikanjam
List Comprehension Versus for Loops Performance in Real Python Projects: Should we Care?
Cyrine Zid
François Belias
Massimiliano Di Penta
Giuliano Antoniol
List comprehensions are a Pythonic functional construct that lets developers concisely express loops that build and manipulate lists. Previous studies point to a speed gain when list comprehensions are adopted. This paper reports the results of a study comparing the execution-time performance of Python code written with list comprehensions against equivalent imperative code. To this end, we developed a set of transformation rules to map Python for loops into list comprehensions. On artificial code snippets, we found list comprehensions to be faster than procedural code, with differences becoming evident when amplifying the tests, i.e., executing the code fragment thousands of times. However, this does not hold when executing real-world Python projects, where performance may or may not improve depending on the projects' features and the nature of the manipulated objects.
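The comparison at the core of the study can be reproduced in miniature with the standard library's timeit module. The following is a minimal sketch of the loop-to-comprehension transformation, not the paper's benchmark harness or its transformation rules:

```python
# Minimal sketch of the loop-vs-comprehension comparison, timed with
# the standard library; not the paper's benchmark harness.
import timeit

def with_loop(n):
    # Imperative version: build the list with an explicit for loop.
    result = []
    for i in range(n):
        result.append(i * i)
    return result

def with_comprehension(n):
    # Equivalent list comprehension.
    return [i * i for i in range(n)]

# Amplify the test by executing each fragment many times, as the study does.
print(timeit.timeit(lambda: with_loop(1000), number=10_000))
print(timeit.timeit(lambda: with_comprehension(1000), number=10_000))
```

On artificial snippets like this one, the comprehension typically wins by a modest margin, consistent with the paper's observation that the gap only becomes evident when the fragment is executed thousands of times.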
Triage Software Update Impact via Release Notes Classification
Solomon Berhe
Vanessa Kan
Omhier Khan
Nathan Pader
Ali Zain Farooqui
Marc Maynard
Validation of Vigilance Decline Capability in a Simulated Test Environment: A Preliminary Step Towards Neuroadaptive Control
Andra Mahu
Amandeep Singh
Florian Tambon
Benoit Ouellette
Jean-François Delisle
Tanya Paul
Alexandre Marois
Philippe Doyon-Poulin
Vigilance is the ability to sustain attention, and it is crucial in tasks such as piloting and driving. However, cognitive performance often falters during prolonged tasks, leading to reduced efficiency, slower reactions, and an increased likelihood of error. Identifying and addressing diminished vigilance is therefore essential for enhancing driving safety. Neurophysiological indicators have shown promising results for monitoring vigilance, paving the way for neuroadaptive control of vigilance. In fact, the collection of vigilance-related physiological markers could allow neuroadaptive intelligent systems to adapt tasks in real time or present countermeasures to prevent errors that would ensue from such hypovigilant situations. Before reaching this goal, however, one must collect valid data truly representative of hypovigilance, which can in turn be used to develop prediction models of the vigilant state. This study serves as a proof of concept to assess the validity of a testbed for inducing and measuring vigilance decline in a simulated test environment, validating controlled induction and evaluating its impact on participants' performance and subjective experience. In total, 28 participants (10 females, 18 males) aged 18 to 35 (M = 23.75 years) were recruited. All participants held valid driving licenses and had corrected-to-normal vision. Data collection involved the Psychomotor Vigilance Task (PVT), the Karolinska Sleepiness Scale (KSS), and the Stanford Sleepiness Scale (SSS), along with specialized neurophysiological equipment: Enobio 8 EEG, Empatica E4, Polar H10, and a Tobii Nano Pro eye tracker. Notably, this study is limited to the results of the PVT, KSS, and SSS, with the aim of assessing the effectiveness of the test setup. Participants self-reported their loss of vigilance by pressing a marker on the steering wheel. To induce hypovigilance, participants drove an automatic car for 60 minutes in a low-traffic, monotonous environment featuring empty fields of grass and desert, employing specific in-game procedures. The driving task included instructions for lane keeping, indicator usage, and maintaining speeds of up to 80 km/h, with no traffic lights or stop signs present. Experiments were conducted before lunch, between 9 a.m. and 12 p.m., to ensure maximum participant alertness, with instructions to abstain from caffeine, alcohol, nicotine, and cannabis on the day of the experiment. Results showed that mean reaction time (RT) increased from 257.7 ms before driving to 276.8 ms after driving (t = 4.82, p < .0001, d = -0.61), whereas median RT changed from 246.07 ms to 260.89 ms (t = 3.58, p = 0.0013, d = -0.53), indicating a statistically significant alteration in participants' psychomotor performance. The mean number of minor attention lapses (RT > 500 ms) on the PVT increased from 1.11 before driving to 1.67 after driving, but the change was not statistically significant (t = 1.66, p = 0.11, d = -0.28). The KSS showed a considerable rise in sleepiness, with the mean increasing from 4.11 (rather alert) before driving to 5.96 (some signs of sleepiness) after driving (t = 5.65, p < .0001, d = -1.04). Similarly, the SSS mean increased from 2.57 (able to concentrate) before driving to 3.96 (somewhat foggy) after driving (t = 8.42, p < .0001, d = -1.20), signifying an increased perception of sleepiness following the driving activity. Lastly, the mean time of the first marker press was 17:38 minutes (SD = 9:47 minutes), indicating that the self-reported loss of vigilance occurred during the first 30 minutes of the driving task. The observed increase in PVT reaction time aligns with the decline in alertness reported on both the KSS and the SSS, suggesting a consistent decline in vigilance and alertness post-driving. In conclusion, the study underscores the effectiveness and validity of the simulated test environment in inducing vigilance decline, providing valuable insights into its impact on both objective and subjective measures. At the same time, the research sets the stage for exploring neuroadaptive control strategies aimed at enhancing task performance and safety. Ultimately, this will contribute to the development of a non-invasive artificial intelligence system capable of detecting vigilance states in extreme or challenging environments, e.g., for pilots and drivers.
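For readers unfamiliar with the statistics reported above, each pre/post comparison is a paired t-test together with Cohen's d for paired samples. The sketch below shows the computation; the reaction-time arrays are invented stand-ins, not the study's data:

```python
# Minimal sketch of the paired pre/post analysis reported above
# (paired t-test plus Cohen's d for paired samples); the reaction
# times here are invented stand-ins, not the study's data.
import numpy as np
from scipy import stats

rt_before = np.array([250.0, 255.0, 260.0, 248.0, 252.0])  # ms, pre-driving
rt_after = np.array([270.0, 268.0, 281.0, 265.0, 275.0])   # ms, post-driving

# Paired (repeated-measures) t-test across the same participants.
t_stat, p_value = stats.ttest_rel(rt_before, rt_after)

# Cohen's d for paired samples: mean difference over the SD of differences.
diff = rt_before - rt_after
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```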
Harnessing Pre-trained Generalist Agents for Software Engineering Tasks
Paulina Stevia Nouwou Mindom
Amin Nikanjam
Nowadays, we are witnessing an increasing adoption of Artificial Intelligence (AI) to develop techniques aimed at improving the reliability, effectiveness, and overall quality of software systems. Deep reinforcement learning (DRL) has recently been used successfully to automate complex tasks such as game testing and solving the job-shop scheduling problem. However, these specialized DRL agents, trained from scratch on specific tasks, lack generalizability to other tasks and need substantial time to be developed and re-trained effectively. Recently, DRL researchers have begun to develop generalist agents, able to learn a policy from various environments and capable of achieving performance similar to or better than that of specialist agents on new tasks. In the Natural Language Processing and Computer Vision domains, these generalist agents are showing promising adaptation to never-before-seen tasks after a light fine-tuning phase, achieving high performance. This paper investigates the potential of generalist agents for solving SE tasks. Specifically, we conduct an empirical study assessing the performance of two generalist agents on two important SE tasks: the detection of bugs in games (for two games) and the minimization of makespan in the job-shop scheduling problem (for two instances). Our results show that the generalist agents outperform the specialist agents with very little fine-tuning effort, achieving a 20% reduction of the makespan over specialist-agent performance on task-based scheduling. In the context of game testing, some generalist-agent configurations detect 85% more bugs than the specialist agents. Building on our analysis, we provide recommendations for researchers and practitioners looking to select generalist agents for SE tasks, to ensure that they perform effectively.
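The light fine-tuning workflow the abstract alludes to might look like the following sketch, which loads a pretrained policy and continues training it briefly on a new Gym-style environment using stable-baselines3. The checkpoint path is hypothetical, and the paper's actual generalist agents and tasks differ:

```python
# Illustrative sketch of light fine-tuning of a pretrained DRL policy
# on a new task, using stable-baselines3; "pretrained_agent.zip" is a
# hypothetical checkpoint, and this is not the paper's setup.
import gymnasium as gym
from stable_baselines3 import PPO

# A new target task, wrapped as a standard Gym environment.
env = gym.make("CartPole-v1")

# Load a policy trained elsewhere and attach it to the new environment.
model = PPO.load("pretrained_agent.zip", env=env)

# Light fine-tuning: continue training for a small number of timesteps
# instead of learning a policy from scratch.
model.learn(total_timesteps=10_000)
model.save("finetuned_agent.zip")
```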
Studying the Practices of Testing Machine Learning Software in the Wild
Moses Openja
Armstrong Foundjem
Zhen Ming Jiang
Mouna Abidi
Ahmed E. Hassan
Background: We are witnessing an increasing adoption of machine learning (ML), especially deep learning (DL), algorithms in many software systems, including safety-critical systems such as health-care systems and autonomous driving vehicles. Ensuring the software quality of these systems is still an open challenge for the research community, mainly due to the inductive nature of ML software systems. Traditionally, software systems were constructed deductively, by writing down the rules that govern the behavior of the system as program code; for ML software, these rules are instead inferred from training data. A few recent research advances in the quality assurance of ML systems have adapted concepts from traditional software testing, such as mutation testing, to help improve the reliability of ML software systems. However, it is unclear whether any of these proposed testing techniques are adopted in practice, and there is little empirical evidence about the testing strategies of ML engineers. Aims: To fill this gap, we perform the first fine-grained empirical study of ML testing practices in the wild, to identify the ML properties being tested, the testing strategies followed, and their implementation throughout the ML workflow. Method: First, we systematically summarized the testing strategies (e.g., Oracle Approximation), the tested ML properties (e.g., Correctness, Bias, and Fairness), and the testing methods (e.g., unit tests) reported in the literature. Then, we conducted a study to understand the practices of testing ML software. Results: 1) We identified four major categories of testing strategy (Grey-box, White-box, Black-box, and Heuristic-based techniques) that ML engineers use to find software bugs. 2) We identified 16 ML properties that are tested in the ML workflow.
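As a concrete, hypothetical illustration of one practice the study catalogues, the pytest-style sketch below unit-tests two ML properties, correctness and robustness to small input perturbations; the model, dataset, and thresholds are invented for the example:

```python
# Hypothetical unit tests for ML properties, in the spirit of the
# practices the study catalogues; the model, data split, and
# thresholds are invented for illustration.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def test_correctness_above_threshold():
    # Correctness property: held-out accuracy should not fall below a
    # pre-agreed bar (the 0.9 here is an arbitrary illustrative value).
    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    assert model.score(X_te, y_te) >= 0.9

def test_prediction_invariance_to_small_noise():
    # Robustness-style property: tiny input perturbations should not
    # flip predictions for most samples.
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    noisy = X + np.random.default_rng(0).normal(0, 1e-6, X.shape)
    agreement = (model.predict(X) == model.predict(noisy)).mean()
    assert agreement >= 0.99
```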
Harnessing Predictive Modeling and Software Analytics in the Age of LLM-Powered Software Development (Invited Talk)
Bug characterization in machine learning-based systems
Mohammad Mehdi Morovati
Amin Nikanjam
Florian Tambon
Z. Jiang
A Machine Learning Based Approach to Detect Machine Learning Design Patterns
Weitao Pan
Hironori Washizaki
Nobukazu Yoshioka
Yoshiaki Fukazawa
Yann-Gaël Guéhéneuc
As machine learning expands to various domains, the demand for reusable solutions to similar problems increases. Machine learning design patterns are reusable solutions to design problems in machine learning applications; they can significantly enhance the productivity of programmers working with machine learning algorithms. Given the critical role of machine learning design patterns, their automated detection becomes equally vital, since identifying design patterns manually can be time-consuming and error-prone. We propose an approach to detect their occurrences in Python files. Our approach uses the Abstract Syntax Tree (AST) of Python files to build a corpus of data and trains a refined Text-CNN model to automatically identify machine learning design patterns. We empirically validate our approach through an exploratory study detecting four common machine learning design patterns: Embedding, Multilabel, Feature Cross, and Hashed Feature. We manually label 450 Python code files containing these design patterns from the repositories of GitHub projects. Our approach achieves accuracy values ranging from 80% to 92% for each of the four patterns.
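A first step of such an approach, parsing a Python file into an AST and emitting token-like features for a text classifier to consume, might look like the sketch below. It is a deliberate simplification, not the paper's actual pipeline:

```python
# Simplified sketch of AST-based feature extraction for a text
# classifier; this is not the paper's actual pipeline.
import ast

def ast_tokens(source: str) -> list[str]:
    """Walk a Python file's AST and emit node-type and call-name tokens
    that a text classifier (e.g., a Text-CNN) could consume."""
    tree = ast.parse(source)
    tokens = []
    for node in ast.walk(tree):
        tokens.append(type(node).__name__)       # e.g., FunctionDef, Call
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Attribute):  # e.g., layers.Embedding(...)
                tokens.append(func.attr)
            elif isinstance(func, ast.Name):
                tokens.append(func.id)
    return tokens

code = "x = tf.keras.layers.Embedding(1000, 64)(inputs)"
print(ast_tokens(code))  # node types plus the call name 'Embedding'
```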
A large-scale exploratory study of Android sports apps in the Google Play Store
Bhagya Chembakottu
Heng Li