Publications

192 Investigation of the Dose Properties and Source to Source Variabilities in Xoft Source Model S7500
A. Esmaelbeigi
Jonathan Kalinowski
T. Vuong
241 Reduction of Metal Artifacts in 7T MRI for Pre-Clinical Diffusing Alpha-Emitting Radiation Therapy Rectal Studies
Mélodie Cyr
Behnaz Behmand
N. Chabaytah
Joud Babik
256 Patient-Specific Pre-Treatment Nuclei Size Distribution is of Significance for Post Radiation Therapy Locoregional Recurrence and Survival Outcomes
Yujing Zou
Magali Lecavalier-Barsoum
Manuela Pelmus
Farhad Maleki
259 Development of a Cost-Efficient Scintillation-Fiber Detector for Use in Automated Synthesis of Positron Emission Tomography Radiotracers
Hailey Ahn
Liam Carroll
Robert Hopewell
I-Huang Tsai
Impact of a vaccine passport on first-dose SARS-CoV-2 vaccine coverage by age and area-level social determinants of health in the Canadian provinces of Quebec and Ontario: an interrupted time series analysis
Jorge Luis Flores Anato
Huiting Ma
M. Hamilton
Yiqing Xia
Sam Harper
Marc Brisson
Michael P. Hillmer
Kamil A. Malikov
Aidin Kerem
Reed Beall
Caroline E Wagner
Étienne Racine
S. Baral
Ève Dubé
Sharmistha Mishra
Mathieu Maheu-Giroux
Integrating equity, diversity and inclusion throughout the lifecycle of AI within healthcare: a scoping review protocol
Milka Nyariro
Elham Emami
Pascale Caidor
Online Bayesian Optimization of Nerve Stimulation
Lorenz Wernisch
Tristan Edwards
Antonin Berthon
Olivier Tessier-Lariviere
Elvijs Sarkans
Myrta Stoukidi
Pascal Fortier-Poisson
Max Pinkney
Michael Thornton
Catherine Hanley
Susannah Lee
Joel Jennings
Ben Appleton
Philip Garsed
Bret Patterson
Will Buttinger
Samuel Gonshaw
Matjaž Jakopec
Sudhakaran Shunmugam
Jorin Mamen
Aleksi Tukiainen
Oliver Armitage
Emil Hewage
PP02 Presentation Time: 4:39 PM
Maryam Rahbaran
Jonathan Kalinowski
James Man Git Tsui
Joseph DeCunha
Kevin Croce
Brian Bergmark
Philip Devlin
Saturday, June 24, 2023, 8:30 AM - 9:30 AM, MSOR01 Presentation Time: 8:30 AM
Mélodie Cyr
N. Chabaytah
Joud Babik
Behnaz Behmand
WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series
Jean-Christophe Gagnon-Audet
Kartik Ahuja
Mohammad Javad Darvishi Bayazi
Pooneh Mousavi
Effective Test Generation Using Pre-trained Large Language Models and Mutation Testing
Arghavan Moradi Dakhel
Amin Nikanjam
Vahid Majdinasab
Michel C. Desmarais
One of the critical phases in software development is software testing. Testing helps with identifying potential bugs and reducing maintenance costs. The goal of automated test generation tools is to ease the development of tests by suggesting efficient bug-revealing tests. Recently, researchers have leveraged Large Language Models (LLMs) of code to generate unit tests. While the code coverage of generated tests is usually assessed, the literature has acknowledged that coverage is weakly correlated with the efficiency of tests in bug detection. To address this limitation, in this paper we introduce MuTAP, which improves the effectiveness of test cases generated by LLMs in terms of revealing bugs by leveraging mutation testing. Our goal is achieved by augmenting prompts with surviving mutants, as those mutants highlight the limitations of test cases in detecting bugs. MuTAP is capable of generating effective test cases in the absence of natural language descriptions of the Programs Under Test (PUTs). We employ different LLMs within MuTAP and evaluate their performance on different benchmarks. Our results show that our proposed method is able to detect up to 28% more faulty human-written code snippets. Among these, 17% remained undetected by both the current state-of-the-art fully automated test generation tool (i.e., Pynguin) and zero-shot/few-shot learning approaches on LLMs. Furthermore, MuTAP achieves a Mutation Score (MS) of 93.57% on synthetic buggy code, outperforming all other approaches in our evaluation. Our findings suggest that although LLMs can serve as a useful tool for generating test cases, they require specific post-processing steps to enhance the effectiveness of the generated test cases, which may suffer from syntactic or functional errors and may be ineffective in detecting certain types of bugs and in testing corner cases of PUTs.
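The abstract's core mechanism is mutation testing: run a test suite against small syntactic variants (mutants) of the code, and treat mutants the tests fail to kill as evidence of weak tests, to be fed back into the LLM prompt. The following is a minimal, self-contained sketch of that kill/survive loop with a single toy mutation operator (swapping `+` and `-`); it is an illustration of the concept, not MuTAP's actual implementation, and `mutate_source`, `mutation_score`, and the sample tests are hypothetical names.

```python
import ast

def mutate_source(src):
    """Yield toy mutants by swapping '+' <-> '-' (a stand-in for the
    richer mutation operators a tool like MuTAP would apply)."""
    swaps = {ast.Add: ast.Sub, ast.Sub: ast.Add}
    for i, node in enumerate(ast.walk(ast.parse(src))):
        if isinstance(node, ast.BinOp) and type(node.op) in swaps:
            mutant = ast.parse(src)  # fresh tree, mutate only node i
            for j, mnode in enumerate(ast.walk(mutant)):
                if j == i and isinstance(mnode, ast.BinOp):
                    mnode.op = swaps[type(mnode.op)]()
            yield ast.unparse(mutant)

def mutation_score(src, fn_name, test):
    """Run `test` against every mutant; return the kill ratio plus the
    surviving mutants (survivors are what get fed back into the prompt)."""
    killed, survivors = 0, []
    mutants = list(mutate_source(src))
    for m in mutants:
        ns = {}
        exec(m, ns)
        try:
            test(ns[fn_name])
        except AssertionError:
            killed += 1          # mutant detected (killed) by the test
        else:
            survivors.append(m)  # undetected -> candidate prompt feedback
    return killed / len(mutants), survivors

SRC = "def add(a, b):\n    return a + b\n"

def weak_test(add):
    add(1, 1)                    # executes the code but asserts nothing

def strong_test(add):
    assert add(2, 3) == 5        # kills the 'a - b' mutant

score_weak, surviving = mutation_score(SRC, "add", weak_test)
score_strong, _ = mutation_score(SRC, "add", strong_test)
```

The weak test achieves full line coverage yet kills no mutants (score 0.0), while the asserting test kills the sole mutant (score 1.0), which is exactly the coverage-vs-bug-detection gap the paper argues against.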
Learning Lyapunov-Stable Polynomial Dynamical Systems Through Imitation
Amin Abyaneh
Imitation learning is a paradigm to address complex motion planning problems by learning a policy to imitate an expert's behavior. However, relying solely on the expert's data might lead to unsafe actions when the robot deviates from the demonstrated trajectories. Stability guarantees have previously been provided utilizing nonlinear dynamical systems, acting as high-level motion planners, in conjunction with the Lyapunov stability theorem. Yet, these methods are prone to inaccurate policies, high computational cost, sample inefficiency, or quasi-stability when replicating complex and highly nonlinear trajectories. To mitigate this problem, we present an approach for learning a globally stable nonlinear dynamical system as a motion planning policy. We model the nonlinear dynamical system as a parametric polynomial and learn the polynomial's coefficients jointly with a Lyapunov candidate. To showcase its success, we compare our method against the state of the art in simulation and conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our experiments demonstrate the sample efficiency and reproduction accuracy of our method for various expert trajectories, while remaining stable in the face of perturbations.
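The stability argument in the abstract rests on the Lyapunov conditions: a candidate V with V(x) > 0 for x ≠ 0 whose derivative along the dynamics, ⟨∇V(x), f(x)⟩, is negative. Below is a small numerical sketch of that certificate for a hand-picked polynomial vector field f(x) = -x - x³ with the quadratic candidate V(x) = ½‖x‖²; the system and candidate are illustrative assumptions, not the learned policies from the paper.

```python
def f(x):
    # Illustrative globally stable polynomial vector field,
    # f(x) = -x - x^3 applied componentwise.
    return [-xi - xi**3 for xi in x]

def V(x):
    # Quadratic Lyapunov candidate V(x) = 1/2 * ||x||^2.
    return 0.5 * sum(xi * xi for xi in x)

# Forward-Euler rollout: V must strictly decrease along the trajectory,
# the discrete-time counterpart of <grad V(x), f(x)> < 0 for x != 0.
x, dt = [1.5, -2.0], 0.01
values = [V(x)]
for _ in range(1000):
    x = [xi + dt * fi for xi, fi in zip(x, f(x))]
    values.append(V(x))
```

For a learned parametric polynomial, the same check runs with the fitted coefficients in place of the fixed -x - x³; the paper's contribution is learning those coefficients jointly with V so the decrease condition holds globally, not just along sampled rollouts.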