
Kaleem Siddiqi

Associate Academic Member
Professor, McGill University, School of Computer Science
Research Topics
Computational Biology
Computational Neuroscience
Computer Vision
Medical Machine Learning

Biography

Kaleem Siddiqi is a professor of computer science at McGill University and a member of McGill’s Centre for Intelligent Machines. He is also an associate academic member of Mila – Quebec Artificial Intelligence Institute, McGill’s Department of Mathematics and Statistics, and the Goodman Centre for Cancer Research at McGill. He holds an FRQS Dual Chair in Artificial Intelligence and Health with Keith Murai. Siddiqi’s research interests lie in computer vision, biological image analysis, neuroscience, visual perception and robotics. He is field chief editor for Frontiers in Computer Science and has served as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, Pattern Recognition and Frontiers in ICT. He is co-author with Steve Pizer of the book Medial Representations: Mathematics, Algorithms and Applications (Springer, 2008).

Current Students

PhD - McGill University
Master's Research - McGill University
Master's Research - McGill University
PhD - McGill University
Master's Research - McGill University
Master's Research - McGill University
PhD - McGill University
Master's Research - McGill University
PhD - McGill University
PhD - McGill University
Master's Research - McGill University

Publications

Automated diagnosis of usual interstitial pneumonia on chest CT via the mean curvature of isophotes
Peter Savadjiev
Morteza Rezanejad
Sahir Bhatnagar
David Camirand
Claude Kauffmann
Ronald J Dandurand
Patrick Bourgouin
Carl Chartrand-Lefebvre
Alexandre Semionov
Visual-Tactile Inference of 2.5D Object Shape From Marker Texture
Affan Jilani
Francois Hogan
Charlotte Morissette
M. Jenkin
Visual-tactile sensing affords abundant capabilities for contact-rich object manipulation tasks including grasping and placing. Here we introduce a shape-from-texture inspired contact shape estimation approach for visual-tactile sensors equipped with visually distinct membrane markers. Under a perspective projection camera model, measurements related to the change in marker separation upon contact are used to recover surface shape. Our approach allows for shape sensing in real time, without requiring network training or complex assumptions related to lighting, sensor geometry or marker placement. Experiments show that the surface contact shape recovered is qualitatively and quantitatively consistent with those obtained through the use of photometric stereo, the current state of the art for shape recovery in visual-tactile sensors. Importantly, our approach is applicable to a large family of sensors not equipped with photometric stereo hardware, and also to those with semi-transparent membranes. The recovery of surface shape affords new capabilities to these sensors for robotic applications, such as the estimation of contact and slippage in object manipulation tasks (Hogan et al., 2022) and the use of force matching for kinesthetic teaching using multimodal visual-tactile sensing (Ablett et al., 2024).
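The core geometric idea in the abstract can be illustrated with a short sketch. Under a pinhole (perspective projection) camera model, a physical length s at depth z projects to roughly f * s / z pixels, so the depth of the membrane at a marker can be read off from how far apart adjacent markers appear. The function name, spacing values and focal length below are illustrative assumptions, not taken from the paper's implementation.

```python
# Hypothetical sketch of depth-from-marker-spacing under a pinhole camera
# model: image size ≈ focal_length * physical size / depth, inverted for depth.

def depth_from_spacing(observed_spacing_px, rest_spacing_mm, focal_length_px):
    """Estimate the depth of the membrane at a marker from the apparent
    separation (in pixels) between it and its neighbours."""
    return focal_length_px * rest_spacing_mm / observed_spacing_px

# Markers that appear farther apart than at rest are closer to the camera,
# i.e. the membrane has been pushed toward the lens at that contact point.
rest = depth_from_spacing(observed_spacing_px=50.0, rest_spacing_mm=2.0,
                          focal_length_px=600.0)
pressed = depth_from_spacing(observed_spacing_px=60.0, rest_spacing_mm=2.0,
                             focal_length_px=600.0)
assert pressed < rest  # larger apparent spacing -> smaller depth
```

Applying this per marker yields a sparse 2.5D depth map of the contact patch, which is the quantity the paper compares against photometric-stereo reconstructions.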
Multimodal and Force-Matched Imitation Learning with a See-Through Visuotactile Sensor
Trevor Ablett
Oliver Limoyo
Adam Sigal
Affan Jilani
Jonathan Kelly
Francois Hogan
Kinesthetic Teaching is a popular approach to collecting expert robotic demonstrations of contact-rich tasks for imitation learning (IL), but it typically only measures motion, ignoring the force placed on the environment by the robot. Furthermore, contact-rich tasks require accurate sensing of both reaching and touching, which can be difficult to provide with conventional sensing modalities. We address these challenges with a See-Through-your-Skin (STS) visuotactile sensor, using the sensor both (i) as a measurement tool to improve kinesthetic teaching, and (ii) as a policy input in contact-rich door manipulation tasks. An STS sensor can be switched between visual and tactile modes by leveraging a semi-transparent surface and controllable lighting, allowing for both pre-contact visual sensing and during-contact tactile sensing with a single sensor. First, we propose tactile force matching, a methodology that enables a robot to match forces read during kinesthetic teaching using tactile signals. Second, we develop a policy that controls STS mode switching, allowing a policy to learn the appropriate moment to switch an STS from its visual to its tactile mode. Finally, we study multiple observation configurations to compare and contrast the value of visual and tactile data from an STS with visual data from a wrist-mounted eye-in-hand camera. With over 3,000 test episodes from real-world manipulation experiments, we find that the inclusion of force matching raises average policy success rates by 62.5%, STS mode switching by 30.3%, and STS data as a policy input by 42.5%. Our results highlight the utility of see-through tactile sensing for IL, both for data collection to allow force matching, and for policy execution to allow accurate task feedback.
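The force-matching idea in the abstract can be sketched in a few lines. Assuming a simple linear contact-stiffness model (f = k * penetration) — an assumption of this sketch, not necessarily the paper's model — the playback trajectory is offset along the contact normal so that the tactilely measured force tracks the force recorded during teaching. All names and values below are illustrative.

```python
# Illustrative sketch (not the paper's code) of tactile force matching:
# offset the commanded end-effector position so the measured contact force
# tracks the force recorded during kinesthetic teaching, under an assumed
# linear stiffness model f = k * penetration.

def force_matching_offset(f_target, f_measured, stiffness, gain=1.0):
    """Position correction along the contact normal that reduces the
    force-tracking error under the assumed stiffness model."""
    return gain * (f_target - f_measured) / stiffness

# If the demonstration recorded 5 N but playback currently applies 3 N,
# push further into the surface by the force error divided by the stiffness.
offset = force_matching_offset(f_target=5.0, f_measured=3.0, stiffness=100.0)
assert offset > 0.0  # positive offset -> move toward the surface
```

In practice such a correction would be applied at every control step, with the target force replayed from the demonstration and the measured force read from the STS tactile signal.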
Efficient Dynamics Modeling in Interactive Environments with Koopman Theory
Arnab Kumar Mondal
Siba Smarak Panigrahi
Sai Rajeswar
The accurate modeling of dynamics in interactive environments is critical for successful long-range prediction. Such a capability could advance Reinforcement Learning (RL) and Planning algorithms, but achieving it is challenging. Inaccuracies in model estimates can compound, resulting in increased errors over long horizons. We approach this problem from the lens of Koopman theory, where the nonlinear dynamics of the environment can be linearized in a high-dimensional latent space. This allows us to efficiently parallelize the sequential problem of long-range prediction using convolution while accounting for the agent’s action at every time step. Our approach also enables stability analysis and better control over gradients through time. Taken together, these advantages result in significant improvement over the existing approaches, both in the efficiency and the accuracy of modeling dynamics over extended horizons. We also show that this model can be easily incorporated into dynamics modeling for model-based planning and model-free RL and report promising experimental results.
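The Koopman linearization described above can be made concrete with a toy rollout. Once the state is lifted into a latent space where the dynamics act linearly, z_{t+1} = A z_t + B u_t, long-horizon prediction reduces to repeated linear maps (which is what makes the convolutional parallelization possible). The matrices and latent dimension below are made-up illustrations, not values from the paper.

```python
# Minimal sketch of a Koopman-style linear latent rollout,
# z_{t+1} = A z_t + B u_t, with toy hand-picked matrices.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

def koopman_rollout(A, B, z0, actions):
    """Roll the linearized latent dynamics forward, one action per step."""
    z, traj = z0, [z0]
    for u in actions:
        z = vadd(matvec(A, z), matvec(B, u))
        traj.append(z)
    return traj

# Toy 2-D latent space with a 1-D action; values are illustrative only.
A = [[0.9, 0.1], [0.0, 0.8]]
B = [[0.5], [1.0]]
traj = koopman_rollout(A, B, z0=[1.0, 0.0], actions=[[0.0], [1.0]])
```

Because every step is the same linear map, the whole trajectory can also be written in closed form (z_t = A^t z_0 plus a sum of A^{t-1-k} B u_k terms), which is the structure the paper exploits for parallel long-range prediction.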
Interacting with a Visuotactile Countertop
M. Jenkin
Francois Hogan
Jean-François Tremblay
Bobak H. Baghi
Shape-Based Measures Improve Scene Categorization
Morteza Rezanejad
John Wilder
Dirk B. Walther
Allan D. Jepson
Sven Dickinson
Converging evidence indicates that deep neural network models that are trained on large datasets are biased toward color and texture information. Humans, on the other hand, can easily recognize objects and scenes from images as well as from bounding contours. Mid-level vision is characterized by the recombination and organization of simple primary features into more complex ones by a set of so-called Gestalt grouping rules. While described qualitatively in the human literature, a computational implementation of these perceptual grouping rules is so far missing. In this article, we contribute a novel set of algorithms for the detection of contour-based cues in complex scenes. We use the medial axis transform (MAT) to locally score contours according to these grouping rules. We demonstrate the benefit of these cues for scene categorization in two ways: (i) Both human observers and CNN models categorize scenes most accurately when perceptual grouping information is emphasized. (ii) Weighting the contours with these measures boosts performance of a CNN model significantly compared to the use of unweighted contours. Our work suggests that, even though these measures are computed directly from contours in the image, current CNN models do not appear to extract or utilize these grouping cues.
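The medial axis transform mentioned in the abstract can be illustrated on a tiny binary shape: each interior point is scored by its distance to the nearest point outside the shape, and the most "medial" points are the local maxima of that score. This is a rough, brute-force illustration of the MAT itself, not the paper's contour-scoring algorithm; the grid and shape are made up.

```python
# Brute-force illustration of the medial axis transform on a small
# binary rectangle: score = distance to the nearest exterior point,
# medial points = maxima of that score (the rectangle's centerline).

import math

W, H = 9, 5
all_points = {(x, y) for x in range(-1, W + 1) for y in range(-1, H + 1)}
rectangle = {(x, y) for x in range(W) for y in range(H)}

def nearest_boundary_distance(shape, p):
    """Distance from an interior point p to the closest exterior point."""
    return min(math.dist(p, q) for q in all_points if q not in shape)

scores = {p: nearest_boundary_distance(rectangle, p) for p in rectangle}

# For an elongated rectangle the highest-scoring points lie on the
# horizontal centerline, y == H // 2 -- the medial axis.
best = max(scores, key=scores.get)
assert best[1] == H // 2
```

The paper's measures go further, using the MAT of scene contours to quantify Gestalt properties such as local symmetry, and then weighting contours by those scores before feeding them to human observers and CNNs.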
Cardiomyocyte orientation recovery at micrometer scale reveals long-axis fiber continuum in heart walls
Drisya Dileep
Tabish A Syed
Tyler FW Sloan
Perundurai S Dhandapany
Minhajuddin Sirajuddin
MLGCN: An Ultra Efficient Graph Convolution Neural Model For 3D Point Cloud Analysis
Mohammad Khodadad
Morteza Rezanejad
Ali Shiraee Kasmaee
Dirk Bernhardt-Walther
Hamidreza Mahyar
Organizing Principles of Astrocytic Nanoarchitecture in the Mouse Cerebral Cortex
Christopher K. Salmon
Tabish A Syed
J. Benjamin Kacerovsky
Nensi Alivodej
Alexandra L. Schober
Tyler F. W. Sloan
Michael T. Pratte
Michael P. Rosen
Miranda Green
Adario DasGupta
Hojatollah Vali
Craig A. Mandato
Shaurya Mehta
Affan Jilani
Keith K. Murai
Yanan Wang
Ultrastructure Analysis of Cardiomyocytes and Their Nuclei
Tabish A Syed
Yanan Wang
Drisya Dileep
Minhajuddin Sirajuddin