
Affan Jilani

Alumni

Publications

Visual-Tactile Inference of 2.5D Object Shape From Marker Texture
Francois Hogan
Charlotte Morissette
M. Jenkin
Visual-tactile sensing affords abundant capabilities for contact-rich object manipulation tasks including grasping and placing. Here we introduce a shape-from-texture inspired contact shape estimation approach for visual-tactile sensors equipped with visually distinct membrane markers. Under a perspective projection camera model, measurements related to the change in marker separation upon contact are used to recover surface shape. Our approach allows for shape sensing in real time, without requiring network training or complex assumptions related to lighting, sensor geometry, or marker placement. Experiments show that the surface contact shape recovered is qualitatively and quantitatively consistent with that obtained through photometric stereo, the current state of the art for shape recovery in visual-tactile sensors. Importantly, our approach is applicable to a large family of sensors not equipped with photometric stereo hardware, and also to those with semi-transparent membranes. The recovery of surface shape affords new capabilities to these sensors for robotic applications, such as the estimation of contact and slippage in object manipulation tasks (Hogan et al., 2022) and the use of force matching for kinesthetic teaching using multimodal visual-tactile sensing (Ablett et al., 2024).
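The abstract's central observation is that, under a perspective projection camera model, the apparent spacing between membrane markers varies inversely with their depth, so a change in observed spacing can be inverted to recover contact depth. A minimal sketch of that inverse relationship follows; the function name, spacing values, and resting depth are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: depth from apparent marker spacing under a
# pinhole (perspective) camera. A marker pair with true spacing S at
# depth Z projects to image spacing s = f * S / Z, so depth scales as
# Z = z_rest * s_rest / s_obs. Assumes both markers of a pair lie at
# roughly the same depth. All names and numbers here are illustrative.

import numpy as np

def depth_from_spacing(s_obs, s_rest, z_rest):
    """Recover marker-pair depth from its observed image spacing.

    s_obs  : observed spacing (pixels) during contact
    s_rest : spacing (pixels) of the undeformed membrane
    z_rest : depth (e.g. mm) of the undeformed membrane
    """
    return z_rest * np.asarray(s_rest, dtype=float) / np.asarray(s_obs, dtype=float)

# Markers pushed toward the camera by contact appear farther apart,
# so their recovered depth is smaller than the resting depth.
s_rest = np.array([10.0, 10.0, 10.0])  # pixel spacing, no contact
s_obs = np.array([10.0, 12.5, 11.0])   # pixel spacing, during contact
z = depth_from_spacing(s_obs, s_rest, z_rest=20.0)
# z[0] == 20.0 (no deformation); z[1] == 16.0 (pressed closer)
```

In practice a dense depth map would interpolate such per-pair estimates across the membrane; this sketch only shows the per-pair perspective inversion the abstract describes.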
Multimodal and Force-Matched Imitation Learning with a See-Through Visuotactile Sensor
Trevor Ablett
Oliver Limoyo
Adam Sigal
Jonathan Kelly
Francois Hogan
Kinesthetic teaching is a popular approach to collecting expert robotic demonstrations of contact-rich tasks for imitation learning (IL), but it typically only measures motion, ignoring the force placed on the environment by the robot. Furthermore, contact-rich tasks require accurate sensing of both reaching and touching, which can be difficult to provide with conventional sensing modalities. We address these challenges with a See-Through-your-Skin (STS) visuotactile sensor, using the sensor both (i) as a measurement tool to improve kinesthetic teaching, and (ii) as a policy input in contact-rich door manipulation tasks. An STS sensor can be switched between visual and tactile modes by leveraging a semi-transparent surface and controllable lighting, allowing for both pre-contact visual sensing and during-contact tactile sensing with a single sensor. First, we propose tactile force matching, a methodology that enables a robot to match forces read during kinesthetic teaching using tactile signals. Second, we develop a policy that controls STS mode switching, allowing a policy to learn the appropriate moment to switch an STS from its visual to its tactile mode. Finally, we study multiple observation configurations to compare and contrast the value of visual and tactile data from an STS with visual data from a wrist-mounted eye-in-hand camera. With over 3,000 test episodes from real-world manipulation experiments, we find that the inclusion of force matching raises average policy success rates by 62.5%, STS mode switching by 30.3%, and STS data as a policy input by 42.5%. Our results highlight the utility of see-through tactile sensing for IL, both for data collection to allow force matching, and for policy execution to allow accurate task feedback.
Organizing Principles of Astrocytic Nanoarchitecture in the Mouse Cerebral Cortex
Christopher K. Salmon
Tabish A Syed
J. Benjamin Kacerovsky
Nensi Alivodej
Alexandra L. Schober
Tyler F. W. Sloan
Michael T. Pratte
Michael P. Rosen
Miranda Green
Adario DasGupta
Hojatollah Vali
Craig A. Mandato
Keith K. Murai