
Mahtab Sandhu

Alumni

Publications

Object-Centric Agentic Robot Policies
Executing open-ended natural language queries in previously unseen environments is a core problem in robotics. While recent advances in imitation learning and vision-language modeling have enabled promising end-to-end policies, these models struggle when faced with complex instructions and new scenes. Their short input context also limits their ability to solve tasks over larger spatial horizons. In this work, we introduce OCARP, a modular agentic robot policy that executes user queries by using a library of tools on a dynamic inventory of objects. The agent builds the inventory by grounding query-relevant objects using a rich 3D map representation that includes open-vocabulary descriptors and 3D affordances. By combining the flexible reasoning abilities of an agent with a general spatial representation, OCARP can execute complex open-vocabulary queries in a zero-shot manner. We showcase how OCARP can be deployed in both tabletop and mobile settings due to the underlying scalable map representation.
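The pipeline described above — ground query-relevant objects from a map into an inventory, then dispatch tools based on their affordances — can be sketched in a few lines. This is a minimal illustrative sketch only; all names here (`MapObject`, `ObjectInventory`, `pick_tool`, the string-matching grounding) are assumptions for illustration, not the paper's actual API or grounding method.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an object-centric agentic policy loop.
# Grounding here is naive substring matching; the paper instead uses
# open-vocabulary descriptors and 3D affordances from a rich 3D map.

@dataclass
class MapObject:
    name: str           # open-vocabulary descriptor, e.g. "red mug"
    position: tuple     # 3D position in the map
    affordances: list   # e.g. ["graspable"]

@dataclass
class ObjectInventory:
    objects: list = field(default_factory=list)

    def ground(self, query: str, scene_map: list):
        # Keep only map objects whose descriptor appears in the query.
        self.objects = [o for o in scene_map if o.name in query]
        return self.objects

def pick_tool(obj: MapObject, tools: dict):
    # Dispatch on the first affordance that has a matching tool.
    for aff in obj.affordances:
        if aff in tools:
            return tools[aff]
    return None

# --- usage ---
scene_map = [
    MapObject("red mug", (0.4, 0.1, 0.0), ["graspable"]),
    MapObject("table", (0.0, 0.0, 0.0), ["supporting"]),
]
tools = {"graspable": "grasp_tool", "supporting": "place_tool"}

inv = ObjectInventory()
relevant = inv.ground("pick up the red mug", scene_map)
plan = [(o.name, pick_tool(o, tools)) for o in relevant]
print(plan)  # [('red mug', 'grasp_tool')]
```

The design choice worth noting is the separation of concerns: the inventory is built once from the spatial representation, so the agent's tool calls operate on grounded objects rather than raw observations.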
Adaptive Resolution Residual Networks — Generalizing Across Resolutions Easily and Efficiently
The majority of signal data captured in the real world uses numerous sensors with different resolutions. In practice, most deep learning architectures are fixed-resolution; they consider a single resolution at training and inference time. This is convenient to implement but fails to fully take advantage of the diverse signal data that exists. In contrast, other deep learning architectures are adaptive-resolution; they directly allow various resolutions to be processed at training and inference time. This provides computational adaptivity but either sacrifices robustness or compatibility with mainstream layers, which hinders their use. In this work, we introduce Adaptive Resolution Residual Networks (ARRNs) to surpass this tradeoff. We construct ARRNs from Laplacian residuals, which serve as generic adaptive-resolution adapters for fixed-resolution layers. We use smoothing filters within Laplacian residuals to linearly separate input signals over a series of resolution steps. We can thereby skip Laplacian residuals to cast high-resolution ARRNs into low-resolution ARRNs that are computationally cheaper yet numerically identical over low-resolution signals. We guarantee this result when Laplacian residuals are implemented with perfect smoothing kernels. We complement this novel component with Laplacian dropout, which randomly omits Laplacian residuals during training. This regularizes for robustness to a distribution of lower resolutions. This also regularizes for numerical errors that may occur when Laplacian residuals are implemented with approximate smoothing kernels. We provide a solid grounding for the advantageous properties of ARRNs through a theoretical analysis based on neural operators, and empirically show that ARRNs embrace the challenge posed by diverse resolutions with computational adaptivity, robustness, and compatibility with mainstream layers.
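The key mechanism — a smoothing filter splits the signal into a coarse band and a detail band, the fixed-resolution layer acts only on the detail band, and the detail band vanishes on low-resolution signals so the residual can be skipped — can be illustrated on a toy 1D signal. This is a minimal sketch under simplifying assumptions: crude average-pooling stands in for the smoothing kernel, and `laplacian_residual` and the toy `layer` are illustrative names, not the paper's implementation.

```python
import numpy as np

# Toy 1D sketch of a Laplacian residual. Averaging over pairs stands in
# for the paper's smoothing filter; a real ARRN would use a proper
# (ideally perfect) low-pass kernel.

def smooth(x):
    # Downsample by 2 via pairwise averaging: a crude smoothing filter.
    return x.reshape(-1, 2).mean(axis=1)

def upsample(x):
    # Nearest-neighbor upsampling back to the original resolution.
    return np.repeat(x, 2)

def laplacian_residual(x, layer):
    # Linearly separate x into a coarse component and a detail band,
    # and apply the fixed-resolution layer only to the detail band.
    coarse = upsample(smooth(x))
    detail = x - coarse
    return coarse + layer(detail)

# A toy fixed-resolution "layer": scale the detail band.
layer = lambda d: 0.5 * d

x = np.array([1.0, 3.0, 2.0, 6.0])

# A signal that is already low-resolution at this step has zero detail,
# so the residual reduces to the identity and can be skipped entirely.
x_low = upsample(smooth(x))                  # [2., 2., 4., 4.]
out = laplacian_residual(x_low, layer)
print(np.allclose(out, x_low))               # True
```

This is exactly the property the abstract describes: over low-resolution signals, skipping the residual is numerically identical to evaluating it, which is what makes the rediscretized low-resolution network cheaper yet exact (given a perfect smoothing kernel).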
Adaptive Resolution Residual Networks
We introduce Adaptive Resolution Residual Networks (ARRNs), a form of neural operator that enables the creation of networks for signal-based tasks that can be rediscretized to suit any signal resolution. ARRNs are composed of a chain of Laplacian residuals that each contain ordinary layers, which do not need to be rediscretizable for the whole network to be rediscretizable. ARRNs have the property of requiring a lower number of Laplacian residuals for exact evaluation on lower-resolution signals, which greatly reduces computational cost. ARRNs also implement Laplacian dropout, which encourages networks to become robust to low-bandwidth signals. ARRNs can thus be trained once at high-resolution and then be rediscretized on the fly at a suitable resolution with great robustness.