
Kumaraditya Gupta

PhD
Principal supervisor
Co-supervisor
Research topics
Robotics
Computer vision

Publications

Object-Centric Agentic Robot Policies
Executing open-ended natural language queries in previously unseen environments is a core problem in robotics. While recent advances in imitation learning and vision-language modeling have enabled promising end-to-end policies, these models struggle when faced with complex instructions and new scenes. Their short input context also limits their ability to solve tasks over larger spatial horizons. In this work, we introduce OCARP, a modular agentic robot policy that executes user queries by using a library of tools on a dynamic inventory of objects. The agent builds the inventory by grounding query-relevant objects using a rich 3D map representation that includes open-vocabulary descriptors and 3D affordances. By combining the flexible reasoning abilities of an agent with a general spatial representation, OCARP can execute complex open-vocabulary queries in a zero-shot manner. We showcase how OCARP can be deployed in both tabletop and mobile settings due to the underlying scalable map representation.
OpenLex3D: A Tiered Evaluation Benchmark for Open-Vocabulary 3D Scene Representations
Christina Kassab
Martin Büchner
Matias Mattamala
Abhinav Valada
Maurice Fallon
3D scene understanding has been transformed by open-vocabulary language models that enable interaction via natural language. However, at present the evaluation of these representations is limited to datasets with closed-set semantics that do not capture the richness of language. This work presents OpenLex3D, a dedicated benchmark for evaluating 3D open-vocabulary scene representations. OpenLex3D provides entirely new label annotations for scenes from Replica, ScanNet++, and HM3D, which capture real-world linguistic variability by introducing synonymical object categories and additional nuanced descriptions. Our label sets provide 13 times more labels per scene than the original datasets. By introducing an open-set 3D semantic segmentation task and an object retrieval task, we evaluate various existing 3D open-vocabulary methods on OpenLex3D, showcasing failure cases and avenues for improvement. Our experiments provide insights on feature precision, segmentation, and downstream capabilities. The benchmark is publicly available at: https://openlex3d.github.io/.