
Omar G. Younis

Research Collaborator

Principal Supervisor

Research Topics

Reinforcement Learning
Offline Reinforcement Learning
Foundation Models
Robotics

Publications

CUBE: A Standard for Unifying Agent Benchmarks
Alexandre Lacoste
Nicolas Gontier
Oleh Shliazhko
Aman Jaiswal
Shailesh Nanisetty
Joan Cabezas
Simone Baratta
Matteo Avalle
Elron Bandel
Michal Shmueli-Scheuer
Asaf Yehudai
Leshem Choshen
Sean Hughes
Massimo Caccia … (see 6 more)
Tao Yu
Yu Su
Graham Neubig
Dawn Song
The proliferation of agent benchmarks has created critical fragmentation that threatens research productivity. Each new benchmark requires substantial custom integration, creating an "integration tax" that limits comprehensive evaluation. We propose CUBE (Common Unified Benchmark Environments), a universal protocol standard built on MCP and Gym that allows benchmarks to be wrapped once and used everywhere. By separating task, benchmark, package, and registry concerns into distinct API layers, CUBE enables any compliant platform to access any compliant benchmark for evaluation, RL training, or data generation without custom integration. We call on the community to contribute to the development of this standard before platform-specific implementations deepen fragmentation as benchmark production accelerates through 2026.
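The "wrap once, use everywhere" idea in the abstract can be sketched as a thin adapter that exposes a benchmark task through a single Gym-style reset/step interface. The class and method names below are illustrative assumptions, not part of the CUBE specification:

```python
class BenchmarkTask:
    """A toy stand-in for a benchmark task: answer one question."""
    def __init__(self):
        self.question = "2 + 2 = ?"
        self.answer = "4"


class GymStyleWrapper:
    """Hypothetical adapter: any BenchmarkTask becomes a minimal
    reset/step environment, so an evaluation harness or RL trainer
    needs no benchmark-specific integration code."""
    def __init__(self, task):
        self.task = task

    def reset(self):
        # Return the initial observation (here, the task prompt).
        return self.task.question

    def step(self, action):
        # Score the agent's action and end the episode.
        reward = 1.0 if action == self.task.answer else 0.0
        return None, reward, True, {}


env = GymStyleWrapper(BenchmarkTask())
obs = env.reset()
_, reward, done, _ = env.step("4")
```

Once every benchmark sits behind the same interface, the same harness can drive all of them, which is the integration tax the protocol aims to eliminate.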
Using Unity to Help Solve Reinforcement Learning
Andrew Robert Williams
Vedant Vyas
Leveraging the depth and flexibility of XLand as well as the rapid prototyping features of the Unity engine, we present the United Unity Universe — an open-source toolkit designed to accelerate the creation of innovative reinforcement learning environments. This toolkit includes a robust implementation of XLand 2.0 complemented by a user-friendly interface which allows users to modify the details of procedurally generated terrains and task rules with ease. Additionally, we provide a curated selection of terrains and rule sets, accompanied by implementations of reinforcement learning baselines to facilitate quick experimentation with novel architectural designs for adaptive agents. Furthermore, we illustrate how the United Unity Universe serves as a high-level language that enables researchers to develop diverse and endlessly variable 3D environments within a unified framework. This functionality establishes the United Unity Universe (U3) as an essential tool for advancing the field of reinforcement learning, especially in the development of adaptive and generalizable learning systems.
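The seeded procedural variability described above — "endlessly variable" task rules that remain reproducible — can be illustrated with a minimal sketch. This is not U3's actual API (the toolkit is Unity-based); the object and color vocabularies and the rule template are invented for illustration:

```python
import random

# Hypothetical vocabularies for an XLand-style "bring X near Y" task family.
OBJECTS = ["cube", "pyramid", "sphere"]
COLORS = ["red", "blue", "yellow"]


def generate_rule(seed):
    """Deterministically sample one task rule from the family.

    A fixed seed always yields the same rule, so experiments are
    reproducible; sweeping seeds yields an open-ended task distribution.
    """
    rng = random.Random(seed)
    src = (rng.choice(COLORS), rng.choice(OBJECTS))
    dst = (rng.choice(COLORS), rng.choice(OBJECTS))
    return f"bring the {src[0]} {src[1]} near the {dst[0]} {dst[1]}"


rule_a = generate_rule(0)
rule_b = generate_rule(0)  # same seed, identical rule
```

The same seeding pattern extends to terrain parameters, giving adaptive agents an unbounded but controllable curriculum.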