Publications

Uncertainty Resolution in Misinformation Detection
Yury Orlovskiy
Camille Thibault
Anne Imouza
Jean-François Godbout
Kellin Pelrine
Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve?
Firat Oncel
Matthias Bethge
Beyza Ermis
Çağatay Yıldız
An Addendum to NeBula: Towards Extending TEAM CoSTAR’s Solution to Larger Scale Environments
Benjamin Morrell
Kyohei Otsu
Ali Agha
David D. Fan
Sung-Kyun Kim
Muhammad Fadhil Ginting
Xianmei Lei
Jeffrey Edlund
Seyed Fakoorian
Amanda Bouman
Fernando Chavez
Taeyeon Kim
Gustavo J. Correa
Maira Saboia
Angel Santamaria-Navarro
Brett Lopez
Boseong Kim
Chanyoung Jung
Mamoru Sobue
Oriana Claudia Peltzer
Joshua Ott
Robert Trybula
Thomas Touma
Marcel Kaufmann
Tiago Stegun Vaquero
Torkom Pailevanian
Matteo Palieri
Yun Chang
Andrzej Reinke
Matthew Anderson
Frederik E.T. Schöller
Patrick Spieler
Lillian Clark
Avak Archanian
Kenny Chen
Hovhannes Melikyan
Anushri Dixit
Harrison Delecki
Daniel Pastor
Barry Ridge
Nicolas Marchal
Jose Uribe
Sharmita Dey
Kamak Ebadi
Kyle Coble
Alexander Nikitas Dimopoulos
Vivek Thangavelu
Vivek Shankar Vardharajan
Nicholas Palomo
Antoni Rosinol
Arghya Chatterjee
Christoforos Kanellakis
Bjorn Lindqvist
Micah Corah
Kyle Strickland
Ryan Stonebraker
Michael Milano
Christopher E. Denniston
Sami Sahnoune
Thomas Claudet
Seungwook Lee
Gautam Salhotra
Edward Terry
Rithvik Musuku
Robin Schmid
Tony Tran
Ara Kourchians
Justin Schachter
Hector Azpurua
Levi Resende
Arash Kalantari
Jeremy Nash
Josh Lee
Christopher Patterson
Jen Blank
Kartik Patath
Yuki Kubo
Ryan Alimo
Yasin Almalioglu
Aaron Curtis
Jacqueline Sly
Tesla Wells
Nhut T. Ho
Mykel Kochenderfer
George Nikolakopoulos
David Shim
Luca Carlone
Joel Burdick
This paper presents an appendix to the original NeBula autonomy solution [Agha et al., 2021] developed by the TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), participating in the DARPA Subterranean Challenge. Specifically, this paper presents extensions to NeBula’s hardware, software, and algorithmic components that focus on increasing the range and scale of the exploration environment. From the algorithmic perspective, we discuss the following extensions to the original NeBula framework: (i) large-scale geometric and semantic environment mapping; (ii) an adaptive positioning system; (iii) probabilistic traversability analysis and local planning; (iv) large-scale POMDP-based global motion planning and exploration behavior; (v) large-scale networking and decentralized reasoning; (vi) communication-aware mission planning; and (vii) multi-modal ground-aerial exploration solutions. We demonstrate the application and deployment of the presented systems and solutions in various large-scale underground environments, including limestone mine exploration scenarios as well as deployment in the DARPA Subterranean Challenge.
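For readers unfamiliar with item (iv), the sketch below shows the general shape of rollout-based planning under uncertainty that POMDP exploration planners share: score each candidate action by simulating the belief forward and accumulating discounted gain, then act greedily and replan. It is a deliberately generic illustration, not TEAM CoSTAR’s actual planner; `simulate_step`, the horizon, the rollout count, and the random tail policy are all placeholder assumptions.

```python
import random

def plan_next_action(belief, actions, simulate_step,
                     horizon=5, n_rollouts=50, gamma=0.95):
    """Score each candidate action by the discounted exploration gain
    accumulated over Monte Carlo rollouts of the belief, then act
    greedily. A receding-horizon planner re-runs this every step."""
    def rollout_value(b, first_action):
        value, a = 0.0, first_action
        for depth in range(horizon):
            b, gain = simulate_step(b, a)    # sample one belief transition
            value += (gamma ** depth) * gain
            a = random.choice(actions)       # random tail policy
        return value

    scores = {
        a: sum(rollout_value(belief, a) for _ in range(n_rollouts)) / n_rollouts
        for a in actions
    }
    return max(scores, key=scores.get)
```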
Affirmative Safety: An Approach to Risk Management for Advanced AI
Akash Wasil
Joshua Clymer
Emily Dardaman
Simeon Campos
Evan Murphy
AIoT Smart Home via Autonomous LLM Agents
Dmitriy Rivkin
Francois Hogan
Amal Feriani
Abhisek Konar
Adam Sigal
AmbieGen at the SBFT 2024 Tool Competition - CPS-UAV Track
Dmytro Humeniuk
AmbieGenVAE at the SBFT 2024 Tool Competition - Cyber-Physical Systems Track
Dmytro Humeniuk
An Analysis of Quantile Temporal-Difference Learning
Mark Rowland
Remi Munos
Mohammad Gheshlaghi Azar
Yunhao Tang
Georg Ostrovski
Anna Harutyunyan
K. Tuyls
Will Dabney
We analyse quantile temporal-difference learning (QTD), a distributional reinforcement learning algorithm that has proven to be a key component in several successful large-scale applications of reinforcement learning. Despite these empirical successes, a theoretical understanding of QTD has proven elusive until now. Unlike classical TD learning, which can be analysed with standard stochastic approximation tools, QTD updates do not approximate contraction mappings, are highly non-linear, and may have multiple fixed points. The core result of this paper is a proof of convergence to the fixed points of a related family of dynamic programming procedures with probability 1, putting QTD on firm theoretical footing. The proof establishes connections between QTD and non-linear differential inclusions through stochastic approximation theory and non-smooth analysis.
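The algorithm under analysis is simple to state; the following is a minimal tabular sketch of one QTD update, assuming m quantile estimates per state at the midpoint levels tau_i = (2i - 1)/(2m). The step size, discount, and variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

def qtd_update(theta, x, r, x_next, gamma=0.99, alpha=0.1):
    """One tabular QTD update at state x, given reward r and next state
    x_next. theta has shape (num_states, m): m quantile estimates per
    state, at the midpoint levels tau_i = (2i - 1) / (2m)."""
    m = theta.shape[1]
    tau = (2 * np.arange(m) + 1) / (2 * m)   # midpoint quantile levels
    targets = r + gamma * theta[x_next]      # m bootstrapped sample targets
    for i in range(m):
        # Mean of (tau_i - indicator) over the targets: a subgradient of
        # the quantile (pinball) loss at level tau_i.
        grad = np.mean(tau[i] - (targets < theta[x, i]))
        theta[x, i] += alpha * grad
    return theta
```

The indicator inside the update is what makes the map non-linear and non-contractive, which is why the analysis turns to differential inclusions rather than standard contraction arguments.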
An Analytic Hierarchy Process-based approach for assessing the performance of photovoltaic solar power plants
Meryam Chafiq
Loubna Benabbou
Ismail Belhaj
Abdelali Djdiaa
Hicham Bouzekri
Abdelaziz Berrado
Application-Driven Innovation in Machine Learning
Alan Aspuru-Guzik
Sara Beery
Bistra Dilkina
Priya L. Donti
Marzyeh Ghassemi
Hannah Kerner
Claire Monteleoni
Esther Rolf
Milind Tambe
Adam White
As applications of machine learning proliferate, innovative algorithms inspired by specific real-world challenges have become increasingly important. Such work offers the potential for significant impact not merely in domains of application but also in machine learning itself. In this paper, we describe the paradigm of application-driven research in machine learning, contrasting it with the more standard paradigm of methods-driven research. We illustrate the benefits of application-driven machine learning and how this approach can productively synergize with methods-driven work. Despite these benefits, we find that reviewing, hiring, and teaching practices in machine learning often hold back application-driven innovation. We outline how these processes may be improved.
AsmDocGen: Generating Functional Natural Language Descriptions for Assembly Code
Jesia Yuki
Mohammadhossein Amouei
Philippe Charland
Andrew Walenstein
Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Danqi Liao
Chen Liu
Benjamin W Christensen
Alexander Tong
Guillaume Huguet
Maximilian Nickel
Ian Adelstein
Smita Krishnaswamy
Entropy and mutual information in neural networks provide rich information on the learning process, but they have proven difficult to compute reliably in high dimensions. Indeed, in noisy and high-dimensional data, traditional estimates in ambient dimensions approach a fixed entropy and are prohibitively hard to compute. To address these issues, we leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures. Specifically, we define diffusion spectral entropy (DSE) in neural representations of a dataset as well as diffusion spectral mutual information (DSMI) between different variables representing data. First, we show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data that outperform classic Shannon entropy, nonparametric estimation, and mutual information neural estimation (MINE). We then study the evolution of representations in classification networks with supervised learning, self-supervision, or overfitting. We observe that (1) DSE of neural representations increases during training; (2) DSMI with the class label increases during generalizable learning but stays stagnant during overfitting; (3) DSMI with the input signal shows differing trends: on MNIST it increases, while on CIFAR-10 and STL-10 it decreases. Finally, we show that DSE can be used to guide better network initialization and that DSMI can be used to predict downstream classification accuracy across 962 models on ImageNet.
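As a rough illustration of the central quantity, here is a hedged sketch of diffusion spectral entropy under simplifying assumptions: a plain Gaussian kernel with a single bandwidth sigma, row normalization into a diffusion operator, and the Shannon entropy of the resulting eigenvalue spectrum at diffusion time t. The paper’s exact operator construction (e.g., its normalization) may differ in detail, and diffusion spectral mutual information, which builds on the same spectrum, is omitted here.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    """Shannon entropy of the normalized eigenvalue spectrum of a
    diffusion operator built over the rows of X, at diffusion time t."""
    # Gaussian affinities between all pairs of representation vectors.
    K = np.exp(-squareform(pdist(X)) ** 2 / (2 * sigma ** 2))
    # Row-normalize into a Markov (diffusion) operator.
    P = K / K.sum(axis=1, keepdims=True)
    # P is similar to a symmetric matrix, so its spectrum is real;
    # raising it to the diffusion time t damps small eigenvalues.
    eigvals = np.abs(np.linalg.eigvals(P)) ** t
    p = eigvals / eigvals.sum()
    p = p[p > 1e-12]                         # drop numerical zeros
    return float(-np.sum(p * np.log(p)))
```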