Publications

Platform-based Adaptive Experimental Research in Education: Lessons Learned from The Digital Learning Challenge
Ilya Musabirov
Mohi Reza
Haochen Song
Steven Moore
Pan Chen
Harsh Kumar
Tong Li
John Stamper
Norman Bier
Anna Rafferty
Thomas Price
Nina Deliu
Michael Liut
Joseph Jay Williams
We report on our experience with a real-world, multi-experimental evaluation of an adaptive experimentation platform within the XPRIZE Digital Learning Challenge framework. We showcase how EASI (Experiment as a Service) cross-platform software supports quick integration and deployment of adaptive experiments, as well as five systematic replications within a 30-day timeframe. We outline the key scenarios in which platform-supported experiments are applicable and reflect on lessons learned from this two-year project that can help researchers and practitioners integrate adaptive experiments into real-world courses.
AI Automatons: AI Systems Intended to Imitate Humans
A.R. Olteanu
Solon Barocas
Lisa Egede
Alicia DeVrio
Myra Cheng
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness -- systems we dub AI automatons. Individuals, groups, or generic humans are being simulated to produce creative work in their styles, to respond to surveys in their places, to probe how they would use a new system before deployment, to provide users with assistance and companionship, and to anticipate their possible future behavior and interactions with others, just to name a few applications. The research, design, deployment, and availability of such AI systems have, however, also prompted growing concerns about a wide range of possible legal, ethical, and other social impacts. To both 1) facilitate productive discussions about whether, when, and how to design and deploy such systems, and 2) chart the current landscape of existing and prospective AI automatons, we need to tease apart determinant design axes and considerations that can aid our understanding of whether and how various design choices along these axes could mitigate -- or instead exacerbate -- potential adverse impacts that the development and use of AI automatons could give rise to. In this paper, through a synthesis of related literature and extensive examples of existing AI systems intended to mimic humans, we develop a conceptual framework to help foreground key axes of design variations and provide analytical scaffolding to foster greater recognition of the design choices available to developers, as well as the possible ethical implications these choices might have.
DialEgg: Dialect-Agnostic MLIR Optimizer using Equality Saturation with Egglog
Abd-El-Aziz Zayed
MLIR’s ability to optimize programs at multiple levels of abstraction is key to enabling domain-specific optimizing compilers. However, expressing optimizations remains tedious. Optimizations can interact in unexpected ways, making it hard to unleash full performance. Equality saturation promises to solve these challenges. First, it simplifies the expression of optimizations using rewrite rules. Second, it considers all possible optimization interactions, through saturation, selecting the best program variant. Despite these advantages, equality saturation remains absent from production compilers such as MLIR. This paper proposes to integrate Egglog, a recent equality saturation engine, with MLIR, in a dialect-agnostic manner. This paper shows how the main MLIR constructs, such as operations, types, and attributes, can be modeled in Egglog. It also presents DialEgg, a tool that pre-defines a large set of common MLIR constructs in Egglog and automatically translates between the MLIR and Egglog program representations. Using a few use cases, this paper demonstrates the potential of combining equality saturation and MLIR.
Divergent responses to SARS-CoV-2 infection in bronchial epithelium with pre-existing respiratory diseases
Justine Oliva
Manon Ruffin
Claire Calmel
Aurélien Gibeaud
Andrés Pizzorno
Clémence Gaudin
Solenne Chardonnet
Viviane de Almeida Bastos
Manuel Rosa-Calatrava
Simon Rousseau
Harriet Corvol
Olivier Terrier
Loïc Guillot
Ensemble machine learning to accelerate industrial decarbonization: Prediction of Hansen solubility parameters for streamlined chemical solvent selection
Eslam G. Al-Sakkari
Mostafa Amer
Olumoye Ajao
Marzouk Benali
Daria C. Boffito
Mouloud Amazouz
Implicit Generative Modeling by Kernel Similarity Matching
Shubham Choudhary
Demba Ba
Improving clustering quality evaluation in noisy Gaussian mixtures
Renato Cordeiro De Amorim
Interpretable deep learning for deconvolutional analysis of neural signals
Bahareh Tolooshams
Sara Matias
Hao Wu
Simona Temereanca
Naoshige Uchida
Venkatesh N. Murthy
Demba Ba
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on “black-box” approaches that lack an interpretable link between neural activity and network parameters. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and in the striatum during unstructured, naturalistic experiments. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural activity.
Interval Regression: A Comparative Study with Proposed Models
Tung L. Nguyen
Regression models are essential for a wide range of real-world applications. However, in practice, target values are not always precisely known; instead, they may be represented as intervals of acceptable values. This challenge has led to the development of Interval Regression models. In this study, we provide a comprehensive review of existing Interval Regression models and introduce alternative models for comparative analysis. Experiments are conducted on both real-world and synthetic datasets to offer a broad perspective on model performance. The results demonstrate that no single model is universally optimal, highlighting the importance of selecting the most suitable model for each specific scenario.
Large language models deconstruct the clinical intuition behind diagnosing autism
Emmett Rabot
Laurent Mottron
Learning adversarially robust kernel ensembles with kernel average pooling
Amirozhan Dehghani
Yifei Ren
LLM-Safety Evaluations Lack Robustness
Tim Beyer
Simon Geisler
Stephan Günnemann
In this paper, we argue that current safety alignment research efforts for large language models are hindered by many intertwined sources of noise, such as small datasets, methodological inconsistencies, and unreliable evaluation setups. This can, at times, make it impossible to evaluate and compare attacks and defenses fairly, thereby slowing progress. We systematically analyze the LLM safety evaluation pipeline, covering dataset curation, optimization strategies for automated red-teaming, response generation, and response evaluation using LLM judges. At each stage, we identify key issues and highlight their practical impact. We also propose a set of guidelines for reducing noise and bias in evaluations of future attack and defense papers. Lastly, we offer an opposing perspective, highlighting practical reasons for existing limitations. We believe that addressing the outlined problems in future research will improve the field's ability to generate easily comparable results and make measurable progress.