ToxiSight: Insights Towards Detected Chat Toxicity
Zachary Yang
Domenico Tullo
We present a comprehensive explainability dashboard designed for in-game chat toxicity. This dashboard integrates various existing explainable AI (XAI) techniques, including token importance analysis, model output visualization, and attribution to the training dataset. It also provides insights through the closest positive and negative examples, facilitating a deeper understanding and potential correction of the training data. Additionally, the dashboard includes word sense analysis (particularly useful for new moderators) and offers free-text explanations for both positive and negative predictions. This multi-faceted approach enhances the interpretability and transparency of toxicity detection models.
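The token-importance view described in the abstract can be illustrated with a simple occlusion test: remove one token at a time and measure how much the predicted toxicity drops. The sketch below is illustrative only and assumes a hypothetical predict_toxicity callable (any function returning a toxicity probability for a string); the dashboard itself may rely on different attribution methods.

```python
# Minimal occlusion-based token importance sketch (illustrative only; the
# ToxiSight dashboard may use other attribution methods such as gradients).
from typing import Callable, List, Tuple

def token_importance(text: str,
                     predict_toxicity: Callable[[str], float]) -> List[Tuple[str, float]]:
    """Score each token by how much the toxicity probability drops when it is removed."""
    tokens = text.split()
    base_score = predict_toxicity(text)
    importances = []
    for i, tok in enumerate(tokens):
        occluded = " ".join(tokens[:i] + tokens[i + 1:])
        importances.append((tok, base_score - predict_toxicity(occluded)))
    return importances

# Usage with a hypothetical classifier wrapper:
# scores = token_importance("you are trash", my_model.toxicity_probability)
```

Tokens with the largest score drop are the ones the classifier leans on most, which is the kind of evidence a moderator-facing dashboard can surface alongside the prediction.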
ChainBuddy: An AI Agent System for Generating LLM Pipelines
Jingyue Zhang
As large language models (LLMs) advance, their potential applications have grown significantly. However, it remains difficult to evaluate LLM behavior on user-specific tasks and craft effective pipelines to do so. Many users struggle with where to start, often referred to as the "blank page" problem. ChainBuddy, an AI assistant for generating evaluative LLM pipelines built into the ChainForge platform, aims to tackle this issue. ChainBuddy offers a straightforward and user-friendly way to plan and evaluate LLM behavior, making the process less daunting and more accessible across a wide range of possible tasks and use cases. We report a within-subjects user study comparing ChainBuddy to the baseline interface. We find that when using AI assistance, participants reported a less demanding workload and felt more confident setting up evaluation pipelines of LLM behavior. We derive insights for the future of interfaces that assist users in the open-ended evaluation of AI.
Development of small, cost‐efficient scintillating fiber detectors for automated synthesis of positron emission tomography radiopharmaceuticals
Hailey Ahn
Liam Carroll
Robert Hopewell
I-Huang Tsai
Dean Jolly
Gassan Massarweh
Diagnostic tests for infections in critically ill immunocompromised patients
Adrien Joseph
Lara Zafrani
Dynamic HumTrans: Humming Transcription Using CNNs and Dynamic Programming
Shubham Gupta
Isaac Neri Gomez-Sarmiento
Faez Amjed Mezdari
Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data
Jiaming Zhou
Abbas Ghaddar
Ge Zhang
Liheng Ma
Yaochen Hu
Soumyasundar Pal
Bin Wang
Yingxue Zhang
Jianye Hao
Explaining Network Decision Provides Insights on the Causal Interaction Between Brain Regions in a Motor Imagery Task
Davide Borra
Multi-modal Decoding of Reach-to-Grasping from EEG and EMG via Neural Networks
Davide Borra
Matteo Fraternali
Elisa Magosso
Relative biological effectiveness of clinically relevant photon energies for the survival of human colorectal, cervical, and prostate cancer cell lines
Joanna Li
N. Chabaytah
Joud Babik
Behnaz Behmand
H. Bekerat
Tanner Connell
Michael D C Evans
Russell Ruo
T. Vuong
Training Language Models to Self-Correct via Reinforcement Learning
Aviral Kumar
Vincent Zhuang
Yi Su
John D Co-Reyes
Avi Singh
Kate Baumli
Shariq N Iqbal
Colton Bishop
Rebecca Roelofs
Lei M Zhang
Kay McKinney
Disha Shrivastava
Cosmin Paduraru
George Tucker
Feryal Behbahani
Aleksandra Faust
Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Current methods for training self-correction typically depend on either multiple models, a more advanced model, or additional forms of supervision. To address these shortcomings, we develop a multi-turn online reinforcement learning (RL) approach, SCoRe, that significantly improves an LLM's self-correction ability using entirely self-generated data. To build SCoRe, we first show that variants of supervised fine-tuning (SFT) on offline model-generated correction traces are often insufficient for instilling self-correction behavior. In particular, we observe that training via SFT falls prey to either a distribution mismatch between mistakes made by the data-collection policy and the model's own responses, or to behavior collapse, where learning implicitly prefers only a certain mode of correction behavior that is often not effective at self-correction on test problems. SCoRe addresses these challenges by training under the model's own distribution of self-generated correction traces and using appropriate regularization to steer the learning process into learning a self-correction behavior that is effective at test time as opposed to fitting high-reward responses for a given prompt. This regularization process includes an initial phase of multi-turn RL on a base model to generate a policy initialization that is less susceptible to collapse, followed by using a reward bonus to amplify self-correction. With Gemini 1.0 Pro and 1.5 Flash models, we find that SCoRe achieves state-of-the-art self-correction performance, improving the base models' self-correction by 15.6% and 9.1% respectively on MATH and HumanEval.
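To make the reward-bonus idea concrete, here is a minimal, illustrative sketch of a two-attempt reward with a self-correction bonus. The names generate and is_correct are hypothetical stand-ins for the policy's sampler and a task-specific answer checker; the actual SCoRe method trains with multi-turn online RL and additional regularization rather than this simplified scoring.

```python
# Schematic sketch of a two-attempt reward with a self-correction bonus
# (illustrative only; not the authors' implementation).

def self_correction_reward(prompt: str,
                           generate,            # hypothetical policy sampler: (text) -> answer string
                           is_correct,          # hypothetical checker: (prompt, answer) -> bool
                           bonus: float = 0.5) -> float:
    """Reward the second attempt, plus a bonus for improving over the first."""
    first = generate(prompt)
    revision_prompt = (prompt
                       + "\nThere may be an error in the answer above. Please revise it.\n"
                       + first)
    second = generate(revision_prompt)
    r1 = float(is_correct(prompt, first))
    r2 = float(is_correct(prompt, second))
    # The bonus term rewards genuine correction from attempt 1 to attempt 2,
    # discouraging the policy from simply repeating its first answer.
    return r2 + bonus * (r2 - r1)
```

In this toy formulation, a policy that only ever copies its first attempt earns no bonus, while one that turns a wrong first attempt into a correct second attempt is rewarded extra, which mirrors the abstract's description of using a reward bonus to amplify self-correction.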