Publications

GAGE: Genetic Algorithm-Based Graph Explainer for Malware Analysis
Mohd Saqib
Philippe Charland
Andrew Walenstein
Malware analysts often prefer reverse engineering using Call Graphs, Control Flow Graphs (CFGs), and Data Flow Graphs (DFGs), which involves the use of black-box Deep Learning (DL) models. The proposed research introduces a structured pipeline for reverse engineering-based analysis, offering promising results compared to state-of-the-art methods and providing high-level interpretability for malicious code blocks in subgraphs. We propose the Canonical Executable Graph (CEG) as a new representation of Portable Executable (PE) files, uniquely incorporating syntactical and semantic information into its node embeddings. At the same time, edge features capture structural aspects of PE files. This is the first work to present a PE file representation encompassing syntactical, semantic, and structural characteristics, whereas previous efforts typically focused solely on syntactic or structural properties. Furthermore, recognizing the limitations of existing graph explanation methods within Explainable Artificial Intelligence (XAI) for malware analysis, primarily due to the specificity of malicious files, we introduce the Genetic Algorithm-based Graph Explainer (GAGE). GAGE operates on the CEG, striving to identify a precise subgraph relevant to predicted malware families. Through experiments and comparisons, our proposed pipeline exhibits substantial improvements in model robustness scores and discriminative power compared to previous benchmarks. Furthermore, we have successfully used GAGE in practical applications on real-world data, producing meaningful insights and interpretability. This research offers a robust solution to enhance cybersecurity by delivering a transparent and accurate understanding of malware behaviour. Moreover, the proposed algorithm specializes in handling graph-based data, effectively dissecting complex content and isolating influential nodes.
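The abstract describes GAGE only at a high level. As a loose, non-authoritative illustration of the general idea of evolving candidate subgraphs with a genetic algorithm (not the authors' implementation: the binary node-mask encoding, fitness penalty, and hyperparameters below are assumptions), a minimal Python sketch might look like this:

```python
# Minimal genetic-algorithm sketch for selecting an "explanatory" subgraph.
# This is NOT the GAGE implementation; the fitness function, graph encoding,
# and hyperparameters are illustrative assumptions only.
import random

def fitness(mask, scorer):
    """Score a candidate subgraph (binary node mask) with a user-supplied
    relevance function, penalizing large subgraphs to keep explanations small."""
    return scorer(mask) - 0.01 * sum(mask)

def evolve(num_nodes, scorer, pop_size=50, generations=100, mut_rate=0.05):
    # Random initial population of node masks.
    pop = [[random.randint(0, 1) for _ in range(num_nodes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, scorer), reverse=True)
        survivors = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, num_nodes)       # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation.
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, scorer))

# Toy usage: pretend nodes 2, 5, and 7 are the "malicious" code blocks.
best_mask = evolve(10, scorer=lambda m: m[2] + m[5] + m[7])
print(best_mask)
```

In GAGE itself, candidates would be subgraphs of the CEG and the fitness would be tied to the model's prediction for the malware family; the toy scorer above merely rewards a fixed set of nodes.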
Globally Stable Neural Imitation Policies
Amin Abyaneh
Mariana Sosa Guzmán
TEMPLATES: Characterization of a Merger in the Dusty Lensing SPT0418-47 System
Jared Cathey
Anthony H. Gonzalez
Sidney Lower
Kedar A. Phadke
Justin Spilker
Manuel Aravena
Matthew Bayliss
Jack E. Birkin
Simon Birrer
Scott Chapman
Håkon Dahle
Christopher C. Hayward
Ryley Hill
Taylor A. Hutchison
Keunho J. Kim
Guillaume Mahler
Daniel P. Marrone
Desika Narayanan
Alexander Navarre …
Cassie Reuter
Jane R. Rigby
Keren Sharon
Manuel Solimano
Nikolaus Sulzenauer
Joaquin Vieira
David Vizgan
The 1st International Workshop on Graph Foundation Models (GFM)
Haitao Mao
Jianan Zhao
Xiaoxin He
Zhikai Chen
Qian Huang
Zhaocheng Zhu
Michael Bronstein
Xavier Bresson
Bryan Hooi
Haiyang Zhang
Xianfeng Tang
Luo Chen
Jiliang Tang
An AI-Resilient Text Rendering Technique for Reading and Skimming Documents
Ziwei Gu
Kenneth Li
Jonathan K. Kummerfeld
Elena L. Glassman
ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing
Chelse Swoopes
Priyan Vaithilingam
Martin Wattenberg
Elena L. Glassman
Evaluating outputs of large language models (LLMs) is challenging, requiring making -- and making sense of -- many responses. Yet tools that go beyond basic prompting tend to require knowledge of programming APIs, focus on narrow domains, or are closed-source. We present ChainForge, an open-source visual toolkit for prompt engineering and on-demand hypothesis testing of text generation LLMs. ChainForge provides a graphical interface for comparison of responses across models and prompt variations. Our system was designed to support three tasks: model selection, prompt template design, and hypothesis testing (e.g., auditing). We released ChainForge early in its development and iterated on its design with academics and online users. Through in-lab and interview studies, we find that a range of people could use ChainForge to investigate hypotheses that matter to them, including in real-world settings. We identify three modes of prompt engineering and LLM hypothesis testing: opportunistic exploration, limited evaluation, and iterative refinement.
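ChainForge is a graphical tool, so the sketch below is not its API; it is only a minimal plain-Python illustration of the underlying idea of crossing prompt templates, input values, and models into a comparison grid. The `query_model` stub and all names are hypothetical placeholders.

```python
# Conceptual sketch of a prompt-comparison grid: every combination of
# template x input x model is queried so responses can be compared side by side.
from itertools import product

def query_model(model: str, prompt: str) -> str:
    # Placeholder: swap in a real client call to whichever LLM API you use.
    return f"[{model}] response to: {prompt!r}"

templates = ["Summarize: {text}", "In one sentence, what is {text} about?"]
inputs = ["the history of tea", "gradient descent"]
models = ["model-a", "model-b"]

results = []
for template, text, model in product(templates, inputs, models):
    prompt = template.format(text=text)
    results.append({"model": model, "template": template,
                    "input": text, "response": query_model(model, prompt)})

# Inspect responses grouped in a flat table to compare models and templates.
for row in results:
    print(f"{row['model']:8} | {row['template'][:25]:25} | {row['response']}")
```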
Designing and Evaluating Dialogue LLMs for Co-Creative Improvised Theatre
Boyd Branch
Piotr Mirowski
Sophia Ppali
Alexandra Covaci
Social robotics researchers are increasingly interested in multi-party trained conversational agents. With a growing demand for real-world evaluations, our study presents Large Language Models (LLMs) deployed in a month-long live show at the Edinburgh Festival Fringe. This case study investigates human improvisers co-creating with conversational agents in a professional theatre setting. We explore the technical capabilities and constraints of on-the-spot multi-party dialogue, providing comprehensive insights from both audience and performer experiences with AI on stage. Our human-in-the-loop methodology underlines the challenges these LLMs face in generating context-relevant responses, stressing the user interface's crucial role. Audience feedback indicates an evolving interest in AI-driven live entertainment, direct human-AI interaction, and a diverse range of expectations about AI's conversational competence and utility as a creativity support tool. Human performers express immense enthusiasm and varied satisfaction, while evolving public opinion highlights mixed emotions about AI's role in the arts.
Calibration‐free parallel transmission of the cervical, thoracic, and lumbar spinal cord at 7T
Christoph S. Aigner
Manuel F. Sánchez Alarcon
Alexandre D'Astous
Eva Alonso‐Ortiz
Sebastian Schmitter
Repeat it without me: Crowdsourcing the T1 mapping common ground via the ISMRM reproducibility challenge.
Mathieu Boudreau
Agah Karakuzu
Ecem Bozkurt
Madeline Carr
Marco Castellaro
Luis Concha
Mariya Doneva
Seraina A. Dual
Alex Ensworth
Alexandru Foias
Véronique Fortier
Refaat E. Gabr
Guillaume Gilbert
Carri K. Glide‐Hurst
Matthew Grech‐Sollars
Siyuan Hu
Oscar Jalnefjord
Jorge Jovicich
Kübra Keskin …
Peter Koken
Anastasia Kolokotronis
Simran Kukran
Nam G. Lee
Ives R. Levesque
Bochao Li
Dan Ma
Burkhard Mädler
Nyasha G. Maforo
Jamie Near
Erick Pasaye
Alonso Ramirez‐Manzanares
Ben Statton
Christian Stehning
Stefano Tambalo
Ye Tian
Chenyang Wang
Kilian Weiss
Niloufar Zakariaei
Shuo Zhang
Ziwei Zhao
Nikola Stikov
PURPOSE T1 mapping is a widely used quantitative MRI technique, but its tissue-specific values remain inconsistent across protocols, sites, and vendors. The ISMRM Reproducible Research and Quantitative MR study groups jointly launched a challenge to assess the reproducibility of a well-established inversion-recovery T1 mapping technique, using acquisition details from a seminal T1 mapping paper on a standardized phantom and in human brains. METHODS The challenge used the acquisition protocol from Barral et al. (2010). Researchers collected T1 mapping data on the ISMRM/NIST phantom and/or in human brains. Data submission, pipeline development, and analysis were conducted using open-source platforms. Intersubmission and intrasubmission comparisons were performed. RESULTS Eighteen submissions (39 phantom and 56 human datasets) on scanners by three MRI vendors were collected at 3 T (except one, at 0.35 T). The mean coefficient of variation was 6.1% for intersubmission phantom measurements, and 2.9% for intrasubmission measurements. For humans, the intersubmission/intrasubmission coefficient of variation was 5.9/3.2% in the genu and 16/6.9% in the cortex. An interactive dashboard for data visualization was also developed: https://rrsg2020.dashboards.neurolibre.org. CONCLUSION The T1 intersubmission variability was twice as high as the intrasubmission variability in both phantoms and human brains, indicating that the acquisition details in the original paper were insufficient to reproduce a quantitative MRI protocol. This study reports the inherent uncertainty in T1 measures across independent research groups, bringing us one step closer to a practical clinical baseline of T1 variations in vivo.
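For readers unfamiliar with the summary statistic reported above, the coefficient of variation is simply the standard deviation divided by the mean. A minimal sketch of an inter-submission comparison, using made-up T1 values rather than data from the challenge, might look like this:

```python
# Coefficient of variation (CoV = std / mean, in percent), the summary
# statistic reported in the challenge. The T1 values below are hypothetical
# placeholders, not measurements from the study.
import statistics

def coefficient_of_variation(values):
    """Return the CoV in percent: standard deviation divided by the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical mean T1 (ms) of the same phantom sphere from several submissions.
t1_per_submission = [1820.0, 1905.0, 1776.0, 1850.0, 1912.0]
print(f"Inter-submission CoV: {coefficient_of_variation(t1_per_submission):.1f}%")
```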
Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
David Dalrymple
Joar Max Viktor Skalse
Stuart Russell
Max Tegmark
Sanjit A. Seshia
Steve Omohundro
Christian Szegedy
Ben Goldhaber
Nora Ammann
Alessandro Abate
Joe Halpern
Clark Barrett
Ding Zhao
Zhi-Xuan Tan
Jeannette Wing
Joshua B. Tenenbaum
Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.
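The framework is conceptual rather than code, but its three-component decomposition can be made concrete as a type-level sketch. The interfaces and names below are illustrative assumptions, not an API proposed by the paper:

```python
# Type-level sketch of the three components named in the abstract:
# world model, safety specification, and verifier. Interfaces are assumptions.
from dataclasses import dataclass
from typing import Protocol

class WorldModel(Protocol):
    def predict_effects(self, action: str) -> dict:
        """Mathematical description of how an action affects the outside world."""

class SafetySpecification(Protocol):
    def is_acceptable(self, effects: dict) -> bool:
        """Mathematical description of which effects are acceptable."""

@dataclass
class ProofCertificate:
    holds: bool
    evidence: str  # auditable justification, e.g., a proof trace

class Verifier(Protocol):
    def certify(self, model: WorldModel, spec: SafetySpecification) -> ProofCertificate:
        """Produce an auditable certificate that the system satisfies the
        safety specification relative to the world model."""
```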
Interpretability Needs a New Paradigm
Andreas Madsen
Himabindu Lakkaraju
Quantifying neurodegeneration of the cervical cord and brain in degenerative cervical myelopathy: A multicentre study using quantitative magnetic resonance imaging
Patrick Freund
Viveka Boller
Tim M. Emmenegger
Muhammad Akbar
Markus Hupp
Nikolai Pfender
Claudia A. M. Gandini Wheeler-Kingshott
Michael G. Fehlings
Armin Curt
Maryam Seif