Publications

Chronosymbolic Learning: Efficient CHC Solving with Symbolic Reasoning and Inductive Learning
Ziyan Luo
Solving Constrained Horn Clauses (CHCs) is a fundamental challenge behind a wide range of verification and analysis tasks. Data-driven approaches show great promise in improving CHC solving without the painstaking manual effort of creating and tuning various heuristics. However, a large performance gap exists between data-driven CHC solvers and symbolic reasoning-based solvers. In this work, we develop a simple but effective framework, "Chronosymbolic Learning", which unifies symbolic information and numerical data points to solve a CHC system efficiently. We also present a simple instance of Chronosymbolic Learning with a data-driven learner and a BMC-styled reasoner. Despite its great simplicity, experimental results show the efficacy and robustness of our tool. It outperforms state-of-the-art CHC solvers on a dataset consisting of 288 benchmarks, including many instances with non-linear integer arithmetic.
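The interplay the abstract describes, a learner that generalises from numerical data points and a reasoner that checks the result symbolically, can be illustrated with a toy example. The following is a hypothetical sketch, not the authors' implementation: a candidate invariant for the loop `x := 0; while x < 5: x := x + 1; assert x <= 5` is proposed from positive/negative state samples and then checked for inductiveness.

```python
# Hypothetical sketch in the spirit of Chronosymbolic Learning:
# numerical data points guide a learner, a simple reasoner checks
# the learned candidate invariant. All names here are illustrative.

def reachable_states():
    """Positive samples: states the loop actually reaches."""
    x, states = 0, [0]
    while x < 5:
        x += 1
        states.append(x)
    return states

def bad_states():
    """Negative samples: states that would violate the assertion x <= 5."""
    return [6, 7, 100]

def learn_bound(pos, neg):
    """Learner: propose the smallest upper bound covering all positives."""
    return max(pos)

def check_inductive(bound):
    """Reasoner: check that x <= bound is preserved by the loop body."""
    return all(x + 1 <= bound for x in range(bound) if x < 5)

pos, neg = reachable_states(), bad_states()
bound = learn_bound(pos, neg)
assert all(x <= bound for x in pos) and all(x > bound for x in neg)
assert check_inductive(bound)  # x <= 5 is inductive for this loop
```

In a real CHC solver the reasoner would be an SMT-backed engine rather than this enumeration, but the division of labour, data points proposing a hypothesis and symbolic reasoning validating it, is the one the abstract describes.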
Cone-Traced Supersampling for Signed Distance Field Rendering
Andrei Chubarau
Yangyang Zhao
Ruby Rao
Paul Kry
While Signed Distance Fields (SDFs) in theory offer an infinite level of detail, they are typically rendered using the sphere tracing algorithm at finite resolutions, which causes the common rasterized image synthesis problem of aliasing. Most existing optimized antialiasing solutions rely on polygon mesh representations; SDF-based geometry can only be directly antialiased with computationally expensive supersampling or with post-processing filters that often lead to undesirable blurriness and ghosting. In this work, we present cone-traced supersampling (CTSS), an efficient and robust spatial antialiasing solution that naturally complements the sphere tracing algorithm, does not require casting additional rays per pixel or offline pre-filtering, and can be easily implemented in existing real-time SDF renderers. CTSS performs supersampling along the traced ray near surfaces with partial visibility, identified by evaluating cone intersections within a pixel's view frustum. We further devise a specialized sampling strategy to minimize the number of shading computations and aggregate the collected samples based on their correlated visibility. Depending on configuration, CTSS incurs roughly 15-30% added computational cost and significantly outperforms conventional supersampling approaches while offering comparable antialiasing and visual image quality for most geometric edges.
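For readers unfamiliar with the base algorithm CTSS extends, here is a minimal, illustrative sphere tracer (not the paper's renderer): at each step the SDF value gives a safe distance the ray can advance without skipping past any surface.

```python
# Minimal sphere tracing sketch over a single sphere SDF.
# All parameter values are illustrative defaults.

def sdf_sphere(p, center=(0.0, 0.0, 5.0), r=1.0):
    """Signed distance from point p to a sphere of radius r."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 - r

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    """March along the ray; each SDF value is a collision-free step size."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t   # hit: within eps of the surface
        t += d         # safe step: no surface closer than d
    return None        # miss within the step budget
```

A ray from the origin along +z hits the unit sphere centred at depth 5 at `t ≈ 4.0`. CTSS augments this loop by detecting, via cone intersections, where the ray passes close enough to a surface that the pixel is only partially covered, and supersampling there.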
Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP
Anya Belz
Craig Thomson
Ehud Reiter
Gavin Abercrombie
Jose M. Alonso-Moral
Mohammad Arvan
Mark Cieliebak
Elizabeth Clark
Kees Van Deemter
Tanvi Dinkar
Ondřej Dušek
Steffen Eger
Qixiang Fang
Albert Gatt
Dimitra Gkatzia
Javier González-Corbelle
Dirk Hovy
Manuela Hürlimann
Takumi Ito
John D. Kelleher
Filip Klubička
Huiyuan Lai
Chris van der Lee
Emiel van Miltenburg
Yiru Li
Saad Mahamood
Margot Mieskes
Malvina Nissim
Natalie Paige Parde
Ondřej Plátek
Verena Teresa Rieser
Pablo Mosteiro Romero
Joel Tetreault
Antonio Toral
Xiao-Yi Wan
Leo Wanner
Lewis Joshua Watson
Diyi Yang
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more or less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable information, to be considered for reproduction, and that all but one of the experiments we selected for reproduction were found to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding, that the great majority of human evaluations in NLP are not repeatable and/or not reproducible and/or too flawed to justify reproduction, paints a dire picture, but presents an opportunity for a rethink about how to design and report human evaluations in NLP.
On the incompatibility of accuracy and equal opportunity
Carlos Pinzón
Catuscia Palamidessi
Frank Valencia
Modeling Eye Gaze to Videos Using Dynamic Trajectory Variability Analysis
Qianying Wu
Na Yeon Kim
Jasmin Turner
Umit Keles
Lynn Paul
Ralph Adolphs
ArK: Augmented Reality with Knowledge Interactive Emergent Ability
Qiuyuan Huang
J. Park
Abhinav Gupta
Pan Lu
Paul N. Bennett
Ran Gong
Subhojit Som
Baolin Peng
Owais Khan Mohammed
Yejin Choi
Jianfeng Gao
Despite the growing adoption of mixed reality and interactive AI agents, it remains challenging for these systems to generate high-quality 2D/3D scenes in unseen environments. The common practice requires deploying an AI agent to collect large amounts of data for model training for every new task. This process is costly, or even impossible, for many domains. In this study, we develop an infinite agent that learns to transfer knowledge memory from general foundation models (e.g. GPT4, DALLE) to novel domains or scenarios for scene understanding and generation in the physical or virtual world. The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK), which leverages knowledge memory to generate scenes in unseen physical-world and virtual reality environments. The knowledge interactive emergent ability (Figure 1) is demonstrated as the observation learns i) micro-action of cross-modality: in multi-modality models, to collect a large amount of relevant knowledge memory data for each interaction task (e.g., unseen scene understanding) from the physical reality; and ii) macro-behavior of reality-agnostic: in mixed-reality environments, to improve interactions that tailor to different characterized roles, target variables, collaborative information, and so on. We validate the effectiveness of ArK on scene generation and editing tasks. We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes compared to baselines, demonstrating the potential benefit of incorporating ArK in generative AI for applications such as metaverse and gaming simulation.
Bird Distribution Modelling using Remote Sensing and Citizen Science data
Mélisande Teng
Amna Elmustafa
Benjamin Akera
Combining Parameter-efficient Modules for Task-level Generalisation
CryCeleb: A Speaker Verification Dataset Based on Infant Cry Sounds
David Budaghyan
Arsenii Gorin
Charles Onu
This paper describes the Ubenwa CryCeleb dataset - a labeled collection of infant cries - and the accompanying CryCeleb 2023 task, which is a public speaker verification challenge based on cry sounds. We released more than 6 hours of manually segmented cry sounds from 786 newborns for academic use, aiming to encourage research in infant cry analysis. The inaugural public competition attracted 59 participants, 11 of whom improved on the baseline performance. The top-performing system achieved a significant improvement, scoring a 25.8% equal error rate, which is still far from the performance of state-of-the-art adult speaker verification systems. We therefore believe there is room for further research on this dataset, potentially extending beyond the verification task.
Distinct Social Behavior and Inter-Brain Connectivity in Dyads with autistic individuals
Quentin Moreau
Florence Brun
Anaël Ayrolles
Jacqueline Nadel
Embracing Channel Estimation in Multi-Packet Reception of ZigBee
Zhe Wang
Linghe Kong
Guihai Chen
As a low-power and low-cost wireless protocol, the promising ZigBee has been widely used in sensor networks and cyber-physical systems. Since ZigBee-based networks usually adopt tree or cluster topologies, convergecast scenarios are common, in which multiple transmitters send packets to one receiver, leading to a severe collision problem. Conventional ZigBee adopts carrier sense multiple access with collision avoidance to avoid collisions, which introduces additional time/energy overhead. State-of-the-art methods resolve collisions instead of avoiding them: mZig decomposes a collision by the collision itself, while reZig decodes a collision by comparing it against reference waveforms. However, mZig suffers high decoding errors by exploiting only signal amplitudes, while reZig incurs high computational complexity for waveform comparison. In this paper, we propose CmZig to embrace channel estimation in multi-packet reception (MPR) of ZigBee, which effectively improves MPR via the lightweight computation used for channel estimation and collision decomposition. First, CmZig enables accurate collision decomposition with low computational complexity, using estimated channel parameters that model both signal amplitudes and phases. Second, CmZig adopts reference waveform comparison only for collisions without chip-level time offsets, instead of a complex machine-learning-based method. We implement CmZig on USRP-N210 and establish a six-node testbed. Results show that CmZig achieves a bit error rate in the order of