Publications

Single-cell analysis reveals inflammatory interactions driving macular degeneration
Manik Kuchroo
Marcello DiStasio
Eric Song
Eda Calapkulu
Le Zhang
Maryam Ige
Amar H. Sheth
Abdelilah Majdoubi
Madhvi Menon
Alexander Tong
Abhinav Godavarthi
Yu Xing
Scott Gigante
Holly Steach
Jessie Huang
Je-chun Huang
Guillaume Huguet
Janhavi Narain
Kisung You
George Mourgkos
Rahul M. Dhodapkar
Matthew Hirn
Bastian Rieck
Smita Krishnaswamy
Brian P. Hafler
Automated Detection of Anatomical Landmarks During Colonoscopy Using a Deep Learning Model
Mahsa Taghiakbari
Sina Hamidi Ghalehjegh
Emmanuel Jehanno
Tess Berthier
Lisa Di Jorio
Saber Ghadakzadeh
Alan Barkun
Mark Takla
Mickael Bouin
Eric Deslandres
Simon Bouchard
Sacha Sidani
Daniel von Renteln
Background and aims: Identification and photo-documentation of the ileocecal valve (ICV) and appendiceal orifice (AO) confirm completeness of colonoscopy examinations. We aimed to develop and test a deep convolutional neural network (DCNN) model that can automatically identify ICV and AO, and differentiate these landmarks from normal mucosa and colorectal polyps. Methods: We prospectively collected annotated full-length colonoscopy videos of 318 patients undergoing outpatient colonoscopies. We created three nonoverlapping training, validation, and test data sets with 25,444 unaltered frames extracted from the colonoscopy videos showing four landmarks/image classes (AO, ICV, normal mucosa, and polyps). A DCNN classification model was developed, validated, and tested in separate data sets of images containing the four different landmarks. Results: After training and validation, the DCNN model could identify both AO and ICV in 18 out of 21 patients (85.7%). The accuracies of the model for differentiating AO from normal mucosa and ICV from normal mucosa were 86.4% (95% CI 84.1% to 88.5%) and 86.4% (95% CI 84.1% to 88.6%), respectively. Furthermore, the accuracy of the model for differentiating polyps from normal mucosa was 88.6% (95% CI 86.6% to 90.3%). Conclusion: This model offers a novel tool to assist endoscopists with automated identification of AO and ICV during colonoscopy. The model can reliably distinguish these anatomical landmarks from normal mucosa and colorectal polyps. It can be implemented into automated colonoscopy report generation, photo-documentation, and quality auditing solutions to improve colonoscopy reporting quality.
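The core of the method is a four-class frame classifier (AO, ICV, normal mucosa, polyp). Below is a minimal transfer-learning sketch of such a classifier; the ResNet-50 backbone, input size, and optimiser settings are assumptions for illustration only, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Four-class frame classifier (AO, ICV, normal mucosa, polyp).
# The backbone is a placeholder: the paper does not name its architecture,
# so an ImageNet-pretrained ResNet-50 is assumed purely for illustration.
NUM_CLASSES = 4
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step over a batch of video frames shaped (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with a random batch standing in for extracted colonoscopy frames.
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, NUM_CLASSES, (8,)))
```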
Cone-Traced Supersampling for Signed Distance Field Rendering
Andrei Chubarau
Yangyang Zhao
Ruby Rao
Paul Kry
While Signed Distance Fields (SDFs) in theory offer an infinite level of detail, they are typically rendered using the sphere tracing algorithm at finite resolutions, which causes the common rasterized image synthesis problem of aliasing. Most existing optimized antialiasing solutions rely on polygon mesh representations; SDF-based geometry can only be directly antialiased with computationally expensive supersampling or with post-processing filters that often lead to undesirable blurriness and ghosting. In this work, we present cone-traced supersampling (CTSS), an efficient and robust spatial antialiasing solution that naturally complements the sphere tracing algorithm, does not require casting additional rays per pixel or offline pre-filtering, and can be easily implemented in existing real-time SDF renderers. CTSS performs supersampling along the traced ray near surfaces with partial visibility, identified by evaluating cone intersections within a pixel's view frustum. We further devise a specialized sampling strategy to minimize the number of shading computations and aggregate the collected samples based on their correlated visibility. Depending on configuration, CTSS incurs roughly 15-30% added computational cost and significantly outperforms conventional supersampling approaches while offering comparable antialiasing and visual image quality for most geometric edges.
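The key mechanism is easiest to see in the sphere-tracing loop itself: when the distance-field value at the current step falls inside the pixel's view cone, the surface only partially covers the pixel, so CTSS gathers additional samples along the same ray rather than casting new rays. The sketch below illustrates that cone-footprint test under simplifying assumptions (a single analytic SDF, a crude coverage heuristic); it is not the authors' renderer.

```python
import numpy as np

def sphere_trace_ctss(sdf, origin, direction, pixel_half_angle,
                      t_max=100.0, eps=1e-4, max_steps=256):
    """Sphere tracing with a cone-footprint partial-visibility test (illustrative only).

    sdf: callable returning the signed distance at a 3-D point.
    Whenever the SDF value drops below the cone radius at the current depth,
    the surface intersects the pixel's view cone, so a sample with an
    approximate coverage weight is recorded along the ray.
    """
    t = 0.0
    partial_samples = []                                 # (depth, coverage) pairs
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        cone_radius = t * np.tan(pixel_half_angle)       # pixel footprint at depth t
        if d < eps:                                      # full hit: surface reached
            return t, partial_samples
        if d < cone_radius:                              # partial visibility
            partial_samples.append((t, 1.0 - d / max(cone_radius, eps)))
        t += d                                           # standard sphere-tracing step
        if t > t_max:
            break
    return None, partial_samples                         # miss; keep silhouette samples

# Example: a ray grazing the silhouette of a unit sphere collects partial samples.
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
hit, samples = sphere_trace_ctss(unit_sphere,
                                 origin=np.array([0.0, 1.001, -3.0]),
                                 direction=np.array([0.0, 0.0, 1.0]),
                                 pixel_half_angle=np.radians(0.05))
print(hit, len(samples))
```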
Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP
Anya Belz
Craig Thomson
Ehud Reiter
Gavin Abercrombie
Jose M. Alonso-Moral
Mohammad Arvan
Mark Cieliebak
Elizabeth Clark
Kees Van Deemter
Tanvi Dinkar
Ondrej Dusek
Steffen Eger
Qixiang Fang
Albert Gatt
Dimitra Gkatzia
Javier González-Corbelle
Dirk Hovy
Manuela Hurlimann
Takumi Ito
John D. Kelleher
Filip Klubicka
Huiyuan Lai
Chris van der Lee
Emiel van Miltenburg
Yiru Li
Saad Mahamood
Margot Mieskes
Malvina Nissim
Natalie Paige Parde
Ondřej Plátek
Verena Teresa Rieser
Pablo Mosteiro Romero
Joel Tetreault
Antonio Toral
Xiao-Yi Wan
Leo Wanner
Lewis Joshua Watson
Diyi Yang
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable information, to be considered for reproduction, and that all but one of the experiments we selected for reproduction was discovered to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding that the great majority of human evaluations in NLP is not repeatable and/or not reproducible and/or too flawed to justify reproduction, paints a dire picture, but presents an opportunity for a rethink about how to design and report human evaluations in NLP.
On the incompatibility of accuracy and equal opportunity
Carlos Pinzón
Catuscia Palamidessi
Frank Valencia
156. Modeling Eye Gaze to Videos Using Dynamic Trajectory Variability Analysis
Qianying Wu
Na Yeon Kim
Jasmin Turner
Umit Keles
Lynn Paul
Ralph Adolphs
ArK: Augmented Reality with Knowledge Interactive Emergent Ability
Qiuyuan Huang
J. Park
Abhinav Gupta
Pan Lu
Paul N. Bennett
Ran Gong
Subhojit Som
Baolin Peng
Owais Khan Mohammed
Yejin Choi
Jianfeng Gao
Despite the growing adoption of mixed reality and interactive AI agents, it remains challenging for these systems to generate high-quality 2D/3D scenes in unseen environments. The common practice requires deploying an AI agent to collect large amounts of data for model training for every new task. This process is costly, or even impossible, for many domains. In this study, we develop an infinite agent that learns to transfer knowledge memory from general foundation models (e.g. GPT4, DALLE) to novel domains or scenarios for scene understanding and generation in the physical or virtual world. The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK), which leverages knowledge-memory to generate scenes in unseen physical world and virtual reality environments. The knowledge interactive emergent ability (Figure 1) is demonstrated as the observation learns i) micro-action of cross-modality: in multi-modality models to collect a large amount of relevant knowledge memory data for each interaction task (e.g., unseen scene understanding) from the physical reality; and ii) macro-behavior of reality-agnostic: in mixed-reality environments to improve interactions that tailor to different characterized roles, target variables, collaborative information, and so on. We validate the effectiveness of ArK on the scene generation and editing tasks. We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes, compared to baselines, demonstrating the potential benefit of incorporating ArK in generative AI for applications such as metaverse and gaming simulation.
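As a loose illustration of the knowledge-memory idea described above, the sketch below retrieves stored knowledge entries that best match a new observation and uses them to condition scene generation. Every name here (MemoryEntry, KnowledgeMemory, embed, generate_scene) is a hypothetical stand-in: the paper's actual interfaces, prompts, and foundation-model calls are not described in the abstract.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MemoryEntry:
    key: np.ndarray      # embedding of the situation the knowledge applies to
    knowledge: str       # text assumed to be distilled from a foundation model

class KnowledgeMemory:
    def __init__(self, entries):
        self.entries = entries

    def retrieve(self, query_vec, k=3):
        """Return the k entries whose keys are most similar to the query (cosine)."""
        sims = [float(np.dot(e.key, query_vec) /
                      (np.linalg.norm(e.key) * np.linalg.norm(query_vec)))
                for e in self.entries]
        return [self.entries[i] for i in np.argsort(sims)[::-1][:k]]

def embed(text):
    """Placeholder embedding (hash-seeded random vector) standing in for a real encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

def generate_scene(observation, retrieved):
    """Placeholder for a generative-model call conditioned on retrieved knowledge."""
    context = "; ".join(e.knowledge for e in retrieved)
    return f"scene for '{observation}' conditioned on: {context}"

memory = KnowledgeMemory([
    MemoryEntry(embed("kitchen layout"), "counters line the walls, sink under a window"),
    MemoryEntry(embed("office layout"), "desks face the door, whiteboard on the long wall"),
])
# The query reuses a stored key text because the placeholder embedding is not semantic.
print(generate_scene("unseen kitchen in VR", memory.retrieve(embed("kitchen layout"), k=1)))
```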
Bird Distribution Modelling using Remote Sensing and Citizen Science data
Mélisande Teng
Amna Elmustafa
Benjamin Akera
Combining Parameter-efficient Modules for Task-level Generalisation
Distinct Social Behavior and Inter-Brain Connectivity in Dyads with autistic individuals
Quentin Moreau
Florence Brun
Anaël Ayrolles
Jacqueline Nadel
Embracing Channel Estimation in Multi-Packet Reception of ZigBee
Zhe Wang
Linghe Kong
Guihai Chen
As a low-power and low-cost wireless protocol, the promising ZigBee has been widely used in sensor networks and cyber-physical systems. Since ZigBee based networks usually adopt tree or cluster topology, the convergecast scenarios are common in which multiple transmitters send packets to one receiver, leading to the severe collision problem. The conventional ZigBee adopts carrier sense multiple access with collision avoidance to avoid collisions, which introduces additional time/energy overhead. The state-of-the-art methods resolve collisions instead of avoiding them, in which mZig decomposes a collision by the collision itself and reZig decodes a collision by comparing with reference waveforms. However, mZig suffers high decoding errors by exploiting only the signal amplitudes, while reZig incurs high computational complexity for waveform comparison. In this paper, we propose CmZig to embrace channel estimation in multiple-packet reception (MPR) of ZigBee, which effectively improves MPR via lightweight computing used for channel estimation and collision decomposition. First, CmZig enables accurate collision decomposition with low computational complexity, using the estimated channel parameters modeling both signal amplitudes and phases. Second, CmZig adopts reference waveform comparison only for collisions without chip-level time offsets, instead of the complex machine learning based method. We implement CmZig on USRP-N210 and establish a six-node testbed. Results show that CmZig achieves a bit error rate in the order of
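A minimal way to see why modeling both amplitude and phase helps is the generic sketch below: a complex channel tap is estimated from a collision-free preamble, the stronger packet is decoded coherently and subtracted, and the weaker packet is then recovered from the residual. This is a plain successive-interference-cancellation illustration with made-up chip sequences and packet offsets, not the CmZig algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, offset = 256, 32                      # packet B assumed to arrive 32 chips after A

a = rng.choice([-1.0, 1.0], n)           # chips of packet A (+/-1)
b = rng.choice([-1.0, 1.0], n)           # chips of packet B
h_a = 1.0 * np.exp(1j * 0.3)             # true complex channel (amplitude and phase) of A
h_b = 0.4 * np.exp(1j * 1.9)             # true complex channel of B
rx = 0.05 * (rng.standard_normal(n + offset) + 1j * rng.standard_normal(n + offset))
rx[:n] += h_a * a                        # A occupies chips [0, n)
rx[offset:offset + n] += h_b * b         # B overlaps A from chip `offset` onward

def estimate_channel(rx_seg, known_chips):
    """Least-squares estimate of a single complex channel tap (flat fading assumed)."""
    return np.vdot(known_chips, rx_seg) / np.vdot(known_chips, known_chips)

# A's first `offset` chips are collision-free, so its channel is estimated there.
h_a_hat = estimate_channel(rx[:offset], a[:offset])

# Decode A coherently, re-modulate it with the estimated channel, and subtract it.
a_hat = np.sign(np.real(np.conj(h_a_hat) * rx[:n]))
residual = rx.copy()
residual[:n] -= h_a_hat * a_hat

# The residual is approximately B alone; estimate its channel and decode it too.
h_b_hat = estimate_channel(residual[offset:2 * offset], b[:offset])
b_hat = np.sign(np.real(np.conj(h_b_hat) * residual[offset:offset + n]))

print("chip errors A:", int(np.sum(a_hat != a)),
      "| chip errors B:", int(np.sum(b_hat != b)))
```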