Consistent Training via Energy-Based GFlowNets for Modeling Discrete Joint Distributions
Chanakya Ajit Ekbote
Moksh J. Jain
Payel Das
Generative Flow Networks (GFlowNets) have demonstrated significant performance improvements for generating diverse discrete objects …
Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning
Riashat Islam
Hongyu Zang
Anirudh Goyal
Alex Lamb
Kenji Kawaguchi
Xin Li
Romain Laroche
Remi Tachet des Combes
Global SARS-CoV-2 seroprevalence from January 2020 to April 2022: A systematic review and meta-analysis of standardized population-based studies
Isabel Bergeri
Mairead Whelan
Harriet Ware
Lorenzo Subissi
Anthony Nardone
Hannah C. Lewis
Zihan Li
Xiaomeng Ma
Marta Valenciano
Brianna Cheng
Lubna Al Ariqi
Arash Rashidian
Joseph Okeibunor
Tasnim Azim
Pushpa Wijesinghe
Linh-Vi Le
Aisling Vaughan
Richard Pebody
Andrea Vicari
Tingting Yan
Mercedes Yanes-Lane
Christian Cao
David A. Clifton
Matthew P. Cheng
Jesse Papenburg
Niklas Bobrovitz
Rahul K. Arora
Maria D. Van Kerkhove
Successive-Cancellation Decoding of Reed-Muller Codes With Fast Hadamard Transform
Nghia Doan
Seyyed Ali Hashemi
A novel permuted fast successive-cancellation list decoding algorithm with fast Hadamard transform (FHT-FSCL) is presented. The proposed decoder initializes …
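The abstract above is truncated, but the fast Hadamard transform at the heart of FHT-FSCL is a standard primitive: applied to the channel LLRs of a first-order Reed-Muller constituent code, it scores all codeword hypotheses in O(n log n). Below is a minimal, self-contained sketch; the function name `fht` and the RM(1, m) framing are our illustration, not the paper's code.

```python
import numpy as np

def fht(llr: np.ndarray) -> np.ndarray:
    """Unnormalized fast Hadamard (Walsh) transform in O(n log n).

    For an RM(1, m) constituent code, transforming the channel LLRs
    scores all 2^m first-order codeword hypotheses at once, which is
    the primitive an FHT-based successive-cancellation decoder reuses.
    """
    x = llr.astype(float).copy()
    n = x.shape[0]
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):           # butterfly blocks of width 2h
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # sum / difference butterfly
        h *= 2
    return x

# e.g. fht(np.array([1.0, -2.0, 0.5, 3.0])) -> [2.5, 0.5, -4.5, 5.5]
```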
Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints
Jose Gallego-Posada
Juan Ramirez
Akram Erraqabi
The performance of trained neural networks is robust to harsh levels of pruning. Coupled with the ever-growing size of deep learning models, this observation has motivated extensive research on learning sparse models. In this work, we focus on the task of controlling the level of sparsity when performing sparse learning. Existing methods based on sparsity-inducing penalties involve expensive trial-and-error tuning of the penalty factor, thus lacking direct control of the resulting model sparsity. In response, we adopt a constrained formulation: using the gate mechanism proposed by Louizos et al. (2018), we formulate a constrained optimization problem where sparsification is guided by the training objective and the desired sparsity target in an end-to-end fashion. Experiments on CIFAR-{10, 100}, TinyImageNet, and ImageNet using WideResNet and ResNet{18, 50} models validate the effectiveness of our proposal and demonstrate that we can reliably achieve pre-determined sparsity targets without compromising on predictive performance.
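As a rough illustration of the constrained formulation described in this abstract, the toy example below attaches a relaxed gate to every weight of a linear model and enforces an expected-density target with a Lagrange multiplier updated by dual ascent. It replaces the hard-concrete gates of Louizos et al. (2018) with a simple sigmoid relaxation, and all names (`log_alpha`, `target_density`) are our own, not the authors' code.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(128, 10)
log_alpha = torch.zeros(model.weight.numel(), requires_grad=True)  # one gate logit per weight
lmbda = torch.zeros(())                    # Lagrange multiplier, updated by dual ascent
opt = torch.optim.Adam([*model.parameters(), log_alpha], lr=1e-3)
target_density = 0.10                      # constraint: keep at most ~10% of the weights

def train_step(x, y, dual_lr=1e-2):
    gates = torch.sigmoid(log_alpha).view_as(model.weight)      # relaxed gates in (0, 1)
    logits = F.linear(x, model.weight * gates, model.bias)
    task_loss = F.cross_entropy(logits, y)
    violation = gates.mean() - target_density                   # <= 0 when constraint holds
    lagrangian = task_loss + lmbda * violation                  # lmbda fixed during primal step

    opt.zero_grad()
    lagrangian.backward()
    opt.step()                                                  # descend on weights and gates
    lmbda.add_(dual_lr * violation.detach()).clamp_(min=0.0)    # ascend on the multiplier
    return task_loss.item(), gates.mean().item()

# e.g. train_step(torch.randn(32, 128), torch.randint(0, 10, (32,)))
```

The multiplier grows while the density constraint is violated and shrinks back toward zero once it is met, which is what removes the trial-and-error tuning of a fixed penalty factor.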
MAgNet: Mesh Agnostic Neural PDE Solver
Oussama Boussif
Dan Assouline
The computational complexity of classical numerical methods for solving Partial Differential Equations (PDE) scales significantly as the resolution increases. As an important example, climate predictions require fine spatio-temporal resolutions to resolve all turbulent scales in the fluid simulations. This makes the task of accurately resolving these scales computationally out of reach even with modern supercomputers. As a result, current numerical modelers solve PDEs on grids that are too coarse (3km to 200km on each side), which hinders the accuracy and usefulness of the predictions. In this paper, we leverage the recent advances in Implicit Neural Representations (INR) to design a novel architecture that predicts the spatially continuous solution of a PDE given a spatial position query. By augmenting coordinate-based architectures with Graph Neural Networks (GNN), we enable zero-shot generalization to new non-uniform meshes and long-term predictions up to 250 frames ahead that are physically consistent. Our Mesh Agnostic Neural PDE Solver (MAgNet) is able to make accurate predictions across a variety of PDE simulation datasets and compares favorably with existing baselines. Moreover, MAgNet generalizes well to different meshes and to resolutions up to four times higher than those it was trained on.
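To make the architectural idea in the abstract concrete, here is a heavily simplified reading of the coordinate-based + GNN combination: one message-passing step builds latent features on the (possibly non-uniform) mesh, and an implicit decoder maps a query coordinate plus an interpolated latent to the field value. This is our schematic, with illustrative layer sizes and names, not the MAgNet implementation.

```python
import torch
import torch.nn as nn

class MessagePassing(nn.Module):
    """One GNN step on the mesh graph: sum incoming messages per node."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, edges):
        src, dst = edges                                  # edges: (2, E) index pairs
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)   # aggregate messages at dst nodes
        return h + agg

class ImplicitDecoder(nn.Module):
    """Coordinate-based network: (query position, local latent) -> field value."""
    def __init__(self, coord_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, query_xy, latent):
        return self.net(torch.cat([query_xy, latent], dim=-1))

def query_field(coords, h, query_xy, decoder, k=3):
    # Interpolate the k nearest mesh latents at each query position, so the
    # decoder can be evaluated at points that are not mesh nodes.
    d = torch.cdist(query_xy, coords)               # (Q, N) pairwise distances
    w, idx = torch.topk(-d, k, dim=-1)              # k nearest neighbours
    w = torch.softmax(w, dim=-1)                    # closer nodes get more weight
    latent = (w.unsqueeze(-1) * h[idx]).sum(dim=1)  # (Q, latent_dim)
    return decoder(query_xy, latent)
```

Because the query path only needs coordinates and interpolated latents, nothing in this sketch is tied to the training mesh, which is the property that enables evaluation on new meshes.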
Rethinking Generalization: The Impact of Annotation Style on Medical Image Segmentation
Brennan Nichyporuk
Jillian L. Cardinell
Justin Szeto
Raghav Mehta
Jean-Pierre R. Falet
Douglas Arnold
Sotirios A. Tsaftaris
Generalization is an important attribute of machine learning models, particularly for those that are to be deployed in a medical context, where unreliable predictions can have real world consequences. While the failure of models to generalize across datasets is typically attributed to a mismatch in the data distributions, performance gaps are often a consequence of biases in the "ground-truth" label annotations. This is particularly important in the context of medical image segmentation of pathological structures (e.g. lesions), where the annotation process is much more subjective, and affected by a number of underlying factors, including the annotation protocol, rater education/experience, and clinical aims, among others. In this paper, we show that modeling annotation biases, rather than ignoring them, poses a promising way of accounting for differences in annotation style across datasets. To this end, we propose a generalized conditioning framework to (1) learn and account for different annotation styles across multiple datasets using a single model, (2) identify similar annotation styles across different datasets in order to permit their effective aggregation, and (3) fine-tune a fully trained model to a new annotation style with just a few samples. Next, we present an image-conditioning approach to model annotation styles that correlate with specific image features, potentially enabling detection biases to be more easily identified.
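As a hedged sketch of what conditioning on annotation style can look like, the toy model below gives each training dataset a learned style embedding that modulates the feature maps of a segmentation backbone FiLM-style (per-channel scale and shift). The paper's exact conditioning mechanism may differ; every name here is illustrative.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Per-channel scale and shift driven by a learned style embedding."""
    def __init__(self, n_styles, channels):
        super().__init__()
        self.gamma = nn.Embedding(n_styles, channels)
        self.beta = nn.Embedding(n_styles, channels)

    def forward(self, feat, style_id):
        g = self.gamma(style_id)[:, :, None, None]  # (B, C, 1, 1)
        b = self.beta(style_id)[:, :, None, None]
        return g * feat + b

class ConditionedSegNet(nn.Module):
    """Minimal segmentation backbone conditioned on an annotation-style id."""
    def __init__(self, n_styles, in_ch=1, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU())
        self.film = FiLM(n_styles, ch)
        self.head = nn.Conv2d(ch, 1, 1)  # per-pixel lesion logit

    def forward(self, x, style_id):
        f = self.film(self.enc(x), style_id)
        return self.head(f)

# At inference, the style id selects which dataset's annotation convention
# the predicted masks should follow:
model = ConditionedSegNet(n_styles=3)
logits = model(torch.randn(2, 1, 64, 64), torch.tensor([0, 2]))
```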
When Do We Need GNN for Node Classification?
Sitao Luan
Chenqing Hua
Qincheng Lu
Jiaqi Zhu
Xiao-Wen Chang
Notational Programming for Notebook Environments: A Case Study with Quantum Circuits
Ian A. Arawjo
Anthony DeArmas
Michael Roberts
Shrutarshi Basu
Tapan S. Parikh
We articulate a vision for computer programming that includes pen-based computing, a paradigm we term notational programming. Notational programming blurs contexts: certain typewritten variables can be referenced in handwritten notation and vice-versa. To illustrate this paradigm, we developed an extension, Notate, to computational notebooks which allows users to open drawing canvases within lines of code. As a case study, we explore quantum programming and designed a notation, Qaw, that extends quantum circuit notation with abstraction features, such as variable-sized wire bundles and recursion. Results from a usability study with novices suggest that users find our core interaction of implicit cross-context references intuitive, but suggest further improvements to debugging infrastructure, interface design, and recognition rates. Throughout, we discuss questions raised by the notational paradigm, including a shift from ‘recognition’ of notations to ‘reconfiguration’ of practices and values around programming, and from ‘sketching’ to writing and drawing, or what we call ‘notating.’
Low-Rank Representation of Reinforcement Learning Policies
We propose a general framework for policy representation for reinforcement learning tasks. This framework involves finding a low-dimensional embedding of the policy on a reproducing kernel Hilbert space (RKHS). The use of RKHS-based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but are very desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that the policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
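As a loose illustration of embedding a policy in an RKHS, the sketch below summarizes a policy by kernel ridge regression coefficients over a small set of anchor states and reconstructs its action probabilities at unseen states. This shows only the flavor of the idea; the paper's construction, and the guarantees it derives, are more involved, and all names here are ours.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of states."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def embed_policy(anchor_states, action_probs, reg=1e-3):
    """RKHS coefficients A solving (K + reg*I) A = action_probs."""
    K = rbf(anchor_states, anchor_states)
    return np.linalg.solve(K + reg * np.eye(len(K)), action_probs)

def reconstruct_policy(states, anchor_states, coeffs):
    """Evaluate the embedded policy at new states; rows are renormalized."""
    P = rbf(states, anchor_states) @ coeffs
    P = np.clip(P, 1e-8, None)
    return P / P.sum(axis=1, keepdims=True)

# Usage: 50 anchor states in R^4, a 3-action policy evaluated on them.
anchors = np.random.randn(50, 4)
probs = np.random.dirichlet(np.ones(3), size=50)   # stand-in for pi(a|s)
A = embed_policy(anchors, probs)                   # (50, 3) coefficient matrix
pi_hat = reconstruct_policy(np.random.randn(5, 4), anchors, A)
```

Here the "low-dimensional" part comes from choosing far fewer anchor states than the environment has states, so the coefficient matrix A is a compact surrogate for the full policy.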