GFlowNet Foundations
Tristan Deleu
Edward J Hu
Salem Lahlou
Mo Tiwari
Generative Flow Networks (GFlowNets) have been introduced as a method to sample a diverse set of candidates in an active learning context, with a training objective that makes them approximately sample in proportion to a given reward function. In this paper, we show a number of additional theoretical properties of GFlowNets. They can be used to estimate joint probability distributions and the corresponding marginal distributions where some variables are unspecified and, of particular interest, can represent distributions over composite objects like sets and graphs. GFlowNets amortize the work typically done by computationally expensive MCMC methods in a single but trained generative pass. They could also be used to estimate partition functions and free energies, conditional probabilities of supersets (supergraphs) given a subset (subgraph), as well as marginal distributions over all supersets (supergraphs) of a given set (graph). We introduce variations enabling the estimation of entropy and mutual information, sampling from a Pareto frontier, connections to reward-maximizing policies, and extensions to stochastic environments, continuous actions, and modular energy functions.
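As background for the "sample in proportion to reward" claim, the sketch below restates the flow-matching conditions at the heart of the GFlowNet framework. This is a standard summary in the notation of the original GFlowNet papers (edge flow F, sink state s_f), not this paper's full development:

```latex
% Flow-matching conditions behind sampling in proportion to reward.
% States form a DAG with source s_0 and sink s_f; F(s -> s') is an edge flow
% and R(x) is the reward attached to a terminating state x.
\begin{align}
  \sum_{s \,:\, s \rightarrow s'} F(s \rightarrow s')
    &= \sum_{s'' \,:\, s' \rightarrow s''} F(s' \rightarrow s'')
    && \text{(inflow = outflow at every interior state } s'\text{)} \\
  F(x \rightarrow s_f) &= R(x)
    && \text{(terminating flow matches the reward)}
\end{align}
% A sampler following the forward policy
%   P_F(s' \mid s) = F(s \rightarrow s') / \sum_{s''} F(s \rightarrow s'')
% then reaches terminal object x with probability R(x)/Z, where Z = \sum_x R(x).
```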
Splitting, Renaming, Removing: A Study of Common Cleaning Activities in Jupyter Notebooks
Helen Dong
Shurui Zhou
Christian Kästner
Data scientists commonly use computational notebooks because they provide a good environment for testing multiple models. However, once the scientist completes the code and finds the ideal model, they must dedicate time to cleaning up the code so that others can easily understand it. In this paper, we perform a qualitative study of how scientists clean their code, in the hope of suggesting a tool to automate this process. Our end goal is for tool builders to address possible gaps and provide additional aid to data scientists, who can then focus on their actual work rather than on routine and tedious cleaning work. By sampling notebooks from GitHub and analyzing changes between subsequent commits, we identified common cleaning activities, such as changes to markdown (e.g., adding header sections or descriptions) and comments (both deleting dead code and adding descriptions), as well as reordering cells. We also found that common cleaning activities differ depending on the intended purpose of the notebook. Our results provide a valuable foundation for tool builders and notebook users, as many of the identified cleaning activities could benefit from codified best practices and dedicated tool support, possibly tailored to the notebook's intended use.
Subtle Bugs Everywhere: Generating Documentation for Data Wrangling Code
Chenyang Yang
Shurui Zhou
Christian Kästner
Data scientists reportedly spend a significant portion of their daily routine on data wrangling, i.e., cleaning data and extracting features. However, data wrangling code is often repetitive and error-prone to write. Moreover, it is easy to introduce subtle bugs when reusing and adapting existing code, which results in reduced model quality. To support data scientists with data wrangling, we present a technique to generate documentation for data wrangling code. We use (1) program synthesis techniques to automatically summarize data transformations and (2) test case selection techniques to purposefully select representative examples from the data, based on execution information collected with tailored dynamic program analysis. We demonstrate that a JupyterLab extension built on our technique can provide on-demand documentation for many cells in popular notebooks, and we find in a user study that users with our extension are faster and more effective at finding realistic bugs in data wrangling code.
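As a purely hypothetical illustration of the target (not the paper's actual tool or output format), consider a small pandas wrangling cell and the kind of transformation summary, with a representative example, that such generated documentation could surface:

```python
import pandas as pd

# A typical data-wrangling cell: parse dates and derive a numeric feature.
df = pd.DataFrame({
    "signup": ["2021-01-03", "2021-02-14", "not a date"],
    "spend": ["10.5", "3.0", "7.25"],
})

# Hypothetical auto-generated documentation for the two transformations below:
#   to_datetime(errors="coerce"): 1 of 3 rows becomes NaT
#     (representative example: "not a date" -> NaT)  <- a silent data loss bug
#   astype(float): all 3 rows convert cleanly.
df["signup"] = pd.to_datetime(df["signup"], errors="coerce")
df["spend"] = df["spend"].astype(float)

print(df.dtypes)
print(df)
```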
ZERO: Playing Mathematical Programming Games
Gabriele Dragotto
S. Sankaranarayanan
Andrea Lodi
Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks
Christopher Hillar
Tenzin Chan
Rachel Taubman
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They can also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^{Ω(n^{1−ϵ})} memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
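For readers unfamiliar with the model, the sketch below recalls the standard Hopfield energy and its descent property. This is textbook background on the setup, not the paper's MEF objective itself:

```latex
% Standard Hopfield energy over binary states x \in \{0,1\}^n, with symmetric
% weights W (W = W^\top, zero diagonal) and threshold vector \theta.
E(x) \;=\; -\tfrac{1}{2}\, x^{\top} W x \;+\; \theta^{\top} x
% Asynchronous updates x_i \leftarrow \mathbf{1}\!\left[(Wx)_i > \theta_i\right]
% never increase E, so the dynamics settle into fixed-point attractors, which
% serve as the stored memories that learning objectives such as MEF aim to shape.
```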