Smart Futures Based Resource Trading and Coalition Formation for Real-Time Mobile Data Processing
Ruitao Chen
Xianbin Wang
Collaboration among mobile devices (MDs) is becoming more important, as it can augment computing capacity at the network edge through peer-to-peer service provisioning and directly enhance real-time computational performance in smart Internet-of-Things applications. As an important aspect of the collaboration mechanism, conventional resource trading (RT) among MDs relies on an onsite interaction process, i.e., price negotiation between service providers and requesters, which inevitably incurs excessive latency and degrades RT efficiency. To overcome this challenge, this article adopts the concept of the futures contract (FC) used in financial markets and proposes smart futures for low-latency RT. This new technique enables MDs to form trading coalitions and negotiate multilateral forward contracts that apply to a future collaboration term. To maximize the benefits of self-interested MDs, the FC negotiation process is modelled as a coalition formation game comprising three components executed iteratively: futures resource allocation, revenue sharing and payment allocation, and distributed decision-making by individual MDs. Additionally, an FC enforcement scheme is implemented to efficiently manage onsite resource sharing by recording resource balances across task types and MDs. Simulation results demonstrate the superiority of smart futures in reducing RT latency and providing trading fairness.
SVRG meets AdaGrad: painless variance reduction
Benjamin Dubois-Taine
Sharan Vaswani
Reza Babanezhad Harikandeh
Mark Schmidt
Bridging the Gap Between Adversarial Robustness and Optimization Bias
Fartash Faghri
Cristina Vasconcelos
David J Fleet
Fabian Pedregosa
Optimal Spectral-Norm Approximate Minimization of Weighted Finite Automata
We address the approximate minimization problem for weighted finite automata (WFAs) with weights in …
Structured Sparsity Inducing Adaptive Optimizers for Deep Learning
Tristan Deleu
The parameters of a neural network are naturally organized in groups, some of which might not contribute to its overall performance. To prune out unimportant groups of parameters, we can add a non-differentiable penalty to the objective function and minimize it using proximal gradient methods. In this paper, we derive the weighted proximal operator, a necessary component of these proximal methods, for two structured sparsity inducing penalties. Moreover, these operators can be approximated efficiently with a numerical solver, and despite this approximation, we prove that existing convergence guarantees are preserved when they are integrated into a generic adaptive proximal method. Finally, we show that this adaptive method, together with the weighted proximal operators derived here, is indeed capable of finding solutions with structured sparsity patterns, on representative examples from computer vision and natural language processing.
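The abstract does not reproduce the weighted operators it derives; as a rough illustration of the unweighted setting it builds on, the sketch below shows the standard proximal operator of a group-lasso (per-group ℓ2) penalty, i.e., block soft-thresholding, inside a plain proximal gradient step. Function names and the step-size handling are illustrative assumptions, not the paper's method.

```python
import numpy as np

def prox_group_l2(v, lam):
    """Proximal operator of lam * ||v||_2 for one parameter group
    (block soft-thresholding). The whole group is zeroed when its norm
    falls below lam, which is what induces group-level sparsity."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

def proximal_gradient_step(groups, grads, lr, lam):
    """One unweighted proximal gradient step over a list of parameter
    groups: a gradient descent step followed by the group-lasso prox
    with the step size folded into the threshold."""
    return [prox_group_l2(w - lr * g, lr * lam) for w, g in zip(groups, grads)]
```

The adaptive methods discussed in the paper replace this Euclidean prox with a weighted (preconditioned) version that generally has no closed form, which is why the authors resort to a numerical solver.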
Prediction, Not Association, Paves the Road to Precision Medicine
Gael Varoquaux
Ewout W. Steyerberg
Task dependent deep LDA pruning of neural networks
Qing Tian
James J. Clark
The neural correlates of ongoing conscious thought
Jonathan Smallwood
Adam Turnbull
Hao-Ting Wang
Nerissa S.P. Ho
Giulia L. Poerio
Theodoros Karapanagiotidis
Delali Konu
Brontë Mckeown
Meichao Zhang
Charlotte Murphy
Deniz Vatansever
Mahiko Konishi
Robert Leech
Paul Seli
Jonathan W. Schooler
Boris C Bernhardt
Daniel S. Margulies
Elizabeth Jefferies
Using Artificial Intelligence to Visualize the Impacts of Climate Change
Alexandra Luccioni
Victor Schmidt
Vahe Vardanyan
T. Rhyne
Public awareness and concern about climate change often do not match the magnitude of its threat to humans and our environment. One reason for this mismatch is that it is difficult to mentally simulate the effects of a process as complex as climate change and to have a concrete representation of the impact that our individual actions will have on our own future, especially if the consequences are long term and abstract. To overcome these challenges, we propose to use cutting-edge artificial intelligence (AI) approaches to develop an interactive personalized visualization tool, the AI climate impact visualizer. It will allow a user to enter an address—be it their house, their school, or their workplace—and it will provide them with an AI-imagined possible visualization of the future of this location in 2050 following the detrimental effects of climate change such as floods, storms, and wildfires. This image will be accompanied by accessible information regarding the science behind climate change, i.e., why extreme weather events are becoming more frequent and what kinds of changes are happening on a local and global scale.
Training neural networks to recognize speech increased their correspondence to the human auditory pathway but did not yield a shared hierarchy of acoustic features
Jessica A.F. Thompson
Elia Formisano
Marc Schönwiesner
The correspondence between the activity of artificial neurons in convolutional neural networks (CNNs) trained to recognize objects in images and neural activity collected throughout the primate visual system has been well documented. Shallower layers of CNNs are typically more similar to early visual areas and deeper layers tend to be more similar to later visual areas, providing evidence for a shared representational hierarchy. This phenomenon has not been thoroughly studied in the auditory domain. Here, we compared the representations of CNNs trained to recognize speech (triphone recognition) to 7-Tesla fMRI activity collected throughout the human auditory pathway, including subcortical and cortical regions, while participants listened to speech. We found no evidence for a shared representational hierarchy of acoustic speech features. Instead, all auditory regions of interest were most similar to a single layer of the CNNs: the first fully-connected layer. This layer sits at the boundary between the relatively task-general intermediate layers and the highly task-specific final layers. This suggests that alternative architectural designs and/or training objectives may be needed to achieve fine-grained layer-wise correspondence with the human auditory pathway.
Highlights:
- Trained CNNs are more similar to auditory fMRI activity than untrained CNNs
- No evidence of a shared representational hierarchy for acoustic features
- All ROIs were most similar to the first fully-connected layer
- CNN performance on the speech recognition task is positively associated with fMRI similarity
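The abstract does not spell out how CNN layer representations were compared with fMRI responses. A common approach for this kind of model-brain comparison is representational similarity analysis (RSA), sketched below under that assumption; all function and variable names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix (condensed form): pairwise
    correlation distance between stimulus response patterns.
    `responses` has shape (n_stimuli, n_features)."""
    return pdist(responses, metric="correlation")

def rsa_similarity(layer_acts, voxel_acts):
    """Spearman correlation between the RDM of a CNN layer and the RDM of
    an fMRI region of interest, computed over the same stimuli."""
    rho, _ = spearmanr(rdm(layer_acts), rdm(voxel_acts))
    return rho

# Example: find which (hypothetical) layer best matches one ROI, given
# layer_acts_by_name: {layer name -> (n_stimuli, n_units) array} and
# roi_acts: (n_stimuli, n_voxels) array.
# best_layer = max(layer_acts_by_name,
#                  key=lambda k: rsa_similarity(layer_acts_by_name[k], roi_acts))
```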
Variational Nested Dropout
Yufei Cui
Yushun Mao
Ziquan Liu
Qiao Li
Antoni Bert Chan
Tei-Wei Kuo
Chun Jason Xue
Nested dropout is a variant of the dropout operation that can order network parameters or features according to a pre-defined importance during training. It has been explored for: I. Constructing nested nets (Cui et al. 2020; Cui et al. 2021): nested nets are neural networks whose architectures can be adjusted instantly at test time, e.g., based on computational constraints. Nested dropout implicitly ranks the network parameters, generating a set of sub-networks such that any smaller sub-network forms the basis of a larger one. II. Learning ordered representations (Rippel et al. 2014): nested dropout applied to the latent representation of a generative model (e.g., an auto-encoder) ranks the features, enforcing an explicit order of the dense representation over dimensions. However, the dropout rate is fixed as a hyper-parameter throughout training. For nested nets, when network parameters are removed, performance decays along a human-specified trajectory rather than one learned from data. For generative models, the importance of features is specified as a constant vector, restricting the flexibility of representation learning. To address these problems, we focus on the probabilistic counterpart of nested dropout. We propose a variational nested dropout (VND) operation that draws samples of multi-dimensional ordered masks at low cost, providing useful gradients to the parameters of nested dropout. Based on this approach, we design a Bayesian nested neural network that learns the order knowledge of the parameter distributions. We further exploit VND under different generative models for learning ordered latent distributions. In experiments, we show that the proposed approach outperforms the nested network in terms of accuracy, calibration, and out-of-domain detection in classification tasks. It also outperforms related generative models on data generation tasks.
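For background on the ordered masks the abstract refers to, here is a minimal sketch of the original (non-variational) nested dropout of Rippel et al. 2014: a cutoff index is drawn from a geometric distribution with a fixed rate, and every dimension past the cutoff is zeroed. The VND proposed in the paper replaces this fixed-rate sampling with a learned distribution over ordered masks; the names below are illustrative.

```python
import numpy as np

def nested_dropout_mask(n_dims, p, rng=None):
    """Sample an ordered (nested) dropout mask: draw a cutoff index k from
    a geometric distribution with parameter p, keep dimensions 0..k-1 and
    zero out everything from k onward. Every smaller kept prefix is
    contained in every larger one, which is what induces the ordering."""
    rng = rng or np.random.default_rng()
    k = min(rng.geometric(p), n_dims)  # geometric() returns values in {1, 2, ...}
    mask = np.zeros(n_dims)
    mask[:k] = 1.0
    return mask

# Example: apply the mask to a latent code during training.
# z_masked = z * nested_dropout_mask(z.shape[-1], p=0.1)
```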
Author response: Functional specialization within the inferior parietal lobes across cognitive domains
Ole Numssen
Gesa Hartwigsen