Publications
Design and Application of Adaptive Sparse Deep Echo State Network
The prediction of appliance energy consumption in buildings is a time-series forecasting problem that can be solved with an echo state network (ESN). However, due to the randomly initialized inputs and reservoir, some redundant or irrelevant components are inevitably generated in the original ESN. To solve this problem, the adaptive sparse deep echo state network (ASDESN) is proposed, in which information is processed layer by layer. First, a principal component analysis (PCA) layer is inserted to penalize the redundant projections transmitted between sub-reservoirs. Second, a coordinate-descent-based adaptive sparse learning method is proposed to generate sparse output weights. In particular, the designed adaptive threshold strategy enlarges the sparsity of the output weights as the network depth increases. Moreover, the echo state property (ESP) of ASDESN is established to ensure its applicability. Experimental results on both simulated benchmark and real appliance energy datasets show that the proposed ASDESN outperforms other ESNs with higher prediction accuracy and stability.
2023-01-01
IEEE transactions on consumer electronics (published)
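As a rough illustration of the pipeline described in this abstract (not the authors' code; the reservoir sizes, leak rate, and the use of sklearn's Lasso as the coordinate-descent sparse readout are assumptions), the sketch below runs two sub-reservoirs with a PCA projection between them and increases the L1 penalty to mimic the depth-dependent adaptive threshold:

```python
# Illustrative sketch of a two-layer deep ESN with a PCA layer between
# sub-reservoirs and an L1-sparsified readout solved by coordinate descent.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def reservoir_states(u, n_res=100, spectral_radius=0.9, leak=0.3):
    """Run a leaky-integrator reservoir over an input sequence u (T x d)."""
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, u.shape[1]))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # enforce the echo state property
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy one-step-ahead forecasting task.
u = np.sin(np.linspace(0, 60, 600)).reshape(-1, 1)
X_in, y = u[:-1], u[1:, 0]

# Layer 1 reservoir -> PCA to penalize redundant projections -> layer 2 reservoir.
S1 = reservoir_states(X_in)
S1_pca = PCA(n_components=20).fit_transform(S1)
S2 = reservoir_states(S1_pca)

# Sparse readout via coordinate descent; a larger alpha for deeper layers would
# stand in for the adaptive threshold that grows with network depth.
readout = Lasso(alpha=1e-3, max_iter=5000).fit(np.hstack([S1_pca, S2]), y)
print("non-zero readout weights:", np.count_nonzero(readout.coef_))
```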
The increase of on-vehicle applications has brought explosive computation demands to autonomous vehicles and overwhelmed their limited onboard resources. Edge computing can offload application load and effectively alleviate this problem. However, the introduction of edge computing faces significant challenges, including considerable resource contention due to the scarcity of edge resources and the competition among edge computing resource providers to earn users' service requests. We note that the problem is not purely technical, as solutions to these two problems can conflict with each other. In this paper, we propose a distributed pricing strategy to achieve full use of computing resources at the edge and maximize the revenue of service operators, both with guaranteed quality of service for on-vehicle applications. More specifically, we first use multi-leader multi-follower Stackelberg game theory to model the pricing of on-vehicle task offloading under edge computing. Next, we propose a distributed pricing strategy that enables edge servers to adjust their local price distributions so that they can bargain with offloading requesters independently. Experimental results confirm that the proposed distributed pricing strategy provides more optimized server computing resource utilization while guaranteeing the performance of in-vehicle applications.
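The leader-follower structure described here can be pictured with a toy loop (purely illustrative; the demand model, the softmax follower response, and the crude utilization-driven price update below are assumptions, not the paper's strategy): edge servers post prices, vehicles respond by shifting offloading load toward cheaper servers, and servers then re-price.

```python
# Toy multi-leader Stackelberg pricing loop: leaders = edge servers, followers = vehicles.
import numpy as np

prices = np.array([1.0, 1.0, 1.0])     # per-unit price at each edge server
capacity = np.array([5.0, 3.0, 4.0])   # computing capacity of each server
total_demand = 8.0                      # total offloading demand from vehicles

def follower_response(prices):
    """Vehicles send more load to cheaper servers (softmax over negative price)."""
    w = np.exp(-2.0 * prices)
    alloc = total_demand * w / w.sum()
    return np.minimum(alloc, capacity)  # respect each server's capacity

for _ in range(50):
    load = follower_response(prices)
    utilization = load / capacity
    # Crude leader best-response: raise price when near-saturated, lower it when idle.
    prices = np.clip(prices + 0.05 * (utilization - 0.8), 0.1, 5.0)

print("prices:", np.round(prices, 3), "loads:", np.round(follower_response(prices), 3))
```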
Optimizing static risk-averse objectives in Markov decision processes is challenging because they do not readily admit dynamic programming decompositions. Prior work has proposed dynamic decompositions of risk measures that help formulate dynamic programs on an augmented state space. This paper shows that several existing decompositions are inherently inexact, contradicting several claims in the literature. In particular, we give examples showing that popular decompositions for the CVaR and EVaR risk measures are strict overestimates of the true risk values. However, an exact decomposition is possible for VaR, and we give a simple proof that illustrates the fundamental difference between the VaR and CVaR dynamic programming properties.
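For reference, the three risk measures contrasted in this abstract are usually defined as follows (standard textbook definitions in one common convention for a loss random variable X at level α; not reproduced from the paper):

$$\mathrm{VaR}_\alpha(X) = \inf\{x \in \mathbb{R} : \Pr[X \le x] \ge \alpha\}$$
$$\mathrm{CVaR}_\alpha(X) = \inf_{t \in \mathbb{R}} \left\{ t + \frac{1}{1-\alpha}\,\mathbb{E}\big[(X - t)_+\big] \right\}$$
$$\mathrm{EVaR}_\alpha(X) = \inf_{z > 0} \left\{ \frac{1}{z}\log\frac{\mathbb{E}\big[e^{zX}\big]}{1-\alpha} \right\}$$

VaR is a single quantile, whereas CVaR and EVaR depend on the whole tail of X beyond that quantile, which is the intuitive reason their dynamic decompositions are harder to make exact.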
Learning the causal structure of observable variables is a central focus for scientific discovery. Bayesian causal discovery methods tackle this problem by learning a posterior over the set of admissible graphs given our priors and observations. Existing methods primarily consider observations from static systems and assume the underlying causal structure takes the form of a directed acyclic graph (DAG). In settings with dynamic feedback mechanisms that regulate the trajectories of individual variables, this acyclicity assumption fails unless we account for time. We focus on learning Bayesian posteriors over cyclic graphs and treat causal discovery as a problem of sparse identification of a dynamical system. This imposes a natural temporal causal order between variables and captures cyclic feedback loops through time. Under this lens, we propose a new framework for Bayesian causal discovery for dynamical systems and present a novel generative flow network architecture (DynGFN) tailored for this task. Our results indicate that DynGFN learns posteriors that better encapsulate the distributions over admissible cyclic causal structures compared to counterpart state-of-the-art approaches.
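The "sparse identification of a dynamical system" framing can be illustrated independently of the DynGFN architecture (the point estimate below is an assumption for exposition; the paper learns a posterior over structures rather than a single sparse fit): fit dx/dt with an L1-penalized regression and read the sparsity pattern as a possibly cyclic graph.

```python
# Illustrative point-estimate sketch: recover a cyclic causal structure by
# sparse regression of finite-difference derivatives on the current state.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Ground-truth linear dynamics with a feedback cycle x0 -> x1 -> x2 -> x0.
A_true = np.array([[-0.5, 0.0, 0.8],
                   [0.9, -0.5, 0.0],
                   [0.0, 0.7, -0.5]])

dt, T = 0.01, 2000
X = np.zeros((T, 3))
X[0] = rng.normal(size=3)
for t in range(T - 1):
    X[t + 1] = X[t] + dt * (A_true @ X[t]) + 0.01 * rng.normal(size=3)

dXdt = (X[1:] - X[:-1]) / dt
A_hat = np.vstack([Lasso(alpha=1e-3, max_iter=10000).fit(X[:-1], dXdt[:, i]).coef_
                   for i in range(3)])
print("recovered edge pattern (incl. self-loops):\n", (np.abs(A_hat) > 0.1).astype(int))
```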
The lifelong learning paradigm in machine learning is an attractive alternative to the more prominent isolated learning scheme not only due to its resemblance to biological learning, but also its potential to reduce energy waste by obviating excessive model re-training. A key challenge to this paradigm is the phenomenon of catastrophic forgetting. With the increasing popularity and success of pre-trained models in machine learning, we pose the question: What role does pre-training play in lifelong learning, specifically with respect to catastrophic forgetting? We investigate existing methods in the context of large, pre-trained models and evaluate their performance on a variety of text and image classification tasks, including a large-scale study using a novel dataset of 15 diverse NLP tasks. Across all settings, we observe that generic pre-training implicitly alleviates the effects of catastrophic forgetting when learning multiple tasks sequentially compared to randomly initialized models. We then further investigate why pre-training alleviates forgetting in this setting. We study this phenomenon by analyzing the loss landscape, finding that pre-trained weights appear to ease forgetting by leading to wider minima. Based on this insight, we propose jointly optimizing for current task loss and loss basin sharpness in order to explicitly encourage wider basins during sequential fine-tuning. We show that this optimization approach leads to performance comparable to the state-of-the-art in task-sequential continual learning across multiple settings, without retaining a memory that scales in size with the number of tasks.
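A minimal sketch of what "jointly optimizing current task loss and basin sharpness" can look like in practice (assumed hyper-parameters and update rule, in the spirit of sharpness-aware minimization; not the authors' exact procedure):

```python
# One fine-tuning step that blends the task gradient with the gradient of the
# loss under a small adversarial weight perturbation (a sharpness proxy).
import torch
import torch.nn as nn

def flat_basin_step(model, loss_fn, x, y, optimizer, rho=0.05, lam=0.5):
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the plain task loss at the current weights.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    g_plain = [p.grad.detach().clone() for p in params]

    # Perturb weights by rho along the normalized gradient; the loss gradient
    # at the perturbed point acts as the sharpness term.
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in g_plain)) + 1e-12
    with torch.no_grad():
        for p, g in zip(params, g_plain):
            p.add_(rho * g / grad_norm)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, g in zip(params, g_plain):
            p.sub_(rho * g / grad_norm)            # restore the original weights
            p.grad.mul_(lam).add_((1 - lam) * g)   # blend sharpness and task gradients
    optimizer.step()

# Toy usage on a small classifier.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 10), torch.randint(0, 3, (64,))
flat_basin_step(model, nn.CrossEntropyLoss(), x, y, opt)
```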
The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest. To maintain the knowledge acquired by LLMs, we need to ensure that the editing of learned facts respects internal logical constraints, known as the dependency of knowledge. Existing work on editing LLMs has partially addressed this issue by ensuring that the editing of a fact applies to its lexical variations without disrupting irrelevant ones. However, it neglects the dependency between a fact and its logical implications.
We propose an evaluation protocol with an accompanying question-answering dataset, StandUp, that provides a comprehensive assessment of the editing process considering the above notions of dependency. Our protocol involves setting up a controlled environment in which we edit facts and monitor their impact on LLMs, along with their implications based on If-Then rules. Extensive experiments on StandUp show that existing knowledge editing methods are sensitive to the surface form of knowledge, and that they have limited performance in inferring the implications of edited facts.
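To make the If-Then dependency concrete, here is a hypothetical miniature of the kind of check such a protocol performs (the relation names, rule, and stub model below are invented for exposition and are not the StandUp data or its released format):

```python
# After injecting an edit, derive its implications with an If-Then rule and
# verify that the edited model answers both the edit and its implications.
headquarters = {"Acme": "Toronto", "Globex": "Berlin"}

def implications(rel, subj, obj):
    """If-Then rule: a new employer implies a new work city."""
    if rel == "works_for":
        return [("works_in_city", subj, headquarters[obj])]
    return []

def evaluate_edit(query_model, edit):
    rel, subj, obj = edit
    checks = [edit] + implications(rel, subj, obj)
    return {c: query_model(c[0], c[1]) == c[2] for c in checks}

# Stub standing in for the edited LLM: it learned the surface edit but not its implication.
memory = {("works_for", "Alice"): "Globex", ("works_in_city", "Alice"): "Toronto"}
print(evaluate_edit(lambda r, s: memory.get((r, s)), ("works_for", "Alice", "Globex")))
# The direct edit passes while the implied fact fails, exposing the dependency gap.
```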
We propose an interpretable local surrogate (ILS) method for understanding the predictions of black-box graph models. Explainability methods are commonly employed to gain insights into black-box models and, given the widespread adoption of GNNs in diverse applications, understanding the underlying reasoning behind their decision-making processes becomes crucial. Our ILS method approximates the behavior of a black-box graph model by fitting a simple surrogate model in the local neighborhood of a given input example. Leveraging the interpretability of the surrogate, ILS is able to identify the most relevant nodes contributing to a specific prediction. To efficiently identify these nodes, we utilize group sparse linear models as local surrogates. Through empirical evaluations on explainability benchmarks, our method consistently outperforms state-of-the-art graph explainability methods. This demonstrates the effectiveness of our approach in providing enhanced interpretability for GNN predictions.
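The core mechanism, a group-sparse linear surrogate fit around one input, can be sketched as follows (illustrative only; the synthetic black box, proximal-gradient solver, and all constants are assumptions, not the authors' implementation):

```python
# Fit a group-lasso surrogate to a black-box graph model in the neighborhood of
# one input; each node owns a group of coefficients, and block soft-thresholding
# zeroes out the nodes that do not contribute to the prediction.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feat, n_samples = 6, 4, 200

def black_box(X):
    """Stand-in for a GNN prediction: only nodes 1 and 3 actually matter."""
    return X[:, 1, :].sum(axis=1) - 0.5 * X[:, 3, :].sum(axis=1)

# Local neighborhood: perturbed copies of one input graph's node features.
base = rng.normal(size=(n_nodes, n_feat))
X = base + rng.normal(size=(n_samples, n_nodes, n_feat))
y = black_box(X)

# Center the data (no intercept needed) and flatten: one group of n_feat per node.
Z = X.reshape(n_samples, -1)
Z, y = Z - Z.mean(axis=0), y - y.mean()

# Proximal gradient descent for the group-lasso surrogate.
w, lr, lam = np.zeros(n_nodes * n_feat), 0.1, 0.2
for _ in range(500):
    w -= lr * Z.T @ (Z @ w - y) / n_samples
    for g in range(n_nodes):                       # block soft-thresholding per node
        block = w[g * n_feat:(g + 1) * n_feat]
        block *= max(0.0, 1 - lr * lam / (np.linalg.norm(block) + 1e-12))

node_importance = [np.linalg.norm(w[g * n_feat:(g + 1) * n_feat]) for g in range(n_nodes)]
print("most relevant nodes:", np.argsort(node_importance)[::-1][:2])
```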
Transformers have enabled impressive improvements in deep learning. They often outperform recurrent and convolutional models in many tasks while taking advantage of parallel processing. Recently, we proposed the SepFormer, which obtains state-of-the-art performance in speech separation with the WSJ0-2/3 Mix datasets. This paper studies in-depth Transformers for speech separation. In particular, we extend our previous findings on the SepFormer by providing results on more challenging noisy and noisy-reverberant datasets, such as LibriMix, WHAM!, and WHAMR!. Moreover, we extend our model to perform speech enhancement and provide experimental evidence on denoising and dereverberation tasks. Finally, we investigate, for the first time in speech separation, the use of efficient self-attention mechanisms such as Linformers, Longformers, and Reformers. We found that they reduce memory requirements significantly. For example, we show that the Reformer-based attention outperforms the popular Conv-TasNet model on the WSJ0-2Mix dataset while being faster at inference and comparable in terms of memory consumption.
2023-01-01
IEEE/ACM Transactions on Audio, Speech, and Language Processing (published)
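As a small illustration of why efficient self-attention variants such as those named above reduce memory (a Linformer-style example with assumed shapes; not the SepFormer code, and the projections are random rather than learned here): projecting keys and values from sequence length n down to k makes the attention map n x k instead of n x n.

```python
# Linformer-style low-rank attention: memory grows with n*k rather than n*n.
import torch
import torch.nn.functional as F

n, d, k = 1024, 64, 128                      # sequence length, model dim, projected length
q = torch.randn(1, n, d)
keys = torch.randn(1, n, d)
values = torch.randn(1, n, d)

E = torch.randn(n, k) / k ** 0.5             # length-wise projections (learned in practice)
G = torch.randn(n, k) / k ** 0.5

k_low = torch.einsum("bnd,nk->bkd", keys, E)      # (1, k, d) projected keys
v_low = torch.einsum("bnd,nk->bkd", values, G)    # (1, k, d) projected values

attn = F.softmax(q @ k_low.transpose(1, 2) / d ** 0.5, dim=-1)   # (1, n, k), not (1, n, n)
out = attn @ v_low                                               # (1, n, d)
print("output:", out.shape, "attention map:", attn.shape)
```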