
Ezekiel Williams

PhD - Université de Montréal
Supervisor
Research Topics
Computational Neuroscience
Dynamical Systems
Information Theory
Recurrent Neural Networks

Publications

Expressivity of Neural Networks with Random Weights and Learned Biases
Avery Hee-Woon Ryoo
Matthew G Perich
Luca Mazzucato
Landmark universal function approximation results for neural networks with trained weights and biases provided the impetus for the ubiquitous use of neural networks as learning models in neuroscience and Artificial Intelligence (AI). Recent work has extended these results to networks in which a smaller subset of weights (e.g., output weights) are tuned, leaving other parameters random. However, it remains an open question whether universal approximation holds when only biases are learned, despite evidence from neuroscience and AI that biases significantly shape neural responses. The current paper answers this question. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can approximate any continuous function on compact sets. We further show an analogous result for the approximation of dynamical systems with recurrent neural networks. Our findings are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI, where they shed light on recent fine-tuning methods for large language models, like bias and prefix-based approaches.
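
The bias-only learning setup described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example (not the paper's code): a two-layer feedforward network whose weight matrices are randomly initialized and then frozen, so that only the bias vectors receive gradients while fitting a toy target function.

```python
# Minimal sketch of bias-only learning: random frozen weights, trainable biases.
import torch
import torch.nn as nn

class BiasOnlyMLP(nn.Module):
    def __init__(self, in_dim=1, hidden=512, out_dim=1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # Freeze all weight matrices; biases remain trainable.
        self.fc1.weight.requires_grad_(False)
        self.fc2.weight.requires_grad_(False)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Fit a toy target (sin) by tuning biases alone.
model = BiasOnlyMLP()
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-2)
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x)
for _ in range(2000):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```

Intuitively, shifting a hidden unit's bias moves the location of its ReLU kink, so even with fixed random weights the biases control where the network's piecewise-linear pieces sit.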
Expressivity of Neural Networks with Fixed Weights and Learned Biases
Avery Hee-Woon Ryoo
Matthew G Perich
Luca Mazzucato
Flexible Phase Dynamics for Bio-Plausible Contrastive Learning
Many learning algorithms used as normative models in neuroscience or as candidate approaches for learning on neuromorphic chips learn by contrasting one set of network states with another. These Contrastive Learning (CL) algorithms are traditionally implemented with rigid, temporally non-local, and periodic learning dynamics that could limit the range of physical systems capable of harnessing CL. In this study, we build on recent work exploring how CL might be implemented by biological or neuromorphic systems and show that this form of learning can be made temporally local, and can still function even if many of the dynamical requirements of standard training procedures are relaxed. Thanks to a set of general theorems corroborated by numerical experiments across several CL models, our results provide theoretical foundations for the study and development of CL methods for biological and neuromorphic neural networks.
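
The two-phase contrast this abstract refers to can be sketched generically. Below is a minimal, hypothetical NumPy example of classical contrastive Hebbian learning (a standard CL algorithm, not necessarily the paper's formulation): the network settles once freely and once with some units clamped to target values, and weights move toward the clamped-phase activity correlations and away from the free-phase ones.

```python
# Minimal sketch of a two-phase contrastive (Hebbian) update.
import numpy as np

rng = np.random.default_rng(0)
n = 20
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2          # symmetric Hopfield-style weights
np.fill_diagonal(W, 0.0)

def settle(W, x, clamp=None, steps=50):
    """Relax activities toward a fixed point; optionally clamp some units."""
    for _ in range(steps):
        x = np.tanh(W @ x)
        if clamp is not None:
            idx, vals = clamp
            x[idx] = vals
    return x

lr = 0.01
target_idx = np.arange(5)
target_vals = np.ones(5)
for _ in range(100):
    x0 = 0.1 * rng.standard_normal(n)
    free = settle(W, x0.copy())                                  # free phase
    clamped = settle(W, x0.copy(), (target_idx, target_vals))    # clamped phase
    # Hebbian in the clamped phase, anti-Hebbian in the free phase.
    dW = lr * (np.outer(clamped, clamped) - np.outer(free, free))
    np.fill_diagonal(dW, 0.0)
    W += (dW + dW.T) / 2
```

In this standard form the update is periodic and non-local in time (both phases must finish before weights change); the paper's contribution concerns relaxing exactly these kinds of requirements.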
Formalizing locality for normative synaptic plasticity models