
Equivalence of Equilibrium Propagation and Recurrent Backpropagation

Bridging the Gap Between Deep Learning and Neuroscience
Jun 2018

Recurrent Backpropagation and Equilibrium Propagation are supervised learning algorithms for fixed-point recurrent neural networks that differ in their second phase. In the first phase, both algorithms relax the network to a fixed point, the configuration at which the prediction is made. In the second phase, Equilibrium Propagation relaxes to a second, nearby fixed point with smaller prediction error, whereas Recurrent Backpropagation uses a side network to compute error derivatives iteratively. In this work we establish a close connection between the two algorithms: we show that, at every moment of the second phase, the temporal derivatives of the neural activities in Equilibrium Propagation are equal to the error derivatives computed iteratively by the side network in Recurrent Backpropagation. This result shows that a side network is not required for the computation of error derivatives, and it supports the hypothesis that, in biological neural networks, temporal derivatives of neural activities may code for error signals.
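
As a concrete illustration of the stated equivalence, the following is a minimal numerical sketch, not taken from the paper: a small Hopfield-style network with symmetric weights, a discrete-time Euler discretization of the gradient dynamics, and a quadratic cost on a few "output" units. All modeling choices, sizes, and constants here are illustrative assumptions. The script relaxes to the free fixed point, then compares the rescaled temporal derivatives of the activities in Equilibrium Propagation's nudged phase against the error derivatives iterated by Recurrent Backpropagation's side network.

```python
import numpy as np

# Illustrative sketch of the EqProp / RBP correspondence on a toy
# symmetric energy model. Sizes, scales, and step sizes are assumptions.
rng = np.random.default_rng(0)
n, n_out = 8, 3
W = rng.normal(scale=0.2, size=(n, n))
W = (W + W.T) / 2                    # symmetric weights, as the energy model requires
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.1, size=n)
y = rng.normal(scale=0.5, size=n_out)  # target for the last n_out ("output") units

rho   = np.tanh
drho  = lambda s: 1.0 - np.tanh(s) ** 2
d2rho = lambda s: -2.0 * np.tanh(s) * (1.0 - np.tanh(s) ** 2)

def grad_E(s):
    # dE/ds for E(s) = 0.5*||s||^2 - 0.5*rho(s)^T W rho(s) - b^T rho(s)
    return s - drho(s) * (W @ rho(s) + b)

def grad_C(s):
    # dC/ds for C(s) = 0.5*||s_out - y||^2 on the output units
    g = np.zeros(n)
    g[-n_out:] = s[-n_out:] - y
    return g

eta, beta, T = 0.05, 1e-4, 200

# Phase 1 (common to both algorithms): relax the free dynamics to s*.
s = np.zeros(n)
for _ in range(2000):
    s = s - eta * grad_E(s)
s_star = s

# Recurrent Backpropagation: iterate the side network e_{t+1} = J^T e_t,
# where J = I - eta*H is the Jacobian of one free-dynamics step at s*
# (H is the symmetric Hessian of E, so J^T = J here). Summing e_t^T df/dtheta
# over t recovers the loss gradient, as in Almeida-Pineda.
H = (np.eye(n)
     - np.diag(d2rho(s_star) * (W @ rho(s_star) + b))
     - np.diag(drho(s_star)) @ W @ np.diag(drho(s_star)))
J = np.eye(n) - eta * H
e = grad_C(s_star)
rbp_errors = [e.copy()]
for _ in range(T - 1):
    e = J.T @ e
    rbp_errors.append(e.copy())

# Equilibrium Propagation, phase 2: nudge the energy by beta*C and record
# the rescaled temporal derivative -(1/beta) ds/dt of the neural activities.
s = s_star.copy()
eqprop_derivs = []
for _ in range(T):
    s_next = s - eta * (grad_E(s) + beta * grad_C(s))
    eqprop_derivs.append(-(s_next - s) / (eta * beta))
    s = s_next

# The two sequences coincide up to O(beta) at every time step.
for t in [0, 1, 10, 100]:
    gap = np.max(np.abs(eqprop_derivs[t] - rbp_errors[t]))
    print(f"t={t:4d}  max |EqProp deriv - RBP error| = {gap:.2e}")
```

With a small nudging factor beta, the two printed sequences agree to within O(beta) at every time step. The symmetry of the weight matrix is what makes the forward Jacobian equal its transpose, so the network's own relaxation can play the role of the side network.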
