During my PhD, Yoshua Bengio and I introduced
equilibrium propagation, a mathematical framework for gradient-descent-based machine learning in which inference and gradient computation are performed using the same physical laws. By suggesting a way to carry out both computations more efficiently, this framework may have implications for the design of accelerators for machine learning. For more information, here is the link to
my PhD thesis, as well as the list of papers that it covers:
1,
2,
3,
4,
5.
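
To give a flavor of the idea, here is a minimal sketch of equilibrium propagation on a hypothetical toy model (a quadratic energy, chosen for simplicity; it is not the exact setup from the papers). The state settles to an energy minimum in a "free" phase (inference), settles again under a weak nudge toward the target, and the contrast between the two equilibria yields a gradient estimate that matches the true gradient up to O(beta):

```python
import numpy as np

# Toy energy: E(W, x, s) = 0.5*||s||^2 - s.(W x), whose free minimum is
# s* = W @ x.  Cost: C(s, y) = 0.5*||s - y||^2.  (Illustrative choice,
# not the model used in the papers.)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
y = rng.normal(size=3)
beta = 1e-3  # nudging strength

def relax(W, x, nudge=0.0, y=None, steps=2000, lr=0.1):
    """Settle the state s by gradient descent on the total energy
    F(s) = E(W, x, s) + nudge * C(s, y) -- the 'physical' dynamics."""
    s = np.zeros(W.shape[0])
    for _ in range(steps):
        grad = s - W @ x
        if nudge:
            grad += nudge * (s - y)
        s -= lr * grad
    return s

s_free = relax(W, x)                      # free phase: inference
s_nudged = relax(W, x, nudge=beta, y=y)   # nudged phase: weakly clamped

# dE/dW at a state s is -outer(s, x); the equilibrium propagation
# gradient estimate is the contrast between the two phases, over beta.
grad_eqprop = (np.outer(-s_nudged, x) - np.outer(-s_free, x)) / beta

# Analytic gradient of the loss C(s*(W), y) for comparison.
grad_true = np.outer(s_free - y, x)
print(np.max(np.abs(grad_eqprop - grad_true)))  # small, O(beta)
```

The key point the sketch illustrates: both phases run the same relaxation dynamics, and no separate backward pass is needed to obtain the weight gradient.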