My main interests lie at the interface of deep learning, physics and neuroscience.
During my PhD, Yoshua Bengio and I developed equilibrium propagation (EqProp), a mathematical framework for training energy-based models by gradient descent. This framework may have implications for the design of hardware accelerators for deep learning: one may use the energy of a physical system as the model's energy function, so that the model directly exploits the laws of physics to perform the desired computations (inference and learning). Here is the link to my PhD thesis, as well as the list of papers that it covers: 1, 2, 3, 4, 5.
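To give a flavor of the idea, here is a minimal numerical sketch of the EqProp learning rule on a toy quadratic energy (my own illustrative example, not code from the papers): the network relaxes to an energy minimum in a "free" phase, then in a second phase the output is weakly nudged toward the target with strength beta, and the difference of energy gradients between the two equilibria recovers the loss gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(n_hid, n_in))  # trainable weights
x = rng.normal(size=n_in)           # input (clamped)
y = rng.normal(size=n_hid)          # target for the output state
beta = 1e-3                         # nudging strength

def relax(W, x, y, beta, steps=2000, lr=0.1):
    """Settle the state s to a minimum of the total energy
    F(s) = E(s) + beta * C(s), with toy choices
    E = 0.5*||s||^2 - s.(W x)  and  C = 0.5*||s - y||^2."""
    s = np.zeros(n_hid)
    for _ in range(steps):
        dF_ds = (s - W @ x) + beta * (s - y)
        s -= lr * dF_ds
    return s

# Phase 1: free equilibrium (beta = 0); Phase 2: nudged equilibrium.
s_free = relax(W, x, y, beta=0.0)
s_nudged = relax(W, x, y, beta=beta)

# dE/dW = -outer(s, x); EqProp estimates the loss gradient as the
# finite difference of this quantity across the two equilibria.
grad_est = (np.outer(-s_nudged, x) - np.outer(-s_free, x)) / beta

# Analytic check: at the free fixed point s* = W x, the cost gradient is
grad_true = np.outer(W @ x - y, x)
print(np.max(np.abs(grad_est - grad_true)))  # small, O(beta)
```

For this quadratic energy the estimate equals the true gradient up to a factor 1/(1 + beta), so it converges to the exact gradient as beta goes to zero; in a physical implementation the relaxation loop would be carried out by the system's own dynamics rather than simulated.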