Our "tea talks" are presentations given by a Mila student or an invited speaker, mostly on machine learning topics, and they are open to the public. They usually take place on Fridays from 10:30 to 12:00 in room 1360 of the Pavillon André-Aisenstadt.
If you would like to give a presentation, please send an email to .
To subscribe to our mailing list, send an email to .
The schedule of these talks, along with some of the slides presented, is available below.
|Date||Time||Speaker||Affiliation||Room||Title||Recorded||Abstract|
|May 4 2018||10:30||-||-||-||Cancelled for ICLR||-||-|
|May 11 2018||10:30||Martin Gilbert||IVADO||AA1360||Intro to Ethics 2||In this presentation, I will introduce the three main families of normative moral theories: consequentialism, deontology, and virtue ethics.|
|May 11 2018||14:00||Sebastiano Vigna||UDS Milano||AA3195||Kendall Tau|
|May 18 2018||-||-||-||Cancelled for NIPS Deadline|
|May 25 2018||10:30||Nicolas Le Roux||AA3195||An exploration of variance reduction techniques in stochastic optimization||yes but not recorded||I will present recent and ongoing work on reducing the variance in stochastic optimization techniques to speed up and simplify the resulting algorithms. In particular, stochastic gradient methods can suffer from high variance, limiting their convergence speed. While variance reduction techniques exist in the finite case, they are rarer in the online case. We demonstrate how an increasing momentum offers variance reduction in the online case, at the expense of bias, and how that bias can be countered by an extrapolation step. The resulting algorithm differs from iterate averaging by only a factor, but, in the context of the minimization of a quadratic function, this difference is enough to lead to the first algorithm converging both linearly in the noiseless case and sublinearly in the homoscedastic noise case when using a constant stepsize.|
|May 25 2018||13:30||Adrià Recasens||MIT||AA3195||Where are they looking?||yes||Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at. Following eye gaze, or gaze-following, is an important ability that allows us to understand what other people are thinking, the actions they are performing, and even predict what they might do next. Despite the importance of this topic, the problem has only been studied in limited scenarios within the computer vision community. In this talk I will present a deep neural network-based approach for gaze-following. Given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at. Furthermore, I will also introduce GazeNet, a deep neural network that predicts the 3D direction of a person's gaze from the full 360 degrees. To complement GazeNet, I will present a novel saliency-based sampling layer for neural networks, the Saliency Sampler, which helps to improve the spatial sampling of input data for an arbitrary task. Our differentiable layer can be added as a preprocessing block to existing task networks and trained together with them in an end-to-end fashion. The effect of the layer is to efficiently estimate how to sample from the original data in order to boost task performance. For example, for the gaze-tracking task, the original data might range in size up to several megapixels while the desired input images to the task network are much smaller; our layer learns how best to sample from the underlying high-resolution data in a manner which preserves task-relevant information better than uniform downsampling.|
|June 1 2018||Town Hall Meeting|
|June 7 2018||14:00||Maksym Korablyov||MIT||AA3195||A few steps towards molecule Google||yes but recording will be delayed||Commonly used search engines such as Google can retrieve relevant entries with high speed and accuracy from spaces as big as 10^12 entries - the number of words on the Internet. Search algorithms that could address the space of 10^80 - 10^100 drug-like molecules are not yet publicly available. The world's combined pharmaceutical effort (experimental + computational) covers ~10^15 molecules/year, with 30-50 new drugs found. In this talk I will (1) give some numerical estimates of the value of unexplored chemical space; (2) provide some intuition for why all possible molecules could be mapped to a low-dimensional manifold and which families of functions might efficiently describe such a mapping; (3) show some empirical results of searching a space of 10^80 molecules; and (4) present an algorithm for partial flexible shape matching (drug + protein) that we have developed recently.|
|June 15 2018||10:30||Yaroslav Ganin||MILA||AA3195||yes|
|June 22 2018||10:30||Dzmitry Bahdanau||MILA||AA3195||no|
|June 26 2018||14:30||Randall O'Reilly||CU Boulder||Neuroscience Reading Group|
|June 29 2018||10:30||Anne Churchland||Cold Spring Harbor Laboratory||AA3195||Spontaneous movements dominate cortical activity during sensory-guided decision making||yes||Animals continually produce a wide array of spontaneous and learned movements and undergo rapid internal state transitions. Most work in neuroscience ignores this “internal backdrop” and instead focuses on neural activity aligned to task-imposed variables such as sensory stimuli. We sought to understand the joint effects of internal backdrop vs. task-imposed variables. We measured neural activity using calcium imaging via a widefield macroscope during decision-making. Surprisingly, the impact of the internal backdrop dwarfed task-imposed sensory and cognitive signals. This dominance was comparable in novice and expert decision-makers and was even stronger in single neuron measurements from frontal cortex. These results highlight spontaneous and learned movements as the main determinant of large-scale cortical activity. By leveraging a wide array of animal movements, our model offers a very general method for separating the impact of internal backdrop from task-imposed neural activity.|
|July 6 2018||10:30||Mila||AA1177||ICML Lightning Talks||yes||Mila authors of ICML-accepted conference and workshop papers give quick, 5-minute presentations of their work!|
|July 13 2018||-||-||-||Cancelled for ICML||-||-|
|July 20 2018||10:30||Glen Berseth||UBC||AA3195||Scalable Deep Reinforcement Learning for Physics-Based Motion Control||yes||Motion control in physics-based animation is challenging due to complex dynamics and discontinuous contacts. Many previous control methods that produce walking motions are very stiff, only work in particular environments, and require significant manual tuning to function. In this work, we advance the state of the art in physics-based character animation in a number of directions using machine learning methods. We present three contributions that build upon current research on motion control using deep RL. First, we show that decomposing tasks into a hierarchy increases learning efficiency by operating across multiple time scales on a complex locomotion and navigation task. Second, we investigate improved action exploration methods to sample more promising actions on robots and in simulation using forward dynamics distributions. This sampling strategy has been shown to improve sample efficiency for a number of problems, including many from the OpenAI Gym. Last, we consider a new algorithm to progressively learn and integrate new skills, producing a robust and multi-skilled physics-based controller. This algorithm combines the skills of experts and then applies transfer learning methods to initialize and accelerate the learning of new skills.|
|July 27 2018|
|August 3 2018||10:30||Petar Veličković||Cambridge||AA3195|
|August 10 2018||10:30||Wengong Jin||MIT||AA3195|
|August 17 2018||10:30||Abhishek Das||Georgia Tech||AA3195|
|August 24 2018||10:30||Marco Gori||University of Siena||AA3195|
|August 31 2018|
See the Google doc
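As a side note for the May 11 talk by Sebastiano Vigna, "Kendall Tau" refers to the Kendall tau rank correlation coefficient, which measures the agreement between two rankings by counting concordant and discordant pairs. Below is a minimal, illustrative pure-Python sketch of the basic tau-a variant (ignoring ties); it is not taken from the talk, whose actual content may involve more sophisticated variants and efficient O(n log n) algorithms.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) / total pairs, ignoring ties."""
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1   # pair ordered the same way in both rankings
        elif s < 0:
            discordant += 1   # pair ordered oppositely
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # identical rankings -> 1.0
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # reversed rankings -> -1.0
```

This brute-force version is O(n^2) in the number of items; production implementations (e.g. scipy.stats.kendalltau) use a merge-sort-based counting scheme instead.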