Our "Tea Talks"

Our "tea talks" are presentations given by a Mila student or an invited guest, mostly on machine-learning topics, and they are open to the public. They usually take place on Fridays from 10:30 to 12:00 in room 1360 of the André-Aisenstadt building.

If you would like to give a presentation, please send an email to .

If you would like to subscribe to our mailing list, send an email to .

The schedule of these talks, along with some of the slides presented, is available below.

Schedule
May 4 2018, 10:30: Cancelled for ICLR
May 11 2018, 10:30: Martin Gilbert (IVADO), AA1360
Title: Intro to Ethics 2
Abstract: In this presentation, I will introduce the three main families of normative moral theories: consequentialism, deontology, and virtue ethics.
May 11 2018, 14:00: Sebastiano Vigna (UDS Milano), AA3195
Title: Kendall Tau
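No abstract was listed for this talk. For context, Kendall's tau measures the rank correlation between two orderings by comparing concordant and discordant pairs. A minimal illustrative implementation (background only, not material from the talk; it ignores tie corrections):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation: (concordant - discordant) pairs,
    normalized by the total number of pairs. No tie correction."""
    n = len(xs)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0: identical rankings
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0: reversed rankings
```

Tau ranges from -1 (perfectly reversed rankings) to +1 (identical rankings), with 0 indicating no association.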
May 18 2018: Cancelled for NIPS deadline
May 25 2018, 10:30: Nicolas Le Roux (Google), AA3195
Title: An exploration of variance reduction techniques in stochastic optimization
Recording: yes, but not recorded
Abstract: I will present recent and ongoing work on reducing the variance of stochastic optimization techniques to speed up and simplify the resulting algorithms. In particular, stochastic gradient methods can suffer from high variance, which limits their convergence speed. While variance reduction techniques exist in the finite-sum case, they are rarer in the online case. We demonstrate how an increasing momentum offers variance reduction in the online case, at the expense of bias, and how that bias can be countered by an extrapolation step. The resulting algorithm differs from iterate averaging only by a factor but, in the context of minimizing a quadratic function, this difference is enough to yield the first algorithm that, with a constant stepsize, converges linearly in the noiseless case and sublinearly in the homoscedastic noise case.
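The variance-reduction mechanism mentioned in the abstract above can be illustrated with a toy sketch (a generic illustration under simplified assumptions, not the speaker's actual algorithm): with momentum beta_t = 1 - 1/t, the momentum buffer becomes a running mean of all gradients seen so far, so at a fixed point its variance shrinks roughly as sigma^2 / t.

```python
import random

def noisy_grad(x, a=1.0, sigma=0.5):
    """Gradient of f(x) = a*x^2/2 with additive homoscedastic noise."""
    return a * x + random.gauss(0.0, sigma)

def momentum_estimate(x, steps):
    """Momentum buffer with beta_t = 1 - 1/t, which makes v exactly
    the running mean of the gradients sampled so far."""
    v = 0.0
    for t in range(1, steps + 1):
        beta = 1.0 - 1.0 / t
        v = beta * v + (1.0 - beta) * noisy_grad(x)
    return v

random.seed(0)
x = 2.0  # hold the iterate fixed to isolate the estimator's variance
single = [noisy_grad(x) for _ in range(1000)]
averaged = [momentum_estimate(x, steps=50) for _ in range(1000)]

def variance(vs):
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)

print(variance(single))    # close to sigma^2 = 0.25
print(variance(averaged))  # close to sigma^2 / 50
```

Holding x fixed hides the bias the abstract mentions: in an actual optimization run the averaged gradients were computed at older iterates, which is exactly the bias the extrapolation step is meant to counter.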
May 25 2018, 13:30: Adrià Recasens (MIT), AA3195
Title: Where are they looking?
Recording: yes
Abstract: Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at. Following eye gaze, or gaze-following, is an important ability that allows us to understand what other people are thinking, the actions they are performing, and even predict what they might do next. Despite its importance, this problem has only been studied in limited scenarios within the computer vision community. In this talk I will present a deep neural network-based approach for gaze-following: given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at. I will also introduce GazeNet, a deep neural network that predicts the 3D direction of a person's gaze across the full 360 degrees. To complement GazeNet, I will present a novel saliency-based sampling layer for neural networks, the Saliency Sampler, which improves the spatial sampling of input data for an arbitrary task. Our differentiable layer can be added as a preprocessing block to existing task networks and trained with them end to end. The layer efficiently estimates how to sample from the original data in order to boost task performance. For example, in gaze tracking the original data may be several megapixels in size while the desired input to the task network is much smaller; our layer learns how best to sample the underlying high-resolution data in a way that preserves task-relevant information better than uniform downsampling.
June 1 2018: Town Hall Meeting
June 7 2018, 14:00: Maksym Korablyov (MIT), AA3195
Title: A few steps towards molecule google
Recording: yes, but the recording will be delayed
Abstract: Commonly used search engines such as Google can retrieve relevant entries with high speed and accuracy from spaces as large as 10^12 entries, roughly the number of words on the Internet. Search algorithms that could address the space of 10^80 to 10^100 drug-like molecules are not yet publicly available. The combined worldwide effort of pharmaceutical companies (experimental + computational) covers about 10^15 molecules per year, with 30-50 new drugs found. In this talk I will (1) give some numerical estimates of the value of unexplored chemical space; (2) provide some intuition for why all possible molecules could be mapped to a low-dimensional manifold and which families of functions might efficiently describe such a mapping; (3) show some empirical results of searching in a space of 10^80; and (4) present an algorithm for partial flexible shape matching (drug + protein) that we developed recently.
June 15 2018, 10:30: Yaroslav Ganin (MILA), AA3195. Recording: yes
June 22 2018, 10:30: Dzmitry Bahdanau (MILA), AA3195. Recording: no
June 26 2018, 14:30: Randall O'Reilly (CU Boulder). Neuroscience Reading Group
June 29 2018, 10:30: Anne Churchland (Cold Spring Harbor Laboratory), AA3195
Title: Spontaneous movements dominate cortical activity during sensory-guided decision making
Recording: yes
Abstract: Animals continually produce a wide array of spontaneous and learned movements and undergo rapid internal state transitions. Most work in neuroscience ignores this "internal backdrop" and instead focuses on neural activity aligned to task-imposed variables such as sensory stimuli. We sought to understand the joint effects of the internal backdrop and task-imposed variables. We measured neural activity during decision-making using calcium imaging with a wide-field macroscope. Surprisingly, the impact of the internal backdrop dwarfed task-imposed sensory and cognitive signals. This dominance was comparable in novice and expert decision-makers and was even stronger in single-neuron measurements from frontal cortex. These results highlight spontaneous and learned movements as the main determinant of large-scale cortical activity. By leveraging a wide array of animal movements, our model offers a very general method for separating the impact of the internal backdrop from task-imposed neural activity.
July 6 2018, 10:30: Mila, AA1177
Title: ICML Lightning Talks
Recording: yes
Abstract: Mila authors of ICML-accepted conference and workshop papers give quick, five-minute presentations of their work!
July 13 2018: Cancelled for ICML
July 20 2018, 10:30: Glen Berseth (UBC), AA3195
Title: Scalable Deep Reinforcement Learning for Physics-Based Motion Control
Recording: yes
Abstract: Motion control in physics-based animation is challenging due to complex dynamics and discontinuous contacts. Many previous control methods that produce walking motions are very stiff, only work in particular environments, and require significant manual tuning to function. In this work, we advance the state of the art in physics-based character animation in several directions using machine learning methods. We present three contributions that build on current research on motion control using deep RL. First, we show that decomposing tasks into a hierarchy increases learning efficiency by operating across multiple time scales on a complex locomotion and navigation task. Second, we investigate improved action exploration methods that sample more promising actions on robots and in simulation using forward dynamics distributions; this sampling strategy has been shown to improve sample efficiency on a number of problems, including many from the OpenAI Gym. Last, we consider a new algorithm that progressively learns and integrates new skills, producing a robust and multi-skilled physics-based controller. This algorithm combines the skills of experts and then applies transfer learning methods to initialize and accelerate the learning of new skills.
July 27 2018
August 3 2018, 10:30: Petar Veličković (Cambridge), AA3195
August 10 2018, 10:30: Wengong Jin (MIT), AA3195
August 17 2018, 10:30: Abhishek Das (Georgia Tech), AA3195
August 24 2018, 10:30: Marco Gori (University of Siena), AA3195
August 31 2018

See the Google doc

Slides available