Tea Talks

MILA organizes weekly tea talks, generally on Fridays at 10:30 AM in Pavillon André-Aisenstadt, room 1360. These are technical presentations, aimed at the level of MILA researchers, on a variety of subjects spanning machine learning, and they are open to the public.

If you’re interested in giving a tea talk, please email .

If you’d like to subscribe to our mailing lists and be notified of all upcoming talks, please email .

The schedule of previous and upcoming talks, as well as some of the presentation slides, is available below.

May 4, 2018, 10:30 AM: Cancelled for ICLR.

May 11, 2018, 10:30 AM
Speaker: Martin Gilbert (IVADO)
Place: AA1360
Title: Intro to Ethics 2
Abstract: In this presentation, I will introduce the three main families of normative moral theories: consequentialism, deontology, and virtue ethics.

May 11, 2018, 2:00 PM
Speaker: Sebastiano Vigna (UDS Milano)
Place: AA3195
Title: Kendall Tau

May 18, 2018: Cancelled for the NIPS deadline.

May 25, 2018, 10:30 AM
Speaker: Nicolas Le Roux (Google)
Place: AA3195
Title: An exploration of variance reduction techniques in stochastic optimization
Note: streamed but not recorded
Abstract: I will present recent and ongoing work on reducing the variance of stochastic optimization techniques in order to speed up and simplify the resulting algorithms. In particular, stochastic gradient methods can suffer from high variance, which limits their convergence speed. While variance reduction techniques exist in the finite-sum case, they are rarer in the online case. We demonstrate how an increasing momentum offers variance reduction in the online case, at the expense of bias, and how that bias can be countered by an extrapolation step. The resulting algorithm differs from iterate averaging by only a factor but, in the context of minimizing a quadratic function, this difference is enough to yield the first algorithm that, with a constant stepsize, converges linearly in the noiseless case and sublinearly in the homoscedastic noise case. (A minimal sketch of the momentum-plus-extrapolation idea appears after the schedule.)

May 25, 2018, 1:30 PM
Speaker: Adrià Recasens (MIT)
Place: AA3195
Title: Where are they looking?
Note: streamed
Abstract: Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at. Following eye gaze, or gaze-following, is an important ability that allows us to understand what other people are thinking, the actions they are performing, and even predict what they might do next. Despite the importance of this topic, the problem has only been studied in limited scenarios within the computer vision community. In this talk, I will present a deep neural network-based approach for gaze-following: given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at. I will also introduce GazeNet, a deep neural network that predicts the 3D direction of a person's gaze across the full 360 degrees. To complement GazeNet, I will present a novel saliency-based sampling layer for neural networks, the Saliency Sampler, which improves the spatial sampling of input data for an arbitrary task. Our differentiable layer can be added as a preprocessing block to existing task networks and trained with them in an end-to-end fashion. The effect of the layer is to efficiently estimate how to sample from the original data in order to boost task performance. For example, in gaze tracking the original data might range in size up to several megapixels while the desired input to the task network is much smaller; our layer learns how best to sample from the underlying high-resolution data in a manner that preserves task-relevant information better than uniform downsampling. (A simplified sketch of such a sampling layer also appears after the schedule.)

June 1, 2018: Tentatively cancelled for Town Hall.

June 8, 2018, 10:30 AM
Speaker: Maksym Korablyov (MIT)
Place: AA3195
Note: streamed, but the recording will be delayed

June 15, 2018

June 22, 2018, 10:30 AM
Speaker: Dzmitry Bahdanau (MILA)
Place: AA3195

June 29, 2018, 10:30 AM
Speaker: Anne Churchland (Cold Spring Harbor Laboratory)
Place: AA3195

July 6, 2018, 10:30 AM
Speaker: MILA
Place: AA3195
Title: ICML Lightning Talks

July 13, 2018

July 20, 2018

July 27, 2018

August 3, 2018

August 10, 2018

August 17, 2018

August 24, 2018, 10:30 AM
Speaker: Marco Gori (University of Siena)
Place: AA3195

August 31, 2018
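The May 25 morning abstract sketches an algorithmic idea that is easy to illustrate: a momentum coefficient that grows toward 1 averages more and more past stochastic gradients (cutting variance), while evaluating each new gradient at an extrapolated point counters the bias that this averaging introduces. Below is a minimal numpy sketch of that idea on a noisy quadratic; the t/(t+1) schedule, the extrapolation rule, the stepsize, and the stoch_grad toy oracle are our own illustrative assumptions, not necessarily the algorithm presented in the talk.

```python
import numpy as np

# Toy illustration (not the speaker's exact method): minimize the quadratic
# f(x) = 0.5 * x^T A x from stochastic gradients with homoscedastic noise.
# An increasing momentum averages past gradients (variance reduction) but
# makes the average lag behind the current iterate (bias); querying each
# new gradient at an extrapolated point compensates for that lag.

rng = np.random.default_rng(0)
d = 10
A = np.diag(np.linspace(0.1, 1.0, d))   # curvature of the quadratic
noise = 0.01                            # homoscedastic gradient noise level

def stoch_grad(x):
    return A @ x + noise * rng.standard_normal(d)

x = rng.standard_normal(d)
x_prev = x.copy()
v = np.zeros(d)
alpha = 0.5                             # constant stepsize

for t in range(1, 2001):
    gamma = t / (t + 1.0)               # momentum -> 1: v averages ever
                                        # more past gradients over time
    # Extrapolated query point; gamma / (1 - gamma) = t here.
    y = x + (gamma / (1.0 - gamma)) * (x - x_prev)
    v = gamma * v + (1.0 - gamma) * stoch_grad(y)
    x_prev, x = x, x - alpha * v

print("final objective:", 0.5 * x @ A @ x)
```

With noise set to 0, the update behaves like plain gradient descent; with noise, the averaged direction v has variance decaying roughly like 1/t, which is the variance reduction the abstract refers to.

The May 25 afternoon abstract describes the Saliency Sampler, a differentiable layer that decides where to sample a high-resolution input. One simplified way to realize such a layer, assuming PyTorch, is to turn a saliency map into a grid_sample grid by taking, at each output pixel, a Gaussian-weighted, saliency-weighted average of nearby source coordinates, so that salient regions attract samples and get locally magnified. The kernel size, the sigma, and the random stand-in saliency map below are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def saliency_sampling_grid(saliency, kernel_size=9, sigma=3.0):
    """Turn a saliency map (B, 1, H, W) into a grid_sample grid (B, H, W, 2)."""
    B, _, H, W = saliency.shape
    device = saliency.device
    # Base sampling coordinates in [-1, 1], the convention of F.grid_sample.
    ys = torch.linspace(-1, 1, H, device=device)
    xs = torch.linspace(-1, 1, W, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    gx = gx.expand(B, 1, H, W)
    gy = gy.expand(B, 1, H, W)
    # Gaussian kernel used to average nearby coordinates.
    r = torch.arange(kernel_size, device=device, dtype=torch.float32) - kernel_size // 2
    g1d = torch.exp(-(r ** 2) / (2 * sigma ** 2))
    k2d = (g1d[:, None] * g1d[None, :]).view(1, 1, kernel_size, kernel_size)
    pad = kernel_size // 2

    def blur(t):
        return F.conv2d(F.pad(t, (pad, pad, pad, pad), mode="replicate"), k2d)

    # Saliency-weighted local mean of coordinates: each output pixel is
    # pulled toward nearby high-saliency locations, magnifying those regions.
    denom = blur(saliency) + 1e-6
    u = blur(saliency * gx) / denom
    v = blur(saliency * gy) / denom
    return torch.stack([u.squeeze(1), v.squeeze(1)], dim=-1)

# Usage: resample a high-resolution image according to a saliency map, then
# feed the result to the task network; everything stays differentiable.
img_hi = torch.rand(2, 3, 256, 256)
sal = torch.rand(2, 1, 64, 64)  # stand-in for the output of a saliency network
grid = saliency_sampling_grid(sal)
img_task = F.grid_sample(img_hi, grid, align_corners=True)  # (2, 3, 64, 64)
```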
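Because the sampling grid is produced by differentiable operations, gradients from the task loss flow back into whatever network produces the saliency map, which is what allows the layer to be trained as a preprocessing block in an end-to-end fashion, as the abstract describes.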

See the Google doc.

Available slides