Tea Talks

Mila organizes weekly tea talks, generally on Fridays at 10:30 in the auditorium. These talks are technical presentations on a variety of subjects spanning machine learning, aimed at the level of Mila researchers, and are open to the public.

If you’re interested in giving a tea talk, please email .

If you’d like to subscribe to our mailing lists and be notified of all upcoming talks, please email .

The schedule of previous and upcoming talks, along with slides from some of the presentations, is available below.

Recordings of past talks: https://sites.google.com/lisa.iro.umontreal.ca/tea-talk-recordings/home

10h30
Speaker: Bharath Ramsundar
Place: Online
Title: Open sourcing medicine discovery with DeepChem
Abstract: Traditionally, the process of discovering new medicines has been driven by proprietary techniques and algorithms. The advent of deep-learning-driven drug discovery over the last several years has started to change this state of affairs, with increasingly powerful suites of open datasets and algorithms available to researchers. In this talk, I'll introduce the DeepChem project (https://deepchem.readthedocs.io/en/latest/), which seeks to create a powerful open suite of algorithms to enable scientists working on medicine discovery and on scientific problems more broadly. I'll also review some of the core algorithmic techniques underlying molecular machine learning and other related areas of scientific deep learning, and say a bit about our efforts to build a diverse, decentralized open research community with DeepChem.
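For a flavour of the workflow DeepChem supports, here is a minimal sketch, assuming the MoleculeNet Delaney solubility loader and graph-convolution model described in the DeepChem documentation; it is an illustration of the library, not code from the talk:

    # Train a graph-convolution model on the MoleculeNet Delaney
    # (aqueous solubility) regression benchmark with DeepChem.
    import deepchem as dc

    # Load the dataset featurized for graph convolutions; DeepChem returns
    # the task names, the (train, valid, test) splits, and the transformers
    # that were applied to the labels.
    tasks, datasets, transformers = dc.molnet.load_delaney(featurizer="GraphConv")
    train_dataset, valid_dataset, test_dataset = datasets

    # A graph-convolutional regression model with one output per task.
    model = dc.models.GraphConvModel(n_tasks=len(tasks), mode="regression")
    model.fit(train_dataset, nb_epoch=10)

    # Evaluate with Pearson R^2 on the held-out test split.
    metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
    print(model.evaluate(test_dataset, [metric], transformers))
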
Fri 9 October, 10h30
Speaker: Colin Raffel
Affiliation: Brain/UNC
Place: Online
Title: Transfer Learning for NLP: T5 and beyond
Abstract: Transfer learning, where a model is pre-trained on a data-rich task before being fine-tuned on a downstream task of interest, has emerged as the dominant framework for tackling natural language processing (NLP) problems. In this talk, I'll give an introduction to transfer learning for NLP through the lens of our recent large-scale empirical study. To carry out this study, we introduced the "Text-to-Text Transfer Transformer" (T5), a pre-trained language model that casts every NLP problem as a text-to-text problem. After figuring out what works best, we "explored the limits" by scaling up our models to achieve state-of-the-art results on many standard NLP benchmarks. I will then present two follow-up works that provide more insight into what these models are capable of. In the first, we evaluate whether giant language models can answer open-domain questions without accessing an external knowledge source. To perform well on this task, a model must squirrel away vast amounts of knowledge in its parameters during pre-training. In the second, we test whether these models can generate plausible-sounding explanations of their predictions, which provides a crude form of interpretability. I'll provide pointers to our pre-trained models and code to facilitate future work.
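In the text-to-text framing, every task is posed by feeding the model a textual prompt and reading the answer back as text. A minimal sketch, using the Hugging Face transformers port of the released T5 checkpoints rather than the authors' original codebase:

    # Text-to-text inference with a pre-trained T5 checkpoint: the task
    # is specified entirely by a text prefix, and the answer comes back
    # as text.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # Any NLP problem is cast the same way; here, translation.
    prompt = "translate English to German: The house is wonderful."
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    output_ids = model.generate(input_ids)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
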
Fri 16 October, 10h30
Speaker: Gal Chechik
Affiliation: NVIDIA
Place: Online
Title: A causal view of compositional zero-shot visual recognition
Abstract: People easily recognize new visual categories that are new combinations of known components. This compositional generalization capacity is critical for learning in real-world domains like vision and language, because the long tail of new combinations dominates the distribution. Unfortunately, learning systems struggle with compositional generalization because they often build on features that are correlated with class labels even when they are not "essential" to the class. This leads to consistent misclassification of samples from a new distribution, such as new combinations of known components.
Fri 30 October, 10h30
Speaker: Siamak Ravanbakhsh
Affiliation: Mila
Place: Online
Title: Compositionality of Symmetry in Deep Learning
Abstract: A principled approach to modeling structured data is to consider all transformations that maintain its structural relations. Using this perspective in deep learning leads to the design of models (such as ConvNets) that are invariant or equivariant to the symmetry transformations of the data. While equivariant deep learning has dealt with a range of simple structures so far, the notion of compositionality in this symmetry-based approach has not yet been explored. In this talk, I plan to explore various types of compositionality in recent work on symmetry-based model design and identify opportunities for compositional generalization.
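To make "equivariance" concrete, here is a toy sketch (our own illustration, not material from the talk): a 1-D circular convolution commutes with translations of its input, which is exactly the symmetry that ConvNets are designed around.

    # Translation equivariance of a 1-D circular convolution: shifting the
    # input and then convolving gives the same result as convolving and
    # then shifting the output.
    import numpy as np

    def circular_conv(x, w):
        """Circular 1-D convolution of signal x with kernel w."""
        n = len(x)
        return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w)))
                         for i in range(n)])

    rng = np.random.default_rng(0)
    x = rng.normal(size=8)   # input signal
    w = rng.normal(size=3)   # convolution kernel
    shift = 2

    lhs = circular_conv(np.roll(x, shift), w)  # shift, then convolve
    rhs = np.roll(circular_conv(x, w), shift)  # convolve, then shift
    assert np.allclose(lhs, rhs)               # equivariance holds
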
Fri 6 November, 10h30
Speaker: Irina Higgins
Affiliation: DeepMind
Place: Online
Title: Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons
Fri 13 November, 10h30
Speaker: Guillaume Rabusseau
Affiliation: Mila
Place: Online
Fri 20 November, 10h30
Speaker: Lucas Lehnert
Affiliation: Brown
Place: Online

Available slides
