Tea Talks

Mila organizes weekly tea talks, generally on Fridays at 10:30 in the auditorium. These are technical presentations, pitched at the level of Mila researchers, on a variety of subjects spanning machine learning, and they are open to the public.

If you’re interested in giving a tea talk, please email .

If you’d like to subscribe to our mailing lists and get notified of all upcoming talks, please email

The schedule of previous and upcoming talks, as well as some of the presentation slides, is available below.

Recordings of past talks: https://sites.google.com/lisa.iro.umontreal.ca/tea-talk-recordings/home

Schedule (each entry lists the time, speaker, affiliation, place, title, and, where available, the abstract and bio):
Fri 10 January, 10h30 | Amin Emad (McGill) | Mila Agora
Title: On the road to individualized medicine: machine learning in the era of ‘omics’ data
Abstract: Individualized medicine (IM) promises to revolutionize patient care by providing personalized treatments based on an individual’s molecular and clinical characteristics. However, we are still far from achieving the goals of IM. For example, in the case of cancer, the leading cause of death in Canada, the majority of patients only receive (often inadequate) ‘standard of care’ treatment for their cancer type, independent of their tumours’ unique molecular profile. Even when a patient is originally responsive to a drug, they may develop drug resistance and thus face a relapse of cancer. Predicting patients’ clinical drug response to different treatments and identifying biomarkers of drug sensitivity that can be targeted to overcome drug resistance are two major challenges in moving towards individualized medicine. Machine learning (ML) methods are a natural solution to address these issues; however, the complexity of the underlying biological mechanisms and the unique characteristics of the heterogeneous, high-dimensional, multi-modal, and noisy data prohibit us from using off-the-shelf ML algorithms. In this talk, I will describe some recent approaches we developed to address these issues, as well as some important remaining challenges in this domain and our plans to address them.
Bio: Dr. Emad is an Assistant Professor of Electrical and Computer Engineering at McGill University, leading the Computational Biology and Artificial Intelligence (COMBINE) lab. He is affiliated with McGill's Quantitative Life Sciences (QLS) program and the McGill Initiative in Computational Medicine (MiCM), and with the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (UIUC). Before joining McGill, he was a Postdoctoral Research Associate at the NIH KnowEnG Center of Excellence in Big Data Computing, associated with the Department of Computer Science and the Institute for Genomic Biology (IGB) at UIUC. He received his PhD from UIUC in 2015, his MSc from the University of Alberta in 2009, and his BSc from Sharif University of Technology in 2007. His current research interests include developing novel computational methods based on machine learning, network representation learning, and statistical methods to study various problems in pharmacogenomics and regulatory systems genomics.

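As a rough illustration of the kind of prediction problem described in this abstract (and not of Dr. Emad's own methods), the sketch below fits a regularized linear model to predict a drug-response value from high-dimensional, noisy expression-like features. The data and all variable names are synthetic and made up for the example.

```python
# Illustrative sketch only: predicting a drug-response score from
# high-dimensional "omics"-style features with a regularized linear model.
# The data are synthetic; this is not the method presented in the talk.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 5000                 # far more features than samples
X = rng.normal(size=(n_patients, n_genes))      # stand-in for expression data
true_coef = np.zeros(n_genes)
true_coef[:20] = rng.normal(size=20)            # only a few genes matter
y = X @ true_coef + 0.5 * rng.normal(size=n_patients)  # noisy response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ElasticNetCV(cv=5).fit(X_tr, y_tr)      # sparsity helps when p >> n
print("held-out R^2:", round(model.score(X_te, y_te), 3))
print("non-zero coefficients:", int((model.coef_ != 0).sum()))
```
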
Fri 17 January, 10h30 | Irina Rish (Mila) | Mila Agora
Title: Modeling Psychotherapy Dialogues with Kernelized Hashcode Representations: A Nonparametric Information-Theoretic Approach
Abstract: We propose a novel dialogue modeling framework, the first nonparametric, kernel-function-based approach to dialogue modeling, which learns kernelized hashcodes as compressed text representations; unlike traditional deep learning models, it handles relatively small datasets well while also scaling to large ones. We also derive a novel lower bound on mutual information, used as a model-selection criterion favoring representations with better alignment between the utterances of participants in a collaborative dialogue setting, as well as higher predictability of the generated responses. As demonstrated on three real-life datasets, most notably psychotherapy sessions, the proposed approach significantly outperforms several state-of-the-art neural-network-based dialogue systems, both in computational efficiency, reducing training time from days or weeks to hours, and in response quality, achieving an order-of-magnitude improvement over competitors in the frequency of being chosen as the best model by human evaluators.
Bio: Irina Rish is an Associate Professor in the Computer Science and Operations Research department at the Université de Montréal (UdeM) and a core member of Mila - Quebec AI Institute. She holds an MSc and PhD in AI from the University of California, Irvine, and an MSc in Applied Mathematics from the Moscow Gubkin Institute. Dr. Rish's research focuses on machine learning, neural data analysis and neuroscience-inspired AI.

Her current research interests include continual lifelong learning, optimization algorithms for deep neural networks, sparse modeling and probabilistic inference, dialogue generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. Before joining UdeM and Mila in 2019, Irina was a research scientist at the IBM T.J. Watson Research Center, where she worked on various projects at the intersection of neuroscience and AI, and led the Neuro-AI challenge. She received multiple IBM awards, including the IBM Eminence & Excellence Award and IBM Outstanding Innovation Award in 2018, the IBM Outstanding Technical Achievement Award in 2017, and the IBM Research Accomplishment Award in 2009.

Dr. Rish holds 64 patents and has published over 80 research papers, several book chapters, three edited books, and a monograph on sparse modeling. She is an IEEE TPAMI Associate Editor (since 2019) and a member of the Artificial Intelligence Journal (AIJ) editorial board (since 2016); she served as a Senior Area Chair for NIPS-2017, NIPS-2018 and ICML-2018, an Area Chair for ICLR-2019, ICLR-2018, IJCAI-2015, ICML-2015, ICML-2016 and NIPS-2010, tutorials chair for UAI-2012, and workshop chair for UAI-2015 and ICML-2012. She has given several tutorials (AAAI-1998, AAAI-2000, ICML-2010, ECML-2006) and co-organized multiple workshops at core AI conferences, including 11 workshops at NIPS (from 2003 to 2016), ICML-2008 and ECML-2006.

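As a schematic illustration of the general idea of kernelized hashing for text (not the exact construction presented in the talk), the sketch below turns bag-of-words utterance vectors into short binary codes by thresholding kernel similarities to a few reference utterances. All utterances, parameters, and thresholds here are made up for the example.

```python
# Schematic sketch of kernelized hashing for short texts (illustrative only;
# not the exact method from the talk). Each utterance gets a binary code:
# bit j is 1 if its RBF-kernel similarity to reference utterance j exceeds
# the median similarity observed for that reference.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import rbf_kernel

utterances = [
    "how are you feeling today",
    "i have been feeling anxious lately",
    "tell me more about that",
    "work has been very stressful",
    "that sounds difficult to manage",
    "i could not sleep last night",
]

X = CountVectorizer().fit_transform(utterances).toarray().astype(float)

rng = np.random.default_rng(0)
n_bits = 4
ref_idx = rng.choice(len(utterances), size=n_bits, replace=False)  # references
K = rbf_kernel(X, X[ref_idx], gamma=0.1)        # similarity to each reference
codes = (K > np.median(K, axis=0)).astype(int)  # threshold -> binary hashcode

for text, code in zip(utterances, codes):
    print("".join(map(str, code)), text)
```
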
Fri 6 March, 10h30 | Leslie Kaelbling (MIT) | Mila Agora
Title: Doing for our robots what nature did for us
Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about strategies for robot design and then talk in detail about some work I have been involved in, both on the design of an overall architecture for an intelligent robot and on strategies for learning to integrate new skills into the repertoire of an already competent robot.
Bio: Leslie Pack Kaelbling is the Panasonic Professor of Computer Science and Engineering at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. She has made research contributions to decision-making under uncertainty, learning, and sensing with applications to robotics, with a particular focus on reinforcement learning and planning in partially observable domains. She holds an A.B. in Philosophy and a Ph.D. in Computer Science from Stanford University, and has held research positions at SRI International and Teleos Research and a faculty position at Brown University. She is the recipient of the US National Science Foundation Presidential Faculty Fellowship, the IJCAI Computers and Thought Award, and several teaching prizes, and has been elected a Fellow of the AAAI. She was the founder and editor-in-chief of the Journal of Machine Learning Research.

Fri 3 April, 10h30 | Scott Niekum (UT Austin) | Online
Title: Scaling Probabilistically Safe Learning to Robotics
Abstract: Before learning robots can be deployed in the real world, it is critical that probabilistic guarantees can be made about the safety and performance of such systems. In recent years, safe reinforcement learning algorithms have enjoyed success in application areas with high-quality models and plentiful data, but robotics remains a challenging domain for scaling up such approaches. Furthermore, very little work has been done on the even more difficult problem of safe imitation learning, in which the demonstrator's reward function is not known. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable reward inference in the absence of models; (3) efficient off-policy policy evaluation. The proposed algorithms offer a blend of safety and practicality, making a significant step towards safe robot learning with modest amounts of real-world data.
Bio: Scott Niekum is an Assistant Professor and the director of the Personal Autonomous Robotics Lab (PeARL) in the Department of Computer Science at UT Austin. He is also a core faculty member in the interdepartmental robotics group at UT. Prior to joining UT Austin, Scott was a postdoctoral research fellow at the Carnegie Mellon Robotics Institute and received his Ph.D. from the Department of Computer Science at the University of Massachusetts Amherst. His research interests include imitation learning, reinforcement learning, and robotic manipulation. Scott is a recipient of the 2018 NSF CAREER Award and the 2019 AFOSR Young Investigator Award.

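Since off-policy policy evaluation is one of the three themes of this talk, here is a minimal, hedged sketch of the standard importance-sampling estimator on a toy two-armed bandit. This is the textbook technique, not the specific algorithms from the talk, and all quantities are synthetic.

```python
# Minimal sketch of off-policy policy evaluation with importance sampling
# on a toy two-armed bandit. Textbook estimator, not the specific algorithms
# presented in the talk; all data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_reward = np.array([0.3, 0.8])    # expected reward of each action
behavior = np.array([0.7, 0.3])       # policy that generated the data
target = np.array([0.2, 0.8])         # policy we want to evaluate

# Log data under the behavior policy.
n = 10_000
actions = rng.choice(2, size=n, p=behavior)
rewards = rng.binomial(1, true_reward[actions])

# Importance-sampling estimate of the target policy's value.
weights = target[actions] / behavior[actions]
is_estimate = np.mean(weights * rewards)

true_value = (target * true_reward).sum()
print(f"IS estimate: {is_estimate:.3f}  (true value: {true_value:.3f})")
```
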
Fri 24 April, 10h30 | Ross Otto (McGill) | Online
Title: How, when, and why do we make reflective versus reflexive choices?
Abstract: The idea that our choices can arise either from a reflective and cognitively demanding system or from a fast and reflexive system finds broad support in psychology and neuroscience. Clearly there are situations in which one system or the other should control our behavior: making the best possible decision is effortful and time-consuming, but the benefits of deliberative choice may be small relative to its cost. However, little experimental work has addressed what factors play into this tradeoff, or even whether these two modes of choice really capture distinct processes. In this talk I examine how our reliance on reflective versus reflexive choice varies with factors such as the availability of cognitive resources, acute stress, and time pressure. To do this, I leverage the computational framework of reinforcement learning, which yields a precise set of testable predictions and data analysis tools, allowing us to formalize how different modes of choice behavior come about and how these computations might unfold in the brain as measured with fMRI. Finally, I explore new methodologies for addressing these questions, including a quantitative framework for understanding how cognitive effort is allocated in the service of decision-making. Taken together, this work enriches our understanding of how and when people perform reflective versus reflexive choice and when it breaks down, informing both the cognitive psychology and neuroscience of decision-making.
Bio: Ross Otto is an Assistant Professor of Psychology at McGill University. He obtained his BS in Cognitive Science from UCLA in 2005 and his PhD in Psychology from UT Austin in 2012. He completed postdoctoral work at NYU's Center for Neural Science prior to beginning his position at McGill. Otto's work relies on a combination of computational, behavioral, psychophysiological, and "big data" techniques to understand how people make decisions both in the laboratory and in the real world. His lab's work is supported by NSERC, FRQNT, and SSHRC.

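The reflexive/reflective distinction described above is often formalized as model-free versus model-based reinforcement learning. The hedged sketch below contrasts an incremental Q-learning (model-free) update with planning over a known model (model-based) on a tiny synthetic MDP; it is generic textbook RL, not the specific task or models from the talk.

```python
# Hedged sketch: model-free ("reflexive") vs model-based ("reflective") value
# estimates on a tiny two-state, two-action MDP. Textbook RL, not the
# specific experimental task or models from the talk.
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
# Transition probabilities P[s, a, s'] and expected rewards R[s, a].
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[0.0, 0.0],
              [1.0, 2.0]])

# Model-based ("reflective"): plan with value iteration over the known model.
V = np.zeros(n_states)
for _ in range(200):
    V = np.max(R + gamma * P @ V, axis=1)

# Model-free ("reflexive"): incremental Q-learning from sampled experience.
rng = np.random.default_rng(0)
Q, alpha, s = np.zeros((n_states, n_actions)), 0.1, 0
for _ in range(20_000):
    a = rng.integers(n_actions) if rng.random() < 0.1 else Q[s].argmax()
    s_next = rng.choice(n_states, p=P[s, a])
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("planned state values:", V.round(2))
print("learned state values:", Q.max(axis=1).round(2))
```
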
Fri 8 May, 10h30 | Yoshua Bengio (Mila) | Online
Title: Empowering Citizens against COVID-19 with an ML-Based and Decentralized Risk Awareness App
Abstract: What is contact tracing? How can it help to substantially bring down the reproduction number (the number of newly infected individuals per infected person)? How can automated contact tracing act as a complement to existing manual contact tracing? How can ML-based risk estimation generalize contact tracing, moving away from a binary decision about a contact to graded predictions capturing all the clues about being contagious, and how could it enable fitting powerful epidemiological models to the data collected on phones? What are the privacy, human rights, dignity and democracy concerns around digital tracing? How can we deploy decentralized apps with the strongest possible privacy guarantees, both saving lives by greatly reducing the reproduction number of the virus and making sure that neither governments nor other users can access my infection status or my personal data? How do we create trust in both directions and empower citizens with the information needed to act responsibly to protect their community, instead of relying on the authority of the government and the threat of social punishment? How does this make the problem more challenging from a machine learning perspective, given that a lot of information is now inaccessible or blurred to achieve differential privacy and to avoid a central repository tracking people's detailed movements and who they met when? What machine learning techniques appear most promising to jointly train an inference machine which predicts contagiousness in the past and present, and at the same time train a highly structured epidemiological model which serves as a generative engine for running what-if policy scenarios and helps public health take the difficult decisions ahead, using the scientific evidence as well as the data collected in a privacy-first way? How do we set up a form of non-profit data trust which protects citizens and avoids conflicts of interest, keeping the collected data at arm's length from governments while still providing them with the information they need for taking policy decisions and managing the public health challenges? Many questions, and hopefully some early answers.
Bio: Yoshua Bengio is a Full Professor in the computer science and operations research department at the Université de Montréal, scientific director of Mila and of IVADO, recipient of the 2018 Turing Award, Canada Research Chair in Statistical Learning Algorithms, and a Canada CIFAR AI Chair. He pioneered deep learning and in 2018 received the most citations per day of any computer scientist worldwide. He is an Officer of the Order of Canada and a member of the Royal Society of Canada; he was awarded the Marie-Victorin Prize and named Radio-Canada Scientist of the Year in 2017, and he is a member of the NeurIPS board, co-founder and general chair of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains. His goal is to contribute to uncovering the principles that give rise to intelligence through learning, and to foster the development of AI for the benefit of all.

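To make the role of the reproduction number concrete, here is a small illustrative calculation (not the epidemiological model discussed in the talk) comparing how case counts evolve over successive generations of infection when R is above versus below 1. All numbers are invented for the example.

```python
# Illustrative only: how the reproduction number R shapes epidemic growth.
# Each "generation", every infected person infects R others on average.
# A toy calculation, not the epidemiological model from the talk.

def generations(r, initial_cases=100, n_generations=10):
    """Expected new cases per generation under a constant reproduction number r."""
    cases = [initial_cases]
    for _ in range(n_generations):
        cases.append(cases[-1] * r)
    return cases

for r in (2.5, 1.0, 0.7):
    trajectory = generations(r)
    print(f"R = {r}: final generation has about {trajectory[-1]:,.0f} new cases")
# R > 1 -> exponential growth; R < 1 -> the outbreak dies out.
```
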
Fri 15 May, 10h30 | Jian Tang (Mila) | Online
Title: Graph Representation Learning: Algorithms and Applications
Abstract: Graphs, a general type of data structure for capturing interconnected objects, are ubiquitous across disciplines and domains, from computational social science, recommender systems and bioinformatics to chemistry. Recently, there has been growing interest in the machine learning community in developing deep learning architectures for graph-structured data. In this talk, I will give a high-level overview of the research in my group on graph representation learning, including: (1) unsupervised graph representation learning and visualization (WWW’15, WWW’16, WWW’19, ICLR’19); (2) towards combining traditional statistical relational learning and graph neural networks (ICML’19, NeurIPS’19); (3) graph representation learning for drug discovery (ICLR’20).
Bio: Jian Tang is an assistant professor at HEC Montréal and has been a core faculty member at Mila since December 2017. He was named to the first cohort of Canada CIFAR AI Chairs. His research interests focus on deep graph representation learning with a variety of applications including knowledge graphs, drug discovery and recommender systems. He was a research fellow at the University of Michigan and Carnegie Mellon University, and a researcher at Microsoft Research Asia for two years. He received the best paper award at ICML’14 and was nominated for the best paper award at WWW’16.

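As a hedged, minimal sketch of what a graph neural network layer does (a generic GCN-style update, not the specific models surveyed in the talk), the code below propagates node features over a toy adjacency matrix using only NumPy; the graph, features, and weights are invented for the example.

```python
# Minimal sketch of one GCN-style message-passing layer on a toy graph,
# using only NumPy. Generic illustration, not the models from the talk.
import numpy as np

# Toy undirected graph with 4 nodes and a 3-dimensional feature per node.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))   # "learnable" weights

# Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2.
A_hat = A + np.eye(4)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

# One layer: aggregate neighbor features, transform, apply a nonlinearity.
H = np.maximum(A_norm @ X @ W, 0.0)   # ReLU
print(H.round(3))   # each row is the new 2-dimensional embedding of a node
```
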
Fri 22 May, 10h30 | Ioannis Mitliagkas (Mila) | Online
Title: Adversarial formulations, robust learning and generalization: some recent and ongoing work
Abstract: Modern machine learning can involve big, over-parametrized models whose capacity goes beyond what was considered necessary in traditional ML analysis. Some of those models, like generative adversarial networks (GANs), introduce a different paradigm by using multiple, competing objective functions: they are described as games. Compared to single-objective optimization, game dynamics are more complex and less well understood. Similar dynamics also appear in formulations of robust learning and domain generalization. This intersection of over-parametrization, robust learning, and adversarial dynamics presents some exciting new questions on numerics and statistical learning. In this talk, I will give an overview of recent work performed in this area by my group and collaborators, outline some ongoing projects and summarize interesting questions worth exploring.
Bio: Ioannis Mitliagkas is an assistant professor in the Department of Computer Science and Operations Research (DIRO) at the Université de Montréal, a member of Mila and a CIFAR AI Chair. Before that, he was a Postdoctoral Scholar with the departments of Statistics and Computer Science at Stanford University. He obtained his Ph.D. from the Department of Electrical and Computer Engineering at The University of Texas at Austin. His research includes topics in optimization and machine learning, with recent emphasis on generalization and adversarial dynamics.

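To illustrate why game dynamics can behave very differently from single-objective optimization (a classic textbook example, not the group's specific results), the sketch below runs simultaneous gradient descent-ascent on the bilinear game min over x, max over y of x*y; the iterates spiral away from the equilibrium at (0, 0) instead of converging.

```python
# Classic illustration of why game dynamics differ from ordinary optimization
# (textbook example, not the results from the talk): simultaneous gradient
# descent-ascent on the bilinear game  min_x max_y  x * y  spirals outward
# instead of converging to the equilibrium (0, 0).
import numpy as np

x, y, lr = 1.0, 1.0, 0.1
radii = []
for step in range(100):
    grad_x, grad_y = y, x                      # d(xy)/dx = y, d(xy)/dy = x
    x, y = x - lr * grad_x, y + lr * grad_y    # descent on x, ascent on y
    radii.append(np.hypot(x, y))

print("distance from equilibrium over time:")
print(np.round(radii[::20], 3))                # keeps growing -> divergence
```
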
Fri 12 June, 10h30 | Timothy O'Donnell (McGill) | Online
Title: Compositionality in Language
Bio: Tim O'Donnell is an assistant professor, Canada CIFAR AI Chair, and William Dawson Scholar in the Department of Linguistics at McGill University. Previously he was a research scientist at MIT in the Department of Brain and Cognitive Sciences. He completed his PhD at the Harvard University Department of Psychology. His research focuses on developing mathematical and computational models of language learning and processing. His work draws on techniques from computational linguistics, machine learning, and artificial intelligence, integrating concepts from theoretical linguistics and methods from experimental psychology and looking at problems from all these domains.

Fri 19 June, 10h30 | Liam Paull (Mila) | Online
Title: Some Challenges for Efficiently Deploying Robots in Unstructured Environments

Fri 3 July, 10h30 | Petar Veličković (DeepMind) | Online
Title: Algorithmic Inductive Biases

Fri 10 July, 10h30 | Vikram Voleti (Mila) | Online
Title: A Brief Tutorial on Neural ODEs

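Since the last talk above is a tutorial on Neural ODEs, here is a hedged, minimal sketch of the core idea (generic, not the tutorial's own material): a hidden state evolves according to a learned vector field dh/dt = f_theta(h, t), and the forward pass integrates that ODE, here with a simple fixed-step Euler solver. The small network and all parameters below are invented for the example.

```python
# Hedged sketch of the core Neural ODE idea (generic, not the tutorial's own
# material): the hidden state evolves as dh/dt = f_theta(h, t), and the
# forward pass numerically integrates this ODE, here with fixed-step Euler.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)       # small MLP defines f_theta
W2, b2 = rng.normal(size=(2, 8)) * 0.1, np.zeros(2)

def f_theta(h, t):
    """Learned vector field: takes state (2,) and time, returns dh/dt (2,)."""
    inp = np.concatenate([h, [t]])
    return W2 @ np.tanh(W1 @ inp + b1) + b2

def odeint_euler(h0, t0=0.0, t1=1.0, n_steps=100):
    """Integrate dh/dt = f_theta(h, t) from t0 to t1 with Euler steps."""
    h, dt = h0.copy(), (t1 - t0) / n_steps
    for i in range(n_steps):
        h = h + dt * f_theta(h, t0 + i * dt)
    return h

h0 = np.array([1.0, -1.0])        # input embedding / initial hidden state
print("h(1) =", odeint_euler(h0).round(3))
# In a real Neural ODE, gradients w.r.t. the parameters are obtained by
# backpropagating through the solver or via the adjoint method.
```
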

Available slides


Mila goes virtual

Starting March 16, 2020, Mila is shifting its activities to virtual platforms in order to minimize the spread of COVID-19.
