Courses


Courses and schedules - Winter 2023 (preliminary list)

Professor | Course | Description | Credits | Schedule | Dates/Location
Simon Lacoste-Julien | IFT 6132 – Advanced Structured Prediction and Optimization

Structured prediction is the problem of learning a prediction mapping between inputs and structured outputs, i.e. outputs that are made of interrelated parts often subject to constraints. Examples include predicting trees, orderings, alignments, etc., and appear in many applications from computer vision, natural language processing and computational biology among others.

This is an advanced machine learning course that will focus on the fundamental principles and related tools for structured prediction. The course will review the state of the art, tie older and newer approaches together, as well as identify open questions. It will consist of a mix of faculty lectures, class discussions and paper presentations by students, as well as a research project.

Prerequisite: I will assume that most of the content of IFT 6269 Probabilistic Graphical Models is known by the students.

Credits: 4
Tue: 2:30 – 4:30 pm
Thu: 1:30 – 3:30 pm
TBD
Aaron Courville | IFT6135 – Representation Learning

This is a course on representation learning in general and deep learning in particular. Deep learning has recently been responsible for a large number of impressive empirical gains across a wide array of applications including most dramatically in object recognition and detection in images and speech recognition.

In this course we will explore both the fundamentals and recent advances in the area of deep learning. Our focus will be on neural network-type models including convolutional neural networks and recurrent neural networks such as LSTMs. We will also review recent work on attention mechanisms and efforts to incorporate memory structures into neural network models. We will also consider some of the modern neural network-based generative models such as Generative Adversarial Networks and Variational Autoencoders.

This course is available in English and French.

Credits: 4 | Schedule: TBD | Location: TBD
Ioannis Mitliagkas | IFT 6085 – Theoretical principles for deep learning

Research in deep learning produces state-of-the-art results on a number of machine learning tasks. Most of those advances are driven by intuition and massive exploration through trial and error. As a result, theory is currently lagging behind practice. The ML community does not fully understand why the best methods work.

Why can we reliably optimize non-convex objectives?
How expressive are our architectures, in terms of the hypothesis class they describe?
Why do some of our most complex models generalize to unseen examples when we use datasets orders of magnitude smaller than what the classic statistical learning theory deems sufficient?
A symptom of this lack of understanding is that deep learning methods largely lack guarantees and interpretability, two necessary properties for mission-critical applications. More importantly, a solid theoretical foundation can aid the design of a new generation of efficient methods, without the need for blind trial-and-error exploration.

In this class we will go over a number of recent publications that attempt to shed light onto these questions. Before discussing the new results in each paper we will first introduce the necessary fundamental tools from optimization, statistics, information theory and statistical mechanics. The purpose of this class is to get students engaged with new research in the area. To that end, the majority of credit will be given for a class project report and presentation on a relevant topic.

Note: This is an advanced class designed for PhD students with serious mathematical background.

Credits: 4
Wed: 9:30 – 11:30 am
Thu: 9:00 – 11:00 am
TBD
Irina Rish | IFT 6760A – Towards AGI: Scaling, Alignment and Emergent Behaviors in Neural Nets

This seminar-style course will focus on recent advances in the rapidly developing area of “foundation models”, i.e. large-scale neural network models (e.g. GPT-3, CLIP, DALL-E) pretrained on very large, diverse datasets. Such models often demonstrate significant improvement in their few-shot generalization abilities, as compared to their smaller-scale counterparts, across a wide range of downstream tasks – what one could call a “transformation of quantity into quality” or an “emergent behavior”. This is an important step towards the long-standing objective of achieving Artificial General Intelligence (AGI). By AGI here we mean literally a “general”, i.e. broad, versatile AI capable of quickly adapting to a wide range of situations and tasks, both novel and those encountered before – i.e. achieving a good stability (memory) vs plasticity (adaptation) trade-off, in continual learning terminology. In this course, we will survey the most recent advances in large-scale pretrained models, focusing specifically on empirical scaling laws of such systems’ performance with increasing compute, model size, and pretraining data (power laws, phase transitions). We will also explore the trade-off between increasing AI capabilities and AI safety/alignment with human values, considering a range of evaluation metrics beyond predictive performance. Finally, we will touch upon several related fields, including transfer-, continual- and meta-learning, as well as out-of-distribution generalization, robustness and invariant/causal predictive modeling.

Credits: 4
Mon: 16:30 – 18:30
Thu: 16:30 – 18:30
From January 9 to April 13, 2023

Mila (Auditorium 2)

Guillaume Rabusseau | IFT 6166 – Matrix and Tensor Factorization for ML

The goal of this course is to present an overview of linear and multilinear algebra techniques for designing/analyzing ML algorithms and models, and to engage students with new research in the area.
– Fundamental notions of linear and multilinear algebra.
– Old and new ML methods leveraging matrix and tensor decomposition: PCA/CCA, collaborative filtering, spectral graph clustering, spectral methods for HMM, K-FAC, spectral normalization, tensor method of moments, NN/MRF compression, tensor regression/completion, etc.
– Open problems.
Credits: 4
Tue: 12:30 – 14:30
Thu: 11:30 – 13:30
Mila
Guillaume Lajoie | MAT 6215 – Dynamical Systems

This graduate course is an introduction to the treatment of nonlinear differential equations, and more generally to the theory of dynamical systems. The objective is to introduce the student to the theory of dynamical systems and its applications. First, classical dynamics analysis techniques will be presented: continuous and discrete flows, existence and stability of solutions, invariant manifolds, bifurcations and normal forms. Second, an introduction to ergodic theory will be presented: chaotic dynamics, strange attractors, dynamic entropy, high-dimensional systems (e.g. networks), driven dynamics and information processing. Particular attention will be paid to computations performed by dynamical systems. Throughout the course, there will be an emphasis on modern applications in neuroscience, artificial intelligence, and data-driven modeling. This includes: dynamical systems tools for optimization, network dynamics and links to deep learning & representation theory, and computational neuroscience tools.
At the end of the course, the student will be able to apply dynamical systems analysis techniques to concrete problems, as well as navigate the modern dynamical systems literature. Several examples and applications making use of numerical simulations will be used. To take this course, the student must master, at an undergraduate level, notions of calculus, linear differential equations, linear algebra and probability.
Credits: 4
Mon: 9:00 am – 12:00 pm

& virtual seminar Tue: 1:00 – 2:00 pm

(subject to changes)

TBD

Gauthier Gidel | IFT 6164 – Adversarial ML (previously named IFT 6756 – Game Theory and ML)

The number of machine learning applications related to game theory has grown in recent years. For example, two-player zero-sum games are important for generative modeling (GANs) and for mastering games like Go or Poker via self-play. This course sits at the interface between game theory, optimization, and machine learning, and examines how to learn models to play games. It will start with some quick notions of game theory and then delve into machine learning problems with game formulations, such as GANs or multi-agent RL. The course will also cover the optimization (i.e. training) of such machine learning games.

Credits: 4
Wed: 3:30 – 5:30 pm

Thu: 1:30-3:30 pm

TBD
Jian Tang | MATH 80600A – Machine Learning II: Deep Learning and Applications

Deep learning has achieved great success in a variety of fields such as speech recognition, image understanding, and natural language understanding. This course aims to introduce the basic techniques of deep learning and recent progress of deep learning on natural language understanding and graph analysis.

This course aims to introduce the basic techniques of deep learning including feedforward neural networks, convolutional neural networks, and recurrent neural networks. We will also cover recent progress on deep generative models. Finally, we will introduce how to apply these techniques to natural language understanding and graph analysis.

Credits: 3 | Schedule: TBD | Location: TBD
Doina Precup | COMP 579 – Reinforcement Learning

Computer Science (Sci): Bandit algorithms, finite Markov decision processes, dynamic programming, Monte Carlo methods, temporal-difference learning, bootstrapping, planning, approximation methods, on- versus off-policy learning, policy gradient methods, temporal abstraction, and inverse reinforcement learning.

Credits: 4 | Schedule: TBD | Location: TBD
Reihaneh Rabbany | COMP 551 – Applied Machine Learning

This course covers a selected set of topics in machine learning and data mining, with an emphasis on understanding the inner workings of the common algorithms. The majority of sections are related to commonly used supervised learning techniques, and to a lesser degree unsupervised methods. This includes fundamental algorithms for linear and logistic regression, decision trees, support vector machines, clustering, and neural networks, as well as key techniques for feature selection and dimensionality reduction, error estimation, and empirical validation.

Credits: 4 | Schedule: TBD | Location: TBD
Aditya Mahajan | ECSE 506 – Stochastic Control and Decision Theory
Markov decision processes (MDP), dynamic programming and approximate dynamic programming. Stochastic monotonicity, structure of optimal policies. Models with imperfect and delayed observations, partially observable Markov decision processes (POMDPs), information state and approximate information state. Linear quadratic and Gaussian (LQG) systems, team theory, information structures, static and dynamic teams, dynamic programming for teams.
Credits: 3 | Schedule: TBD | Location: TBD
Siamak Ravanbakhsh | COMP 588 – Probabilistic Graphical Models
The course covers representation, inference and learning with graphical models; the topics at high level include directed and undirected graphical models; exact inference; approximate inference using deterministic optimization based methods, as well as stochastic sampling based methods; learning with complete and partial observations.
Credits: TBD
Tue: 10:00 – 11:30 am
Thu: 10:00 – 11:30 am
McGill University
Golnoosh Farnadi | 80629A – Machine Learning I: Large-Scale Data Analysis and Decision Making

In this course we will study machine learning models. In addition to standard models, we will study models for analyzing user behaviour and for decision making.
Massive datasets are now common and require scalable analysis tools. Machine learning provides such tools and is widely used for modelling problems across many fields including artificial intelligence, bioinformatics, finance, marketing, education, transportation, and health. In this context, we study how standard machine learning models for supervised (classification, regression) and unsupervised learning (for example, clustering and topic modelling) can be scaled to massive datasets using modern computation techniques (for example, computer clusters). In addition, we will discuss recent models for recommender systems as well as for decision making (including multi-armed bandits and reinforcement learning). Through a course project, students will have the opportunity to gain practical experience with the analysis of datasets from their field(s) of interest. A certain level of familiarity with computer programming will be expected.
Credits: 3
Fri: 12:30 – 3:30 pm
TBD
Timothy J. O’Donnell | LING 645 – Computational Linguistics

Introduction to foundational ideas in computational linguistics and natural language processing. Topics include formal language theory, probability theory, estimation and inference, and recursively defined models of language structure. Emphasis on both the mathematical foundations of the field as well as how to use these tools to understand human language.

Credits: TBD | Schedule: TBD | Location: TBD
Aishwarya Agrawal | IFT6765A – Vision and Language

A seminar course on recent advances in research problems at the intersection of computer vision and natural language processing, such as caption-based image retrieval, grounding referring expressions, image captioning, visual question answering, etc.

Credits: 4
Tue: 9:30 – 11:30 am

Fri: 1:30 – 3:30 pm

TBD

Mila (preferred)

Pierre-Luc Bacon (Ioannis in the Fall, PLB in the winter) | IFT6390 – Fundamentals in machine learning

Basic elements of statistical learning algorithms. Examples of applications in data mining, nonlinear regression, temporal data, and deep learning.

Credits: 4
Mon: 12:30 – 2:30 pm

Wed: 2:30-5:30 PM

TBD

UdeM

Dhanya Sridhar | IFT 6168 – Causal Inference and Machine Learning

Machine learning (ML), with successes ranging from language understanding to biological settings, is a key ingredient of intelligent agents that help us with science and decision making. However, ML faces two major hurdles that limit its wider use. First, ML systems struggle to generalize out-of-distribution (OOD), to unseen tasks and domains. Second, ML systems learn correlations, but science and decision making require causal inference – an inference about the effects of interventions. The field of causality, with its formalism of causal models, provides a theoretical framework to address the shortcomings of ML systems. Causality benefits from ML too: instead of carefully measuring variables of interest and defining causal models by hand, with ML we can learn to infer causal quantities from rich sources of data.

In this course, we’ll begin with an introduction to the theory of causal models. We’ll build on this foundation and study the role causality plays in OOD generalization. Then, we’ll study how techniques from ML such as prediction with NNs, representation learning, and gradient-based optimization help us leverage large-scale, unstructured data to make causal inferences, from estimating effects to discovering causal models. We’ll focus on the challenges and open research problems around learning causal variables and models from data using ML. This is an advanced course, taught seminar-style, and expects students to have a strong background in ML.

Credits: 4
Tue: 12:30 – 2:30 pm

Fri: 11:30am – 1:30pm

From January 10 to April 14, 2023
Glen Berseth | IFT 6163 – Robot Learning

Learning methods such as deep reinforcement learning have shown success in solving simulated planning and control problems but struggle to produce diverse, intelligent behaviour on robots. This class aims to discuss these limitations and study methods to overcome them, enabling agents capable of training autonomously and becoming learning and adapting systems that require little supervision. By the end of the course, each student should have a solid grasp of different techniques to train robots to accomplish tasks in the real world. The techniques covered in the course include but are not limited to reinforcement learning, batch RL, multi-task RL, model-based RL, Sim2Real, hierarchical RL, goal-conditioned RL, multi-agent RL, the fragility of RL, meta-level decision making, and learning reward functions.

Credits: 4 | Schedule: TBD | Location: UdeM
David Rolnick | COMP 767 – Machine learning applied to climate change

This seminar will explore how machine learning can be applied in fighting climate change. We will look at ways that machine learning can be used to help mitigate greenhouse gas emissions and adapt to the effects of climate change – via applications in electricity systems, buildings, transportation, agriculture, disaster response, and many other areas. Particular emphasis will be given to understanding exactly when machine learning is relevant and helpful, and how to go about scoping, developing, and deploying a project so that it has the intended impact.

Credits: 4
Mon: 10:00 – 11:30 am
Wed: 10:00 – 11:30 am
McGill
Derek Nowrouzezahrai | ECSE 446/546 – Realistic/Advanced Image Synthesis

This course presents modern mathematical models of lighting and the algorithms needed to solve them and generate beautiful realistic images. Both traditional numerical methods and modern machine learning-based approaches will be covered.

Credits: 4 | Schedule: TBD | Location: McGill
Siva Reddy, Timothy J. O’Donnell | COMP 345 / LING 345 – From Natural Language to Data Science

This course is for people with no experience in NLP who would like to see how it can be used for exciting data science applications. We suggest other NLP/CL courses if you want to focus on the theoretical side of NLP/CL. Topics covered in this course include: Language data and applications, Searching through data, How to make sense of data, Language Modeling, Language to decisions, Information Retrieval, Information Extraction, Social Networks (Twitter and Facebook data), Recommendation systems, Ethics.

Credits: 3
Tue: 11:35 am – 12:55 pm
Thu: 11:35 am – 12:55 pm
TBD

Courses and schedules - Fall 2022

Professor | Course | Description | Credits | Schedule | Dates/Location
Simon Lacoste-Julien | IFT 6269 – Probabilistic Graphical Models

Representation of systems as probabilistic graphical models, inference in graphical models, learning parameters from data.

Credits: 4
Tue: 3-5 pm

Fri: 3-5 pm

06-09-2022

09-09-2022

Mila

Ioannis Mitliagkas | IFT 6390 – Fondements de l’Apprentissage Machine (Fundamentals of Machine Learning)

Basic elements of statistical and symbolic learning algorithms. Examples of applications in data mining, pattern recognition, nonlinear regression, and temporal data.

Credits: 4 | Schedule: TBD | Location: TBD
Sarath Chandar | INF8250E – Reinforcement Learning

Designing autonomous decision-making systems is one of the longstanding goals of Artificial Intelligence. Such decision-making systems, if realized, can have a big impact on machine learning for robotics, game playing, control, and health care, to name a few. This course introduces Reinforcement Learning as a general framework to design such autonomous decision-making systems. By the end of this course, you will have a solid knowledge of the core challenges in designing RL systems and how to approach them.

Credits: 3
Mon: 12:45 – 15:45
08-29-2022
Polytechnique
Laurent Charlin | MATH 80629 – Apprentissage automatique I : Analyse des Mégadonnées et Prise de décision (Machine Learning I: Big Data Analysis and Decision Making)

In this course, we will study machine learning models, including models for decision making.

Massive datasets are now common and require scalable analysis tools. Machine learning provides such tools and is widely used for modelling problems across many fields including artificial intelligence, bioinformatics, finance, marketing, education, transportation, and health. In this context, we study how standard machine learning models for supervised (classification, regression) and unsupervised learning (for example, clustering and topic modelling) can be scaled to massive datasets using modern
computation techniques (for example, computer clusters). In addition, we will discuss recent models for recommender systems as well as for decision making (including multi-armed bandits and reinforcement learning).

I am teaching both the English and the French version in Fall 2022.

Credits: 3
[French] Wed: 8:30 – 11:30

[English] Mon: 8:30 – 11:30

[French] 31-08-22
[English] 29-08-22
HEC
Jackie C. K. Cheung | COMP 550 – Natural Language Processing

An introduction to the computational modelling of natural language, including algorithms, formalisms, and applications. Computational morphology, language modelling, syntactic parsing, lexical and compositional semantics, and discourse analysis. Selected applications such as automatic summarization, machine translation, and speech processing. Machine learning techniques for natural language processing.
Credits: 3 | Schedule: TBD | Location: TBD

Timothy J. O’Donnell | COMP598/LING 682 – Probabilistic Programming

Probabilistic inference viewed as a form of non-standard interpretation of programming languages, with a focus on sampling algorithms using the programming language Gen.

Credits: 3 | Schedule: TBD | Location: TBD

Reihaneh Rabbany | COMP 596 – Network Science

An introduction to Network Science; this is a half-lecture, half-seminar course. Networks model the relationships in complex systems, from hyperlinks between web pages and co-authorships between research scholars to biological interactions between proteins and genes, and synaptic links between neurons. Network Science is an interdisciplinary research area involving researchers from Physics, Computer Science, Sociology, Math and Statistics, with applications in a wide range of domains including Biology, Medicine, Political Science, Marketing, Ecology, Criminology, etc. In this course, we will cover the basic concepts and techniques used in Network Science, review state-of-the-art techniques, and discuss the most recent developments.

Credits: 3 | Schedule: TBD | Location: TBD

Guy Wolf | MAT 6493 – Geometric data analysis

Formal and analytic approaches for modeling intrinsic geometries in data. Algorithms for constructing and utilizing such geometries in machine learning. Applications in classification, clustering, and dimensionality reduction.

The course will accommodate anglophone students who do not speak French, as well as francophone students.

Credits: 4
Mon: 1:30 – 5:20 pm
TBD

UdeM: 4186 Pav. André-Aisenstadt

Sarath Chandar | INF8245E – Machine Learning

This course provides a rigorous introduction to the field of machine learning (ML). The aim of the course is not just to teach how to use ML algorithms but also to explain why, how, and when these algorithms work. The course introduces fundamental algorithms in supervised learning and unsupervised learning from first principles. While covering several machine learning problems such as regression, classification, representation learning, and dimensionality reduction, the course will introduce the core theory that unifies all of these algorithms.

Credits: 3
Wed: 9:30 am – 12:30 pm
08-31-2022
Polytechnique
Pierre-Luc Bacon | IFT 6760C – Reinforcement Learning

Advanced course in reinforcement learning. Topics: policy gradient methods, gradient estimation, analysis of value-based function approximation methods, optimal control and automatic differentiation, and bilevel optimization in meta-learning and inverse reinforcement learning.

Credits: 4
Not offered in the Fall 2022 semester; returns the following year.
Gauthier Gidel and Glen Berseth | IFT 6758 – Data Science

The goal of this course is to introduce the concepts (theory and practice) needed to approach and solve data science problems. The first part of the course will cover the principles of analyzing data, the basics of different kinds of models, and statistical inference. The second part expands into the statistical methods and practical techniques to deal with common modalities of data: images, text, and graphs. Specific programming frameworks required for data science will be covered in the lab sessions.

Credits: 4
Tue: 11:30 am – 12:30 pm
Thu: 4:30 – 6:30 pm
Lab – Tue: 12:30 – 2:30 pm
TBD

Online

Golnoosh Farnadi | MATH80630 – Trustworthy Machine Learning

This course will teach students to recognize where, and understand why, ethical issues and policy questions can arise when applying data science to real-world problems. It will focus on ways to conceptualize, measure, and mitigate bias in data-driven decision-making.

This is a graduate course, in which we will cover methods for trustworthy and ethical machine learning and AI, focusing on the technical perspective of methods that allow addressing current ethical issues. Recent years have shown that unintended discrimination arises naturally and frequently in the use of machine learning and algorithmic decision making. We will work systematically towards a technical understanding of this problem mindful of its social and legal context. This course will bring analytic and technical precision to normative debates about the role that data science, machine learning, and artificial intelligence play in consequential decision-making in commerce, employment, finance, healthcare, education, policing, and other areas. Students will learn to think critically about how to plan, execute, and evaluate a project with these concerns in mind, and how to cope with novel challenges for which there are often no easy answers or established solutions.


Credits: 3
Fri: 3:30 – 6:30 pm
27-08-2022
Aishwarya Agrawal | IFT6135 – Representation Learning

This is a course on representation learning in general and deep learning in particular. Deep learning has recently been responsible for a large number of impressive empirical gains across a wide array of applications including most dramatically in object recognition and detection in images and speech recognition.

In this course we will explore both the fundamentals and recent advances in the area of deep learning. Our focus will be on neural network-type models including convolutional neural networks and recurrent neural networks such as LSTMs. We will also review recent work on attention mechanisms and efforts to incorporate memory structures into neural network models. We will also consider some of the modern neural network-based generative models such as Generative Adversarial Networks and Variational Autoencoders.

Credits: 4
Tue: 9:30 – 11:30 am
Fri: 12:30 – 2:30 pm
06-09-2022

Mila Agora

Prakash Panangaden and Joey Bose | COMP760 – Geometry and Generative Models

In recent years, Deep Generative Models have seen remarkable success over a variety of data domains such as images, text, and audio, to name a few. However, the predominant approach in many of these models (e.g. GANs, VAEs, Normalizing Flows) is to treat data as fixed-dimensional continuous vectors in some Euclidean space, despite significant evidence to the contrary (e.g. 3D molecules). This course places a direct emphasis on learning generative models for complex geometries described via manifolds, such as spheres, tori, hyperbolic spaces, implicit surfaces, and homogeneous spaces. The purpose of this seminar course is to understand the key design principles that underpin the new wave of geometry-aware generative models that treat the rich geometric structure in data as a first-class citizen. The course will also serve to develop extensions to these approaches at the leading edge of research; as a result, a major component of the course will focus on class participation through presenting papers and a thematically relevant course project.

Credits: 3
Fri: 1:00 – 4:00 pm
Mila Auditorium 1