Courses and schedules - Winter 2022 (preliminary list)
See the full list of DIRO courses
Professor | Course | Description | Credits | Schedule | Location |
---|---|---|---|---|---|
Simon Lacoste-Julien | IFT 6132 – Advanced Structured Prediction and Optimization | Structured prediction is the problem of learning a prediction mapping between inputs and structured outputs, i.e. outputs that are made of interrelated parts often subject to constraints. Examples include predicting trees, orderings, alignments, etc., and appear in many applications in computer vision, natural language processing, and computational biology, among others. This is an advanced machine learning course that will focus on the fundamental principles and related tools for structured prediction. The course will review the state of the art, tie older and newer approaches together, and identify open questions. It will consist of a mix of faculty lectures, class discussions and paper presentations by students, as well as a research project. Prerequisite: I will assume that most of the content of IFT 6269 Probabilistic Graphical Models is known to the students. | 4 | CANCELLED for Winter 2022; it will next be taught in Winter 2023 | |
Aaron Courville | IFT 6135 – Representation Learning | This is a course on representation learning in general and deep learning in particular. Deep learning has recently been responsible for a large number of impressive empirical gains across a wide array of applications, most dramatically in object recognition and detection in images and in speech recognition. In this course we will explore both the fundamentals and recent advances in the area of deep learning. Our focus will be on neural network-type models, including convolutional neural networks and recurrent neural networks such as LSTMs. We will also review recent work on attention mechanisms and efforts to incorporate memory structures into neural network models, and consider some of the modern neural network-based generative models such as Generative Adversarial Networks and Variational Autoencoders. | 4 | Tue: 10:30 AM – 12:30 PM Wed: 12:30-2:30 PM | UdeM |
Ioannis Mitliagkas | IFT 6085 – Theoretical principles for deep learning | Research in deep learning produces state-of-the-art results on a number of machine learning tasks. Most of those advances are driven by intuition and massive exploration through trial and error. As a result, theory is currently lagging behind practice: the ML community does not fully understand why the best methods work. Why can we reliably optimize non-convex objectives? In this class we will go over a number of recent publications that attempt to shed light on these questions. Before discussing the new results in each paper, we will first introduce the necessary fundamental tools from optimization, statistics, information theory and statistical mechanics. The purpose of this class is to get students engaged with new research in the area. To that end, the majority of credit will be given for a class project report and presentation on a relevant topic. Note: This is an advanced class designed for PhD students with a serious mathematical background. | 4 | Wed: 9:30-11:30 AM Thu: 9:30-11:30 AM | At Mila |
Irina Rish | IFT 6167 (6760B) – Neural Scaling Laws and Foundation Models | This seminar-style course will focus on recent advances in the rapidly developing area of “foundation models”, i.e. large-scale neural network models (e.g., GPT-3, CLIP, DALL-E) pretrained on very large, diverse datasets. Such models often demonstrate significant improvement in their few-shot generalization abilities, as compared to their smaller-scale counterparts, across a wide range of downstream tasks – what one could call a “transformation of quantity into quality” or an “emergent behavior”. This is an important step towards the long-standing objective of achieving Artificial General Intelligence (AGI). By AGI here we mean literally a “general”, i.e. broad and versatile, AI capable of quickly adapting to a wide range of situations and tasks, both novel and previously encountered – i.e. achieving a good stability (memory) vs. plasticity (adaptation) trade-off, in continual learning terminology. In this course, we will survey the most recent advances in large-scale pretrained models, focusing specifically on empirical scaling laws of such systems’ performance with increasing compute, model size, and pretraining data (power laws, phase transitions). We will also explore the trade-off between increasing AI capabilities and AI safety/alignment with human values, considering a range of evaluation metrics beyond predictive performance. Finally, we will touch upon several related fields, including transfer-, continual-, and meta-learning, as well as out-of-distribution generalization, robustness, and invariant/causal predictive modeling. | 4 | Mon: 4:30-6:30 PM Thu: 4:30-6:30 PM | At Mila |
Guillaume Rabusseau | IFT 6760A – Matrix and Tensor Factorization for ML | The goal of this course is to present an overview of linear and multilinear algebra techniques for designing/analyzing ML algorithms and models, and to engage students with new research in the area. – Fundamental notions of linear and multilinear algebra. – Old and new ML methods leveraging matrix and tensor decomposition: PCA/CCA, collaborative filtering, spectral graph clustering, spectral methods for HMM, K-FAC, spectral normalization, tensor method of moments, NN/MRF compression, tensor regression/completion, etc. – Open problems. | 4 | Tue: 12:30-2:30 PM Thu: 11:30-1:30 PM | At Mila |
Guillaume Lajoie | MAT 6215 – Dynamical Systems | This graduate course is an introduction to the treatment of nonlinear differential equations, and more generally to the theory of dynamical systems. The objective is to introduce the student to the theory of dynamical systems and its applications. First, classical dynamics analysis techniques will be presented: continuous and discrete flows, existence and stability of solutions, invariant manifolds, bifurcations and normal forms. Second, an introduction to ergodic theory will be presented: chaotic dynamics, strange attractors, dynamic entropy, high-dimensional systems (e.g. networks), driven dynamics and information processing. Particular attention will be paid to computations performed by dynamical systems. Throughout the course, there will be an emphasis on modern applications in neuroscience, artificial intelligence, and data-driven modeling. This includes dynamical systems tools for optimization, network dynamics and links to deep learning and representation theory, and computational neuroscience tools. At the end of the course, the student will be able to apply dynamical systems analysis techniques to concrete problems, as well as navigate the modern dynamical systems literature. Several examples and applications making use of numerical simulations will be used. To take this course, the student must master, at an undergraduate level, notions of calculus, linear differential equations, linear algebra and probability. | 4 | Tue: 9:30 AM – 1:00 PM | TBD |
Blake Richards | COMP 549 – Brain-Inspired AI (replaces COMP 596) | This is a historical overview of the influence of neuroscience on artificial intelligence. It will be a seminar-style class, mixing lecture, discussion, and class presentations. Topics covered will include perceptrons, the origins of reinforcement learning, parallel distributed processing, Boltzmann machines, brain-inspired neural network architectures, and modern approaches to deep learning that incorporate attention, memory and ensembles. | 3 | Mon: 4:00-5:30 PM Wed: 4:00-5:30 PM | TBD |
Gauthier Gidel | IFT 6756 – Game Theory and ML (the course name and number will change; new name: Adversarial ML, new number TBD) | The number of machine learning applications related to game theory has been growing in the last couple of years. For example, two-player zero-sum games are important for generative modeling (GANs) and for mastering games like Go or Poker via self-play. This course is at the interface between game theory, optimization, and machine learning. It tries to understand how to learn models to play games. It will start with some quick notions of game theory and eventually delve into machine learning problems with game formulations, such as GANs or multi-agent RL. The course will also cover the optimization (a.k.a. training) of such machine learning games. | 4 | Tue: 4:30-6:30 PM Fri: 3:30-5:30 PM | At Mila |
Jian Tang | MATH 80600A – Machine Learning II: Deep Learning and Applications | Deep learning has achieved great success in a variety of fields such as speech recognition, image understanding, and natural language understanding. This course introduces the basic techniques of deep learning, including feedforward neural networks, convolutional neural networks, and recurrent neural networks, as well as recent progress on deep generative models. Finally, we will show how to apply these techniques to natural language understanding and graph analysis. | 3 | TBD | TBD |
Doina Precup | COMP 579 – Reinforcement Learning | Bandit algorithms, finite Markov decision processes, dynamic programming, Monte Carlo methods, temporal-difference learning, bootstrapping, planning, approximation methods, on- versus off-policy learning, policy gradient methods, temporal abstraction, and inverse reinforcement learning. | 4 | TBD | TBD |
Reihaneh Rabbany | COMP 551 – Applied Machine Learning | This course covers a selected set of topics in machine learning and data mining, with an emphasis on understanding the inner workings of the common algorithms. The majority of sections are devoted to commonly used supervised learning techniques, and to a lesser degree unsupervised methods. This includes fundamental algorithms such as linear and logistic regression, decision trees, support vector machines, clustering, and neural networks, as well as key techniques for feature selection and dimensionality reduction, error estimation, and empirical validation. | 4 | Tue: 1:00-2:25 PM Thu: 1:00-2:25 PM | Remote/McGill |
Aditya Mahajan | ECSE 506 – Stochastic Control and Decision Theory | Markov decision processes (MDPs), dynamic programming and approximate dynamic programming. Stochastic monotonicity, structure of optimal policies. Models with imperfect and delayed observations, partially observable Markov decision processes (POMDPs), information state and approximate information state. Linear-quadratic-Gaussian (LQG) systems, team theory, information structures, static and dynamic teams, dynamic programming for teams. | 3 | Tue: 10:00-11:30 AM Thu: 10:00-11:30 AM | TBD |
Siva Reddy | COMP 599 – Natural Language Understanding with Deep Learning | The field of natural language processing (NLP) has seen multiple paradigm shifts over the decades, from symbolic AI to statistical methods to deep learning. We review this shift through the lens of natural language understanding (NLU), a branch of NLP that deals with “meaning”. We start with the questions of what meaning is and what it means for a machine to understand language. We explore how to represent the meaning of words, phrases, sentences and discourse, and then dive into many useful NLU applications. | TBD | TBD | TBD |
Golnoosh Farnadi | MATH 80629A – Machine Learning I: Large-Scale Data Analysis and Decision Making | TBD | TBD | Wed: 8:30-11:30 AM | TBD |
Tim O’Donnell | COMP 596/LING 683 – Probabilistic Programming | An introduction to Bayesian inference via probabilistic programming. | TBD | Thu: 8:30-11:30 AM | TBD |
Aishwarya Agrawal | IFT6xAA – Vision and Language | A seminar course on recent advances in vision and language research. | 4 | Tue: 9:30-11:30 AM Fri: 1:30-3:30 PM | At Mila |
Pierre-Luc Bacon (Ioannis in the fall, PLB in the winter) | IFT 6390 – Fundamentals of Machine Learning | Basic elements of statistical learning algorithms. Examples of applications in data mining, nonlinear regression, temporal data, and deep learning. | 4 | Mon: 12:30-2:30 PM Wed: 2:30-5:30 PM | UdeM |
Dhanya Sridhar | IFT 6251 – Causal inference and machine learning | There is a growing interest in the intersection of causal inference and machine learning. On one hand, ML methods — e.g., prediction methods, unsupervised methods, representation learning — can be adapted to estimate causal relationships between variables. On the other hand, the language of causality could lead to new learning criteria that yield more robust and fair ML algorithms. In this course, we’ll begin with an introduction to the theory behind causal inference. Next, we’ll cover work on causal estimation with neural networks, representation learning for causal inference, and flexible sensitivity analysis. We’ll conclude with work that draws upon causality to make machine learning methods fair or robust. This is an advanced, seminar-style course and students are expected to have a strong background in ML. | 4 | Tue: 12:30-2:30 PM Fri: 11:30-1:30 PM | At Mila |
Courses and schedules - Fall 2021 (preliminary list)
See the full list of DIRO courses
Professor | Course | Description | Credits | Schedule | Dates |
---|---|---|---|---|---|
Simon Lacoste-Julien | IFT 6269 – Probabilistic Graphical Models and Learning | Representation of systems as probabilistic graphical models, inference in graphical models, learning parameters from data. | 4 | TBC | TBC |
Ioannis Mitliagkas | IFT 6390 – Fundamentals of Machine Learning | Basic elements of statistical and symbolic learning algorithms. Examples of applications in data mining, pattern recognition, nonlinear regression, and temporal data. | 4 | Section A: Wed 9:30-11:30 and Thu 9:30-10:30 Section A1: Thu 10:30-12:30 Section A102: Thu 10:30-12:30 | 01-09-2021 – 08-12-2021 |
Sarath Chandar | INF8953DE – Reinforcement Learning | Designing autonomous decision-making systems is one of the longstanding goals of artificial intelligence. Such decision-making systems, if realized, can have a big impact in machine learning for robotics, game playing, control, and health care, to name a few. This course introduces reinforcement learning as a general framework to design such autonomous decision-making systems. By the end of this course, you will have a solid knowledge of the core challenges in designing RL systems and how to approach them. | 3 | TBD | TBD |
Laurent Charlin | MATH 80629 – Machine Learning I: Large-Scale Data Analysis and Decision Making | In this course, we will study machine learning models, as well as models of user behavior analysis and decision making. Large datasets are now common and require scalable analytics. We will also discuss recent models for recommender systems as well as for decision making (including multi-armed bandits and reinforcement learning). | 3 | [Section 1] Wed 8:30-11:30 [Section 2] Fri 8:30-11:30 | [Section 1] 09-01-2021 – 12-01-2021 [Section 2] 09-03-2021 – 12-03-2021 |
Jackie C. K. Cheung | COMP 550 – Natural Language Processing | An introduction to the computational modelling of natural language, including algorithms, formalisms, and applications. Computational morphology, language modelling, syntactic parsing, lexical and compositional semantics, and discourse analysis. Selected applications such as automatic summarization, machine translation, and speech processing. Machine learning techniques for natural language processing. | 3 | Mon and Wed: 2:30-4:00 | TBD |
Siva Reddy, Timothy J. O’Donnell | COMP 596 – From Natural Language to Data Science | TBD | 4 | TBD | TBD |
Timothy J. O’Donnell | LING 645 – Computational Linguistics | TBD | TBD | TBD | TBD |
Reihaneh Rabbany | COMP 596 – Network Science | This is a half-lecture, half-seminar course introducing Network Science. Networks model the relationships in complex systems, from hyperlinks between web pages and co-authorships between research scholars, to biological interactions between proteins and genes, and synaptic links between neurons. Network Science is an interdisciplinary research area involving researchers from Physics, Computer Science, Sociology, Math and Statistics, with applications in a wide range of domains including Biology, Medicine, Political Science, Marketing, Ecology, Criminology, etc. In this course, we will cover the basic concepts and techniques used in Network Science, review the state-of-the-art techniques, and discuss the most recent developments. | 3 | TBD | TBD |
Guy Wolf | MAT 6495 – Spectral Graph Theory | While graphs are intuitively and naturally represented by vertices and edges, such representations are limited in terms of their analysis, both theoretically and practically (e.g., when implementing graph algorithms). A more powerful approach is yielded by representing them via appropriate matrices (e.g., adjacency, diffusion kernels, or graph Laplacians) that capture intrinsic relations between vertices over the “geometry” represented by the graph structure. Spectral graph theory leverages such matrices, and in particular their spectra and eigendecompositions, to study the properties of graphs and their underlying intrinsic structure. This study leads to surprising and elegant results, not only from a mathematical standpoint, but also in practice, with tractable implementations used, e.g., in clustering, visualization, dimensionality reduction, manifold learning, and geometric deep learning. Finally, nearly any modern data can be modelled as a graph, either naturally (e.g., social networks) or via appropriate affinity measures, and therefore the notions and tools studied in this course provide a powerful framework for capturing and understanding data geometry in general. The course will accommodate anglophone students who do not speak French, as well as francophone students. | 4 | Mon: 2:00-5:00 PM Tue: 12:00-1:00 PM | TBD |
Sarath Chandar | INF8953CE – Machine Learning | This course provides a rigorous introduction to the field of machine learning (ML). The aim of the course is not just to teach how to use ML algorithms but also to explain why, how, and when these algorithms work. The course introduces fundamental algorithms in supervised and unsupervised learning from first principles. While covering several machine learning problems such as regression, classification, representation learning, and dimensionality reduction, the course will introduce the core theory that unifies all of these algorithms. | 3 | TBD | TBD |
Pierre-Luc Bacon | IFT 6760C – Reinforcement Learning | Advanced course in reinforcement learning. Topics: policy gradient methods, gradient estimation, analysis of value-based function approximation methods, optimal control and automatic differentiation, bilevel optimization in meta-learning and inverse reinforcement learning. | 4 | Wed: 1:30-3:30 Mon: 3:30-5:30 | 01-09-2021 – 13-10-2021 13-09-2021 – 04-10-2021 25-10-2021 – 06-12-2021 27-10-2021 – 08-12-2021 |
Gauthier Gidel | IFT 6758 – Data Science | The goal of this course is to introduce the concepts (theory and practice) needed to approach and solve data science problems. The first part of the course will cover the principles of analyzing data, the basics about different kinds of models and statistical inference. The second part expands into the statistical methods and practical techniques to deal with common modalities of data – image, text and graphs. Specific programming frameworks required for data science will be covered in the lab sessions. | 4 | TBD | TBD |
Prakash Panangaden and Adam Oberman | COMP 599/MATH 597 – Statistical Learning Theory | TBC | TBD | TBD | TBD |
David Rolnick | COMP 611 – Mathematical Tools for Computer Science | This is a whirlwind introduction to important math that turns up everywhere in computer science. The focus is on how to think mathematically, and how to write proofs, using techniques such as induction, contradiction, and monovariants. We will explore, from a mathematical perspective, topics including combinatorics, graph theory, probability, linear algebra, algorithms, data structures, and computational complexity. | TBD | TBD | TBD |
Golnoosh Farnadi | MATH 80629A – Machine Learning I: Large-Scale Data Analysis and Decision Making | TBC | TBD | TBD | TBD |
Aishwarya Agrawal | IFT 6135 – Representation Learning | This is a course on representation learning in general and deep learning in particular. Deep learning has recently been responsible for a large number of impressive empirical gains across a wide array of applications, most dramatically in object recognition and detection in images and in speech recognition. In this course we will explore both the fundamentals and recent advances in the area of deep learning. Our focus will be on neural network-type models, including convolutional neural networks and recurrent neural networks such as LSTMs. We will also review recent work on attention mechanisms and efforts to incorporate memory structures into neural network models, and consider some of the modern neural network-based generative models such as Generative Adversarial Networks and Variational Autoencoders. | 4 | TBD | TBD |