
Sarath Chandar

Core Academic Member
Assistant Professor, École Polytechnique de Montréal, Canada CIFAR AI Chair

Assistant Professor in the Department of Computer and Software Engineering at Polytechnique Montréal and Adjunct Professor in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal.

Please check my personal website for more information.

Publications

2021-11

Artificial intelligence and its contribution to overcome COVID-19
Arun Chockalingam, Vibha Tyagi, Rahul G Krishnan, Shehroz S Khan, Sarath Chandar, Mirza Faisal Beg, Vidur Mahajan, Parasvil Patel, Sri Teja Mullapudi, Nikita Thakkar, Arrti A Bhasin, Atul Tyagi, Bing Ye and Alex Mihailidis
International Journal of Medical Science and Public Health
(2021-11-01)
www.ijncd.org

2021-09

Scaling Laws for the Few-Shot Adaptation of Pre-trained Image Classifiers
Gabriele Prato, Simon Guiroy, Ethan Caballero, Irina Rish and Sarath Chandar
arXiv preprint arXiv:2110.06990
(2021-09-29)
ui.adsabs.harvard.edu (PDF)

2021-08

Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen, Siva Reddy and Sarath Chandar
arXiv preprint arXiv:2108.04840
(2021-08-10)
ui.adsabs.harvard.edu (PDF)
A Survey of Data Augmentation Approaches for NLP
Steven Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura and Eduard Hovy
MLMLM: Link Prediction with Mean Likelihood Masked Language Model
Louis Clouatre, Philippe Trempe, Amal Zouaq and Sarath Chandar

2021-07

Demystifying Neural Language Models' Insensitivity to Word-Order
Louis Clouatre, Prasanna Parthasarathi, Amal Zouaq and Sarath Chandar
arXiv preprint arXiv:2107.13955
(2021-07-29)
ui.adsabs.harvard.edu (PDF)
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic Loss
Prasanna Parthasarathi, Mohamed Abdelsalam, Sarath Chandar and Joelle Pineau
SIGDIAL 2021
(2021-07-29)
aclanthology.org
Do Encoder Representations of Generative Dialogue Models Have Sufficient Summary of the Information about the Task
Prasanna Parthasarathi, Joelle Pineau and Sarath Chandar
SIGDIAL 2021
(2021-07-29)
aclanthology.org
Continuous Coordination As a Realistic Scenario for Lifelong Learning
Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville and Sarath Chandar

2021-06

Memory Augmented Optimizers for Deep Learning
Paul-Aymeric McRae, Prasanna Parthasarathi, Mahmoud Assran and Sarath Chandar
arXiv preprint arXiv:2106.10708
(2021-06-20)
ui.adsabs.harvard.edu (PDF)
Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task
Prasanna Parthasarathi, Joelle Pineau and Sarath Chandar
arXiv preprint arXiv:2106.10622
(2021-06-20)
dblp.uni-trier.de (PDF)
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau and Sarath Chandar
arXiv preprint arXiv:2106.10619
(2021-06-20)
arxiv.org (PDF)

2021-05

Towered Actor Critic For Handling Multiple Action Types In Reinforcement Learning For Drug Discovery
Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor and Sarath Chandar
AAAI 2021
(2021-05-18)
ojs.aaai.org (PDF)
TAG: Task-based Accumulated Gradients for Lifelong Learning
Pranshu Malviya, Balaraman Ravindran and Sarath Chandar
arXiv preprint arXiv:2105.05155
(2021-05-11)
ui.adsabs.harvard.edu (PDF)
Out-of-Distribution Classification and Clustering
Gabriele Prato and Sarath Chandar
(venue unknown)
(2021-05-04)
openreview.net (PDF)
Maximum Reward Formulation In Reinforcement Learning
Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor and Sarath Chandar
arXiv e-prints
(2021-05-04)
ui.adsabs.harvard.edu (PDF)

2021-01

IIRC: Incremental Implicitly-Refined Classification
Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani and Sarath Chandar
CVPR 2021
(2021-01-12)
openaccess.thecvf.com (PDF)

2020-08

How To Evaluate Your Dialogue System: Probe Tasks as an Alternative for Token-level Evaluation Metrics
Prasanna Parthasarathi, Joelle Pineau and Sarath Chandar
arXiv preprint arXiv:2008.10427
(2020-08-24)
ui.adsabs.harvard.edu (PDF)

2020-07

Slot Contrastive Networks: A Contrastive Approach for Representing Objects
Evan Racah and Sarath Chandar
arXiv preprint arXiv:2007.09294
(2020-07-18)
ui.adsabs.harvard.edu (PDF)
Learning to Navigate in Synthetically Accessible Chemical Space Using Reinforcement Learning
Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Haoran Wei, Yashaswi Pathak, Shengchao Liu, Simon Blackburn, Karam Thomas, Connor Coley, Jian Tang, Sarath Chandar and Yoshua Bengio
ICML 2020
(2020-07-12)
proceedings.mlr.press (PDF)
The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning
Harm Van Seijen, Hadi Nekoei, Evan Racah and Sarath Chandar

2020-06

Chaotic Continual Learning
Touraj Laleh, Mojtaba Faramarzi, Irina Rish and Sarath Chandar
(venue unknown)
(2020-06-14)
openreview.net (PDF)
PatchUp: A Regularization Technique for Convolutional Neural Networks
Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma and Sarath Chandar
arXiv preprint arXiv:2006.07794
(2020-06-14)
ui.adsabs.harvard.edu (PDF)

2020-04

Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning
Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam M. J. Thomas, Simon Blackburn, Connor W. Coley, Jian Tang, Sarath Chandar and Yoshua Bengio
arXiv preprint arXiv:2004.12485
(2020-04-26)
ui.adsabs.harvard.edu (PDF)

2020-03

On challenges in training recurrent neural networks
Sarath Chandar Anbil Parthipan
(venue unknown)
(2020-03-25)
papyrus.bib.umontreal.ca
The Hanabi Challenge: A New Frontier for AI Research
Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare and Michael Bowling
Artificial Intelligence
(2020-03-01)
www.sciencedirect.com (PDF)

2020-01

Toward Training Recurrent Neural Networks for Lifelong Learning
Shagun Sodhani, Sarath Chandar and Yoshua Bengio
Neural Computation
(2020-01-01)
direct.mit.edu

2019-07

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study
Chinnadhurai Sankar, Sandeep Subramanian, Christopher J. Pal, Sarath Chandar and Yoshua Bengio

2019-06

Towards Lossless Encoding of Sentences
Gabriele Prato, Mathieu Duchesneau, Sarath Chandar and Alain Tapp

2019-05

Structure Learning for Neural Module Networks
Vardaan Pahuja, Jie Fu, Sarath Chandar and Christopher Joseph Pal

2019-01

Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies
Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou and Yoshua Bengio

2018-11

Environments for Lifelong Reinforcement Learning
Khimya Khetarpal, Shagun Sodhani, Sarath Chandar and Doina Precup
arXiv preprint arXiv:1811.10732
(2018-11-26)
dblp.uni-trier.de (PDF)
On Training Recurrent Neural Networks for Lifelong Learning
Shagun Sodhani, Sarath Chandar and Yoshua Bengio
arXiv preprint arXiv:1811.07017
(2018-11-16)
arxiv.org (PDF)

2018-01

A Deep Reinforcement Learning Chatbot (Short Version)
Iulian Vlad Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeswar, Alexandre de Brébisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau and Yoshua Bengio
arXiv preprint arXiv:1801.06700
(2018-01-20)
ui.adsabs.harvard.edu (PDF)

Publications collected and formatted using Paperoni