Sarath Chandar

Core Academic Member
Assistant Professor, École Polytechnique de Montréal

Publications

2021-05

TAG: Task-based Accumulated Gradients for Lifelong learning.
Pranshu Malviya, Balaraman Ravindran and Sarath Chandar
arXiv preprint arXiv:2105.05155
(2021-05-11)
dblp.uni-trier.de (PDF)
A Survey of Data Augmentation Approaches for NLP.
Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura and Eduard H. Hovy
arXiv preprint arXiv:2105.03075
(2021-05-07)
dblp.uni-trier.de (PDF)
Out-of-Distribution Classification and Clustering
Gabriele Prato and Sarath Chandar
(venue unknown)
(2021-05-04)
openreview.net (PDF)
Maximum Reward Formulation In Reinforcement Learning
SaiKrishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor and Sarath Chandar
arXiv e-prints
(2021-05-04)
ui.adsabs.harvard.edu (PDF)

2021-03

Continuous Coordination As a Realistic Scenario for Lifelong Learning.
Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron C. Courville and Sarath Chandar
arXiv preprint arXiv:2103.03216
(2021-03-04)
ui.adsabs.harvard.edu (PDF)

2020-12

IIRC: Incremental Implicitly-Refined Classification.
Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani and Sarath Chandar
CVPR 2021
(2020-12-23)
ui.adsabs.harvard.edu (PDF)
The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning
Harm Van Seijen, Hadi Nekoei, Evan Racah and Sarath Chandar

2020-09

MLMLM: Link Prediction with Mean Likelihood Masked Language Model.
Louis Clouâtre, Philippe Trempe, Amal Zouaq and Sarath Chandar
arXiv preprint arXiv:2009.07058
(2020-09-15)
ui.adsabs.harvard.edu (PDF)

2020-08

How To Evaluate Your Dialogue System: Probe Tasks as an Alternative for Token-level Evaluation Metrics
Prasanna Parthasarathi, Joelle Pineau and Sarath Chandar
arXiv preprint arXiv:2008.10427
(2020-08-24)
ui.adsabs.harvard.edu (PDF)

2020-07

Slot Contrastive Networks: A Contrastive Approach for Representing Objects.
Evan Racah and Sarath Chandar
arXiv preprint arXiv:2007.09294
(2020-07-18)
ui.adsabs.harvard.edu (PDF)
Learning to Navigate in Synthetically Accessible Chemical Space Using Reinforcement Learning
Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Haoran Wei, Yashaswi Pathak, Shengchao Liu, Simon Blackburn, Karam Thomas, Connor Coley, Jian Tang, Sarath Chandar and Yoshua Bengio
ICML 2020
(2020-07-12)
proceedings.mlr.press

2020-06

PatchUp: A Regularization Technique for Convolutional Neural Networks
Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma and Sarath Chandar
arXiv preprint arXiv:2006.07794
(2020-06-14)
ui.adsabs.harvard.edu (PDF)

2020-04

Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning
Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam M. J. Thomas, Simon Blackburn, Connor W. Coley, Jian Tang, Sarath Chandar and Yoshua Bengio
arXiv preprint arXiv:2004.12485
(2020-04-26)
aps.arxiv.org (PDF)

2020-03

On challenges in training recurrent neural networks
Sarath Chandar Anbil Parthipan
(venue unknown)
(2020-03-25)
papyrus.bib.umontreal.ca
The Hanabi Challenge: A New Frontier for AI Research
Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare and Michael Bowling
Artificial Intelligence
(2020-03-01)
www.sciencedirect.com (PDF)

2020-01

Toward Training Recurrent Neural Networks for Lifelong Learning.
Shagun Sodhani, Sarath Chandar and Yoshua Bengio
Neural Computation
(2020-01-01)
direct.mit.edu (PDF)

2019-07

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study
Chinnadhurai Sankar, Sandeep Subramanian, Christopher J. Pal, Sarath Chandar and Yoshua Bengio

2019-06

Towards Lossless Encoding of Sentences
Gabriele Prato, Mathieu Duchesneau, Sarath Chandar and Alain Tapp

2019-05

Structure Learning for Neural Module Networks.
Vardaan Pahuja, Jie Fu, Sarath Chandar and Christopher Joseph Pal

2019-01

Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies
Sarath Chandar, Chinnadhurai Sankar, Eugene Vorontsov, Samira Ebrahimi Kahou and Yoshua Bengio

2018-11

Environments for Lifelong Reinforcement Learning.
Khimya Khetarpal, Shagun Sodhani, Sarath Chandar and Doina Precup
arXiv preprint arXiv:1811.10732
(2018-11-26)
dblp.uni-trier.de (PDF)
On Training Recurrent Neural Networks for Lifelong Learning
Shagun Sodhani, Sarath Chandar and Yoshua Bengio
arXiv preprint arXiv:1811.07017
(2018-11-16)
arxiv.org (PDF)

2018-01

A Deep Reinforcement Learning Chatbot (Short Version)
Iulian Vlad Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeswar, Alexandre de Brébisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau and Yoshua Bengio
arXiv preprint arXiv:1801.06700
(2018-01-20)
ui.adsabs.harvard.edu (PDF)

Publications collected and formatted using Paperoni
