
Jian Tang

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, HEC Montréal, Department of Decision Sciences
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Founder, BioGeometry
Research Topics
Computational Biology
Deep Learning
Generative Models
Graph Neural Networks
Molecular Modeling

Biography

Jian Tang is an Associate Professor in the Department of Decision Sciences at HEC Montréal. He is also an Adjunct Professor in the Department of Computer Science and Operations Research at Université de Montréal and a Core Academic Member at Mila - Quebec AI Institute. He holds a Canada CIFAR AI Chair and is the Founder of BioGeometry, an AI startup that focuses on generative AI for antibody discovery. Tang’s main research interests are deep generative models and graph machine learning, together with their applications to drug discovery. He is an international leader in graph machine learning; LINE, his node representation method, has been widely recognized and cited more than five thousand times. He has also done pioneering work on AI for drug discovery, such as developing TorchDrug and TorchProtein, the first open-source machine learning frameworks for drug discovery.

Current Students

PhD - Université de Montréal (8)
Research Intern - McGill University (1)
Collaborating researcher (3, including one at Carnegie Mellon University)

Publications

8-inch Wafer-scale Epitaxial Monolayer MoS2.
Hua Yu
Liangfeng Huang
Lanying Zhou
Yalin Peng
Xiuzhen Li
Peng Yin
Jiaojiao Zhao
M. Zhu
Shuopei Wang
Jieying Liu
Hongyue Du
Songge Zhang
Yuchao Zhou
Nianpeng Lu
Kaihui Liu
Na Li
Guangyu Zhang
Large-scale, high-quality, and uniform monolayer MoS2 films are crucial for their applications in next-generation electronics and optoelectronics. Epitaxy is a mainstream technique for achieving high-quality MoS2 films and has been demonstrated at wafer scales up to 4 inches. In this study, we report the epitaxial growth of 8-inch wafer-scale, highly oriented monolayer MoS2 on sapphire with excellent spatial homogeneity, using a specially designed vertical chemical vapor deposition (VCVD) system. Field-effect transistors (FETs) based on the as-grown 8-inch wafer-scale monolayer MoS2 film were fabricated and exhibited high performance, with an average mobility of 53.5 cm²V⁻¹s⁻¹ and an on/off ratio of 10⁷. In addition, batch fabrication of logic devices and 11-stage ring oscillators was also demonstrated, showcasing excellent electrical functionality. Our work may pave the way for MoS2 in practical, industry-scale applications.
Model-independent Approach of the JUNO 8B Solar Neutrino Program
Jun Zhao
B. Yue
Haoqi Lu
Yufeng Li
J. Ling
Zeyuan Yu
Angel Abusleme
Thomas Adam
Shakeel Ahmad
Rizwan Ahmed
Sebastiano Aiello
Muhammad Akram
Abid Aleem
Tsagkarakis Alexandros
F. An
Q. An
Giuseppe Andronico
Nikolay Anfimov
Vito Antonelli
Tatiana Antoshkina
Burin Asavapibhop
J. André
Didier Auguste
Weidong Bai
Nikita Balashov
Wander Baldini
Andrea Barresi
Davide Basilico
Eric Baussan
Marco Bellato
Antonio Bergnoli
Thilo Birkenfeld
Sylvie Blin
D. Blum
Simon Blyth
Anastasia Bolshakova
Mathieu Bongrand
Clément Bordereau
Dominique Breton
Augusto Brigatti
Riccardo Brugnera
Riccardo Bruno
Antonio Budano
Jose Busto
I. Butorov
Anatael Cabrera
Barbara Caccianiga
Hao Cai
Xiao Cai
Yanke Cai
Z. Cai
Riccardo Callegari
Antonio Cammi
Agustin Campeny
C. Cao
Guofu Cao
Jun Cao
Rossella Caruso
C. Cerna
Chi Chan
Jinfan Chang
Yun Chang
Guoming Chen
Pingping Chen
Po-An Chen
Shaomin Chen
Xurong Chen
Yixue Chen
Yu Chen
Zhiyuan Chen
Zikang Chen
Jie Cheng
Yaping Cheng
Alexander Chepurnov
Alexey Chetverikov
Davide Chiesa
Pietro Chimenti
Artem Chukanov
Gérard Claverie
Catia Clementi
Barbara Clerbaux
Marta Colomer Molla
Selma Conforti Di Lorenzo
Daniele Corti
Flavio Dal Corso
Olivia Dalager
C. Taille
Z. Y. Deng
Ziyan Deng
Wilfried Depnering
Marco Diaz
Xuefeng Ding
Yayun Ding
Bayu Dirgantara
Sergey Dmitrievsky
Tadeas Dohnal
Dmitry Dolzhikov
Georgy Donchenko
Jianmeng Dong
Evgeny Doroshkevich
Marcos Dracos
Frédéric Druillole
Ran Du
S. X. Du
Stefano Dusini
Martin Dvorak
Timo Enqvist
H. Enzmann
Andrea Fabbri
D. Fan
Lei Fan
Jian Fang
Wen Fang
Marco Fargetta
Dmitry Fedoseev
Zheng-hao Fei
Li-Cheng Feng
Qichun Feng
R. Ford
Amélie Fournier
H. Gan
Feng Gao
Alberto Garfagnini
Arsenii Gavrikov
Marco Giammarchi
Nunzio Giudice
Maxim Gonchar
G. Gong
Hui Gong
Yuri Gornushkin
A. Gottel
Marco Grassi
Maxim Gromov
Vasily Gromov
M. H. Gu
X. Gu
Yunting Gu
M. Guan
Yuduo Guan
Nunzio Guardone
Cong Guo
Jingyuan Guo
Wanlei Guo
Xinheng Guo
Yuhang Guo
Paul Hackspacher
Caren Hagner
Ran Han
Yang Han
Miao He
W. He
Tobias Heinz
Patrick Hellmuth
Yue-kun Heng
Rafael Herrera
Y. Hor
Shaojing Hou
Yee Hsiung
Bei-Zhen Hu
Hang Hu
Jianrun Hu
Jun Hu
Shouyang Hu
Tao Hu
Yuxiang Hu
Zhuojun Hu
Guihong Huang
Hanxiong Huang
Kaixuan Huang
Wenhao Huang
Xinglong Huang
X. T. Huang
Yongbo Huang
Jiaqi Hui
L. Huo
Wenju Huo
Cédric Huss
Safeer Hussain
Ara Ioannisian
Roberto Isocrate
Beatrice Jelmini
Ignacio Jeria
Xiaolu Ji
Huihui Jia
Junji Jia
Siyu Jian
Di Jiang
Wei Jiang
Xiaoshan Jiang
X. Jing
Cécile Jollet
L. Kalousis
Philipp Kampmann
Li Kang
Rebin Karaparambil
Narine Kazarian
Amina Khatun
Khanchai Khosonthongkee
Denis Korablev
K. Kouzakov
Alexey Krasnoperov
Nikolay Kutovskiy
Pasi Kuusiniemi
Tobias Lachenmaier
Cecilia Landini
Sébastien Leblanc
Victor Lebrin
F. Lefèvre
R. Lei
Rupert Leitner
Jason Leung
Daozheng Li
Demin Li
Fei Li
Fule Li
Gaosong Li
Huiling Li
Mengzhao Li
Min Li
Nan Li
Qingjiang Li
Ruhui Li
Ruiting Lei
Shanfeng Li
Tao Li
Teng Li
Weidong Li
Wei-guo Li
Xiaomei Li
Xiaonan Li
Xinglong Li
Yi Li
Yichen Li
Zepeng Li
Zhaohan Li
Zhibing Li
Ziyuan Li
Zonghui Li
Hao Liang
Jiaming Yan
Ayut Limphirat
G. Lin
Shengxin Lin
Tao Lin
Ivano Lippi
Yang Liu
Haidong Liu
Hao Liu
Hongbang Liu
Hongjuan Liu
Hongtao Liu
Hui Liu
Jianglai Liu
Jinchang Liu
Min Liu
Qian Liu
Q. Liu
Runxuan Liu
Shubin Liu
Shulin Liu
Xiaowei Liu
Xiwen Liu
Yan Liu
Yunzhe Liu
Alexey Lokhov
Paolo Lombardi
Claudio Lombardo
K. Loo
Chuan Lu
Jingbin Lu
Junguang Lu
Shuxiang Lu
Bayarto Lubsandorzhiev
Sultim Lubsandorzhiev
Livia Ludhova
Arslan Lukanov
Daibin Luo
F. Luo
Guang Luo
Shu Luo
Wu Luo
Xiaojie Luo
Vladimir Lyashuk
B. Ma
Bing Ma
R. Q. Ma
Si Ma
Xiaoyan Ma
Xubo Ma
Jihane Maalmi
Jingyu Mai
Yury Malyshkin
Roberto Carlos Mandujano
Fabio Mantovani
Francesco Manzali
Xin Mao
Yajun Mao
S. Mari
F. Marini
Cristina Martellini
Gisèle Martin-chassard
Agnese Martini
Matthias Mayer
Davit Mayilyan
Ints Mednieks
Yu Meng
Anselmo Meregaglia
Emanuela Meroni
David J. Meyhofer
Mauro Mezzetto
Jonathan Andrew Miller
Lino Miramonti
Paolo Montini
Michele Montuschi
Axel Muller
M. Nastasi
D. Naumov
Elena Naumova
Diana Navas-Nicolas
Igor Nemchenok
Minh Thuan Nguyen Thi
Alexey Nikolaev
F. Ning
Zhe Ning
Hiroshi Nunokawa
Lothar Oberauer
Juan Pedro Ochoa-Ricoux
Alexander Olshevskiy
Domizia Orestano
Fausto Ortica
Rainer Othegraven
A. Paoloni
Sergio Parmeggiano
Y. P. Pei
Nicomede Pelliccia
Anguo Peng
Yu Peng
Yuefeng Peng
Z-R Peng
Frédéric Perrot
P. Petitjean
Fabrizio Petrucci
Oliver Pilarczyk
Luis Felipe Piñeres Rico
Artyom Popov
Pascal Poussot
Ezio Previtali
Fazhi Qi
M. Qi
Sen Qian
X. Qian
Zhen Qian
Hao-xue Qiao
Zhonghua Qin
S. Qiu
Gioacchino Ranucci
Neill Raper
A. Re
Henning Rebber
Abdel Rebii
Mariia Redchuk
Bin Ren
Jie Ren
Barbara Ricci
Mariam Rifai
Mathieu Roche
Narongkiat Rodphai
Aldo M. Romani
Bedřich Roskovec
X. Ruan
Arseniy Rybnikov
Andrey Sadovsky
Paolo Saggese
Simone Sanfilippo
Anut Sangka
Utane Sawangwit
Julia Sawatzki
Michaela Schever
Cédric Schwab
Konstantin Schweizer
Alexandr Selyunin
Andrea Serafini
Giulio Settanta
M. Settimo
Zhuang Shao
V. Sharov
Arina Shaydurova
Jingyan Shi
Yanan Shi
Vitaly Shutov
Andrey Sidorenkov
Fedor Šimkovic
Chiara Sirignano
Jaruchit Siripak
Monica Sisti
Maciej Slupecki
Mikhail Smirnov
Oleg Smirnov
Thiago Sogo-Bezerra
Sergey Sokolov
Julanan Songwadhana
Boonrucksar Soonthornthum
Albert Sotnikov
Ondřej Šrámek
Warintorn Sreethawong
A. Stahl
Luca Stanco
Konstantin Stankevich
Dušan Štefánik
Hans Steiger
Jochen Steinmann
Tobias Sterr
M. Stock
Virginia Strati
Alexander Studenikin
Jun Su
Shifeng Sun
Xilei Sun
Yongjie Sun
Yongzhao Sun
Zhengyang Sun
Narumon Suwonjandee
Michal Szelezniak
Qiang Tang
Quan Tang
Xiao Tang
Alexander Tietzsch
Igor Tkachev
Tomas Tmej
M. Torri
K. Treskov
Andrea Triossi
Giancarlo Troni
Wladyslaw Trzaska
Cristina Tuve
Nikita Ushakov
Vadim Vedin
Giuseppe Verde
Maxim Vialkov
Benoit Viaud
Cornelius Moritz Vollbrecht
C. Volpe
Katharina von Sturm
Vit Vorobel
Dmitriy Voronin
Lucia Votano
Pablo Walker
Caishen Wang
Chung-Hsiang Wang
En Wang
Guoli Wang
Jian Wang
Jun Wang
Lucinda W. Wang
Meifen Wang
Meng Wang
Ruiguang Wang
Siguang Wang
Wei Wang
Wenshuai Wang
Xi Wang
Xiangyue Wang
Yangfu Wang
Yaoguang Wang
Yi Xing Wang
Yifang Wang
Yuanqing Wang
Yuman Wang
Zhe Wang
Zheng Wang
Zhimin Wang
Zongyi Wang
Apimook Watcharangkool
Wei Wei
Wenlu Wei
Yadong Wei
K. Wen
Kaile Wen
Christopher Wiebusch
S. Wong
Bjoern Wonsak
Diru Wu
Qun Wu
Zhi Wu
Michael Wurm
Jacques Wurtz
Christian Wysotzki
Yufei Xi
D. Xia
Xiang Xiao
Xiaochuan Xie
Yu-guang Xie
Z. P. Xie
Zhao-Liang Xin
Z. Xing
Benda D. Xu
Chengze Xu
Donglian Xu
Fanrong Xu
The physics potential of detecting 8B solar neutrinos will be exploited at the Jiangmen Underground Neutrino Observatory (JUNO) in a model-independent manner, using three distinct channels: the charged-current (CC), neutral-current (NC), and elastic-scattering (ES) interactions. Thanks to the largest-ever mass of 13C nuclei in a liquid scintillator detector and the expected low background level, 8B solar neutrinos are observable in the CC and NC interactions on 13C for the first time. By virtue of optimized event selections and muon-veto strategies, backgrounds from accidental coincidences, muon-induced isotopes, and external sources can be greatly suppressed. Excellent signal-to-background ratios can be achieved in the CC, NC, and ES channels to guarantee the observation of 8B solar neutrinos. From the sensitivity studies performed in this work, we show that JUNO, with 10 yr of data, can reach 1σ precision levels of 5%, 8%, and 20% for the 8B neutrino flux, sin²θ₁₂, and Δm²₂₁, respectively. Probing the details of both solar physics and neutrino physics in this way would be unique and valuable. In addition, when combined with the Sudbury Neutrino Observatory measurement, the world's best precision of 3% is expected for the measurement of the 8B neutrino flux.
Machine-learning-assisted and real-time-feedback-controlled growth of InAs/GaAs quantum dots
Chao Shen
Wenkang Zhan
Kaiyao Xin
Manyang Li
Zhenyu Sun
Hui Cong
Zhaofeng Wu
Chi Xu
Bo Xu
Zhongming Wei
Chao Zhao
Zhanguo Wang
Chunlai Xue
Dual quantum spin Hall insulator by density-tuned correlations in TaIrTe4.
Thomas Siyuan Ding
Hongyu Chen
Anyuan Gao
Tiema Qian
Zumeng Huang
Zhe Sun
Xin Han
Alex Strasser
Jiangxu Li
Michael Geiwitz
Mohamed Shehabeldin
Vsevolod Belosevich
Zihan Wang
Yiping Wang
Kenji Watanabe
Takashi Taniguchi
David C. Bell
Ziqiang Wang
Liang Fu
Yang Zhang
Xiaofeng Qian
Kenneth S. Burch
Youguo Shi
Ni Ni
Guoqing Chang
Su-Yang Xu
Qiong Ma
Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science
Xiangru Tang
Qiao Jin
Kunlun Zhu
Tongxin Yuan
Yichi Zhang
Wangchunshu Zhou
Meng Qu
Yilun Zhao
Zhuosheng Zhang
Arman Cohan
Zhiyong Lu
Mark Gerstein
F³low: Frame-to-Frame Coarse-grained Molecular Dynamics with SE(3) Guided Flow Matching
Shaoning Li
Yusong Wang
Mingyu Li
Bin Shao
Nanning Zheng
Zhang Jian
Fusing Neural and Physical: Augment Protein Conformation Sampling with Tractable Simulations
Jiarui Lu
Zuobai Zhang
Bozitao Zhong
Chence Shi
Protein dynamics are common and important for proteins' biological functions and properties, and their study usually involves time-consuming molecular dynamics (MD) simulations *in silico*. Recently, generative models have been leveraged as surrogate samplers to obtain conformation ensembles orders of magnitude faster, without requiring any simulation data (a "zero-shot" inference). However, being agnostic of the underlying energy landscape, the accuracy of such generative models may still be limited. In this work, we explore the few-shot setting of such a pre-trained generative sampler, which incorporates MD simulations in a tractable manner. Specifically, given a target protein of interest, we first acquire some seeding conformations from the pre-trained sampler, followed by a number of physical simulations run in parallel starting from these seeding samples. We then fine-tune the generative model using the simulation trajectories above to become a target-specific sampler. Experimental results demonstrate the superior performance of such a few-shot conformation sampler at a tractable computational cost.
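As a rough illustration of the few-shot recipe above, the following Python/PyTorch sketch stubs out the three stages: sampling seeding conformations from a pre-trained generative sampler, running short MD simulations in parallel from each seed, and fine-tuning the sampler on the resulting trajectories. `GenerativeSampler`, `run_md_simulation`, and all shapes and hyperparameters are hypothetical stand-ins, not the paper's actual code.

```python
# Hedged sketch of the few-shot fine-tuning loop; every component here is a
# stand-in (the paper's sampler, MD engine, and objective are not shown).
import torch
import torch.nn as nn

class GenerativeSampler(nn.Module):
    """Stub for a pre-trained conformation sampler (e.g. a diffusion model)."""
    def __init__(self, dim: int = 3):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def sample(self, n: int, n_atoms: int) -> torch.Tensor:
        # Zero-shot sampling: n conformations, each of shape (n_atoms, 3).
        return torch.randn(n, n_atoms, 3)

    def loss(self, conformations: torch.Tensor) -> torch.Tensor:
        # Placeholder training objective (e.g. denoising score matching).
        return self.net(conformations).pow(2).mean()

def run_md_simulation(seed_conf: torch.Tensor, n_steps: int) -> torch.Tensor:
    # Stub for a short MD trajectory started from one seeding conformation.
    return seed_conf + 0.01 * torch.randn(n_steps, *seed_conf.shape)

sampler = GenerativeSampler()
optimizer = torch.optim.Adam(sampler.parameters(), lr=1e-4)

# 1) Acquire seeding conformations from the pre-trained sampler.
seeds = sampler.sample(n=8, n_atoms=128)

# 2) Run short physical simulations in parallel starting from the seeds.
trajectories = torch.cat([run_md_simulation(s, n_steps=100) for s in seeds])

# 3) Fine-tune the generative model on the trajectories so it becomes a
#    target-specific conformation sampler.
for epoch in range(10):
    optimizer.zero_grad()
    sampler.loss(trajectories).backward()
    optimizer.step()
```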
Structure-Informed Protein Language Model
Zuobai Zhang
Jiarui Lu
Vijil Chenthamarakshan
Aurelie Lozano
Payel Das
Protein language models are a powerful tool for learning protein representations through pre-training on vast protein sequence datasets. However, traditional protein language models lack explicit structural supervision, despite its relevance to protein function. To address this issue, we introduce the integration of remote homology detection to distill structural information into protein language models without requiring explicit protein structures as input. We evaluate the impact of this structure-informed training on downstream protein function prediction tasks. Experimental results reveal consistent improvements in function annotation accuracy for EC number and GO term prediction. Performance on mutant datasets, however, varies based on the relationship between targeted properties and protein structures. This underscores the importance of considering this relationship when applying structure-aware training to protein function prediction tasks. Code and model weights will be made available upon acceptance.
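To make the training recipe concrete, here is a minimal sketch of what structure-informed pre-training could look like: a standard masked-language-model head trained jointly with a sequence-level remote-homology (fold-class) head, so structural signal enters through labels rather than explicit structures. The architecture, vocabulary size, and fold count below are illustrative assumptions, not the paper's released model.

```python
# Joint masked-LM + remote-homology objective (illustrative sizes throughout).
import torch
import torch.nn as nn

VOCAB, N_FOLDS, DIM = 33, 1195, 256  # assumed alphabet and fold-label counts

class StructureInformedPLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(DIM, VOCAB)     # per-residue token prediction
        self.fold_head = nn.Linear(DIM, N_FOLDS)  # sequence-level homology label

    def forward(self, tokens: torch.Tensor):
        h = self.encoder(self.embed(tokens))
        return self.mlm_head(h), self.fold_head(h.mean(dim=1))

model = StructureInformedPLM()
ce = nn.CrossEntropyLoss()

tokens = torch.randint(0, VOCAB, (4, 64))      # toy batch of sequences
mlm_targets = tokens.clone()                   # (masking omitted for brevity)
fold_labels = torch.randint(0, N_FOLDS, (4,))  # remote-homology class labels

mlm_logits, fold_logits = model(tokens)
loss = ce(mlm_logits.reshape(-1, VOCAB), mlm_targets.reshape(-1)) \
     + ce(fold_logits, fold_labels)            # structure enters via this term
loss.backward()
```

The design point is that the homology head only adds a classification loss during pre-training, so inference still needs nothing beyond the sequence.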
Heterogeneous ensemble prediction model of CO emission concentration in municipal solid waste incineration process using virtual data and real data hybrid-driven
Runyu Zhang
Heng Xia
Jiakun Chen
Wen Yu
JunFei Qiao
Iterative Graph Self-Distillation
Hanlin Zhang
Shuai Lin
Weiyang Liu
Pan Zhou
Xiaodan Liang
Eric P. Xing
Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose a method called Iterative Graph Self-Distillation (IGSD), which learns graph-level representations in an unsupervised manner through instance discrimination using a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model using an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representation of the graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning the IGSD-trained models with self-training can further improve graph representation learning. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which well validates the superiority of IGSD.
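The following sketch shows the core teacher-student step the abstract describes, with a plain linear encoder standing in for a GNN and Gaussian noise standing in for graph-diffusion augmentation; `ema_update` and `distillation_loss` are illustrative names, not IGSD's actual code.

```python
# Teacher-student self-distillation with an EMA teacher (all stubs).
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Linear(16, 16)  # stand-in for a GNN graph encoder
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)        # teacher is never trained directly

def ema_update(teacher, student, decay: float = 0.99):
    # Teacher weights track an exponential moving average of the student's.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1 - decay)

def distillation_loss(graph_feats: torch.Tensor) -> torch.Tensor:
    # Two augmented views of the same graphs (noise stands in for diffusion).
    v1 = graph_feats + 0.1 * torch.randn_like(graph_feats)
    v2 = graph_feats + 0.1 * torch.randn_like(graph_feats)
    z_s = F.normalize(student(v1), dim=-1)
    with torch.no_grad():
        z_t = F.normalize(teacher(v2), dim=-1)
    # The student predicts the teacher's representation of the paired view.
    return (2 - 2 * (z_s * z_t).sum(dim=-1)).mean()

feats = torch.randn(32, 16)        # toy batch of graph-level features
distillation_loss(feats).backward()
ema_update(teacher, student)       # EMA step after each optimizer update
```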
Deep Equilibrium Models For Algorithmic Reasoning
Sophie Xhonneux
Yu He
Andreea Deac
In this blogpost we discuss the idea of teaching neural networks to reach fixed points when reasoning. Specifically, on the algorithmic reas… (see more)oning benchmark CLRS the current neural networks are told the number of reasoning steps they need. While a quick fix is to add a termination network that predicts when to stop, a much more salient inductive bias is that the neural network shouldn't change it's answer any further once the answer is correct, i.e. it should reach a fixed point. This is supported by denotational semantics, which tells us that while loops that terminate are the minimum fixed points of a function. We implement this idea with the help of deep equilibrium models and discuss several hurdles one encounters along the way. We show on several algorithms from the CLRS benchmark the partial success of this approach and the difficulty in making it work robustly across all algorithms.