Jian Tang

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, HEC Montréal, Department of Decision Sciences
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Founder, BioGeometry
Research Topics
Computational Biology
Deep Learning
Generative Models
Graph Neural Networks
Molecular Modeling

Biography

Jian Tang is an Associate Professor in the Department of Decision Sciences at HEC Montréal. He is also an Adjunct Professor in the Department of Computer Science and Operations Research at Université de Montréal and a Core Academic Member at Mila - Quebec AI Institute. He is a Canada CIFAR AI Chair and the founder of BioGeometry, an AI startup focused on generative AI for antibody discovery. Tang's main research interests are deep generative models and graph machine learning, with applications to drug discovery. He is an international leader in graph machine learning; LINE, his node representation method, has been widely adopted and cited more than five thousand times. He has also done pioneering work on AI for drug discovery, including the development of TorchDrug and TorchProtein, the first open-source machine learning frameworks for drug discovery.

Current Students


Publications

The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges
Sitao Luan
Chenqing Hua
Qincheng Lu
Liheng Ma
Lirong Wu
Xinyu Wang
Minkai Xu
Xiao-Wen Chang
Rex Ying
Stan Z. Li
Stefanie Jegelka
The homophily principle, i.e., that nodes with the same labels or similar attributes are more likely to be connected, has been commonly believed to be the main reason for the superiority of Graph Neural Networks (GNNs) over traditional Neural Networks (NNs) on graph-structured data, especially on node-level tasks. However, recent work has identified a non-trivial set of datasets where GNNs' performance compared to NNs' is not satisfactory. Heterophily, i.e., low homophily, has been considered the main cause of this empirical observation. Researchers have begun to revisit and re-evaluate most existing graph models, including graph transformers and their variants, in the heterophily scenario across various kinds of graphs, e.g., heterogeneous graphs, temporal graphs, and hypergraphs. Moreover, numerous graph-related applications have been found to be closely related to the heterophily problem. In the past few years, considerable effort has been devoted to studying and addressing the heterophily issue. In this survey, we provide a comprehensive review of the latest progress on heterophilic graph learning, including an extensive summary of benchmark datasets and an evaluation of homophily metrics on synthetic graphs, a meticulous classification of the most up-to-date supervised and unsupervised learning methods, a thorough digestion of the theoretical analysis of homophily/heterophily, and a broad exploration of heterophily-related applications. Notably, through detailed experiments, we are the first to categorize benchmark heterophilic datasets into three sub-categories: malignant, benign, and ambiguous heterophily. The malignant and ambiguous datasets are identified as the truly challenging datasets for testing the effectiveness of new models on the heterophily challenge. Finally, we propose several challenges and future directions for heterophilic graph representation learning.
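The homophily notion the abstract builds on can be made concrete with a minimal sketch: edge homophily is the fraction of edges whose endpoints share a label, with low values indicating heterophilic graphs. This is a generic illustration of a standard metric, not the survey's benchmark code.

```python
# Minimal sketch of the edge-homophily metric: the fraction of edges
# joining same-label nodes. Low values indicate heterophilic graphs.
# Generic illustration, not the survey's exact benchmark code.

def edge_homophily(edges, labels):
    """edges: iterable of (u, v) pairs; labels: dict node -> class label."""
    edges = list(edges)
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# A 4-cycle with alternating labels is perfectly heterophilic:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = {0: "a", 1: "b", 2: "a", 3: "b"}
print(edge_homophily(edges, labels))  # 0.0
```

Datasets with values near 1 are homophilic; the survey's point is that a low value alone does not determine whether heterophily is "malignant" for GNNs.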
In-Context Learning, Can It Break Safety?
Sophie Xhonneux
David Dobre
Michael Noukhovitch
Augmenting Evolutionary Models with Structure-based Retrieval
Yining Huang
Zuobai Zhang
Debora Susan Marks
Pascal Notin
GraphAny: A Foundation Model for Node Classification on Any Graph
Jianan Zhao
Hesham Mostafa
Mikhail Galkin
Michael M. Bronstein
Zhaocheng Zhu
Foundation models that can perform inference on any new task without task-specific training have revolutionized machine learning in vision and language applications. However, applications involving graph-structured data remain challenging for foundation models, due to the unique feature and label spaces associated with each graph. Traditional graph ML models such as graph neural networks (GNNs) trained on graphs cannot perform inference on a new graph whose feature and label spaces differ from the training ones. Furthermore, existing models learn functions specific to the training graph and cannot generalize to new graphs. In this work, we tackle these two challenges with a new foundational architecture for inductive node classification named GraphAny. GraphAny models inference on a new graph as an analytical solution to a LinearGNN, thereby solving the first challenge. To solve the second challenge, we learn attention scores for each node to fuse the predictions of multiple LinearGNNs. Specifically, the attention module is carefully parameterized as a function of the entropy-normalized distance features between the LinearGNNs' predictions to ensure generalization to new graphs. Empirically, GraphAny trained on the Wisconsin dataset with only 120 labeled nodes can effectively generalize to 30 new graphs with an average accuracy of 67.26% in an inductive manner, surpassing GCN and GAT trained in the supervised regime, as well as other inductive baselines.
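The two ingredients the abstract names can be sketched in a few lines: a closed-form "LinearGNN" prediction from hop-propagated features, and a per-node attention that fuses several such predictions. The ridge-style solver, the distance-based attention scores, and all parameter names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch of the GraphAny idea: each "LinearGNN" makes a
# closed-form (regularized least-squares) prediction from propagated
# node features, and an attention module fuses several such
# predictions per node. Illustrative assumptions throughout.

def linear_gnn_predict(A_hat, X, Y_train, train_idx, hops, lam=1e-2):
    """Closed-form prediction from hop-propagated features A_hat^hops X."""
    H = X.copy()
    for _ in range(hops):  # feature propagation
        H = A_hat @ H
    Ht = H[train_idx]
    # ridge solution fit on the labeled nodes only
    W = np.linalg.solve(Ht.T @ Ht + lam * np.eye(H.shape[1]), Ht.T @ Y_train)
    return H @ W  # soft predictions for all nodes

def fuse_predictions(preds):
    """Fuse channel predictions with per-node softmax attention driven by
    inter-channel prediction distances (a simplification of the paper's
    entropy-normalized distance features)."""
    P = np.stack(preds)  # (channels, nodes, classes)
    C = len(preds)
    dists = np.array([[np.linalg.norm(P[i] - P[j], axis=-1)
                       for j in range(C)] for i in range(C)])  # (C, C, nodes)
    score = -dists.mean(axis=1)  # channels closer to consensus score higher
    att = np.exp(score) / np.exp(score).sum(axis=0, keepdims=True)
    return (att[..., None] * P).sum(axis=0)  # (nodes, classes)
```

Because both the per-channel fit and the fusion depend only on the new graph's own features and labels (not on trained graph-specific weights), predictions of this form can be produced on a graph never seen in training, which is the inductive property the abstract emphasizes.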
The 1st International Workshop on Graph Foundation Models (GFM)
Haitao Mao
Jianan Zhao
Xiaoxin He
Zhikai Chen
Qian Huang
Zhaocheng Zhu
Michael Bronstein
Xavier Bresson
Bryan Hooi
Haiyang Zhang
Xianfeng Tang
Luo Chen
Jiliang Tang
Dynamic System Modeling Using a Multisource Transfer Learning-Based Modular Neural Network for Industrial Application
Haoshan Duan
Xi Meng
JunFei Qiao
Establishing an accurate model of dynamic systems poses a challenge for complex industrial processes. Due to their ability to handle complex tasks, modular neural networks (MNN) have been widely applied to industrial process modeling. However, the phenomenon of domain drift caused by operating conditions may lead to a cold start of the model, which affects the performance of MNN. For this reason, a multisource transfer learning-based MNN (MSTL-MNN) is proposed in this study. First, the knowledge-driven transfer learning process is performed with domain similarity evaluation, knowledge extraction, and fusion, aiming to form an initial subnetwork in the target domain. Then, the positive transfer of effective knowledge can avoid the cold start problem of MNN. Second, during the data-driven fine-tuning process, a regularized self-organizing long short-term memory algorithm is designed to fine-tune the structure and parameters of the initial subnetwork, which can improve the prediction performance of MNN. Meanwhile, relevant theoretical analysis is given to ensure the feasibility of MSTL-MNN. Finally, the effectiveness of the proposed method is confirmed by two benchmark simulations and a real industrial dataset of a municipal solid waste incineration process. Experimental results demonstrate the merits of MSTL-MNN for industrial applications.
8-inch Wafer-scale Epitaxial Monolayer MoS2.
Hua Yu
Liangfeng Huang
Lanying Zhou
Yalin Peng
Xiuzhen Li
Peng Yin
Jiaojiao Zhao
M. Zhu
Shuopei Wang
Jieying Liu
Hongyue Du
Songge Zhang
Yuchao Zhou
Nianpeng Lu
Kaihui Liu
Na Li
Guangyu Zhang
Large-scale, high-quality, and uniform monolayer MoS2 films are crucial for their applications in next-generation electronics and optoelectronics. Epitaxy is a mainstream technique for achieving high-quality MoS2 films and has been demonstrated at wafer scales up to 4-inch. In this study, we report the epitaxial growth of 8-inch wafer-scale highly oriented monolayer MoS2 on sapphire with excellent spatial homogeneity, using a specially designed vertical chemical vapor deposition (VCVD) system. Field effect transistors (FETs) based on the as-grown 8-inch wafer-scale monolayer MoS2 film were fabricated and exhibited high performance, with an average mobility of 53.5 cm2V-1s-1 and an on/off ratio of 107. In addition, batch fabrication of logic devices and 11-stage ring oscillators was also demonstrated, showcasing excellent electrical functions. Our work may pave the way for MoS2 in practical industry-scale applications.
Model-independent Approach of the JUNO 8B Solar Neutrino Program
Jun-Zhang Zhao
B. Yue
Haoqi Lu
Yufeng Li
J. Ling
Zeyuan Yu
Angel Abusleme
Thomas Adam
Shakeel Ahmad
Rizwan Ahmed
Sebastiano Aiello
Muhammad Akram
Abid Aleem
Tsagkarakis Alexandros
Fengpeng An
Q. An
Giuseppe Andronico
Nikolay Anfimov
Vito Antonelli
Tatiana Antoshkina
Burin Asavapibhop
J. André
Didier Auguste
Weidong Bai
Nikita Balashov
Wander Baldini
Andrea Barresi
Davide Basilico
Eric Baussan
Marco Bellato
Antonio Bergnoli
Thilo Birkenfeld
Sylvie Blin
D. Blum
Simon Blyth
Anastasia Bolshakova
Mathieu Bongrand
Clément Bordereau
Dominique Breton
Augusto Brigatti
Riccardo Brugnera
Riccardo Bruno
Antonio Budano
Jose Busto
I. Butorov
Anatael Cabrera
Barbara Caccianiga
Hao Cai
Xiao Cai
Yanke Cai
Zucong Cai
Riccardo Callegari
Antonio Cammi
Agustin Campeny
C. Cao
Guofu Cao
Jun Cao
Rossella Caruso
C. Cerna
Chi Chan
Jinfan Chang
Yun Chang
Guoming Chen
Pingping Chen
Po-An Chen
Shaomin Chen
Xurong Chen
Yixue Chen
Yu Chen
Zhiyuan Chen
Zikang Chen
Jie Cheng
Yaping Cheng
Alexander Chepurnov
Alexey Chetverikov
Davide Chiesa
Pietro Chimenti
Artem Chukanov
Gérard Claverie
Catia Clementi
Barbara Clerbaux
Marta Colomer Molla
Selma Conforti Di Lorenzo
Daniele Corti
Flavio Dal Corso
Olivia Dalager
C. Taille
Z. Y. Deng
Ziyan Deng
Wilfried Depnering
Marco Diaz
Xuefeng Ding
Yayun Ding
Bayu Dirgantara
Sergey Dmitrievsky
Tadeas Dohnal
Dmitry Dolzhikov
Georgy Donchenko
Jianmeng Dong
Evgeny Doroshkevich
Marcos Dracos
Frédéric Druillole
Ran Du
S. X. Du
Stefano Dusini
Martin Dvorak
Timo Enqvist
H. Enzmann
Andrea Fabbri
D. Fan
Lei Fan
Jian Fang
Wen Fang
Marco Fargetta
Dmitry Fedoseev
Zheng-hao Fei
Li-Cheng Feng
Qichun Feng
R. Ford
Amélie Fournier
H. Gan
Feng Gao
Alberto Garfagnini
Arsenii Gavrikov
Marco Giammarchi
Nunzio Giudice
Maxim Gonchar
G. Gong
Hui Gong
Yuri Gornushkin
A. Gottel
Marco Grassi
Maxim Gromov
Vasily Gromov
M. H. Gu
Xiang Gu
Yunting Gu
M. Guan
Yuduo Guan
Nunzio Guardone
Cong Guo
Jingyuan Guo
Wanlei Guo
Xinheng Guo
Yuhang Guo
Paul Hackspacher
Caren Hagner
Ran Han
Yang Han
Miao He
W. He
Tobias Heinz
Patrick Hellmuth
Yue-kun Heng
Rafael Herrera
Yuenkeung Hor
Shaojing Hou
Yee Hsiung
Bei-Zhen Hu
Hang Hu
Jianrun Hu
Jun Hu
Shouyang Hu
Tao Hu
Yuxiang Hu
Zhuojun Hu
Guihong Huang
Hanxiong Huang
Kaixuan Huang
Wenhao Huang
Xinglong Huang
X. T. Huang
Yongbo Huang
Jiaqi Hui
L. Huo
Wenju Huo
Cédric Huss
Safeer Hussain
Ara Ioannisian
Roberto Isocrate
Beatrice Jelmini
Ignacio Jeria
Xiaolu Ji
Huihui Jia
Junji Jia
Siyu Jian
Di Jiang
Wei Jiang
Xiaoshan Jiang
X. Jing
Cécile Jollet
L. Kalousis
Philipp Kampmann
Li Kang
Rebin Karaparambil
Narine Kazarian
Amina Khatun
Khanchai Khosonthongkee
Denis Korablev
K. Kouzakov
Alexey Krasnoperov
Nikolay Kutovskiy
Pasi Kuusiniemi
Tobias Lachenmaier
Cecilia Landini
Sébastien Leblanc
Victor Lebrin
F. Lefèvre
R. Lei
Rupert Leitner
Jason Leung
Daozheng Li
Demin Li
Fei Li
Fule Li
Gaosong Li
Huiling Li
Mengzhao Li
Min Li
Nan Li
Qingjiang Li
Ruhui Li
Ruiting Lei
Shanfeng Li
Tao Li
Teng Li
Weidong Li
Wei-guo Li
Xiaomei Li
Xiaonan Li
Xinglong Li
Yi Li
Yichen Li
Zepeng Li
Zhaohan Li
Zhibing Li
Ziyuan Li
Zonghui Li
Hao Liang
Jiaming Yan
Ayut Limphirat
Gen Lin
Shengxin Lin
Tao Lin
Ivano Lippi
Yang Liu
Haidong Liu
H. Liu
Hongbang Liu
Hongjuan Liu
Hongtao Liu
Hui Liu
Jianglai Liu
Jinchang Liu
Min Liu
Qian Liu
Q. Liu
Runxuan Liu
Shubin Liu
Shulin Liu
Xiaowei Liu
Xiwen Liu
Yong Liu
Yunzhe Liu
Alexey Lokhov
Paolo Lombardi
Claudio Lombardo
K. Loo
Chuan Lu
Jingbin Lu
Junguang Lu
Shuxiang Lu
Bayarto Lubsandorzhiev
Sultim Lubsandorzhiev
Livia Ludhova
Arslan Lukanov
Daibin Luo
F. Luo
Guang Luo
Shu Luo
Wu Luo
Xiaojie Luo
Vladimir Lyashuk
B. Ma
Bing Ma
R. Q. Ma
Si Ma
Xiaoyan Ma
Xubo Ma
Jihane Maalmi
Jingyu Mai
Yury Malyshkin
Roberto Carlos Mandujano
Fabio Mantovani
Francesco Manzali
Xin Mao
Yajun Mao
S. Mari
F. Marini
Cristina Martellini
Gisèle Martin-chassard
Agnese Martini
Matthias Mayer
Davit Mayilyan
Ints Mednieks
Yu Meng
Anselmo Meregaglia
Emanuela Meroni
David J. Meyhofer
Mauro Mezzetto
Jonathan Andrew Miller
Lino Miramonti
Paolo Montini
Michele Montuschi
Axel Muller
M. Nastasi
D. Naumov
Elena Naumova
Diana Navas-Nicolas
Igor Nemchenok
Minh Thuan Nguyen Thi
Alexey Nikolaev
F. Ning
Zhe Ning
Hiroshi Nunokawa
Lothar Oberauer
Juan Pedro Ochoa-Ricoux
Alexander Olshevskiy
Domizia Orestano
Fausto Ortica
Rainer Othegraven
A. Paoloni
Sergio Parmeggiano
Y. P. Pei
Nicomede Pelliccia
Anguo Peng
Yu Peng
Yuefeng Peng
Z-R Peng
Frédéric Perrot
P. Petitjean
Fabrizio Petrucci
Oliver Pilarczyk
Luis Felipe Piñeres Rico
Artyom Popov
Pascal Poussot
Ezio Previtali
Fazhi Qi
M. Qi
Sen Qian
X. Qian
Zhen Qian
Hao-xue Qiao
Zhonghua Qin
S. Qiu
Gioacchino Ranucci
Neill Raper
A. Re
Henning Rebber
Abdel Rebii
Mariia Redchuk
Bin Ren
Jie Ren
Barbara Ricci
Mariam Rifai
Mathieu Roche
Narongkiat Rodphai
Aldo M. Romani
Bedřich Roskovec
X. Ruan
Arseniy Rybnikov
Andrey Sadovsky
Paolo Saggese
Simone Sanfilippo
Anut Sangka
Utane Sawangwit
Julia Sawatzki
Michaela Schever
Cédric Schwab
Konstantin Schweizer
Alexandr Selyunin
Andrea Serafini
Giulio Settanta
M. Settimo
Zhuang Shao
V. Sharov
Arina Shaydurova
Jingyan Shi
Yanan Shi
Vitaly Shutov
Andrey Sidorenkov
Fedor Šimkovic
Chiara Sirignano
Jaruchit Siripak
Monica Sisti
Maciej Slupecki
Mikhail Smirnov
Oleg Smirnov
Thiago Sogo-Bezerra
Sergey Sokolov
Julanan Songwadhana
Boonrucksar Soonthornthum
Albert Sotnikov
Ondřej Šrámek
Warintorn Sreethawong
A. Stahl
Luca Stanco
Konstantin Stankevich
Dušan Štefánik
Hans Steiger
Jochen Steinmann
Tobias Sterr
M. Stock
Virginia Strati
Alexander Studenikin
Jun Su
Shifeng Sun
Xilei Sun
Yongjie Sun
Yongzhao Sun
Zhengyang Sun
Narumon Suwonjandee
Michal Szelezniak
Qiang Tang
Quan Tang
Xiao Tang
Alexander Tietzsch
Igor Tkachev
Tomas Tmej
M. Torri
K. Treskov
Andrea Triossi
Giancarlo Troni
Wladyslaw Trzaska
Cristina Tuve
Nikita Ushakov
Vadim Vedin
Giuseppe Verde
Maxim Vialkov
Benoit Viaud
Cornelius Moritz Vollbrecht
C. Volpe
Katharina von Sturm
Vit Vorobel
Dmitriy Voronin
Lucia Votano
Pablo Walker
Caishen Wang
Chung-Hsiang Wang
En Wang
Guoli Wang
Jian Wang
Jun Wang
Lucinda W. Wang
Meifen Wang
Meng Wang
Ruiguang Wang
Siguang Wang
Wei Wang
Wenshuai Wang
Xi Wang
Xiangyue Wang
Yangfu Wang
Yaoguang Wang
Yi Xing Wang
Yifang Wang
Yuanqing Wang
Yuman Wang
Zhe Wang
Zheng Wang
Zhimin Wang
Zongyi Wang
Apimook Watcharangkool
Wei Wei
Wenlu Wei
Yadong Wei
K. Wen
Kaile Wen
Christopher Wiebusch
S. Wong
Bjoern Wonsak
Diru Wu
Qun Wu
Zhi Wu
Michael Wurm
Jacques Wurtz
Christian Wysotzki
Yufei Xi
D. Xia
Xiang Xiao
Xiaochuan Xie
Yu-guang Xie
Z. P. Xie
Zhao-Liang Xin
Z. Xing
Benda D. Xu
Chengze Xu
Donglian Xu
Fanrong Xu
The physics potential of detecting 8B solar neutrinos will be exploited at the Jiangmen Underground Neutrino Observatory (JUNO), in a model-independent manner by using three distinct channels of the charged current (CC), neutral current (NC), and elastic scattering (ES) interactions. Due to the largest-ever mass of 13C nuclei in the liquid scintillator detectors and the expected low background level, 8B solar neutrinos are observable in the CC and NC interactions on 13C for the first time. By virtue of optimized event selections and muon veto strategies, backgrounds from accidental coincidences, muon-induced isotopes, and external sources can be greatly suppressed. Excellent signal-to-background ratios can be achieved in the CC, NC, and ES channels to guarantee the observation of the 8B solar neutrinos. From the sensitivity studies performed in this work, we show that JUNO, with 10 yr of data, can reach 1σ precision levels of 5%, 8%, and 20% for the 8B neutrino flux, sin²θ₁₂, and Δm²₂₁, respectively. Probing the details of both solar physics and neutrino physics would be unique and helpful. In addition, when combined with the Sudbury Neutrino Observatory measurement, the world's best precision of 3% is expected for the measurement of the 8B neutrino flux.
Machine-learning-assisted and real-time-feedback-controlled growth of InAs/GaAs quantum dots
Chao Shen
Wenkang Zhan
Kaiyao Xin
Manyang Li
Zhenyu Sun
Hui Cong
Zhaofeng Wu
Chi Xu
Bo Xu
Zhongming Wei
Chao Zhao
Zhanguo Wang
Chunlai Xue
Dual quantum spin Hall insulator by density-tuned correlations in TaIrTe4.
Thomas Siyuan Ding
Hongyu Chen
Anyuan Gao
Tiema Qian
Zumeng Huang
Zhe Sun
Xin Han
Alex Strasser
Jiangxu Li
Michael Geiwitz
Mohamed Shehabeldin
Vsevolod Belosevich
Zihan Wang
Yiping Wang
Kenji Watanabe
Takashi Taniguchi
David C. Bell
Ziqiang Wang
Liang Fu
Yang Zhang
Xiaofeng Qian
Kenneth S. Burch
Youguo Shi
Ni Ni
Guoqing Chang
Su-Yang Xu
Qiong Ma