
Jian Tang

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, HEC Montréal, Department of Decision Sciences
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research (DIRO)
Founder, BioGeometry
Research Topics
Deep Learning
Computational Biology
Generative Models
Molecular Modeling
Graph Neural Networks

Biography

Jian Tang is an associate professor in the Department of Decision Sciences at HEC Montréal, an adjunct professor in the Department of Computer Science and Operations Research (DIRO) at the Université de Montréal, and a Core Academic Member at Mila – Quebec Artificial Intelligence Institute. He holds a Canada CIFAR AI Chair and is the founder of BioGeometry, a startup specializing in generative AI for antibody discovery. His main research areas are deep generative models, graph machine learning, and their applications to drug discovery. He is an international leader in graph machine learning, and his representative work on node representation learning, LINE, has been widely recognized and cited more than 5,000 times. He has also done extensive pioneering work on AI for drug discovery, including TorchDrug and TorchProtein, the first open-source machine learning frameworks for drug discovery.

Current Students

Research Collaborator
PhD - UdeM
Principal supervisor:
Research Collaborator - Carnegie Mellon University
PhD - UdeM
Principal supervisor:
Research Collaborator
PhD - UdeM

Publications

Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets
Shenyang Huang
Joao Alex Cunha
Zhiyi Li
Gabriela Moisescu-Pareja
Oleksandr Dymov
Samuel Maddrell-Mander
Callum McLean
Frederik Wenkel
Luis Müller
Jama Hussein Mohamud
Ali Parviz
Michael Craig
Michał Koziarski
Jiarui Lu
Zhaocheng Zhu
Cristian Gabellini
Kerstin Klaser
Josef Dean
Cas Wognum
Maciej Sypetkowski
Christopher Morris
Ioannis Koutis
Prudencio Tossou
Hadrien Mary
Therence Bois
Andrew William Fitzgibbon
Blazej Banaszewski
Chad Martin
Dominic Masters
Recently, pre-trained foundation models have enabled significant advancements in multiple fields. In molecular machine learning, however, where datasets are often hand-curated, and hence typically small, the lack of datasets with labeled features, and codebases to manage those datasets, has hindered the development of foundation models. In this work, we present seven novel datasets categorized by size into three distinct categories: ToyMix, LargeMix and UltraLarge. These datasets push the boundaries in both the scale and the diversity of supervised labels for molecular learning. They cover nearly 100 million molecules and over 3000 sparsely defined tasks, totaling more than 13 billion individual labels of both quantum and biological nature. In comparison, our datasets contain 300 times more data points than the widely used OGB-LSC PCQM4Mv2 dataset, and 13 times more than the quantum-only QM1B dataset. In addition, to support the development of foundational models based on our proposed datasets, we present the Graphium graph machine learning library, which simplifies the process of building and training molecular machine learning models for multi-task and multi-level molecular datasets. Finally, we present a range of baseline results as a starting point of multi-task and multi-level training on these datasets. Empirically, we observe that performance on low-resource biological datasets shows improvement by also training on large amounts of quantum data. This indicates that there may be potential in multi-task and multi-level training of a foundation model and fine-tuning it to resource-constrained downstream tasks. The Graphium library is publicly available on Github and the dataset links are available in Part 1 and Part 2.
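
The sparsely defined labels are the central engineering constraint here: each molecule carries targets for only a handful of the 3000+ tasks. Below is a minimal sketch of how such labels can be handled with per-task heads and a masked loss; all names, shapes, and the masking scheme are our own illustration, not the actual Graphium API.

```python
# Hypothetical sketch (not the Graphium API): multi-task training where each
# molecule has labels for only a few tasks, so missing labels are masked out.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, hidden_dim: int, num_tasks: int):
        super().__init__()
        self.heads = nn.Linear(hidden_dim, num_tasks)  # one output per task

    def forward(self, graph_embedding: torch.Tensor) -> torch.Tensor:
        return self.heads(graph_embedding)

def masked_multitask_loss(pred: torch.Tensor,
                          target: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """Squared error averaged only over positions where a label is defined.

    pred, target, mask: (batch, num_tasks); mask is 1 where a label exists.
    """
    se = (pred - target) ** 2 * mask
    return se.sum() / mask.sum().clamp(min=1)

# Usage with a stand-in encoder output:
batch, hidden, tasks = 32, 256, 3000
head = MultiTaskHead(hidden, tasks)
emb = torch.randn(batch, hidden)                   # pooled graph embeddings
target = torch.randn(batch, tasks)
mask = (torch.rand(batch, tasks) < 0.01).float()   # labels are very sparse
loss = masked_multitask_loss(head(emb), target, mask)
loss.backward()
```
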
CO emission predictions in municipal solid waste incineration based on reduced depth features and long short-term memory optimization
Runyu Zhang
Heng Xia
Xiaotong Pan
Wen Yu
JunFei Qiao
CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization
Wenzheng Hu
Ning Liu
Zhengping Che
Mingyang Li
Changshui Zhang
Jianqiang Wang
Deep convolutional neural networks are shown to be overkill with high parametric and computational redundancy in many application scenarios, and an increasing number of works have explored model pruning to obtain lightweight and efficient networks. However, most existing pruning approaches are driven by empirical heuristics and rarely consider the joint impact of channels, leading to unguaranteed and suboptimal performance. In this article, we propose a novel channel pruning method via class-aware trace ratio optimization (CATRO) to reduce the computational burden and accelerate the model inference. Utilizing class information from a few samples, CATRO measures the joint impact of multiple channels by feature space discriminations and consolidates the layerwise impact of preserved channels. By formulating channel pruning as a submodular set function maximization problem, CATRO solves it efficiently via a two-stage greedy iterative optimization procedure. More importantly, we present theoretical justifications on convergence of CATRO and performance of pruned networks. Experimental results demonstrate that CATRO achieves higher accuracy with similar computation cost or lower computation cost with similar accuracy than other state-of-the-art channel pruning algorithms. In addition, because of its class-aware property, CATRO is suitable to prune efficient networks adaptively for various classification subtasks, enhancing handy deployment and usage of deep networks in real-world applications.
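
To make the greedy submodular formulation concrete, here is a rough sketch of class-aware channel selection using a between-class over within-class scatter ratio as the set score. The scoring function and the single-stage greedy loop are our simplification for illustration, not CATRO's exact two-stage procedure.

```python
# Illustrative greedy channel selection driven by a class-aware trace-ratio
# score, in the spirit of (but not identical to) CATRO.
import numpy as np

def trace_ratio(feats: np.ndarray, labels: np.ndarray, channels: list) -> float:
    """Between-class over within-class scatter of the selected channels.

    feats: (num_samples, num_channels) channel-wise activations
    labels: (num_samples,) class labels from a few calibration samples
    """
    x = feats[:, channels]
    mean = x.mean(axis=0)
    sb, sw = 0.0, 0.0
    for c in np.unique(labels):
        xc = x[labels == c]
        mc = xc.mean(axis=0)
        sb += len(xc) * np.sum((mc - mean) ** 2)  # between-class scatter
        sw += np.sum((xc - mc) ** 2)              # within-class scatter
    return sb / max(sw, 1e-12)

def greedy_prune(feats: np.ndarray, labels: np.ndarray, keep: int) -> list:
    """Greedily keep the channels with the largest marginal trace-ratio gain."""
    selected: list = []
    remaining = list(range(feats.shape[1]))
    while len(selected) < keep:
        best = max(remaining,
                   key=lambda ch: trace_ratio(feats, labels, selected + [ch]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Greedy maximization is the natural solver here because submodularity (diminishing marginal gains) gives approximation guarantees for exactly this kind of subset selection.
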
Hybrid Simulator-Based Mechanism and Data-Driven for Multidemand Dioxin Emissions Intelligent Prediction in the MSWI Process
Heng Xia
Wen Yu
JunFei Qiao
Multi-objective PSO semi-supervised random forest method for dioxin soft sensor
Wen Xu
Heng Xia
Wen Yu
JunFei Qiao
Multi-reservoir ESN-based prediction strategy for dynamic multi-objective optimization
Cuili Yang
Danlei Wang
JunFei Qiao
Wen Yu
NOx emissions prediction for MSWI process based on dynamic modular neural network
Haoshan Duan
Xi Meng
JunFei Qiao
Online Measurement of Dioxin Emission in Solid Waste Incineration Using Fuzzy Broad Learning
Heng Xia
Wen Yu
JunFei Qiao
Dioxin (DXN) is a persistent organic pollutant produced by municipal solid waste incineration (MSWI) processes. It is a crucial environmental indicator whose emission concentration should be minimized through optimization control, but it is difficult to monitor in real time. Aiming at online soft-sensing of DXN emission, a novel fuzzy tree broad learning system (FTBLS) is proposed, which includes offline training and online measurement. In the offline training part, weighted k-means is used to construct a typical sample pool that reduces the learning costs of the offline and online phases. Moreover, the novel FTBLS, which contains a feature mapping layer, an enhancement layer, and an increment layer built by replacing neurons with fuzzy decision trees, is applied to construct the offline model. In the online measurement part, recursive principal component analysis is used to monitor the time-varying characteristics of the MSWI process. To measure DXN emission, the offline FTBLS is reused for normal samples; for drift samples, fast incremental learning is used for online updates. DXN data from an actual MSWI process are employed to prove the usefulness of FTBLS, where the RMSE on training and testing data is 0.0099 and 0.0216, respectively. These results show that FTBLS can effectively realize online DXN prediction.
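
The normal-versus-drift routing is the part that lends itself to a small sketch. Below is a simplified stand-in for the monitoring step, using a static PCA reconstruction error as the drift test; the paper uses recursive PCA, and the class, threshold, and shapes here are our own illustration.

```python
# Simplified drift monitor: flag a sample as drift when its PCA
# reconstruction error (squared prediction error) exceeds a threshold.
# Drift samples would trigger an incremental model update; normal samples
# go straight to the offline soft-sensing model.
import numpy as np

class DriftMonitor:
    def __init__(self, train_x: np.ndarray, n_components: int, threshold: float):
        self.mean = train_x.mean(axis=0)
        x = train_x - self.mean
        # principal directions estimated from the offline (normal) data
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        self.components = vt[:n_components]   # (k, d)
        self.threshold = threshold

    def is_drift(self, sample: np.ndarray) -> bool:
        centered = sample - self.mean
        recon = self.components.T @ (self.components @ centered)
        spe = float(np.sum((centered - recon) ** 2))
        return spe > self.threshold
```
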
Tree Broad Learning System for Small Data Modeling.
Heng Xia
Wen Yu
JunFei Qiao
The neural-network-based broad learning system (BLS-NN) has poor efficiency for small-data modeling across various dimensions. Tree-based BLS (TBLS) is designed for small-data modeling by introducing nondifferentiable modules and an ensemble strategy into the traditional broad learning system (BLS). TBLS replaces the neurons of BLS with tree modules to map the input data. Moreover, we present three new TBLS variant methods and their incremental learning implementations, which are motivated by deep, broad, and ensemble learning. Their major distinction is reflected in the incremental learning strategies, which are based on: 1) mean square error (MSE); 2) pseudo-inverse; and 3) pseudo-inverse theory and stack representation. Therefore, this study further explores the domain of BLS based on nondifferentiable modules. The simulations are compared with some state-of-the-art (SOTA) BLS-NN and tree methods on high-, medium-, and low-dimensional benchmark datasets. Results show that the proposed TBLS outperforms BLS-NN, with remarkably improved modeling accuracy on small training data.
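
What makes broad learning cheap to grow is that only the output weights are refit in closed form when new modules are appended. Here is a generic sketch of that solve as a ridge-regularized pseudo-inverse; this is the standard BLS-style update, not the paper's exact TBLS rules.

```python
# Generic broad-learning output solve: stack mapped features from all
# modules into h, then fit output weights in closed form.
import numpy as np

def solve_output_weights(h: np.ndarray, y: np.ndarray, reg: float = 1e-3) -> np.ndarray:
    """Closed-form W minimizing ||h @ W - y||^2 + reg * ||W||^2."""
    d = h.shape[1]
    return np.linalg.solve(h.T @ h + reg * np.eye(d), h.T @ y)

# Adding a new (tree or feature) module appends columns to h; only this
# cheap closed-form solve is repeated, which is the appeal of growing the
# model broad rather than deep.
h = np.random.randn(200, 64)   # mapped features (small-data setting)
y = np.random.randn(200, 1)    # regression targets
w = solve_output_weights(h, y)
pred = h @ w
```
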
Zero-shot Logical Query Reasoning on any Knowledge Graph
Mikhail Galkin
Jincheng Zhou
Bruno Ribeiro
Zhaocheng Zhu
Complex logical query answering (CLQA) in knowledge graphs (KGs) goes beyond simple KG completion and aims at answering compositional queries comprised of multiple projections and logical operations. Existing CLQA methods that learn parameters bound to certain entity or relation vocabularies can only be applied to the graph they are trained on, which requires substantial training time before being deployed on a new graph. Here we present UltraQuery, an inductive reasoning model that can zero-shot answer logical queries on any KG. The core idea of UltraQuery is to derive both projections and logical operations as vocabulary-independent functions which generalize to new entities and relations in any KG. With the projection operation initialized from a pre-trained inductive KG reasoning model, UltraQuery can solve CLQA on any KG even if it is only finetuned on a single dataset. Experimenting on 23 datasets, UltraQuery in the zero-shot inference mode shows competitive or better query answering performance than the best available baselines and sets a new state of the art on 14 of them.
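
The vocabulary-independence of the logical operations is easy to see in code: if each intermediate query state is a score per entity, the logical connectives act elementwise and carry no graph-specific parameters. A sketch using product fuzzy logic, which is one standard choice; UltraQuery's exact operators may differ.

```python
# Vocabulary-independent logical operators over per-entity fuzzy scores.
# Because they are elementwise and parameter-free, they transfer unchanged
# to a graph with a different number of entities or relations; only the
# learned projection (relation-following) model must generalize.
import torch

def conjunction(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a * b                 # product t-norm

def disjunction(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a + b - a * b         # product t-conorm

def negation(a: torch.Tensor) -> torch.Tensor:
    return 1.0 - a

scores_p = torch.rand(10_000)    # fuzzy answer set of one query branch
scores_q = torch.rand(10_000)    # fuzzy answer set of another branch
combined = conjunction(scores_p, scores_q)
```
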
Giant Correlated Gap and Possible Room-Temperature Correlated States in Twisted Bilayer MoS_{2}.
Fanfan Wu
Qiaoling Xu
Qinqin Wang
Yanbang Chu
Lu Li
Jieying Liu
Jinpeng Tian
Yiru Ji
Le Liu
Yalong Yuan
Zhiheng Huang
Jiaojiao Zhao
Xiaozhou Zan
Kenji Watanabe
Takashi Taniguchi
Dongxia Shi
Gangxu Gu
Yang Xu
Lede Xian
Wei Yang
Luojun Du
Guangyu Zhang
Moiré superlattices have emerged as an exciting condensed-matter quantum simulator for exploring the exotic physics of strong electronic correlations. Notable progress has been witnessed, but such correlated states are usually achievable only at low temperatures. Here, we report evidence of possible room-temperature correlated electronic states and a layer-hybridized SU(4) model simulator in AB-stacked MoS_{2} homobilayer moiré superlattices. Correlated insulating states at moiré band filling factors v=1, 2, 3 are unambiguously established in twisted bilayer MoS_{2}. Remarkably, the correlated electronic state at v=1 shows a giant correlated gap of ∼126 meV and may persist up to a record-high critical temperature over 285 K. The realization of a possible room-temperature correlated state with a large correlated gap in twisted bilayer MoS_{2} can be understood as the cooperative effect of the stacking-specific atomic reconstruction and the resonantly enhanced interlayer hybridization, which largely amplify the moiré superlattice effects on electronic correlations. Furthermore, extremely large nonlinear Hall responses up to room temperature are uncovered near the correlated electronic states, demonstrating the quantum geometry of the moiré flat conduction band.
Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing
Shengchao Liu
Weili Nie
Chengpeng Wang
Jiarui Lu
Zhuoran Qiao
Ling Liu
Chaowei Xiao
Animashree Anandkumar
There is increasing adoption of artificial intelligence in drug discovery. However, existing studies use machine learning to mainly utilize the chemical structures of molecules but ignore the vast textual knowledge available in chemistry. Incorporating textual knowledge enables us to realize new drug design objectives, adapt to text-based instructions and predict complex biological activities. Here we present a multi-modal molecule structure-text model, MoleculeSTM, by jointly learning molecules' chemical structures and textual descriptions via a contrastive learning strategy. To train MoleculeSTM, we construct a large multi-modal dataset, namely PubChemSTM, with over 280,000 chemical structure-text pairs. To demonstrate the effectiveness and utility of MoleculeSTM, we design two challenging zero-shot tasks based on text instructions, including structure-text retrieval and molecule editing. MoleculeSTM has two main properties: open vocabulary and compositionality via natural language. In experiments, MoleculeSTM obtains state-of-the-art generalization ability to novel biochemical concepts across various benchmarks.
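
The contrastive strategy pairs each molecule's structure embedding with its text embedding and pulls matched pairs together while pushing mismatched pairs apart. A minimal sketch of a symmetric InfoNCE objective of this kind follows; the temperature, dimensions, and function name are our own placeholders, not MoleculeSTM's exact settings.

```python
# Symmetric contrastive (InfoNCE-style) loss over structure-text pairs:
# matched pairs sit on the diagonal of the similarity matrix, and both
# retrieval directions (molecule->text, text->molecule) are trained.
import torch
import torch.nn.functional as F

def structure_text_contrastive_loss(mol_emb: torch.Tensor,
                                    text_emb: torch.Tensor,
                                    temperature: float = 0.1) -> torch.Tensor:
    """mol_emb, text_emb: (batch, dim) embeddings of paired molecules/texts."""
    mol = F.normalize(mol_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = mol @ txt.t() / temperature          # pairwise cosine similarities
    labels = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

# Usage with stand-in encoder outputs:
loss = structure_text_contrastive_loss(torch.randn(64, 512), torch.randn(64, 512))
loss.backward()
```
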