
Amal Zouaq

Associate Academic Member
Full Professor, Polytechnique Montréal, Department of Computer Engineering and Software Engineering
Associate Professor, University of Ottawa, School of Electrical Engineering and Computer Science
Research Topics
Generative Models
Information Retrieval
Knowledge Graphs
Learning on Graphs
Natural Language Processing
Representation Learning

Biography

Amal Zouaq is a Full Professor at École Polytechnique de Montréal (GIGL). She holds an FRQS (Dual) Chair in AI and Digital Health. She is also an IVADO professor, a member of the CLIQ-AI consortium (Computational Linguistics in Québec), and an adjunct professor at the University of Ottawa.

Her research interests include artificial intelligence, natural language processing, and the Semantic Web. She directs the LAMA-WeST research lab, which specializes in all aspects of natural language processing and artificial intelligence, with a particular focus on Semantic Web technologies. Semantic Web knowledge bases can serve as a large-scale source of knowledge for artificial intelligence models and can be used, among other things, to validate information and to support the explainability of artificial intelligence models.

The LAMA-WeST research projects tackle challenges related to representation learning, natural language interfaces and question answering, automated reasoning, knowledge base learning and alignment, ontology learning, knowledge engineering and modeling, and information extraction and generation, to name a few.

Prof. Zouaq serves on the program committees of many conferences and journals in knowledge and data engineering, natural language processing, data mining, and the Semantic Web.

Current Students

Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
PhD - Polytechnique Montréal
PhD - Polytechnique Montréal
PhD - Polytechnique Montréal
Master's Research - Polytechnique Montréal
PhD - Polytechnique Montréal
PhD - Polytechnique Montréal

Publications

Combining Domain and Alignment Vectors to Achieve Better Knowledge-Safety Trade-offs in LLMs
Megh Thakkar
Yash More
Quentin Fournier
Matthew D Riemer
Pin-Yu Chen
Payel Das
There is a growing interest in training domain-expert LLMs that excel in specific technical fields compared to their general-purpose instruction-tuned counterparts. However, these expert models often experience a loss in their safety abilities in the process, making them capable of generating harmful content. As a solution, we introduce an efficient and effective merging-based alignment method called MergeAlign that interpolates the domain and alignment vectors, creating safer domain-specific models while preserving their utility. We apply MergeAlign on Llama3 variants that are experts in medicine and finance, obtaining substantial alignment improvements with minimal to no degradation on domain-specific benchmarks. We study the impact of model merging through model similarity metrics and the contributions of the individual models being merged. We hope our findings open new research avenues and inspire more efficient development of safe expert LLMs.
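The abstract describes interpolating a "domain vector" and an "alignment vector". Under the common task-arithmetic framing, each vector is the parameter delta between a fine-tuned model and a shared base model, and merging adds a weighted blend of the deltas back to the base. The sketch below is a minimal, hypothetical illustration of that idea on toy flat parameter lists; the function names (`task_vector`, `merge_align`) and the single interpolation weight `alpha` are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of merging-based alignment via task-vector interpolation.
# Models are represented as dicts mapping tensor names to flat lists of floats;
# a real implementation would operate on checkpoint state dicts instead.

def task_vector(finetuned, base):
    """Per-tensor parameter delta between a fine-tuned model and its base."""
    return {k: [f - b for f, b in zip(finetuned[k], base[k])] for k in base}

def merge_align(base, domain, aligned, alpha=0.5):
    """Add an interpolation of the domain and alignment task vectors to the base.

    alpha weights the domain vector; (1 - alpha) weights the alignment vector.
    """
    dv = task_vector(domain, base)
    av = task_vector(aligned, base)
    return {
        k: [b + alpha * d + (1 - alpha) * a
            for b, d, a in zip(base[k], dv[k], av[k])]
        for k in base
    }

# Toy 2-parameter example: merging blends both specializations.
base = {"w": [0.0, 0.0]}
domain = {"w": [1.0, 0.0]}   # shift acquired by domain fine-tuning
aligned = {"w": [0.0, 1.0]}  # shift acquired by safety alignment
merged = merge_align(base, domain, aligned, alpha=0.5)
# merged["w"] == [0.5, 0.5]
```

The appeal of such merging is that it requires no further training: given existing domain-expert and aligned checkpoints sharing a base, the merge is a single pass over the parameters.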
A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques
Megh Thakkar
Quentin Fournier
Matthew D Riemer
Pin-Yu Chen
Payel Das
MVP: Minimal Viable Phrase for Long Text Understanding.
Louis Clouâtre
Assessing the Generalization Capabilities of Neural Machine Translation Models for SPARQL Query Generation
Samuel Reyd
SORBET: A Siamese Network for Ontology Embeddings Using a Distance-Based Regression Loss and BERT
Francis Gosselin
SORBETmatcher results for OAEI 2023.
Francis Gosselin
Local Structure Matters Most: Perturbation Study in NLU
Louis Clouâtre
Prasanna Parthasarathi
Recent research analyzing the sensitivity of natural language understanding models to word-order perturbations has shown that neural models are surprisingly insensitive to the order of words. In this paper, we investigate this phenomenon by developing order-altering perturbations at the level of words, subwords, and characters to analyze their effect on neural models' performance on language understanding tasks. We measure the impact of perturbations on the local neighborhood of characters and on the global position of characters in the perturbed texts, and observe that perturbation functions found in prior literature affect only the global ordering while the local ordering remains relatively unperturbed. We empirically show that neural models, regardless of their inductive biases, pretraining scheme, or choice of tokenization, mostly rely on the local structure of text to build understanding and make limited use of the global structure.
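The local/global distinction in this abstract can be made concrete with two contrasting perturbations: one that shuffles tokens inside small windows (destroying local order while keeping each token near its original global position), and one that shuffles the windows themselves (destroying global order while keeping local n-grams intact). The sketch below is an illustrative toy version of this contrast; the function names and the fixed window size are assumptions, not the paper's exact perturbation functions.

```python
import random

def shuffle_within_windows(tokens, n=3, seed=0):
    """Perturb LOCAL structure: shuffle tokens inside each n-token window,
    so every token stays within n positions of where it started."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(tokens), n):
        window = tokens[i:i + n]
        rng.shuffle(window)
        out.extend(window)
    return out

def shuffle_windows(tokens, n=3, seed=0):
    """Perturb GLOBAL structure: keep each n-token window intact as a unit,
    but shuffle the order of the windows themselves."""
    rng = random.Random(seed)
    windows = [tokens[i:i + n] for i in range(0, len(tokens), n)]
    rng.shuffle(windows)
    return [t for w in windows for t in w]

tokens = "the quick brown fox jumps over the lazy dog".split()
local_perturbed = shuffle_within_windows(tokens)   # local order broken
global_perturbed = shuffle_windows(tokens)         # global order broken
```

Comparing a model's accuracy under these two regimes separates its reliance on local neighborhoods from its reliance on global position, which is the kind of contrast the study draws.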