Publications
WiSE-OD: Benchmarking Robustness in Infrared Object Detection
Traditionally, constrained policy optimization with Reinforcement Learning (RL) requires learning a new policy from scratch for any new environment, goal, or cost function, with limited generalization to new tasks and constraints. Given the sample inefficiency of many common deep RL methods, this procedure can be impractical for many real-world scenarios, particularly when constraints or tasks are changing. As an alternative, in the unconstrained setting, various works have sought to pre-train representations from offline datasets to accelerate policy optimization upon specification of a reward. Such methods can permit faster adaptation to new tasks in a given environment, dramatically improving sample efficiency. Recently, zero-shot policy optimization has been explored by leveraging a particular
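The zero-shot idea the abstract alludes to can be illustrated with successor features, one common instantiation of pre-trained representations for zero-shot policy evaluation (the abstract's own method is cut off above, so this is a generic sketch with made-up features and dynamics, not the paper's approach): pre-train state features and their discounted sums once, then value a new task by an inner product with that task's reward weights, with no further environment interaction.

```python
import numpy as np

# Hedged sketch: successor features for zero-shot value estimation.
# All quantities (features phi, dynamics P, weights w) are synthetic.
rng = np.random.default_rng(0)
n_states, d = 5, 3
phi = rng.normal(size=(n_states, d))                 # pre-trained state features
gamma = 0.9
P = np.full((n_states, n_states), 1.0 / n_states)    # fixed-policy transition matrix

# Successor features satisfy psi = phi + gamma * P @ psi; solve in closed form.
psi = np.linalg.solve(np.eye(n_states) - gamma * P, phi)

# A new task is specified only by reward weights; values come for free.
w_new_task = np.array([1.0, -0.5, 0.2])
values = psi @ w_new_task                            # zero-shot value per state
print(values.shape)                                  # (5,)
```

Because values are linear in the reward weights, any new reward in the feature span is evaluated instantly, which is the sample-efficiency win the abstract describes.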
Despite extensive safety alignment, large language models (LLMs) remain vulnerable to jailbreak attacks that bypass safeguards to elicit harmful content. While prior work attributes this vulnerability to safety training limitations, the internal mechanisms by which LLMs process adversarial prompts remain poorly understood. We present a mechanistic analysis of the jailbreaking behavior in a large-scale, safety-aligned LLM, focusing on LLaMA-2-7B-chat-hf. Leveraging edge attribution patching and subnetwork probing, we systematically identify computational circuits responsible for generating affirmative responses to jailbreak prompts. Ablating these circuits during the first token prediction can reduce attack success rates by up to 80%, demonstrating their critical role in safety bypass. Our analysis uncovers key attention heads and MLP pathways that mediate adversarial prompt exploitation, revealing how important tokens propagate through these components to override safety constraints. These findings advance the understanding of adversarial vulnerabilities in aligned LLMs and pave the way for targeted, interpretable defense mechanisms based on mechanistic interpretability.
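The ablation step can be sketched with a toy model (ours, not the paper's code; the paper applies edge attribution patching to LLaMA-2-7B-chat-hf): treat the first-token "affirmative" logit as a sum of per-head contributions and zero out the heads that form the identified circuit.

```python
import numpy as np

# Hypothetical per-head contributions to the affirmative first-token logit;
# heads 2 and 5 stand in for the identified jailbreak circuit.
head_contrib = np.array([0.1, -0.2, 3.1, 0.05, -0.1, 2.8, 0.0, 0.2])

def affirmative_logit(ablated=()):
    mask = np.ones(len(head_contrib))
    mask[list(ablated)] = 0.0            # zero-ablate the selected heads
    return float(head_contrib @ mask)

full = affirmative_logit()
patched = affirmative_logit(ablated=[2, 5])
print(patched < full)   # ablating the circuit suppresses the affirmative token
```

In the real setting the contributions come from forward passes with hooks on specific attention heads, and ablation replaces activations rather than zeroing scalars, but the logic of "remove circuit, measure drop in attack success" is the same.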
Recent advances in word embeddings and language models use large-scale, unlabelled data and self-supervised learning to boost NLP performance. Multilingual models, often trained on web-sourced data like Wikipedia, face challenges: few low-resource languages are included, their data is often noisy, and lack of labeled datasets makes it hard to evaluate performance outside high-resource languages like English. In this dissertation, we focus on languages spoken in Sub-Saharan Africa, where all the indigenous languages can be regarded as low-resourced in terms of both the availability of labelled data for NLP tasks and of unlabelled data found on the web. We analyse the noise in the publicly available corpora and curate a high-quality corpus, demonstrating that the quality of semantic representations learned in word embeddings depends not only on the amount of pre-training data but also on its quality. We demonstrate empirically the limitations of word embeddings, and the opportunities that multilingual pre-trained language models (PLMs) offer, especially for languages unseen during pre-training and in low-resource scenarios. We further study how to adapt and specialize multilingual PLMs to unseen African languages using a small amount of monolingual text. To address the under-representation of African languages in NLP research, we developed large-scale, human-annotated labelled datasets for 21 African languages in two impactful NLP tasks: named entity recognition and machine translation. We conduct an extensive empirical evaluation using state-of-the-art methods across supervised, weakly-supervised, and transfer learning settings.
Large language models (LLMs) possess vast semantic knowledge but often struggle with complex reasoning tasks, particularly in relational reasoning problems such as kinship or spatial reasoning. In this paper, we present Path-of-Thoughts (PoT), a novel framework designed to tackle relational reasoning by decomposing the task into three key stages: graph extraction, path identification, and reasoning. Unlike previous approaches, PoT efficiently extracts a task-agnostic graph that identifies crucial entities, relations, and attributes within the problem context. Subsequently, PoT identifies relevant reasoning chains within the graph corresponding to the posed question, facilitating inference of potential answers. Experimental evaluations on four benchmark datasets demanding long reasoning chains demonstrate that PoT surpasses state-of-the-art baselines by a significant margin (up to 21.3%) without necessitating fine-tuning or extensive LLM calls. Furthermore, as opposed to prior neuro-symbolic methods, PoT exhibits improved resilience against LLM errors by leveraging the compositional nature of graphs.
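The three stages can be sketched on a toy kinship problem (names, relations, and the composition table below are ours for illustration; in PoT the graph extraction is done by an LLM, and the paper's exact interfaces may differ):

```python
from collections import deque

# Stage 1, graph extraction: a relation graph pulled from the problem text.
edges = {
    ("Alice", "Bob"): "mother_of",
    ("Bob", "Carol"): "father_of",
}
adj = {}
for (a, b), rel in edges.items():
    adj.setdefault(a, []).append((b, rel))

# Stage 2, path identification: BFS for a relation chain between the
# entities mentioned in the question.
def find_path(src, dst):
    queue, seen = deque([(src, [])]), {src}
    while queue:
        node, rels = queue.popleft()
        if node == dst:
            return rels
        for nxt, rel in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, rels + [rel]))
    return None

# Stage 3, reasoning: compose the relations along the path.
COMPOSE = {("mother_of", "father_of"): "grandmother_of"}
path = find_path("Alice", "Carol")
answer = COMPOSE[tuple(path)]
print(answer)   # grandmother_of
```

The resilience claim in the abstract follows from this structure: an LLM error that corrupts one edge only affects paths through that edge, while the symbolic composition step itself cannot hallucinate.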
The cervical spinal cord (cSC) is highly relevant to clinical dysfunction in multiple sclerosis (MS) but remains understudied using quantitative magnetic resonance imaging (MRI). We assessed magnetization transfer ratio (MTR), a semi‐quantitative MRI measure sensitive to MS‐related tissue microstructural changes, in the cSC and its relationship with clinical outcomes in radiologically isolated syndrome (RIS) and MS.
MTR data were acquired from 52 RIS, 201 relapsing–remitting MS (RRMS), 47 primary progressive MS (PPMS), and 43 control (CON) participants across four sites in the Canadian Prospective Cohort Study to Understand Progression in MS (CanProCo) using 3.0 T MRI systems. Mean MTR was compared between groups in the whole cSC and in sub‐regions between C2 and C4. Multiple linear regression was used to evaluate relationships between MTR and clinical outcomes, including the expanded disability status scale (EDSS), walking speed test (WST), and manual dexterity test (MDT).
There were consistent group differences in MTR, which were most pronounced between PPMS and CON (−5.8% to −3.7%, p ≤ 0.01). In PPMS, lower MTR was associated with greater disability as measured by EDSS (β = −0.3 to −0.1, p ≤ 0.03), WST (β = −0.9 to −0.5, p ≤ 0.04), and MDT (β = −0.6 and −0.5, p = 0.04). In RRMS, MTR was associated only with EDSS (β = −0.1, p ≤ 0.03).
In this large sample of RIS and MS, cSC MTR was lowest in PPMS, with associations between MTR and clinical outcomes in MS but not RIS. These findings suggest that MTR provides important information about the underlying tissue microstructural integrity of the cSC relevant to clinical disability in established MS.
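The study's statistical model, a multiple linear regression of a clinical score on MTR with covariates, can be illustrated with ordinary least squares on synthetic data (the numbers below are invented and are not CanProCo data; the slope of −0.2 is planted by construction):

```python
import numpy as np

# Synthetic cohort: clinical score depends negatively on MTR plus noise.
rng = np.random.default_rng(1)
n = 200
mtr = rng.normal(50, 3, n)          # mean cord MTR (%)
age = rng.normal(45, 10, n)         # a covariate
edss = 10.0 - 0.2 * mtr + 0.02 * age + rng.normal(0, 0.3, n)

# Design matrix with intercept; beta[1] is the coefficient on MTR,
# analogous to the betas reported in the abstract.
X = np.column_stack([np.ones(n), mtr, age])
beta, *_ = np.linalg.lstsq(X, edss, rcond=None)
print(beta[1] < 0)    # lower MTR associated with higher (worse) score
```

A negative coefficient here plays the same role as the negative β values reported for EDSS, WST, and MDT above: disability increases as MTR decreases.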
2025-06-28
Annals of Clinical and Translational Neurology (published)
The transmission network expansion planning (TNEP) problem is inherently complex because of its nonlinear and nonconvex nature, arising from the inclusion of AC power flow constraints, discrete investment decisions, and multiple operating scenarios. These characteristics make the problem computationally challenging, particularly when scaling to larger systems with multistage planning horizons. Addressing this complexity requires advanced methodologies that balance solution accuracy and computational efficiency. This paper presents a novel two-step framework for TNEP that first applies Benders decomposition to separate investment and operational decisions, followed by semidefinite linearization to reformulate the operational subproblems. The proposed approach enhances solution quality by ensuring convexity in the subproblems and improves computational efficiency through decomposition. Numerical results for 6-, 10-, and 24-bus test systems demonstrate that the proposed method achieves superior performance compared to existing approaches in terms of solution accuracy and computational efficiency.
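The Benders split between investment and operation can be shown on a one-line toy instance (a deliberate simplification with invented costs, not the paper's AC model or its semidefinite subproblems): the master chooses whether to build a line, the subproblem prices operation for that choice, and optimality cuts tighten the master's cost estimate until the bounds meet.

```python
# Toy Benders decomposition for a single binary investment decision.
INVEST = 3.0                          # cost of building the candidate line

def subproblem(x):
    """Operating cost for investment x, plus its sensitivity (cut slope)."""
    cost = 10.0 - 6.0 * x             # building the line saves 6.0 in operation
    return cost, -6.0

cuts, ub, lb = [], float("inf"), -float("inf")
while ub - lb > 1e-6:
    # Master: enumerate the binary choice; theta is bounded below by all cuts.
    best = None
    for x in (0.0, 1.0):
        theta = max((v + g * (x - xk) for v, g, xk in cuts), default=0.0)
        cand = INVEST * x + theta
        if best is None or cand < best[0]:
            best = (cand, x)
    lb, x = best                      # lower bound from the relaxed master
    v, g = subproblem(x)              # price the chosen investment
    ub = min(ub, INVEST * x + v)      # upper bound from a feasible plan
    cuts.append((v, g, x))            # add the optimality cut

print(x, ub)   # builds the line; total cost 3.0 + 4.0 = 7.0
```

The real master is a MILP over many candidate lines and stages, and the operational subproblems are the convexified (semidefinite) power-flow problems, but the bound-tightening loop has exactly this shape.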
Latency-Aware Pruning and Quantization of Self-Supervised Speech Transformers for Edge Devices
Seyed Milad Ebrahimipour
Seyyed Hasan Mozafari
James J. Clark
Warren J. Gross
Brett H. Meyer
The growing adoption of self-supervised learning transformers for speech (speech SSL) is constrained by their significant computational and memory demands, making deployment on resource-constrained edge devices challenging. We propose a latency-aware compression framework that integrates structured pruning and quantization to address these challenges. Guided by a latency model that considers the combined effects of pruning and quantization, our method dynamically identifies and removes less critical blocks while maintaining task performance, avoiding the inefficiencies of over-pruning and under-pruning seen in prior approaches. Unlike prior methods specialized either in post-training compression without fine-tuning data or in cases where fine-tuning data is available, our method is effective in both settings. Experimental results show that, in task-agnostic compression, our method achieves a 4.2× speedup on the Hikey970 edge development platform, outperforming previous task-agnostic pruning methods in most tasks, while requiring only 21–24 GPU hours (a 3× reduction compared to prior methods). Additionally, our method achieves a lower word error rate of 7.8% using task-specific pruning, while reducing computational overhead by approximately 19.4% in terms of GFLOPs compared to previous task-specific methods. Finally, our method consistently achieves higher accuracy than the state-of-the-art post-training compression approach across various latency speedup constraints, even without fine-tuning data.
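The core selection idea, remove blocks by importance per unit of predicted latency until a latency budget is met, can be sketched as follows (our simplification with invented scores and latencies; the paper's latency model also accounts for quantization effects and its selection procedure may differ):

```python
# Hypothetical transformer blocks: name -> (importance score, latency in ms).
blocks = {
    "attn_0": (0.90, 5.0), "ffn_0": (0.20, 8.0),
    "attn_1": (0.70, 5.0), "ffn_1": (0.10, 8.0),
    "attn_2": (0.85, 5.0), "ffn_2": (0.60, 8.0),
}
budget_ms = 25.0   # target latency from the desired speedup

kept = dict(blocks)
# Greedily drop the worst importance-per-millisecond blocks first, so the
# latency budget is met while sacrificing as little task signal as possible.
for name in sorted(blocks, key=lambda b: blocks[b][0] / blocks[b][1]):
    if sum(lat for _, lat in kept.values()) <= budget_ms:
        break
    del kept[name]

print(sorted(kept))   # the blocks that survive pruning
```

Ranking by importance/latency rather than importance alone is what makes the procedure latency-aware: an expensive but mildly useful FFN block is dropped before a cheap, critical attention block.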
2025-06-27
ACM Transactions on Embedded Computing Systems (published)
Augmenting large language models (LLMs) with external context significantly improves their performance in natural language processing (NLP) tasks. However, LLMs struggle to answer queries reliably when the provided context lacks the relevant information, often resorting to ungrounded speculation or internal knowledge. Groundedness, generating responses strictly supported by the context, is essential for ensuring factual consistency and trustworthiness. This study focuses on detecting whether a given query is grounded in a document provided in context before the costly answer generation by LLMs. Such a detection mechanism can significantly reduce both inference time and resource consumption. We show that lightweight, task-specific encoder models such as RoBERTa and NomicBERT, fine-tuned on curated datasets, can achieve accuracy comparable to state-of-the-art LLMs, such as Llama3 8B and GPT4o, in groundedness detection while reducing inference latency by orders of magnitude. The code is available at: https://github.com/chandarlab/Hallucinate-less
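The gating pattern looks like this in miniature (the detector below is a trivial token-overlap heuristic standing in for the paper's fine-tuned RoBERTa/NomicBERT classifiers; `is_grounded`, `answer`, and the threshold are our names and choices, not the repository's API):

```python
# Cheap grounded/ungrounded pre-check before an expensive LLM call.
def is_grounded(query: str, document: str, threshold: float = 0.5) -> bool:
    q = set(query.lower().split())
    d = set(document.lower().split())
    return len(q & d) / max(len(q), 1) >= threshold

def answer(query, document, llm_call):
    if not is_grounded(query, document):
        # Skip the costly generation entirely when the context cannot help.
        return "I cannot answer from the provided context."
    return llm_call(query, document)

doc = "the eiffel tower is 330 metres tall"
print(is_grounded("how tall is the eiffel tower", doc))   # True
print(is_grounded("who wrote hamlet", doc))               # False
print(answer("who wrote hamlet", doc, llm_call=lambda q, d: "(LLM answer)"))
```

In the paper the check is a learned binary classifier over the (query, document) pair, which is what buys the orders-of-magnitude latency gap relative to running the full LLM on every query.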
Dynamic graph learning methods have recently emerged as powerful tools for modelling relational data evolving through time. However, despite… (see more) extensive benchmarking efforts, it remains unclear whether current Temporal Graph Neural Networks (TGNNs) effectively capture core temporal patterns such as periodicity, cause-and-effect, and long-range dependencies. In this work, we introduce the Temporal Graph Reasoning Benchmark (T-GRAB), a comprehensive set of synthetic tasks designed to systematically probe the capabilities of TGNNs to reason across time. T-GRAB provides controlled, interpretable tasks that isolate key temporal skills: counting/memorizing periodic repetitions, inferring delayed causal effects, and capturing long-range dependencies over both spatial and temporal dimensions. We evaluate 11 temporal graph learning methods on these tasks, revealing fundamental shortcomings in their ability to generalize temporal patterns. Our findings offer actionable insights into the limitations of current models, highlight challenges hidden by traditional real-world benchmarks, and motivate the development of architectures with stronger temporal reasoning abilities. The code for T-GRAB can be found at: https://github.com/alirezadizaji/T-GRAB.
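A minimal periodicity probe in the T-GRAB spirit can be built in a few lines (our construction for illustration, not the released benchmark's data format): the edge set at time t repeats with period k, so a model with adequate temporal memory, even a trivial replay baseline, solves it exactly, while a model that cannot count the period fails.

```python
# Synthetic periodic dynamic graph: one edge set per phase, period k.
k = 3
patterns = [{(0, 1)}, {(1, 2)}, {(2, 0)}]
history = [patterns[t % k] for t in range(12)]     # observed snapshots

def memory_baseline(history, k):
    """Predict the next snapshot by replaying the one from k steps ago."""
    return history[-k]

pred = memory_baseline(history, k)
truth = patterns[len(history) % k]                  # snapshot at t = 12
print(pred == truth)   # True: perfect memory solves the periodic task
```

Controlled tasks like this are interpretable in exactly the sense the abstract describes: when a TGNN underperforms this baseline, the failure is attributable to the single isolated skill (here, counting/memorizing a period) rather than to dataset noise.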
2025-06-25
MLoG-GenAI @ ACM SIGKDD Conference on Knowledge Discovery and Data Mining (oral)
Recently, the minimal requirements for inter-brain coupling have attracted attention. Moreover, researchers have found that brains can couple not only when individuals are in the same space, but also during technologically mediated interactions. Here we investigate whether inter-brain synchronization occurs when both conditions of spatial isolation and minimal interaction are satisfied. In particular, we use a real-time interaction paradigm, the Perceptual Crossing Experiment (PCE), where individuals must locate their partners in a minimal virtual space using tactile stimuli alone. We report novel findings that contribute to our understanding of inter-brain synchronization and the minimal conditions of social interaction in virtual spaces: (1) inter-brain synchronization is present in the Alpha band during online minimal interaction, (2) five behavioral patterns and three inter-brain patterns can be found in the PCE, and (3) different behavioral patterns in the interaction environment recruited different inter-brain networks, such that frontal-fronto-central synchrony occurs when people are further apart in space, interacting with a multitude of objects. These findings have important implications for the understanding of social interaction processes, showing that inter-brain coupling can occur even without extensive communication channels.