
Istabrak Abbes

Master's Research - Université de Montréal
Supervisor
Research Topics
Continual Learning
Deep Learning
Natural Language Processing

Publications

Emergent Reasoning via Recursive Latent Reinforcement Pretraining
Large language models (LLMs) often rely on explicit chain-of-thought (CoT) traces to solve multi-step reasoning problems, but these traces increase inference cost, expose brittle prompt dependence, and complicate training objectives. We study an alternative: latent deliberation, implemented as a small recurrent refinement module that performs multiple internal "thinking" steps while keeping the external sequence length fixed. We introduce Recursive Latent Reinforcement Pretraining (RLRP), a training recipe that augments a base causal LLM with a shared latent head executed for …
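The sketch below illustrates the general idea of a recurrent latent refinement module of the kind the abstract describes: a small shared head that iteratively updates a causal LM's hidden states for a fixed number of internal steps without growing the token sequence. It is a minimal illustration under assumed design choices; the class and parameter names (`LatentRefinementHead`, `num_thinking_steps`) and the gated-residual update are illustrative, not taken from the paper.

```python
# Minimal sketch: latent deliberation as K internal refinement steps over
# hidden states, with the external sequence length left unchanged.
# The gating scheme and step count here are assumptions for illustration.
import torch
import torch.nn as nn


class LatentRefinementHead(nn.Module):
    """Shared head that iteratively refines hidden states in latent space."""

    def __init__(self, d_model: int, num_thinking_steps: int = 4):
        super().__init__()
        self.num_thinking_steps = num_thinking_steps
        # One small update network, reused (weight-shared) at every step.
        self.refine = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) hidden states from the base LM.
        for _ in range(self.num_thinking_steps):
            update = self.refine(h)
            g = torch.sigmoid(self.gate(torch.cat([h, update], dim=-1)))
            h = g * update + (1 - g) * h  # gated residual refinement
        return h  # same shape as input: no extra tokens are emitted


# Usage: refine hidden states before the LM's output projection.
head = LatentRefinementHead(d_model=768, num_thinking_steps=4)
hidden = torch.randn(2, 16, 768)
refined = head(hidden)
assert refined.shape == hidden.shape
```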
Revisiting Replay and Gradient Alignment for Continual Pre-Training of Large Language Models
Matthew D. Riemer
Tsuguchika Tabaru
Hiroaki Kingetsu
A. Chandar
Small Encoders Can Rival Large Decoders in Detecting Groundedness
Fernando Rodriguez
Alaa Boukhary
Adam Elwood
A. Chandar
Augmenting large language models (LLMs) with external context significantly improves their performance in natural language processing (NLP) tasks. However, LLMs struggle to answer queries reliably when the provided context lacks information, often resorting to ungrounded speculation or internal knowledge. Groundedness - generating responses strictly supported by the context - is essential for ensuring factual consistency and trustworthiness. This study focuses on detecting whether a given query is grounded in a document provided in context before the costly answer generation by LLMs. Such a detection mechanism can significantly reduce both inference time and resource consumption. We show that lightweight, task-specific encoder models such as RoBERTa and NomicBERT, fine-tuned on curated datasets, can achieve accuracy comparable to state-of-the-art LLMs, such as Llama 3 8B and GPT-4o, in groundedness detection while reducing inference latency by orders of magnitude. The code is available at: https://github.com/chandarlab/Hallucinate-less
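A minimal sketch of the detection setup this abstract describes, assuming a standard sequence-pair classification formulation with Hugging Face transformers: encode a (query, document) pair with a RoBERTa encoder and predict a binary grounded / not-grounded label before invoking a generator. The label convention and example strings below are assumptions for illustration; the paper's actual training code is in the linked repository.

```python
# Sketch: groundedness detection as (query, document) pair classification
# with a lightweight encoder, run before expensive LLM answer generation.
# Label mapping (0 = not grounded, 1 = grounded) is assumed here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

query = "When was the company founded?"
document = "The company reported record revenue in its latest quarterly filing."

# Encode the pair; the tokenizer handles the two segments jointly.
inputs = tokenizer(query, document, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
prob_grounded = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(grounded) = {prob_grounded:.3f}")  # untrained head: near chance
```

In this setup, the small encoder acts as a cheap gate: only queries classified as grounded are passed on to the large decoder, which is where the reported latency savings come from.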