
Adel Nabli

PhD - Concordia University
Supervisor
Research Topics
Deep Learning
Distributed Systems
Optimization

Publications

WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average
Louis Fournier
Masih Aminbeidokhti
Edouard Oyallon
The performance of deep neural networks is enhanced by ensemble methods, which average the output of several models. However, this comes at an increased cost at inference. Weight averaging methods aim at balancing the generalization of ensembling and the inference speed of a single model by averaging the parameters of an ensemble of models. Yet, naive averaging results in poor performance as models converge to different loss basins, and aligning the models to improve the performance of the average is challenging. Alternatively, inspired by distributed training, methods like DART and PAPA have been proposed to train several models in parallel such that they will end up in the same basin, resulting in good averaging accuracy. However, these methods either compromise ensembling accuracy or demand significant communication between models during training. In this paper, we introduce WASH, a novel distributed method for training model ensembles for weight averaging that achieves state-of-the-art image classification accuracy. WASH maintains models within the same basin by randomly shuffling a small percentage of weights during training, resulting in diverse models and lower communication costs compared to standard parameter averaging methods.
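
The shuffling step the abstract describes can be pictured with a short, hedged sketch. The following single-process PyTorch snippet is illustrative only: the function names shuffle_weights and average_into, the shuffle_rate argument, and holding all replicas in one process are assumptions made for clarity, not the paper's actual distributed implementation.

import torch

@torch.no_grad()
def shuffle_weights(models, shuffle_rate=0.01):
    # Illustrative sketch, not the WASH reference code: randomly permute a
    # small fraction of corresponding parameter entries across the ensemble
    # so the replicas stay in the same loss basin.
    n = len(models)
    for layer_params in zip(*(m.parameters() for m in models)):
        # Pick which entries of this layer get shuffled at this step.
        mask = torch.rand_like(layer_params[0]) < shuffle_rate
        k = int(mask.sum())
        if k == 0:
            continue
        stacked = torch.stack([p[mask] for p in layer_params])  # shape (n, k)
        # One independent permutation of the n models per selected entry.
        perm = torch.stack(
            [torch.randperm(n, device=stacked.device) for _ in range(k)], dim=1)
        shuffled = torch.gather(stacked, 0, perm)
        for i, p in enumerate(layer_params):
            p[mask] = shuffled[i]

@torch.no_grad()
def average_into(target, models):
    # Collapse the trained ensemble into a single model by parameter averaging.
    for tp, *ps in zip(target.parameters(), *(m.parameters() for m in models)):
        tp.copy_(torch.stack(list(ps)).mean(dim=0))

Shuffling individual parameter entries across replicas, rather than averaging them, keeps the models diverse while preventing them from drifting into different loss basins, so the final parameter average remains accurate.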
ACCO: Accumulate while you Communicate, Hiding Communications in Distributed LLM Training
Louis Fournier
Pierre Erbacher
Louis Serrano
Edouard Oyallon
Training Large Language Models (LLMs) relies heavily on distributed implementations, employing multiple GPUs to compute stochastic gradients on model replicas in parallel. However, synchronizing gradients in data parallel settings induces a communication overhead increasing with the number of distributed workers, which can impede the efficiency gains of parallelization. To address this challenge, optimization algorithms reducing inter-worker communication have emerged, such as local optimization methods used in Federated Learning. While effective in minimizing communication overhead, these methods incur significant memory costs, hindering scalability: in addition to extra momentum variables, if communications are only allowed between multiple local optimization steps, then the optimizer's states cannot be sharded among workers. In response, we propose ACCO (Accumulate while you Communicate), a memory-efficient optimization algorithm that hides communications behind gradient computations in distributed LLM training while keeping optimizer states sharded among workers.
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training
Louis Fournier
Pierre Erbacher
Louis Serrano
Edouard Oyallon
Training LLMs relies on distributed implementations using multiple GPUs to compute gradients in parallel with sharded optimizers. However, synchronizing gradients in data parallel setups introduces communication overhead that grows with the number of workers, limiting parallelization efficiency. Local optimization algorithms reduce communications but incur high memory costs as they prevent optimizer state sharding, hindering scalability. To address this, we propose ACcumulate while COmmunicate (ACCO), a memory-efficient optimization algorithm for distributed LLM training. By synchronizing delayed gradients while computing new ones, ACCO reduces GPU idle time and supports heterogeneous hardware. To mitigate the convergence issues caused by delayed updates, we introduce a novel technique ensuring training dynamics align with standard distributed optimization. Compared to ZeRO-1, our approach is significantly faster and scales effectively across heterogeneous hardware.
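
The overlap the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration of the accumulate-while-you-communicate pattern built on torch.distributed asynchronous all-reduce; the train function and its arguments are placeholders, and the sketch omits the paper's delay-correction technique and optimizer-state sharding, both of which are central to ACCO.

import torch
import torch.distributed as dist

def train(model, optimizer, loss_fn, data_iter):
    # Illustrative sketch only. Assumes dist.init_process_group(...) was
    # called and that every parameter receives a gradient on each step.
    world = dist.get_world_size()
    pending = None  # (gradients, all-reduce handles) from the previous step
    for inputs, targets in data_iter:
        # Compute this step's gradients; the previous step's all-reduce,
        # if any, keeps running in the background during this pass.
        optimizer.zero_grad(set_to_none=True)
        loss_fn(model(inputs), targets).backward()
        grads = [p.grad.clone() for p in model.parameters()]

        if pending is not None:
            prev_grads, handles = pending
            for h in handles:
                h.wait()  # by now the communication has largely been hidden
            for p, g in zip(model.parameters(), prev_grads):
                p.grad = g / world  # apply the delayed, averaged gradients
            optimizer.step()

        # Start averaging the fresh gradients asynchronously.
        handles = [dist.all_reduce(g, op=dist.ReduceOp.SUM, async_op=True)
                   for g in grads]
        pending = (grads, handles)

Because the averaged gradients are applied one step after they were computed, the parameters have already moved by the time they are used; this is the delayed-update issue mentioned above, which the paper's correction technique is designed to address.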
A2CiD2: Accelerating Asynchronous Communication in Decentralized Deep Learning
Edouard Oyallon