Portrait of Cem Subakan

Cem Subakan

Associate Academic Member
Assistant Professor, Université Laval, Department of Computer Science and Software Engineering
Adjunct Professor, Concordia University, Gina Cody School of Engineering and Computer Science
Research Topics
Multimodal Learning

Biography

Cem Subakan is an Assistant Professor at Université Laval in the Department of Computer Science and Software Engineering. He is also an Affiliate Assistant Professor in the Department of Computer Science and Software Engineering at Concordia University, as well as an Associate Academic Member of Mila – Quebec Artificial Intelligence Institute. He obtained his PhD in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) and completed a postdoc at Mila. He serves as a reviewer for several conferences, including NeurIPS, ICML, ICLR, ICASSP, and MLSP, as well as for journals such as IEEE Signal Processing Letters (SPL) and IEEE Transactions on Audio, Speech, and Language Processing (TASL). His research focuses on machine learning for speech and audio. More specifically, he works on deep learning for source separation and speech enhancement under realistic conditions, interpretability of neural networks, continual learning, and multimodal learning. He received the Best Student Paper Award at the IEEE Machine Learning for Signal Processing (MLSP) conference in 2017, as well as the Sabura Muroga Fellowship from the Department of Computer Science at UIUC. He is also a core contributor to the SpeechBrain project, where he leads the speech separation component.

Current Students

Research Master's - Université Laval
PhD - Concordia
Principal supervisor:
Postdoctorate - Université Laval
PhD - Concordia
Principal supervisor:
PhD - Université Laval
Co-supervisor:
Alumni collaborator - UdeM
Co-supervisor:
Research Master's - Université Laval

Publications

Discrete Audio Tokens: More Than a Survey!
Pooneh Mousavi
Gallil Maimon
Adel Moumen
Darius Petermann
Jiatong Shi
Haibin Wu
Haici Yang
Anastasia Kuznetsova
Artem Ploujnikov
Ricard Marxer
Bhuvana Ramabhadran
Benjamin Elizalde
Loren Lugosch
Jinyu Li
Phil Woodland
Minje Kim
Hung-yi Lee
Shinji Watanabe
Yossi Adi … (see 1 more)
Discrete audio tokens are compact representations that aim to preserve perceptual quality, phonetic content, and speaker characteristics while enabling efficient storage and inference, as well as competitive performance across diverse downstream tasks. They provide a practical alternative to continuous features, enabling the integration of speech and audio into modern large language models (LLMs). As interest in token-based audio processing grows, various tokenization methods have emerged, and several surveys have reviewed the latest progress in the field. However, existing studies often focus on specific domains or tasks and lack a unified comparison across various benchmarks. This paper presents a systematic review and benchmark of discrete audio tokenizers, covering three domains: speech, music, and general audio. We propose a taxonomy of tokenization approaches based on encoder-decoder, quantization techniques, training paradigm, streamability, and application domains. We evaluate tokenizers on multiple benchmarks for reconstruction, downstream performance, and acoustic language modeling, and analyze trade-offs through controlled ablation studies. Our findings highlight key limitations, practical considerations, and open challenges, providing insight and guidance for future research in this rapidly evolving area. For more information, including our main results and tokenizer database, please refer to our website: https://poonehmousavi.github.io/dates-website/.
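To make the notion of a discrete audio token concrete, here is a minimal, illustrative sketch of the quantization step at the heart of codec-style tokenizers: continuous encoder frames are snapped to their nearest entries in a learned codebook, and the resulting integer indices serve as the tokens. The shapes, codebook size, and function name below are assumptions for illustration and do not reproduce any specific tokenizer covered by the survey.

import torch

def quantize(frames: torch.Tensor, codebook: torch.Tensor):
    """frames: (T, D) continuous encoder features; codebook: (K, D) learned entries.
    Returns (tokens, quantized): integer token indices in [0, K) and their embeddings."""
    dists = torch.cdist(frames, codebook)   # (T, K) Euclidean distances to each codebook entry
    tokens = dists.argmin(dim=-1)           # (T,) discrete audio tokens
    quantized = codebook[tokens]            # (T, D) vectors passed to the decoder for reconstruction
    return tokens, quantized

torch.manual_seed(0)
frames = torch.randn(50, 64)                # e.g. 50 encoder frames of dimension 64
codebook = torch.randn(1024, 64)            # hypothetical 1024-entry codebook
tokens, quantized = quantize(frames, codebook)
print(tokens.shape, tokens.dtype)           # torch.Size([50]) torch.int64

Tokenizers in the survey differ mainly in how such codebooks are learned and combined (for example, residual or multi-codebook quantization), which is what the proposed taxonomy organizes.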
LiSTEN: Learning Soft Token Embeddings for Neural Audio LLMs
Pooneh Mousavi
Shubham Gupta
Foundation models based on large language models (LLMs) have shown great success in handling various tasks and modalities. However, adapting these models for general-purpose audio-language tasks is challenging due to differences in acoustic environments and task variations. In this work, we introduce LiSTEN (Learning Soft Token Embeddings for Neural Audio LLMs), a framework for adapting LLMs to speech and audio tasks. LiSTEN uses a dynamic prompt selection strategy with learnable key-value pairs, allowing the model to balance general and task-specific knowledge while avoiding overfitting in a multitask setting. Our approach reduces dependence on large-scale ASR or captioning datasets, achieves competitive performance with fewer trainable parameters, and simplifies training by using a single-stage process. Additionally, LiSTEN enhances interpretability by analyzing the diversity and overlap of selected prompts across different tasks.
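As a rough illustration of the dynamic prompt selection described above, the sketch below keeps a pool of learnable key-value pairs, matches a pooled audio representation against the keys, and prepends the top-scoring soft prompts to the LLM input. All names, shapes, and the pool size are assumptions; this is not LiSTEN's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """A pool of learnable (key, soft-prompt) pairs with similarity-based selection."""
    def __init__(self, num_prompts=20, prompt_len=5, dim=768, top_k=4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_prompts, dim))                 # learnable keys
        self.values = nn.Parameter(torch.randn(num_prompts, prompt_len, dim))   # learnable soft prompts
        self.top_k = top_k

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, dim) pooled audio representation of the current input
        sims = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)  # (B, N)
        idx = sims.topk(self.top_k, dim=-1).indices                                     # (B, k)
        prompts = self.values[idx]                                                      # (B, k, L, dim)
        return prompts.flatten(1, 2)    # (B, k*L, dim), prepended to the LLM input embeddings

pool = PromptPool()
audio_query = torch.randn(2, 768)       # hypothetical pooled audio features for a batch of 2
soft_tokens = pool(audio_query)
print(soft_tokens.shape)                # torch.Size([2, 20, 768])

Training only the keys and soft prompts while keeping the LLM frozen is one way such a scheme keeps the number of trainable parameters small, consistent with the single-stage, parameter-efficient training described in the abstract.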
ALAS: Measuring Latent Speech-Text Alignment For Spoken Language Understanding In Multimodal LLMs
Pooneh Mousavi
Yingzhi Wang
Investigating the Effectiveness of Explainability Methods in Parkinson's Detection from Speech
Eleonora Mancini
Francesco Paissan
Paolo Torroni
Speech impairments in Parkinson's disease (PD) provide significant early indicators for diagnosis. While models for speech-based PD detection have shown strong performance, their interpretability remains underexplored. This study systematically evaluates several explainability methods to identify PD-specific speech features, aiming to support the development of accurate, interpretable models for clinical decision-making in PD diagnosis and monitoring. Our methodology involves (i) obtaining attributions and saliency maps using mainstream interpretability techniques, (ii) quantitatively evaluating the faithfulness of these maps and their combinations obtained via union and intersection through a range of established metrics, and (iii) assessing the information conveyed by the saliency maps for PD detection from an auxiliary classifier. Our results reveal that, while explanations are aligned with the classifier, they often fail to provide valuable information for domain experts.
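One standard way to quantify the faithfulness of a saliency map, of the kind evaluated in this study, is a deletion-style test: occlude the most-attributed input bins and measure the drop in the classifier's confidence. The sketch below is a generic illustration with hypothetical names and a stand-in classifier, not the paper's evaluation code.

import torch

def deletion_faithfulness(model, x, saliency, target, fraction=0.2):
    """x: (1, D) input features, saliency: (1, D) attribution map, target: class index.
    Returns the confidence drop after zeroing the top-attributed fraction of bins."""
    with torch.no_grad():
        base = torch.softmax(model(x), dim=-1)[0, target]
        k = int(fraction * x.numel())
        top = saliency.flatten().topk(k).indices      # bins the explanation marks as most important
        x_del = x.clone().flatten()
        x_del[top] = 0.0                              # occlude them
        degraded = torch.softmax(model(x_del.view_as(x)), dim=-1)[0, target]
    return (base - degraded).item()   # larger drop => the explanation matches what the model uses

model = torch.nn.Linear(100, 2)       # stand-in classifier (e.g. PD vs. healthy)
x = torch.randn(1, 100)               # toy acoustic feature vector
saliency = x.abs()                    # toy attribution map
print(deletion_faithfulness(model, x, saliency, target=0))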
LMAC-TD: Producing Time Domain Explanations for Audio Classifiers
Eleonora Mancini
Francesco Paissan
Sample Compression for Continual Learning
Jacob Comeau
Mathieu Bazinet
Sample Compression for Self Certified Continual Learning
Jacob Comeau
Mathieu Bazinet
Continual learning algorithms aim to learn from a sequence of tasks, making the training distribution non-stationary. The majority of existing continual learning approaches in the literature rely on heuristics and do not provide learning guarantees. In this paper, we present a new method called Continual Pick-to-Learn (CoP2L), which is able to retain the most representative samples for each task in an efficient way. CoP2L combines the Pick-to-Learn algorithm (rooted in the sample compression theory) and the experience replay continual learning scheme. This allows us to provide non-vacuous upper bounds on the generalization loss of the learned predictors, numerically computable after each task. We empirically evaluate our approach on several standard continual learning benchmarks across Class-Incremental, Task-Incremental, and Domain-Incremental settings. Our results show that CoP2L is highly competitive across all setups, often outperforming existing baselines, and significantly mitigating catastrophic forgetting compared to vanilla experience replay in the Class-Incremental setting. It is possible to leverage the bounds provided by CoP2L in practical scenarios to certify the predictor reliability on previously learned tasks, in order to improve the trustworthiness of the continual learning algorithm.
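For intuition, the sketch below shows the experience-replay side of such a scheme: only a small set of retained samples per task is stored and mixed back into later training batches. The selection rule here is a random placeholder; CoP2L instead retains the compression set chosen by the Pick-to-Learn algorithm, which is what enables its generalization bounds. All names and sizes are illustrative.

import random
import torch

class CompressedReplayBuffer:
    """Keeps a small per-task sample set and serves replay batches for later tasks."""
    def __init__(self, per_task: int = 50):
        self.per_task = per_task
        self.store = {}  # task_id -> list of (x, y) pairs

    def add_task(self, task_id, dataset):
        # Placeholder selection: a random subset. CoP2L would instead keep the
        # compression set selected by Pick-to-Learn for this task.
        samples = list(dataset)
        self.store[task_id] = random.sample(samples, k=min(self.per_task, len(samples)))

    def sample(self, batch_size: int):
        pool = [pair for samples in self.store.values() for pair in samples]
        batch = random.sample(pool, k=min(batch_size, len(pool)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)

buffer = CompressedReplayBuffer(per_task=10)
task0 = [(torch.randn(8), 0) for _ in range(100)]   # toy data for the first task
buffer.add_task(0, task0)
x_replay, y_replay = buffer.sample(4)               # mixed with the current task's batch during training
print(x_replay.shape, y_replay.shape)               # torch.Size([4, 8]) torch.Size([4])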
ReTreever: Tree-based Coarse-to-Fine Representations for Retrieval
Shubham Gupta
Zichao Li
Tianyi Chen
Perouz Taslakian
Valentina Zantedeschi