
Cem (Yusuf) Subakan

Associate Academic Member
Assistant Professor, Université Laval, Department of Computer Science and Software Engineering
Affiliate Assistant Professor, Concordia University, Gina Cody School of Engineering and Computer Science
Research Topics
Multimodal Learning

Biography

Cem Subakan is an assistant professor in the Computer Science and Software Engineering Department at Université Laval, and an affiliate assistant professor in the Computer Science and Software Engineering Department at Concordia University. He is also an associate academic member of Mila – Quebec Artificial Intelligence Institute. After receiving his PhD in computer science from the University of Illinois at Urbana-Champaign (UIUC), Subakan completed a postdoc at Mila. He serves as a reviewer for many conferences, including NeurIPS, ICML, ICLR, ICASSP and MLSP, as well as for journals such as IEEE Signal Processing Letters and IEEE Transactions on Audio, Speech, and Language Processing. His principal research interest is machine learning for speech and audio. More specifically, he works on deep learning for source separation and speech enhancement under realistic conditions, neural network interpretability, continual learning, and multimodal learning.

Subakan was awarded the Best Student Paper Award at the 2017 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), and received a Saburo Muroga Fellowship from UIUC's Department of Computer Science. He is a core contributor to the SpeechBrain project, where he leads the speech separation component.

Current Students

Master's Research - Université Laval
PhD - Concordia University (principal supervisor)
Postdoctorate - Université Laval
PhD - Concordia University (principal supervisor)
PhD - Université Laval (co-supervisor)
Research Intern - Université de Montréal (co-supervisor)

Publications

Dynamic HumTrans: Humming Transcription Using CNNs and Dynamic Programming
Shubham Gupta
Isaac Neri Gomez-Sarmiento
Faez Amjed Mezdari
LMAC-TD: Producing Time Domain Explanations for Audio Classifiers
Eleonora Mancini
Francesco Paissan
Audio Editing with Non-Rigid Text Prompts
Francesco Paissan
Zhepei Wang
Paris Smaragdis
In this paper, we explore audio editing with non-rigid text edits. We show that the proposed editing pipeline is able to create audio edits that remain faithful to the input audio. We explore text prompts that perform addition, style transfer, and in-painting. We quantitatively and qualitatively show that the edits outperform Audio-LDM, a recently released text-prompted audio generation model. Qualitative inspection of the results indicates that the edits produced by our approach remain more faithful to the input audio in terms of preserving the original onsets and offsets of the audio events.
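
For illustration only (this is not the paper's evaluation protocol), the sketch below checks onset faithfulness between an input recording and an edited version using librosa's onset detector; the file names and matching tolerance are placeholders.

```python
# Illustrative sketch (not the paper's evaluation code): compare onset times
# between the original audio and an edited version to gauge how well the edit
# preserves the original event boundaries. File paths are placeholders.
import numpy as np
import librosa

def onset_times(path, sr=16000):
    """Return detected onset times (in seconds) for an audio file."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.onset.onset_detect(y=y, sr=sr, units="time")

def onset_faithfulness(orig_path, edited_path, tol=0.05):
    """Fraction of original onsets with a matching onset in the edit within
    `tol` seconds (a simple proxy, not an established metric)."""
    orig, edited = onset_times(orig_path), onset_times(edited_path)
    if len(orig) == 0:
        return 1.0
    matched = sum(np.min(np.abs(edited - t)) <= tol for t in orig if len(edited) > 0)
    return matched / len(orig)

print(onset_faithfulness("input.wav", "edited.wav"))
```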
Listenable Maps for Audio Classifiers
Open-Source Conversational AI with SpeechBrain 1.0
Titouan Parcollet
Adel Moumen
Sylvain de Langen
Peter William VanHarn Plantinga
Yingzhi Wang
Pooneh Mousavi
Luca Della Libera
Artem Ploujnikov
Francesco Paissan
Davide Borra
Salah Zaiem
Zeyu Zhao
Shucong Zhang
Georgios Karakasidis
Sung-Lin Yeh
Pierre Champion
Aku Rouhe
Rudolf Braun
Florian Mai
Juan Pablo Zuluaga
Seyed Mahed Mousavi
Andreas Nautsch
Xuechen Liu
Sangeet Sagar
Jarod Duret
Salima Mdhaffar
G. Laperriere
Renato de Mori
Yannick Estève
SpeechBrain is an open-source Conversational AI toolkit based on PyTorch, focused particularly on speech processing tasks such as speech recognition, speech enhancement, speaker recognition, text-to-speech, and much more. It promotes transparency and replicability by releasing both the pre-trained models and the complete "recipes" of code and algorithms required for training them. This paper presents SpeechBrain 1.0, a significant milestone in the evolution of the toolkit, which now has over 200 recipes for speech, audio, and language processing tasks, and more than 100 models available on Hugging Face. SpeechBrain 1.0 introduces new technologies to support diverse learning modalities, Large Language Model (LLM) integration, and advanced decoding strategies, along with novel models, tasks, and modalities. It also includes a new benchmark repository, offering researchers a unified platform for evaluating models across diverse tasks.
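
The pre-trained models mentioned above can be pulled directly from Hugging Face. A minimal sketch, following the usage shown on SpeechBrain's model cards (module paths and sample rates may differ between models and versions), with a placeholder mixture file:

```python
# Minimal sketch of loading a SpeechBrain pre-trained model from Hugging Face,
# following the usage shown on the project's model cards (classes and paths may
# vary between SpeechBrain versions). "mixture.wav" is a placeholder file.
import torchaudio
from speechbrain.inference.separation import SepformerSeparation

model = SepformerSeparation.from_hparams(
    source="speechbrain/sepformer-wsj02mix",          # pre-trained separation model
    savedir="pretrained_models/sepformer-wsj02mix",   # local cache directory
)

# Separate a two-speaker mixture; the result has shape [batch, time, n_sources].
est_sources = model.separate_file(path="mixture.wav")
torchaudio.save("source1.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2.wav", est_sources[:, :, 1].detach().cpu(), 8000)
```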
DASB -- Discrete Audio and Speech Benchmark
Pooneh Mousavi
Luca Della Libera
Jarod Duret
Artem Ploujnikov
Discrete audio tokens have recently gained considerable attention for their potential to connect audio and language processing, enabling the creation of modern multimodal large language models. Ideal audio tokens must effectively preserve phonetic and semantic content along with paralinguistic information, speaker identity, and other details. While several types of audio tokens have been recently proposed, identifying the optimal tokenizer for various tasks is challenging due to the inconsistent evaluation settings in existing studies. To address this gap, we release the Discrete Audio and Speech Benchmark (DASB), a comprehensive leaderboard for benchmarking discrete audio tokens across a wide range of discriminative tasks, including speech recognition, speaker identification and verification, emotion recognition, keyword spotting, and intent classification, as well as generative tasks such as speech enhancement, separation, and text-to-speech. Our results show that, on average, semantic tokens outperform compression tokens across most discriminative and generative tasks. However, the performance gap between semantic tokens and standard continuous representations remains substantial, highlighting the need for further research in this field.
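
As a rough illustration of the "semantic token" idea discussed above, the sketch below quantizes self-supervised HuBERT features with k-means; it relies on torchaudio's pretrained HuBERT bundle and scikit-learn, and is not DASB's actual tokenization code (layer choice, codebook size, and training data are arbitrary here).

```python
# Rough illustration of "semantic" audio tokens: quantize self-supervised
# features with k-means. Not DASB's tokenization code; the layer, number of
# clusters, and k-means training data are arbitrary placeholders.
import torch
import torchaudio
from sklearn.cluster import KMeans

bundle = torchaudio.pipelines.HUBERT_BASE
model = bundle.get_model().eval()

waveform, sr = torchaudio.load("speech.wav")                       # placeholder file
waveform = waveform.mean(dim=0, keepdim=True)                      # mono, [1, time]
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.no_grad():
    features, _ = model.extract_features(waveform)                 # per-layer features
    layer_feats = features[6].squeeze(0).numpy()                   # an intermediate layer

# In practice the codebook is trained on a large corpus; here we fit on one file,
# which must contain at least `n_clusters` frames.
kmeans = KMeans(n_clusters=50, n_init=10).fit(layer_feats)
tokens = kmeans.predict(layer_feats)                               # one discrete token per frame
print(tokens[:20])
```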
How Should We Extract Discrete Audio Tokens from Self-Supervised Models?
Pooneh Mousavi
Jarod Duret
Salah Zaiem
Luca Della Libera
Artem Ploujnikov
Phoneme Discretized Saliency Maps for Explainable Detection of AI-Generated Voice
Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan
Luca Della Libera
Interpreting the decisions of deep learning models, including audio classifiers, is crucial for ensuring the transparency and trustworthiness of this technology. In this paper, we introduce LMAC-ZS (Listenable Maps for Audio Classifiers in the Zero-Shot context), which, to the best of our knowledge, is the first decoder-based post-hoc interpretation method for explaining the decisions of zero-shot audio classifiers. The proposed method utilizes a novel loss function that maximizes the faithfulness to the original similarity between a given text-and-audio pair. We provide an extensive evaluation using the Contrastive Language-Audio Pretraining (CLAP) model to showcase that our interpreter remains faithful to the decisions in a zero-shot classification context. Moreover, we qualitatively show that our method produces meaningful explanations that correlate well with different text prompts.
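
The zero-shot setting that LMAC-ZS explains boils down to scoring an audio embedding against text-prompt embeddings in a shared space. The sketch below shows that logic with random tensors standing in for the outputs of a CLAP-style encoder pair; it is illustrative, not the paper's code.

```python
# Sketch of zero-shot audio classification by text-audio similarity, the setting
# that LMAC-ZS explains. The random tensors below are hypothetical stand-ins for
# a CLAP-style encoder pair; any model mapping both modalities to a shared space fits.
import torch
import torch.nn.functional as F

def zero_shot_classify(audio_emb: torch.Tensor, text_embs: torch.Tensor) -> torch.Tensor:
    """audio_emb: [D], text_embs: [num_classes, D]. Returns class probabilities."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    sims = text_embs @ audio_emb            # cosine similarities, shape [num_classes]
    return F.softmax(sims / 0.07, dim=-1)   # 0.07: a typical contrastive temperature

# Toy usage with random embeddings in place of real encoder outputs.
prompts = ["a dog barking", "rain falling", "a siren wailing"]
audio_emb = torch.randn(512)                 # would come from the audio encoder
text_embs = torch.randn(len(prompts), 512)   # would come from the text encoder
probs = zero_shot_classify(audio_emb, text_embs)
print(dict(zip(prompts, probs.tolist())))
```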
CryCeleb: A Speaker Verification Dataset Based on Infant Cry Sounds
David Budaghyan
Arsenii Gorin
Charles Onu
This paper describes the Ubenwa CryCeleb dataset - a labeled collection of infant cries - and the accompanying CryCeleb 2023 task, a public speaker verification challenge based on cry sounds. We released more than 6 hours of manually segmented cry sounds from 786 newborns for academic use, aiming to encourage research in infant cry analysis. The inaugural public competition attracted 59 participants, 11 of whom improved the baseline performance. The top-performing system achieved a significant improvement, scoring a 25.8% equal error rate, which is still far from the performance of state-of-the-art adult speaker verification systems. Therefore, we believe there is room for further research on this dataset, potentially extending beyond the verification task.
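
The equal error rate quoted above is the operating point where false-acceptance and false-rejection rates coincide. A minimal sketch of computing it from verification scores with scikit-learn's ROC utilities (toy scores, not the challenge's official scoring script):

```python
# Minimal sketch of computing the equal error rate (EER) used to score speaker
# (or cry) verification systems. Scores and labels are toy values; this is not
# the CryCeleb challenge's official scoring script.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """labels: 1 for same-identity pairs, 0 otherwise; scores: similarity scores."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # point where FAR and FRR are closest
    return (fpr[idx] + fnr[idx]) / 2

labels = np.array([1, 1, 0, 0, 1, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.55, 0.3, 0.6, 0.7, 0.85, 0.2])
print(f"EER: {equal_error_rate(labels, scores):.3f}")
```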
Focal Modulation Networks for Interpretable Sound Classification
The increasing success of deep neural networks has raised concerns about their inherent black-box nature, posing challenges related to interpretability and trust. While there has been extensive exploration of interpretation techniques in vision and language, interpretability in the audio domain has received limited attention, primarily focusing on post-hoc explanations. This paper addresses the problem of interpretability by-design in the audio domain by utilizing the recently proposed attention-free focal modulation networks (FocalNets). We apply FocalNets to the task of environmental sound classification for the first time and evaluate their interpretability properties on the popular ESC-50 dataset. Our method outperforms a similarly sized vision transformer both in terms of accuracy and interpretability. Furthermore, it is competitive against PIQ, a method specifically designed for post-hoc interpretation in the audio domain.
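
A simplified sketch of the focal modulation operation underlying FocalNets (gated, hierarchically aggregated convolutional context that multiplies a query projection) is shown below; it follows the original formulation only loosely, operates on 1-D sequences, and is not the architecture used in the paper.

```python
# Simplified 1-D focal modulation layer, loosely following the FocalNets idea of
# replacing attention with gated, hierarchically aggregated convolutional context.
# An illustrative sketch, not the model used in the paper.
import torch
import torch.nn as nn

class FocalModulation1d(nn.Module):
    def __init__(self, dim, focal_levels=2, kernel_size=3):
        super().__init__()
        self.focal_levels = focal_levels
        # One projection produces the query, the context, and per-level gates.
        self.proj_in = nn.Linear(dim, 2 * dim + focal_levels + 1)
        self.context_convs = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)
            for _ in range(focal_levels)
        ])
        self.to_modulator = nn.Conv1d(dim, dim, 1)
        self.proj_out = nn.Linear(dim, dim)
        self.act = nn.GELU()

    def forward(self, x):                       # x: [batch, length, dim]
        q, ctx, gates = torch.split(
            self.proj_in(x), [x.size(-1), x.size(-1), self.focal_levels + 1], dim=-1
        )
        ctx = ctx.transpose(1, 2)               # [batch, dim, length] for conv layers
        gates = gates.transpose(1, 2)           # [batch, levels + 1, length]
        ctx_all = 0
        for level, conv in enumerate(self.context_convs):
            ctx = self.act(conv(ctx))           # progressively larger receptive field
            ctx_all = ctx_all + ctx * gates[:, level : level + 1]
        # Global (sequence-wide) context as the last level.
        global_ctx = self.act(ctx.mean(dim=-1, keepdim=True))
        ctx_all = ctx_all + global_ctx * gates[:, self.focal_levels :]
        modulator = self.to_modulator(ctx_all)  # [batch, dim, length]
        return self.proj_out(q * modulator.transpose(1, 2))

out = FocalModulation1d(dim=64)(torch.randn(2, 100, 64))
print(out.shape)                                # torch.Size([2, 100, 64])
```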
Resource-Efficient Separation Transformer
Luca Della Libera
Samuele Cornell
Frédéric Lepoutre
François Grondin
Transformers have recently achieved state-of-the-art performance in speech separation. These models, however, are computationally demanding and require a large number of learnable parameters. This paper explores Transformer-based speech separation with a reduced computational cost. Our main contribution is the development of the Resource-Efficient Separation Transformer (RE-SepFormer), a self-attention-based architecture that reduces the computational burden in two ways. First, it uses non-overlapping blocks in the latent space. Second, it operates on compact latent summaries calculated from each chunk. The RE-SepFormer reaches a competitive performance on the popular WSJ0-2Mix and WHAM! datasets in both causal and non-causal settings. Remarkably, it scales significantly better than the previous Transformer-based architectures in terms of memory and inference time, making it more suitable for processing long mixtures.
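
The two cost-saving ideas described above can be shown schematically: attention restricted to non-overlapping chunks, followed by attention over one compact summary per chunk. The sketch below captures that structure with standard PyTorch attention modules and is not the RE-SepFormer implementation.

```python
# Schematic sketch of the two cost-saving ideas described above: self-attention
# within non-overlapping chunks, plus a second attention pass over one compact
# summary per chunk. An illustration, not the RE-SepFormer implementation.
import torch
import torch.nn as nn

class ChunkSummaryAttention(nn.Module):
    def __init__(self, dim=64, chunk_size=50, num_heads=4):
        super().__init__()
        self.chunk_size = chunk_size
        self.intra = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                             # x: [batch, time, dim]
        b, t, d = x.shape
        pad = (-t) % self.chunk_size                  # pad so time splits evenly
        x = nn.functional.pad(x, (0, 0, 0, pad))
        n_chunks = x.size(1) // self.chunk_size
        chunks = x.reshape(b * n_chunks, self.chunk_size, d)

        # 1) Attention restricted to non-overlapping chunks (short sequences, cheap).
        chunks, _ = self.intra(chunks, chunks, chunks)

        # 2) Attention across chunks, but only over one mean summary per chunk.
        summaries = chunks.reshape(b, n_chunks, self.chunk_size, d).mean(dim=2)
        summaries, _ = self.inter(summaries, summaries, summaries)

        # Broadcast the refined summaries back to every frame of their chunk.
        out = chunks.reshape(b, n_chunks, self.chunk_size, d) + summaries.unsqueeze(2)
        return out.reshape(b, n_chunks * self.chunk_size, d)[:, :t]

y = ChunkSummaryAttention()(torch.randn(2, 480, 64))
print(y.shape)                                        # torch.Size([2, 480, 64])
```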