Portrait of Anna (Cheng-Zhi) Huang

Anna (Cheng-Zhi) Huang

Affiliate Member
Canada CIFAR AI Chair
Assistant Professor, Massachusetts Institute of Technology (MIT), Department of Electrical Engineering and Computer Science
Associate Professor, Université de Montréal, Department of Computer Science and Operations Research (DIRO)
Research Scientist, Google Brain
Research Topics
Generative Models

Biography

Anna Huang is an Assistant Professor at the Massachusetts Institute of Technology (MIT), jointly appointed in Electrical Engineering and Computer Science and in Music and Theater Arts. She is also an Associate Professor in the Department of Computer Science and Operations Research (DIRO) at the Université de Montréal, and a Research Scientist at Google DeepMind, where she works on the Magenta project.

Her research focuses on designing generative models and interfaces to support music making and, more broadly, the creative process. Her work sits at the intersection of machine learning, human-computer interaction, and music. She is the creator of Music Transformer and Coconet. Coconet is the machine learning model behind Google's first AI Doodle, the Bach Doodle, which in two days harmonized 55 million melodies from users around the world. In recent years, she has served as an organizer and judge for the international AI Song Contest and as a guest editor for the TISMIR special issue on AI and musical creativity. She holds a PhD from Harvard University, a master's degree from the MIT Media Lab, and a dual bachelor's degree in computer science and music composition from the University of Southern California.

Current Students

Research Master's - UdeM
Co-supervisor:
PhD - UdeM
Co-supervisor:

Publications

Adaptive Accompaniment with ReaLchords
Yusong Wu
Tim Cooijmans
Kyle Kastner
Adam Roberts
Ian Simon
Alexander Scarlatos
Chris Donahue
Cassie Tarakajian
Shayegan Omidshafiei
Natasha Jaques
Jamming requires coordination, anticipation, and collaborative creativity between musicians. Current generative models of music produce expressive output but are not able to generate in an online manner, meaning simultaneously with other musicians (human or otherwise). We propose ReaLchords, an online generative model for improvising chord accompaniment to user melody. We start with an online model pretrained by maximum likelihood, and use reinforcement learning to finetune the model for online use. The finetuning objective leverages both a novel reward model that provides feedback on both harmonic and temporal coherency between melody and chord, and a divergence term that implements a novel type of distillation from a teacher model that can see the future melody. Through quantitative experiments and listening tests, we demonstrate that the resulting model adapts well to unfamiliar input and produces fitting accompaniment. ReaLchords opens the door to live jamming, as well as simultaneous co-creation in other modalities.
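The finetuning recipe described in the abstract, a reward-model term combined with a divergence penalty toward a teacher that sees the future melody, can be sketched roughly as follows. This is a minimal NumPy illustration: `realchords_style_loss`, its argument shapes, and the 0.1 weighting are hypothetical, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def realchords_style_loss(logits_online, logits_teacher,
                          log_prob_action, reward, kl_weight=0.1):
    """Illustrative RL finetuning objective: a policy-gradient term driven
    by a reward model, plus a KL term distilling from a teacher model that
    can see the future melody. Names, shapes, and weights are hypothetical."""
    # Policy gradient: raise log-probability of actions the reward model likes
    pg = -np.mean(reward * log_prob_action)
    # KL(online || teacher): keep the online model close to the teacher
    p = softmax(logits_online)
    q = softmax(logits_teacher)
    kl = np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))
    return pg + kl_weight * kl
```

In practice the reward and log-probabilities would come from the reward model and the online policy over chord tokens; here they are plain arrays so the combined objective is easy to inspect.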
Grammar Generative Models for Music Notation
Deep generative models have been successfully applied in many learning experiments with digital data, such as images or audio. In the field of music, they can also be used to generate symbolic representations, in the context of problems such as automatic music generation or transcription [1-3]. A significant challenge for generating structured symbolic data in general is obtaining well-formed results. This is especially true in the case of music. It is indeed widely accepted that musical notation represents, well beyond simple sequences of notes, a hierarchical organization of melodic and harmonic information, inducing non-local dependencies between musical objects [4]. A good representation of this information is essential for the interpretation and analysis of music pieces.
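As a toy illustration of how a grammar can guarantee well-formed symbolic output, the following sketch expands a bar into note durations by recursively splitting. The `expand` helper and its rule probability are invented for illustration and are not the paper's model; the point is only that every derivation of the grammar sums to a full bar by construction.

```python
import random

def expand(duration, split_prob=0.4, min_dur=0.25):
    """Toy probabilistic grammar for rhythm: a duration is either emitted
    as a note or split into two equal halves (a binary subdivision rule)."""
    if duration > min_dur and random.random() < split_prob:
        half = duration / 2
        return expand(half, split_prob, min_dur) + expand(half, split_prob, min_dur)
    return [duration]

random.seed(1)
bar = expand(4.0)                  # one 4/4 bar expanded into note durations
assert abs(sum(bar) - 4.0) < 1e-9  # every derivation yields a well-formed bar
```

A sequence model with no such constraint can emit durations that overflow or underfill the bar; the grammar makes well-formedness structural rather than learned.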
Improving Source Separation by Explicitly Modeling Dependencies between Sources
Ethan Manilow
Curtis Hawthorne
Bryan Pardo
Jesse Engel
We propose a new method for training a supervised source separation system that aims to learn the interdependent relationships between all combinations of sources in a mixture. Rather than independently estimating each source from a mix, we reframe the source separation problem as an Orderless Neural Autoregressive Density Estimator (NADE), and estimate each source from both the mix and a random subset of the other sources. We adapt a standard source separation architecture, Demucs, with additional inputs for each individual source, in addition to the input mixture. We randomly mask these input sources during training so that the network learns the conditional dependencies between the sources. By pairing this training method with a blocked Gibbs sampling procedure at inference time, we demonstrate that the network can iteratively improve its separation performance by conditioning a source estimate on its earlier source estimates. Experiments on two source separation datasets show that training a Demucs model with an Orderless NADE approach and using Gibbs sampling (up to 512 steps) at inference time strongly outperforms a Demucs baseline that uses a standard regression loss and direct (one step) estimation of sources.
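The orderless-NADE masking at training time and the blocked Gibbs refinement at inference time might be sketched as follows. The function names, the dummy `model` interface, and the 50% block rate are assumptions made for illustration; they are not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_input(mix, sources):
    """Orderless-NADE-style masking: condition on the mix plus a random
    subset of the true sources; the masked-out sources become targets."""
    keep = rng.random(len(sources)) < rng.random()  # random subset, random rate
    conditioned = [s if k else np.zeros_like(s) for s, k in zip(sources, keep)]
    targets = [s for s, k in zip(sources, keep) if not k]
    return mix, conditioned, targets

def gibbs_refine(mix, estimates, model, steps=8):
    """Blocked Gibbs sampling at inference: repeatedly re-estimate a random
    block of sources, conditioning on the current estimates of the others."""
    estimates = list(estimates)
    for _ in range(steps):
        block = rng.random(len(estimates)) < 0.5  # sources to resample
        cond = [np.zeros_like(e) if b else e for e, b in zip(estimates, block)]
        new = model(mix, cond)                    # hypothetical separator call
        for i, b in enumerate(block):
            if b:
                estimates[i] = new[i]
    return estimates
```

Because each Gibbs step conditions the resampled sources on the others' current estimates, the loop can keep improving a separation that a single forward pass gets only approximately right.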