
Pouya Bashivan

Associate Academic Member
Assistant Professor, McGill University, Department of Physiology
Research Topics
Computational Neuroscience

Biography

Pouya Bashivan is an Assistant Professor in the Department of Physiology and a member of the Integrated Program in Neuroscience at McGill University, as well as an Associate Member of Mila – Quebec Artificial Intelligence Institute. Before joining McGill University, he was a postdoctoral fellow at Mila, working with Irina Rish and Blake Richards. Prior to that, he was a postdoctoral researcher in the Department of Brain and Cognitive Sciences and at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology (MIT), where he worked with Professor James DiCarlo. He received his PhD in computer engineering from the University of Memphis in 2016, after earning bachelor's and master's degrees in electrical and control engineering from KNT University (Tehran, Iran).

The goal of research in his laboratory is to develop neural network models that leverage memory to solve complex tasks. While we often rely on task-performance measures to find improved neural network models and learning algorithms, we also use neural and behavioral measurements from the brains of humans and other animals to evaluate how similar these models are to biologically evolved brains. We believe these additional constraints could accelerate progress toward engineering a human-level artificially intelligent agent.

Current Students

Master's Research - UdeM
Principal supervisor:
Master's Research - McGill
Master's Research - McGill
Research Intern - McGill
PhD - McGill
PhD - McGill
Co-supervisor:

Publications

Adversarial Feature Desensitization
Reza Bayat
Adam Ibrahim
Kartik Ahuja
Mojtaba Faramarzi
Touraj Laleh
Neural networks are known to be vulnerable to adversarial attacks -- slight but carefully constructed perturbations of the inputs which can drastically impair the network's performance. Many defense methods have been proposed for improving robustness of deep networks by training them on adversarially perturbed inputs. However, these models often remain vulnerable to new types of attacks not seen during training, and even to slightly stronger versions of previously seen attacks. In this work, we propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field. Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs. This is achieved through a game where we learn features that are both predictive and robust (insensitive to adversarial attacks), i.e. cannot be used to discriminate between natural and adversarial data. Empirical results on several benchmarks demonstrate the effectiveness of the proposed approach against a wide range of attack types and attack strengths. Our code is available at https://github.com/BashivanLab/afd.
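The abstract describes an adversarial game in which a feature extractor is trained so that a discriminator cannot tell natural from adversarial features. The PyTorch sketch below illustrates one such training step under stated assumptions: the module names, the binary cross-entropy formulation of the game, and the weighting term `lam` are illustrative choices, not the released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def afd_style_step(features, classifier, discriminator, x, x_adv, y,
                   opt_task, opt_disc, lam=1.0):
    """One hypothetical AFD-style step: keep features predictive while making
    them indistinguishable between natural and adversarial inputs."""
    # 1) Train the discriminator to separate natural vs. adversarial features.
    z_nat, z_adv = features(x).detach(), features(x_adv).detach()
    d_nat, d_adv = discriminator(z_nat), discriminator(z_adv)
    d_loss = (F.binary_cross_entropy_with_logits(d_nat, torch.ones_like(d_nat))
              + F.binary_cross_entropy_with_logits(d_adv, torch.zeros_like(d_adv)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) Train the feature extractor + classifier: stay predictive on both
    #    natural and adversarial inputs, and fool the discriminator so that
    #    adversarial features look "natural" (the desensitization objective).
    z_nat, z_adv = features(x), features(x_adv)
    task_loss = (F.cross_entropy(classifier(z_nat), y)
                 + F.cross_entropy(classifier(z_adv), y))
    d_out = discriminator(z_adv)
    fool_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    opt_task.zero_grad()
    (task_loss + lam * fool_loss).backward()
    opt_task.step()
    return d_loss.item(), task_loss.item()
```

Here `opt_task` is assumed to update only the feature extractor and classifier parameters, while `opt_disc` updates only the discriminator, so the two players optimize opposing objectives.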
Continual Learning with Self-Organizing Maps
Martin Schrimpf
Robert Ajemian
Matthew D Riemer
Yuhai Tu
Despite remarkable successes achieved by modern neural networks in a wide range of applications, these networks perform best in domain-specific stationary environments where they are trained only once on large-scale controlled data repositories. When exposed to non-stationary learning environments, current neural networks tend to forget what they had previously learned, a phenomenon known as catastrophic forgetting. Most previous approaches to this problem rely on memory replay buffers which store samples from previously learned tasks, and use them to regularize the learning on new ones. This approach suffers from the important disadvantage of not scaling well to real-life problems in which the memory requirements become enormous. We propose a memoryless method that combines standard supervised neural networks with self-organizing maps to solve the continual learning problem. The role of the self-organizing map is to adaptively cluster the inputs into appropriate task contexts - without explicit labels - and allocate network resources accordingly. Thus, it selectively routes the inputs in accord with previous experience, ensuring that past learning is maintained and does not interfere with current learning. Our method is intuitive, memoryless, and performs on par with current state-of-the-art approaches on standard benchmarks.
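The key mechanism in this abstract is a self-organizing map (SOM) that clusters inputs into task contexts without labels and routes network resources accordingly. The NumPy sketch below illustrates that routing idea under assumptions: the class name `SOMRouter`, the per-unit binary masks, and the winner-only prototype update (a full SOM would also update topological neighbors with a decaying neighborhood function) are illustrative, not the paper's exact algorithm.

```python
import numpy as np

class SOMRouter:
    """Hypothetical sketch: a small SOM whose winning unit selects a gating
    mask over the hidden units of a standard supervised network."""

    def __init__(self, n_units, input_dim, hidden_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.prototypes = rng.normal(size=(n_units, input_dim))  # SOM codebook
        # One fixed binary gating mask per SOM unit (an assumption for illustration).
        self.masks = (rng.random((n_units, hidden_dim)) > 0.5).astype(float)
        self.lr = lr

    def route(self, x, update=True):
        """Return the gating mask of the best-matching SOM unit for input x."""
        dists = np.linalg.norm(self.prototypes - x, axis=1)
        winner = int(np.argmin(dists))
        if update:
            # Move the winning prototype toward the input; no labels are needed,
            # so contexts emerge from the input statistics alone.
            self.prototypes[winner] += self.lr * (x - self.prototypes[winner])
        return self.masks[winner]

# Usage idea: multiply a network's hidden activations by router.route(x) so that
# different task contexts engage different subsets of the network's resources.
```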