
Dhanya Sridhar

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Representation Learning
Deep Learning
Causality
Probabilistic Models
Reasoning

Biography

Dhanya Sridhar is an assistant professor in the Department of Computer Science and Operations Research (DIRO) at Université de Montréal, a core academic member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI Chair. Previously, she was a postdoctoral researcher at Columbia University. She received her PhD from the University of California, Santa Cruz. Her research focuses on combining causality and machine learning in service of AI systems that are robust to distribution shifts, adapt efficiently to new tasks, and discover new knowledge alongside us.

Current Students

PhD - UdeM
Co-supervisor:
Research Collaborator - Helmholtz AI
Research Intern - UdeM
Co-supervisor:
PhD - UdeM
Master's (research) - UdeM
Principal supervisor:
PhD - UdeM
Principal supervisor:
PhD - UdeM
Principal supervisor:
Research Collaborator

Publications

Leveraging Structure Between Environments: Phylogenetic Regularization Incentivizes Disentangled Representations
Elliot Layne
Jason Hartford
Sébastien Lachapelle
Recently, learning invariant predictors across varying environments has been shown to improve the generalization of supervised learning methods. This line of investigation holds great potential for application to biological problem settings, where data is often naturally heterogeneous. Biological samples often originate from different distributions, or environments. However, in biological contexts, the standard "invariant prediction" setting may not completely fit: the optimal predictor may in fact vary across biological environments. There also exists strong domain knowledge about the relationships between environments, such as the evolutionary history of a set of species, or the differentiation process of cell types. Most work on generic invariant predictors has not assumed the existence of structured relationships between environments. However, this prior knowledge about environments themselves has already been shown to improve prediction through a particular form of regularization applied when learning a set of predictors. In this work, we empirically evaluate whether a regularization strategy that exploits environment-based prior information can be used to learn representations that better disentangle the causal factors that generate observed data. We find evidence that these methods do in fact improve the disentanglement of latent embeddings. We also show a setting where these methods can leverage phylogenetic information to estimate the number of latent causal features.
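One way to read the regularization idea in this abstract: learn one predictor per environment, but penalize differences between predictors of phylogenetically close environments more than differences between distant ones. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation; the function name, the exponential similarity kernel, and the `strength` parameter are all assumptions.

```python
import numpy as np

def phylo_regularizer(weights, phylo_dist, strength=1.0):
    """Penalize disagreement between per-environment predictor weights,
    weighting pairs of phylogenetically close environments more heavily.

    weights    : (n_envs, n_features) array, one weight vector per environment
    phylo_dist : (n_envs, n_envs) symmetric matrix of phylogenetic distances
    strength   : overall scale of the penalty (illustrative hyperparameter)
    """
    n_envs = weights.shape[0]
    # Small distance -> large similarity -> strong pull toward agreement.
    similarity = np.exp(-phylo_dist)
    penalty = 0.0
    for i in range(n_envs):
        for j in range(i + 1, n_envs):
            diff = weights[i] - weights[j]
            penalty += similarity[i, j] * (diff @ diff)
    return strength * penalty
```

In training, a term like this would be added to the sum of per-environment task losses, so that predictors for closely related species (or cell types) are encouraged to share structure while distant ones are free to differ.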
Estimating Social Influence from Observational Data
Caterina De Bacco
David Blei
We consider the problem of estimating social influence, the effect that a person's behavior has on the future behavior of their peers. The key challenge is that shared behavior between friends could be equally explained by influence or by two other confounding factors: 1) latent traits that caused people to both become friends and engage in the behavior, and 2) latent preferences for the behavior. This paper addresses the challenges of estimating social influence with three contributions. First, we formalize social influence as a causal effect, one which requires inferences about hypothetical interventions. Second, we develop Poisson Influence Factorization (PIF), a method for estimating social influence from observational data. PIF fits probabilistic factor models to networks and behavior data to infer variables that serve as substitutes for the confounding latent traits. Third, we develop assumptions under which PIF recovers estimates of social influence. We empirically study PIF with semi-synthetic and real data from Last.fm, and conduct a sensitivity analysis. We find that PIF estimates social influence more accurately than related methods and remains robust under some violations of its assumptions.
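The pipeline the abstract describes can be caricatured in a few lines: factorize the network and the past-behavior data to obtain per-person latent factors, then regress current behavior on friends' past behavior while adjusting for those factors. The sketch below is a rough illustration under simplifying assumptions, not the paper's PIF algorithm: it substitutes a generic KL/Poisson-style nonnegative factorization with multiplicative updates for the probabilistic factor models, and a plain least-squares adjustment for the causal estimation step; all names and hyperparameters are illustrative.

```python
import numpy as np

def kl_nmf(V, k, n_iter=200, seed=0):
    """Nonnegative factorization V ~ W @ H under a KL/Poisson-style
    objective, using the standard multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-10
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

def estimate_influence(adj, past, current, k=2):
    """Toy influence estimate in the spirit of PIF.

    adj     : (n, n) binary adjacency matrix of the friendship network
    past    : (n, m) counts of each person's past behavior on m items
    current : (n, m) counts of current behavior
    """
    traits, _ = kl_nmf(adj.astype(float), k)   # substitutes for homophily confounders
    prefs, _ = kl_nmf(past.astype(float), k)   # substitutes for latent preferences
    exposure = adj @ past                      # friends' past behavior per item
    # One adjusted regression per item; average the exposure coefficient.
    coefs = []
    for j in range(past.shape[1]):
        X = np.column_stack([exposure[:, j], traits, prefs,
                             np.ones(adj.shape[0])])
        beta, *_ = np.linalg.lstsq(X, current[:, j], rcond=None)
        coefs.append(beta[0])
    return float(np.mean(coefs))
```

The design point the abstract emphasizes survives even in this caricature: the inferred factors enter the regression as substitutes for the unobserved confounders, so the coefficient on `exposure` is read as the influence effect rather than a mixture of influence and homophily.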
Heterogeneous Supervised Topic Models
Hal Daumé III
David Blei