Publications

Predicting the Subhalo Mass Functions in Simulations from Galaxy Images
Tri Nguyen
J. Rose
Chris Lovell
Francisco Villaescusa-Navarro
Strong gravitational lensing provides a powerful tool to directly infer the dark matter (DM) subhalo mass function (SHMF) in lens galaxies. However, comparing observationally inferred SHMFs to theoretical predictions remains challenging, as the predicted SHMF can vary significantly between galaxies, even within the same cosmological model, due to differences in the properties and environment of individual galaxies. We present a machine learning framework to infer the galaxy-specific predicted SHMF from galaxy images, conditioned on the assumed inverse warm DM particle mass.
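A minimal sketch of the kind of conditional model such a framework could use, under assumed details: the image encoder, layer sizes, number of SHMF mass bins, and the way the inverse WDM particle mass enters (as a scalar concatenated with the image features) are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionalSHMFNet(nn.Module):
    """Maps a galaxy image plus an assumed inverse warm-DM particle mass
    (a scalar condition) to a binned subhalo mass function prediction.
    Hypothetical architecture, for illustration only."""

    def __init__(self, n_mass_bins: int = 20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 64)
        )
        # +1 input for the inverse WDM mass the prediction is conditioned on
        self.head = nn.Sequential(
            nn.Linear(64 + 1, 128), nn.ReLU(),
            nn.Linear(128, n_mass_bins),                # one value per mass bin
        )

    def forward(self, image, inv_wdm_mass):
        z = self.encoder(image)                          # image features
        z = torch.cat([z, inv_wdm_mass[:, None]], dim=1) # append scalar condition
        return self.head(z)
```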
From Efficiency to Equity: Measuring Fairness in Preference Learning
S. Gowaikar
Rashid A. Mushkani
As AI systems, particularly generative models, increasingly influence decision-making, ensuring that they fairly represent diverse human preferences becomes crucial. This paper introduces a novel framework for evaluating epistemic fairness in preference learning models, inspired by economic theories of inequality and Rawlsian justice. We propose metrics adapted from the Gini Coefficient, Atkinson Index, and Kuznets Ratio to quantify fairness in these models. We validate our approach using two datasets: a custom visual preference dataset (AI-EDI-Space) and the Jester Jokes dataset. Our analysis reveals variations in model performance across users, highlighting potential epistemic injustices. We explore pre-processing and in-processing techniques to mitigate these inequalities, demonstrating a complex relationship between model efficiency and fairness. This work contributes to AI ethics by providing a framework for evaluating and improving epistemic fairness in preference learning models, offering insights for developing more inclusive AI systems in contexts where diverse human preferences are crucial.
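As a concrete illustration of the kind of inequality metric the paper adapts, here is a minimal sketch that scores the spread of per-user model performance; the function names, the input convention (one non-negative score per user), and the Atkinson aversion-parameter default are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gini(scores):
    """Gini coefficient of non-negative per-user scores:
    0 = equal performance across users; values near 1 = highly unequal."""
    x = np.sort(np.asarray(scores, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0.0:
        return 0.0
    i = np.arange(1, n + 1)
    # Closed form for sorted data: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

def atkinson(scores, eps=0.5):
    """Atkinson index with inequality-aversion eps > 0
    (assumes strictly positive scores)."""
    x = np.asarray(scores, dtype=float)
    mu = x.mean()
    if eps == 1.0:
        # Limit case: 1 - geometric mean / arithmetic mean
        return 1.0 - np.exp(np.mean(np.log(x))) / mu
    ede = np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - ede / mu  # 1 - equally-distributed-equivalent / mean

# Example: per-user accuracy of a preference model
print(gini([0.9, 0.9, 0.9]))    # ~0.0: users treated equally
print(gini([0.95, 0.9, 0.4]))   # clearly > 0: unequal treatment
```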
Longitudinal intergenerational hyperscanning reveals indices of relationship formation and loneliness
Ryssa Moffat
Emily S. Cross
Measuring What Matters: Connecting AI Ethics Evaluations to System Attributes, Hazards, and Harms
Over the past decade, an ecosystem of measures has emerged to evaluate the social and ethical implications of AI systems, largely shaped by high-level ethics principles. These measures are developed and used in fragmented ways, without adequate attention to how they are situated in AI systems. In this paper, we examine how existing measures used in the computing literature map to AI system components, attributes, hazards, and harms. Our analysis draws on a scoping review resulting in nearly 800 measures corresponding to 11 AI ethics principles. We find that most measures focus on four principles (fairness, transparency, privacy, and trust) and primarily assess model or output system components. Few measures account for interactions across system elements, and only a narrow set of hazards is typically considered for each harm type. Many measures are disconnected from where harm is experienced and lack guidance for setting meaningful thresholds. These patterns reveal how current evaluation practices remain fragmented, measuring in pieces rather than capturing how harms emerge across systems. Framing measures with respect to system attributes, hazards, and harms can strengthen regulatory oversight, support actionable practices in industry, and ground future research in systems-level understanding.
PoissonNet: A Local-Global Approach for Learning on Surfaces
Arman Maesumi
Tanish Makadia
Thibault Groueix
Vladimir Kim
Daniel Ritchie
Many network architectures exist for learning on meshes, yet their constructions entail delicate trade-offs between difficulty learning high-frequency features, insufficient receptive field, sensitivity to discretization, and excessive computational overhead. Drawing from classic local-global approaches in mesh processing, we introduce PoissonNet, a novel neural architecture that overcomes all of these deficiencies by formulating a local-global learning scheme, which uses Poisson's equation as the primary mechanism for feature propagation. Our core network block is simple: we apply learned local feature transformations in the gradient domain of the mesh, then solve a Poisson system to propagate scalar feature updates across the surface globally. Our local-global learning framework preserves the features' full frequency spectrum and provides a truly global receptive field, while remaining agnostic to mesh triangulation. Our construction is efficient, requiring far less compute overhead than comparable methods, which enables scalability in both the size of our datasets and the size of individual training samples. These qualities are validated in various experiments where, compared to previous intrinsic architectures, we attain state-of-the-art performance on semantic segmentation and on parameterizing highly detailed animated surfaces. Finally, as a central application of PoissonNet, we show its ability to learn deformations, significantly outperforming state-of-the-art architectures that learn on surfaces.
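As a rough sketch of the block described in this abstract, the following assumes precomputed sparse operators for the mesh: a gradient operator G (per-face gradients), a divergence operator D, and a cotangent Laplacian L (obtainable from a geometry-processing library such as libigl). The pointwise mlp, the residual update, and the regularized direct solve are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_block(features, G, D, L, mlp):
    """One hypothetical local-global block: a learned transformation in the
    gradient domain, then a Poisson solve that propagates the update globally.

    features : (V, C) per-vertex scalar features
    G        : (3F, V) sparse discrete gradient operator
    D        : (V, 3F) sparse discrete divergence operator
    L        : (V, V) sparse cotangent Laplacian
    mlp      : callable applied pointwise to gradient-domain features
    """
    grads = mlp(G @ features)            # learned local edit in the gradient domain
    rhs = D @ grads                      # divergence of the edited gradients
    # Poisson solve L u = rhs per channel; a tiny diagonal shift removes
    # the Laplacian's constant-mode null space so the factorization succeeds.
    A = (L + 1e-8 * sp.eye(L.shape[0])).tocsc()
    update = spla.splu(A).solve(np.asarray(rhs))
    return features + update             # residual feature update
```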
Reframing AI-for-Good: Radical Questioning in AI for Human Trafficking Interventions
This paper introduces Radical Questioning (RQ), a structured, pre-design ethics framework developed to assess whether artificial intelligence (AI) should be applied to complex social problems, rather than merely how. While much of responsible AI development focuses on aligning systems with principles such as fairness, transparency, and accountability, it often begins after the decision to build has already been made, implicitly treating the deployment of AI as a given rather than a question in itself. In domains such as human trafficking, marked by contested definitions, systemic injustice, and deep stakeholder asymmetries, such assumptions can obscure foundational ethical concerns. RQ offers an upstream, deliberative process for surfacing these concerns before design begins. Drawing from critical theory, participatory ethics, and relational responsibility, RQ formalizes a five-step framework to interrogate problem framings, confront techno-solutionist tendencies, and reflect on the moral legitimacy of intervention. Developed through interdisciplinary collaboration and engagement with survivor-led organizations, RQ was piloted in the domain of human trafficking (HT), a particularly high-stakes and ethically entangled application area. Its use led to a fundamental design shift: away from automated detection tools and toward survivor-controlled, empowerment-based technologies. We argue that RQ's novelty lies both in its temporal position, prior to technical design, and in its orientation toward domains where harm is structural and ethical clarity cannot be achieved through one-size-fits-all solutions. RQ thus addresses a critical gap between abstract principles of responsible AI and the lived ethical demands of real-world deployment.