Meghana Bhange

PhD - École de technologie supérieure
Research Topics
Deep Learning
Information Theory
Natural Language Processing

Publications

Survey on AI Ethics: A Socio-Technical Perspective
Dave Mbiazi
Ivaxi Sheth
Patrik Joslin Kenfack
Abstract: The past decade has observed a significant advancement in AI, with deep learning-based models being deployed in diverse scenarios, including safety-critical applications. As these AI systems become deeply embedded in our societal infrastructure, the repercussions of their decisions and actions have significant consequences, making the ethical implications of AI deployment highly relevant and essential. The ethical concerns associated with AI are multifaceted, including challenging issues of fairness, privacy and data protection, responsibility and accountability, safety and robustness, transparency and explainability, and environmental impact. These principles together form the foundations of ethical AI considerations that concern every stakeholder in the AI system lifecycle. In light of the present ethical and future x-risk concerns, governments have shown increasing interest in establishing guidelines for the ethical deployment of AI. This work unifies the current and future ethical concerns of deploying AI into society. While we acknowledge and appreciate the technical surveys for each of the ethical principles concerned, in this paper, we aim to provide a comprehensive overview that not only addresses each principle from a technical point of view but also discusses them from a social perspective.
Conscious Data Contribution via Community-Driven Chain-of-Thought Distillation
Rushabh Solanki
Elliot Creager
Ulrich Matchi Aïvodji
Crowding Out The Noise: Algorithmic Collective Action Under Differential Privacy
Rushabh Solanki
Ulrich Matchi Aïvodji
Elliot Creager
Abstract: The integration of AI into daily life has generated considerable attention and excitement, while also raising concerns about automating algorithmic harms and re-entrenching existing social inequities. While top-down solutions such as regulatory policies and improved algorithm design are common, the fact that AI trains on social data creates an opportunity for a grassroots approach, Algorithmic Collective Action, where users deliberately modify the data they share to steer a platform's learning process in their favor. This paper considers how these efforts interact with a firm's use of a differentially private model to protect user data, motivated by the growing regulatory focus on privacy and data protection. In particular, we investigate how the use of Differentially Private Stochastic Gradient Descent (DPSGD) affects the collective's ability to influence the learning process. Our findings show that while differential privacy contributes to the protection of individual data, it introduces challenges for effective algorithmic collective action. We characterize lower bounds on the success of these actions as a function of the collective's size and the firm's privacy parameters, verifying these trends experimentally by training deep neural network classifiers across several datasets.
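For context on the mechanism the abstract refers to, a standard DP-SGD update clips each example's gradient to a fixed norm and adds calibrated Gaussian noise to the average; the clipping bound is what limits how much any individual contribution (including a collective's modified data) can move the model. The sketch below is illustrative only and is not taken from the paper; the function name `dpsgd_step` and all parameter values are placeholders.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD update: clip each example's gradient to
    `clip_norm`, average the clipped gradients, add Gaussian noise scaled
    by `noise_multiplier`, then take a gradient step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual sigma * C / batch_size scaling.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

With `noise_multiplier=0` this reduces to plain SGD on clipped gradients, which makes the clipping effect easy to inspect in isolation.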