
Shin (Alexandre) Koseki

Affiliate Member
Assistant Professor, Université de Montréal, École d'urbanisme et d'architecture de paysage
Research Topics
Data Mining

Biography

Shin Koseki is an assistant professor at the École d'urbanisme et d'architecture de paysage of the Faculté de l'aménagement, Université de Montréal, as well as director and holder of the UNESCO Chair in Urban Landscape. Trained in architecture and urban planning in Canada and Switzerland, he studies the integration of new technologies into planning practice, the contribution of interactive democracy to the sustainable development of territories, and the role of public space in the acquisition of knowledge and skills. His research areas include the application of artificial intelligence systems to urban design and new processes of environmental and technological governance.

In 2022, Shin Koseki co-authored the white paper Mila–UN Habitat AI & Cities: Risks, Applications and Governance. He holds funding from the New Frontiers in Research Fund and from Quebec's Ministère de l'Économie, de l'Innovation et de l'Énergie to work on the co-design of responsible artificial intelligence systems in cities.

Shin Koseki has conducted research at the École polytechnique fédérale de Lausanne (EPFL), ETH Zurich, the University of Oxford, the National University of Singapore (NUS), the Massachusetts Institute of Technology (MIT), the University of Zurich (UZH), and the Max Planck Institute for Art History and Architecture (Bibliotheca Hertziana). Back in Montreal, his hometown, he works with his students on the revitalization and renaturalization of the St. Lawrence River and on improving the quality of life of riverside communities.

Current Students

Postdoctoral Fellow - UdeM
Research Master's - UdeM

Publications

Negotiative Alignment: Embracing Disagreement to Achieve Fairer Outcomes -- Insights from Urban Studies
Rashid A. Mushkani
Hugo Berard
LIVS: A Pluralistic Alignment Dataset for Inclusive Public Spaces
Rashid A. Mushkani
Shravan Nayak
Hugo Berard
Allison Cohen
Hadrien Bertrand
We introduce the Local Intersectional Visual Spaces (LIVS) dataset, a benchmark for multi-criteria alignment of text-to-image (T2I) models in inclusive urban planning. Developed through a two-year participatory process with 30 community organizations, LIVS encodes diverse spatial preferences across 634 initial concepts, consolidated through 37,710 pairwise comparisons into six core criteria: Accessibility, Safety, Comfort, Invitingness, Inclusivity, and Diversity. Using Direct Preference Optimization (DPO) to fine-tune Stable Diffusion XL, we observed a measurable increase in alignment with community preferences, though a significant proportion of neutral ratings highlights the complexity of modeling intersectional needs. Additionally, as annotation volume increases, accuracy shifts further toward the DPO-tuned model, suggesting that larger-scale preference data enhances fine-tuning effectiveness. LIVS underscores the necessity of integrating context-specific, stakeholder-driven criteria into generative modeling and provides a resource for evaluating AI alignment methodologies across diverse socio-spatial contexts.
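The abstract mentions fine-tuning with Direct Preference Optimization on pairwise comparisons. As a minimal illustration of the standard per-pair DPO objective (with made-up log-probabilities; this is not the authors' training code, which operates on a full diffusion model):

```python
import math

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: negative log-sigmoid of the beta-scaled
    difference between the policy-vs-reference log-ratios of the
    preferred (w) and rejected (l) outputs."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Before fine-tuning the policy equals the reference, so the margin
# is zero and every pair contributes a loss of log(2).
print(round(dpo_loss(-1.0, -2.0, -1.0, -2.0), 4))  # 0.6931
# Raising the preferred output's log-probability lowers the loss.
print(dpo_loss(-0.5, -2.0, -1.0, -2.0) < math.log(2))  # True
```

Averaging this loss over annotated pairs and minimizing it pushes the model toward the community-preferred outputs while the reference term keeps it from drifting too far from the pretrained model.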
AI-EDI-SPACE: A Co-designed Dataset for Evaluating the Quality of Public Spaces
S. Gowaikar
Hugo Berard
Rashid A. Mushkani
Emmanuel Beaudry Marchand
Toumadher Ammar
Advancements in AI heavily rely on large-scale datasets meticulously curated and annotated for training. However, concerns persist regarding the transparency and context of data collection methodologies, especially when sourced through crowdsourcing platforms. Crowdsourcing often employs low-wage workers with poor working conditions and lacks consideration for the representativeness of annotators, leading to algorithms that fail to represent diverse views and perpetuate biases against certain groups. To address these limitations, we propose a methodology involving a co-design model that actively engages stakeholders at key stages, integrating principles of Equity, Diversity, and Inclusion (EDI) to ensure diverse viewpoints. We apply this methodology to develop a dataset and AI model for evaluating public space quality using street view images, demonstrating its effectiveness in capturing diverse perspectives and fostering higher-quality data.
From Efficiency to Equity: Measuring Fairness in Preference Learning
S. Gowaikar
Hugo Berard
Rashid A. Mushkani
As AI systems, particularly generative models, increasingly influence decision-making, ensuring that they are able to fairly represent diverse human preferences becomes crucial. This paper introduces a novel framework for evaluating epistemic fairness in preference learning models inspired by economic theories of inequality and Rawlsian justice. We propose metrics adapted from the Gini Coefficient, Atkinson Index, and Kuznets Ratio to quantify fairness in these models. We validate our approach using two datasets: a custom visual preference dataset (AI-EDI-Space) and the Jester Jokes dataset. Our analysis reveals variations in model performance across users, highlighting potential epistemic injustices. We explore pre-processing and in-processing techniques to mitigate these inequalities, demonstrating a complex relationship between model efficiency and fairness. This work contributes to AI ethics by providing a framework for evaluating and improving epistemic fairness in preference learning models, offering insights for developing more inclusive AI systems in contexts where diverse human preferences are crucial.
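One of the inequality measures the abstract adapts, the Gini Coefficient, can be sketched in a few lines when applied to per-user model accuracies (the accuracy values below are hypothetical, and the exact adaptation in the paper may differ):

```python
# Gini coefficient over per-user model accuracies: 0.0 means every
# user is served equally well; values approaching 1.0 mean performance
# is concentrated on a few users.
def gini(values: list[float]) -> float:
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Closed-form expression for sorted, non-negative samples.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical per-user accuracies of a preference model.
even = [0.8, 0.8, 0.8, 0.8]
skewed = [0.2, 0.3, 0.9, 0.95]
print(gini(even))    # 0.0: perfectly even performance across users
print(gini(skewed))  # positive: performance concentrated on some users
```

A higher Gini over users flags the kind of epistemic inequality the paper targets: the model serves some annotators' preferences far better than others, even when average accuracy looks acceptable.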
Deployment of digital technologies in African cities: emerging issues and policy recommendations for local governments
Leandry Jieutsa
Irina Gbaguidi
Wijdane Nadifi
Evaluation algorithmique inclusive de la qualité des espaces publics
Toumadher Ammar
Rashid Ahmad Mushkani
Hugo Berard
Sarah Tannir