Publications

Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery
Rebecca Salganik
As online music platforms grow, music recommender systems play a vital role in helping users navigate and discover content within their vast musical databases. At odds with this larger goal is the presence of popularity bias, which causes algorithmic systems to favor mainstream content over potentially more relevant but niche items. In this work we explore the intrinsic relationship between music discovery and popularity bias. To mitigate this issue we propose a domain-aware, individual-fairness-based approach that addresses popularity bias in graph neural network (GNN) based recommender systems. Our approach uses individual fairness to reflect a ground-truth listening experience: if two songs sound similar, this similarity should be reflected in their representations. In doing so, we facilitate meaningful music discovery that is robust to popularity bias and grounded in the music domain. We apply our BOOST methodology to two discovery-based tasks, performing recommendations at both the playlist level and the user level. We then ground our evaluation in the cold-start setting, showing that our approach outperforms existing fairness benchmarks in both performance and recommendation of lesser-known content. Finally, our analysis explains why our proposed methodology is a novel and promising approach to mitigating popularity bias and improving the discovery of new and niche content in music recommender systems.
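The paper's BOOST method is not specified here, but the individual-fairness idea it builds on (similar-sounding songs should have similar representations) can be sketched minimally. All names below (`fairness_penalty`, `audio_sim`) are illustrative, not the paper's API.

```python
# Toy sketch of an individual-fairness penalty for item embeddings:
# pairs of songs with high audio similarity are penalized for being
# far apart in embedding space. Names are illustrative, not the paper's.

def fairness_penalty(embeddings, audio_sim):
    """embeddings: dict song_id -> list[float]
    audio_sim:  dict (song_a, song_b) -> similarity in [0, 1]
    """
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    # Weight each pair's embedding distance by how similar the songs sound,
    # so the optimizer pulls similar-sounding pairs together.
    return sum(sim * sq_dist(embeddings[a], embeddings[b])
               for (a, b), sim in audio_sim.items())

emb = {"s1": [1.0, 0.0], "s2": [0.9, 0.1], "s3": [-1.0, 0.5]}
sim = {("s1", "s2"): 0.9, ("s1", "s3"): 0.1}
print(round(fairness_penalty(emb, sim), 4))  # 0.443
```

Added to a recommendation loss, a term like this is agnostic to item popularity, which is one plausible route to the robustness the abstract describes.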
Findings of the Association for Computational Linguistics: NAACL 2024, Mexico City, Mexico, June 16-21, 2024
Mohamed Abdalla
Gavin Abercrombie
Rodrigo Agerri
Zeljko Agic
Eneko Agirre
Monica Agrawal
Wasi Uddin Ahmad
James Allan
Aijun An
Antonios Anastasopoulos
Mark Anderson
Jacob Andreas
Marianna Apidianaki
Alessio Palmero Aprosio
Yuki Arase
Ehsaneddin Asgari
Giuseppe Attardi
Wilker Aziz
JinYeong Bak
… (see 480 more)
A framework for fair decision-making over time with time-invariant utilities
Sriram Sankaranarayanan
Guanyi Wang
Game Theoretical Formulation for Residential Community Microgrid via Mean Field Theory: Proof of Concept
Mohamad Aziz
Issmail ElHallaoui
Incentive-based demand response aggregators are widely recognized as a powerful strategy to increase the flexibility of a residential community microgrid (RCM) while allowing consumers' assets to participate in the operation of the power system at critical peak times. RCMs implementing demand response approaches are of high interest because, collectively, they have a high impact on shaping the demand curve during peak time while providing a wide range of economic and technical benefits to consumers and utilities. The penetration of distributed energy resources such as battery energy storage and photovoltaic systems introduces additional flexibility to manage community loads and increase revenue. This letter proposes a game-theoretical formulation for an incentive-based residential community microgrid, in which an incentive-based pricing mechanism is developed to encourage peak demand reduction and to share the incentive demand curve with the residential community through the aggregator. The aggregator's objective is to maximize the welfare of the residential community by finding the optimal community equilibrium electricity price. Each household communicates with the other households and with the distribution system operator (DSO) through the aggregator and aims to minimize its local electricity cost.
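The household side of such a mechanism reduces to a best-response problem: given a time-varying incentive price, schedule flexible load into cheap hours. A toy sketch, with illustrative prices and load values that are not the letter's actual formulation:

```python
# Toy best-response sketch: a household schedules one flexible load
# (e.g. an EV charge) into the cheapest hour under a time-varying
# incentive price. Values are illustrative, not from the letter.

def best_slot(prices, load_kwh):
    """Return (slot, cost) placing the flexible load in the cheapest hour."""
    slot = min(range(len(prices)), key=lambda t: prices[t])
    return slot, prices[slot] * load_kwh

prices = [0.10, 0.25, 0.40, 0.12]  # $/kWh; hour 2 is the critical peak
slot, cost = best_slot(prices, load_kwh=7.0)
print(slot, round(cost, 2))  # 0 0.7 -- the load avoids the 0.40 peak hour
```

In the game-theoretic setting, the aggregator then adjusts the price vector in response to the aggregate of such household decisions until an equilibrium is reached.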
Hessian Aware Low-Rank Weight Perturbation for Continual Learning
Jiaqi Li
Yuanhao Lai
Rui Wang
Changjian Shui
Sabyasachi Sahoo
Charles Ling
Shichun Yang
Boyu Wang
Fan Zhou
Continual learning aims to learn a series of tasks sequentially without forgetting the knowledge acquired from the previous ones. In this work, we propose the Hessian Aware Low-Rank Perturbation algorithm for continual learning. By modeling the parameter transitions along the sequential tasks with a weight-matrix transformation, we propose to apply a low-rank approximation to the task-adaptive parameters in each layer of the neural network. Specifically, we theoretically demonstrate the quantitative relationship between the Hessian and the proposed low-rank approximation. The approximation ranks are then globally determined according to the marginal increment of the empirical loss, estimated from the layer-specific gradient and the low-rank approximation error. Furthermore, we control the model capacity by pruning less important parameters to diminish parameter growth. We conduct extensive experiments on various benchmarks, including a dataset with large-scale tasks, and compare our method against recent state-of-the-art methods to demonstrate its effectiveness and scalability. Empirical results show that our method performs better on different benchmarks, especially in achieving task-order robustness and handling the forgetting issue. The source code is at https://github.com/lijiaqi/HALRP.
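The core storage idea behind a low-rank task-adaptive perturbation, as described in the abstract, can be sketched in a few lines: each task adapts a shared weight matrix W via a rank-r update U Vᵀ, so per-task storage is r·(m+n) numbers instead of m·n. Shapes and names below are illustrative, not HALRP's actual implementation.

```python
# Sketch of a rank-r task-adaptive weight perturbation: W + U @ V^T,
# with plain list-of-lists matrices. Illustrative only.

def low_rank_perturb(W, U, V):
    """Return W + U @ V^T, where W is m x n, U is m x r, V is n x r."""
    m, n, r = len(W), len(W[0]), len(U[0])
    return [[W[i][j] + sum(U[i][k] * V[j][k] for k in range(r))
             for j in range(n)] for i in range(m)]

W = [[1.0, 0.0], [0.0, 1.0]]  # shared weights (2 x 2)
U = [[1.0], [2.0]]            # per-task factor, m x r with r = 1
V = [[0.5], [0.5]]            # per-task factor, n x r
print(low_rank_perturb(W, U, V))  # [[1.5, 0.5], [1.0, 2.0]]
```

The paper's contribution, per the abstract, is choosing the rank r per layer using Hessian information; the sketch only shows the parameterization being ranked.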
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise
Eduard Gorbunov
Abdurakhmon Sadiev
Marina Danilova
Samuel Horváth
Pavel Dvurechensky
Alexander Gasnikov
Peter Richtárik
An improved column-generation-based matheuristic for learning classification trees
Krunal Kishor Patel
Guy Desaulniers
An Improved Neuro-Symbolic Architecture to Fine-Tune Generative AI Systems
Chao Yin
Gilles Pesant
Improving the Generalizability and Robustness of Large-Scale Traffic Signal Control
Tianyu Shi
François-Xavier Devailly
Denis Larocque
A number of deep reinforcement-learning (RL) approaches propose to control traffic signals. Compared to traditional approaches, RL approaches can learn from higher-dimensional road and vehicle sensor input and better adapt to varying traffic conditions, resulting in reduced travel times (in simulation). However, these RL methods require training on massive amounts of traffic sensor data. To offset this relative inefficiency, some recent RL methods can first learn from small-scale networks and then generalize to unseen city-scale networks without additional retraining (zero-shot transfer). In this work, we study the robustness of such methods along two axes. First, sensor failures and GPS occlusions create missing-data challenges, and we show that recent methods remain brittle in the face of these missing data. Second, we provide a more systematic study of the generalization ability of RL methods to new networks with different traffic regimes, and again identify the limitations of recent approaches. We then propose combining distributional and vanilla reinforcement learning through a policy ensemble. Building upon the state-of-the-art previous model, which uses a decentralized approach for large-scale traffic signal control with graph convolutional networks (GCNs), we first learn models using a distributional reinforcement learning (DisRL) approach. In particular, we use implicit quantile networks (IQN) to model the state-action return distribution with quantile regression. For traffic signal control problems, an ensemble of standard RL and DisRL yields superior performance across different scenarios, including different levels of missing sensor data and traffic flow patterns. Furthermore, the learning scheme of the resulting model can improve zero-shot transferability to different road network structures, including both synthetic and real-world networks (e.g., Luxembourg, Manhattan).
We conduct extensive experiments comparing our approach to multi-agent reinforcement learning and traditional transportation approaches. Results show that the proposed method improves robustness and generalizability in the face of missing data, varying road networks, and traffic flows.
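The quantile regression that IQN performs rests on the pinball (quantile) loss. A minimal sketch of that loss, without any deep-learning framework; the function name is illustrative:

```python
# Pinball (quantile) loss, the building block of quantile regression:
# under-predictions cost tau per unit, over-predictions cost (1 - tau).

def pinball_loss(tau, target, pred):
    diff = target - pred
    return tau * diff if diff >= 0 else (tau - 1.0) * diff

# The median (tau = 0.5) penalizes both sides equally ...
print(pinball_loss(0.5, 10.0, 8.0), pinball_loss(0.5, 10.0, 12.0))  # 1.0 1.0
# ... while tau = 0.9 penalizes under-prediction 9x more than over-prediction.
print(round(pinball_loss(0.9, 10.0, 8.0), 3),
      round(pinball_loss(0.9, 10.0, 12.0), 3))  # 1.8 0.2
```

Minimizing this loss across many values of tau recovers the full return distribution rather than just its mean, which is what distinguishes DisRL from vanilla RL here.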
Inertia-Based Indices to Determine the Number of Clusters in K-Means: An Experimental Evaluation
Andrei Rykov
Renato Cordeiro De Amorim
Boris Mirkin
This paper gives an experimentally supported review and comparison of several indices, based on the conventional K-means inertia criterion, for determining the number of clusters.
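The inertia curve such indices post-process can be sketched with a tiny pure-Python Lloyd's algorithm: run K-means for each candidate k, record the within-cluster sum of squared distances, and look for the sharp drop ("elbow"). This is an illustrative 1-D toy, not any of the paper's indices.

```python
# Tiny 1-D k-means (Lloyd's algorithm) returning the final inertia
# (within-cluster sum of squared distances). Illustrative only.

def kmeans_inertia(points, centers, iters=20):
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)
            clusters[i].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sum((p - centers[i]) ** 2
               for i, c in enumerate(clusters) for p in c)

# Two well-separated blobs: inertia drops sharply from k=1 to k=2,
# then flattens -- the "elbow" the reviewed indices try to detect.
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
for k, init in [(1, [5.0]), (2, [0.0, 10.0])]:
    print(k, round(kmeans_inertia(pts, init), 3))
```

On this data the inertia falls from roughly 150 at k=1 to 0.04 at k=2, so any elbow-style index would select k=2.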
Information Complexity of Stochastic Convex Optimization: Applications to Generalization, Memorization, and Tracing
Idan Attias
Mahdi Haghifam
Roi Livni
Daniel M. Roy
In this work, we investigate the interplay between memorization and learning in the context of stochastic convex optimization (SCO). We define memorization via the information a learning algorithm reveals about its training data points. We then quantify this information using the framework of conditional mutual information (CMI) proposed by Steinke and Zakynthinou (2020). Our main result is a precise characterization of the tradeoff between the accuracy of a learning algorithm and its CMI, answering an open question posed by Livni (2023). We show that, in the
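For reference, the CMI framework the abstract builds on can be stated compactly; the notation below is reconstructed from the cited Steinke–Zakynthinou framework, not taken from this paper:

```latex
% Conditional mutual information (CMI) of a learning algorithm A.
% \tilde{S}: an n x 2 "supersample" of i.i.d. draws from the data distribution.
% U ~ Unif(\{0,1\}^n): selects one sample per row to form the training set \tilde{S}_U.
\mathrm{CMI}(A) \;=\; I\bigl(A(\tilde{S}_U);\, U \,\bigm|\, \tilde{S}\bigr)
```

Low CMI means the algorithm's output reveals little about which of each pair of candidate samples it was actually trained on, which is how memorization is quantified in this line of work.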
Interacting with a Visuotactile Countertop
M. Jenkin
Francois Hogan
Jean-François Tremblay
Bobak H. Baghi