Publications

2023 Stochastic Simulated Quantum Annealing for Fast Solving Combinatorial Optimization Problems
Naoya Onizawa
Ryoma Sasaki
Duckgyu Shin
Takahiro Hanyu
method. Additionally, it can handle a 100-times larger problem size compared to QA and a 25-times larger problem size compared to a traditional SA method, respectively, for similar convergence probabilities.
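The abstract benchmarks the proposed stochastic simulated quantum annealing against quantum annealing (QA) and classical simulated annealing (SA). As a point of reference only, the sketch below is a generic classical SA baseline for Ising-model energy minimization; it is not the paper's SSQA algorithm, and the problem size, couplings, and cooling schedule are illustrative assumptions.

# A minimal classical simulated-annealing (SA) baseline for Ising-model
# energy minimization. This is NOT the paper's stochastic simulated quantum
# annealing (SSQA) method; it is only a generic point of comparison.
# Problem size, couplings, and the temperature schedule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # number of spins (illustrative)
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                        # symmetric couplings
np.fill_diagonal(J, 0.0)

def energy(s):
    # Ising energy E(s) = -1/2 * s^T J s for spins s in {-1, +1}^n
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)
best_s, best_e = s.copy(), energy(s)

for T in np.geomspace(5.0, 0.05, 2000):  # geometric cooling schedule
    i = rng.integers(n)
    dE = 2.0 * s[i] * (J[i] @ s)         # energy change from flipping spin i
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
        e = energy(s)
        if e < best_e:
            best_s, best_e = s.copy(), e

print("best energy found:", float(best_e))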
2023 Stochastic Quantum Monte Carlo Algorithm for Large-Scale Combinatorial Optimization Problems
Naoya Onizawa
Ryoma Sasaki
Duckgyu Shin
Takahiro Hanyu
computing. In addition, it solves problems with a number of spins two orders of magnitude larger than the D-Wave Two QA machine.
Adjusting Machine Learning Decisions for Equal Opportunity and Counterfactual Fairness
Yixin Wang
David Blei
Machine learning (ML) methods have the potential to automate high-stakes decisions, such as bail admissions or credit lending, by analyzing and learning from historical data. But these algorithmic decisions may be unfair: in learning from historical data, they may replicate discriminatory practices from the past. In this paper, we propose two algorithms that adjust fitted ML predictors to produce decisions that are fair. Our methods provide post-hoc adjustments to the predictors, without requiring that they be retrained. We consider a causal model of the ML decisions, define fairness through counterfactual decisions within the model, and then form algorithmic decisions that capture the historical data as well as possible, but are provably fair. In particular, we consider two definitions of fairness. The first is “equal counterfactual opportunity,” where the counterfactual distribution of the decision is the same regardless of the protected attribute; the second is counterfactual fairness. We evaluate the algorithms, and the trade-off between accuracy and fairness, on datasets about admissions, income, credit, and recidivism.
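As a rough illustration of the post-hoc adjustment idea (leaving the fitted predictor untouched and changing only its decision rule), the sketch below equalizes true-positive rates across a protected attribute via per-group thresholds. This is a simplified equal-opportunity-style stand-in, not the paper's counterfactual procedure, and the toy data, column names, and target rate are assumptions.

# Simplified post-hoc adjustment: keep the fitted predictor's scores,
# adjust only the decision thresholds per protected group so that the
# true-positive rate (TPR) is roughly equal across groups. This is a
# stand-in for the paper's counterfactual definitions, not its algorithm.
import numpy as np

def equalize_tpr(scores, y_true, group, target_tpr=0.8):
    """Pick a per-group score threshold achieving roughly the same TPR."""
    thresholds = {}
    for g in np.unique(group):
        pos_scores = np.sort(scores[(group == g) & (y_true == 1)])
        if len(pos_scores) == 0:
            continue  # no qualified individuals observed for this group
        # threshold at the (1 - target_tpr) quantile of positives' scores
        k = int(np.floor((1.0 - target_tpr) * len(pos_scores)))
        thresholds[g] = pos_scores[min(k, len(pos_scores) - 1)]
    return thresholds

def adjusted_decision(scores, group, thresholds):
    return np.array([scores[i] >= thresholds[group[i]] for i in range(len(scores))])

# Toy example: scores from some already-fitted predictor (not retrained).
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
scores = np.clip(0.4 * y_true + 0.1 * group + rng.normal(0, 0.2, 1000), 0, 1)

thr = equalize_tpr(scores, y_true, group)
decisions = adjusted_decision(scores, group, thr)
for g in (0, 1):
    mask = (group == g) & (y_true == 1)
    print(f"group {g}: TPR = {decisions[mask].mean():.2f}")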
AfriMTE and AfriCOMET: Empowering COMET to Embrace Under-resourced African Languages
Jiayi Wang
Sweta Agrawal
Ricardo Rei
Eleftheria Briakou
Marine Carpuat
Marek Masiak
Xuanli He
Sofia Bourhim
Andiswa Bukula
Muhidin A. Mohamed
Temitayo Olatoye
Hamam Mokayed
Christine Mwase
Wangui Kimotho
Foutse Yuehgoh
Anuoluwapo Aremu
Jessica Ojo
Shamsuddeen Hassan Muhammad
Salomey Osei
Abdul-Hakeem Omotayo
Chiamaka Chukwuneke
Perez Ogayo
Oumaima Hourrane
Salma El Anigri
Lolwethu Ndolela
Thabiso Mangwana
Shafie Abdi Mohamed
Ayinde Hassan
Oluwabusayo Olufunke Awoyomi
Lama Alkhaled
Sana Sabah al-azzawi
Naome A. Etori
Millicent A. Ochieng
Clemencia Siro
Samuel Njoroge
Eric Muchiri
Wangari Kimotho
Lyse Naomi Wamba Momo
Daud Abolade
Simbiat Ajao
Tosin P. Adewumi
Iyanuoluwa Shode
Ricky Macharm
Ruqayya Nasir Iro
Saheed Salahudeen Abdullahi
Stephen E. Moore
Bernard Opoku
Zainab Akinjobi
Abeeb Afolabi
Nnaemeka Casmir Obiefuna
Onyekachi Ogbu
Sam Brian
Verrah Akinyi Otiende
Chinedu Emmanuel Mbonu
Toadoum Sari Sakayo
Pontus Stenetorp
Despite the progress we have recorded in scaling multilingual machine translation (MT) models and evaluation data to several under-resourced African languages, it is difficult to measure accurately the progress we have made on these languages because evaluation is often performed on n-gram matching metrics like BLEU, which often correlate poorly with human judgments. Embedding-based metrics such as COMET correlate better; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with a simplified MQM guideline for error-span annotation and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET, a COMET evaluation metric for African languages, by leveraging DA training data from high-resource languages and an African-centric multilingual encoder (AfroXLM-Roberta) to create a state-of-the-art evaluation metric for African-language MT with respect to Spearman-rank correlation with human judgments (+0.406).
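The reported +0.406 refers to Spearman-rank correlation between metric scores and human direct-assessment (DA) judgments. The sketch below shows only how such a correlation is computed for a handful of segments; the scores and ratings are made-up placeholders, not AfriCOMET or AfriMTE data.

# Computing Spearman-rank correlation between an MT metric's segment-level
# scores and human direct-assessment (DA) ratings. The numbers below are
# illustrative placeholders, not outputs of AfriCOMET.
from scipy.stats import spearmanr

metric_scores = [0.71, 0.55, 0.83, 0.40, 0.62, 0.90]   # hypothetical metric scores
human_da      = [72.0, 60.0, 88.0, 35.0, 58.0, 95.0]   # hypothetical human DA ratings

rho, p_value = spearmanr(metric_scores, human_da)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")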
AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages
Shamsuddeen Hassan Muhammad
Idris Abdulmumin
Abinew Ayele
Nedjma OUSIDHOUM
Seid Muhie Yimam
Ibrahim Ahmad
Meriem Beloucif
Saif Mohammad
Sebastian Ruder
Oumaima Hourrane
Alipio Jorge
Pavel Brazdil
Felermino Ali
Davis David
Salomey Osei
Bello Shehu-Bello
Falalu Lawan
Tajuddeen Gwadabe
Samuel Rutunda
Tadesse Belay
Wendimu Baye Messelle
Hailu Balcha
Sisay Adugna Chala
Hagos Gebremichael
Bernard Opoku
Stephen Arthur
AI Agents Learn to Trust
Ardavan S. Nobandegani
T. Shultz
AmbieGen: A Search-based Framework for Autonomous Systems Testing
Dmytro Humeniuk
Giuliano Antoniol
ArK: Augmented Reality with Knowledge Emergent Infrastructure
Qiuyuan Huang
J. Park
Abhinav Gupta
Pan Lu
Paul N. Bennett
Ran Gong
Subhojit Som
Baolin Peng
Owais Khan Mohammed
Yejin Choi
Jianfeng Gao
Despite the growing adoption of mixed reality and interactive AI, it remains challenging to generate high-quality 2D/3D scenes in unseen environments. Typically, an AI agent requires collecting extensive training data for every new task, which can be costly or impossible for many domains. In this study, we develop an infinite agent that learns to transfer knowledge memory from general foundation models (e.g., GPT-4, DALL-E) to novel domains or scenarios for scene understanding and generation in physical or virtual worlds. Central to our approach is the interactive emerging mechanism, dubbed Augmented Reality with Knowledge Emergent Infrastructure (ArK), which leverages knowledge-memory to generate scenes in unseen physical worlds and virtual reality environments. The knowledge interactive emergent ability (Figure 1) is demonstrated through i) micro-action of cross-modality: multi-modality models collect a large amount of relevant knowledge-memory data for each interaction task (e.g., unseen scene understanding) from the physical reality; and ii) macro-behavior of reality-agnostic: mixed-reality environments improve interactions that are tailored to different characterized roles, target variables, collaborative information, and so on. We validate ArK’s effectiveness in scene generation and editing tasks and show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes, highlighting its potential in applications such as metaverse and gaming simulation.
Augmenting Transit Network Design Algorithms with Deep Learning
Andrew Holliday
This paper considers the use of deep learning models to enhance optimization algorithms for transit network design. Transit network design is the problem of determining routes for transit vehicles that minimize travel time and operating costs while achieving full service coverage. State-of-the-art metaheuristic search algorithms give good results on this problem, but can be very time-consuming. In contrast, neural networks can learn sub-optimal but fast-to-compute heuristics based on large amounts of data. Combining these approaches, we develop a fast graph neural network model for transit planning, and use it to initialize state-of-the-art search algorithms. We show that this combination can improve the results of these algorithms on a variety of metrics by up to 17% without increasing their run time, or it can match the quality of the original algorithms while reducing the computing time by up to a factor of 50.
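As a schematic of the pattern the abstract describes (a fast learned heuristic produces an initial network that a conventional search then refines), the sketch below uses a stand-in random scorer in place of the paper's graph neural network; the city graph, cost function, and search moves are illustrative assumptions.

# Pattern sketch: learned-heuristic initialization followed by metaheuristic
# refinement. The "neural" scorer is a random stub, and the cost function is
# a placeholder; neither reproduces the paper's model or benchmarks.
import random

def neural_route_score(route):
    # Stand-in for a trained model scoring a candidate route.
    return random.random()

def build_initial_network(stops, n_routes=3, route_len=4):
    """Greedy initialization: keep the best-scored of a few sampled routes."""
    network = []
    for _ in range(n_routes):
        candidates = [random.sample(stops, route_len) for _ in range(20)]
        network.append(max(candidates, key=neural_route_score))
    return network

def cost(network):
    # Placeholder cost: sum of |i - j| over consecutive stops, standing in
    # for travel time plus operating cost.
    return sum(abs(a - b) for r in network for a, b in zip(r, r[1:]))

def local_search(network, stops, iters=200):
    """Simple metaheuristic refinement: random stop swaps that reduce cost."""
    best = [list(r) for r in network]
    best_c = cost(best)
    for _ in range(iters):
        cand = [list(r) for r in best]
        r = random.randrange(len(cand))
        cand[r][random.randrange(len(cand[r]))] = random.choice(stops)
        c = cost(cand)
        if c < best_c:
            best, best_c = cand, c
    return best

stops = list(range(20))
net0 = build_initial_network(stops)
net = local_search(net0, stops)
print("initial cost:", cost(net0), "-> refined cost:", cost(net))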
Auxiliary Losses for Learning Generalizable Concept-based Models
Ivaxi Sheth