Medium-scale flexible integrated circuits based on 2D semiconductors
Yalin Peng
Chenyang Cui
Lu Li
Yuchen Wang
Qinqin Wang
Jinpeng Tian
Zhiheng Huang
Biying Huang
Yangkun Zhang
Xiuzhen Li
Yanbang Chu
Wei Yang
Dongxia Shi
Luojun Du
Na Li
Guangyu Zhang
Sliding ferroelectric memories and synapses based on rhombohedral-stacked bilayer MoS2
Xiuzhen Li
Biao Qin
Yaxian Wang
Yue Xi
Zhiheng Huang
Mengze Zhao
Yalin Peng
Zitao Chen
Zitian Pan
Jundong Zhu
Chenyang Cui
Rong Yang
Wei Yang
Sheng Meng
Dongxia Shi
Xuedong Bai
Can Liu
Na Li
Kaihui Liu
Kai-Wen Liu
Luojun Du
Guangyu Zhang
AfriHG: News headline generation for African Languages
Toyib Ogunremi
Serah Akojenu
Anthony Soronnadi
Olubayo Adekanmbi
This paper introduces AfriHG, a news headline generation dataset created by combining data from the XLSum and MasakhaNEWS datasets, focusing on 16 languages widely spoken in Africa. We experimented with two seq2seq models (mT5-base and AfriTeVa V2) and the Aya-101 LLM. Our results show that Africa-centric seq2seq models such as AfriTeVa V2 outperform the massively multilingual mT5-base model. Finally, we show that fine-tuning AfriTeVa V2, with 313M parameters, is competitive with prompting the Aya-101 LLM, which has more than 13B parameters.
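To make the fine-tuning setup concrete, here is a minimal sketch of seq2seq headline-generation training with Hugging Face Transformers. The dataset file afrihg_train.json and its "text"/"headline" fields are hypothetical stand-ins for the AfriHG release, not the authors' code; an AfriTeVa V2 checkpoint can be swapped in for mt5-base the same way.

```python
# Hedged sketch: seq2seq headline-generation fine-tuning. The dataset
# path and field names below are hypothetical, not the AfriHG release.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "google/mt5-base"  # an AfriTeVa V2 checkpoint slots in the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical local JSON export with article bodies and headlines.
data = load_dataset("json", data_files={"train": "afrihg_train.json"})

def preprocess(batch):
    # Encode the article as input and the headline as the target.
    enc = tokenizer(batch["text"], max_length=512, truncation=True)
    enc["labels"] = tokenizer(text_target=batch["headline"],
                              max_length=64, truncation=True)["input_ids"]
    return enc

tokenized = data.map(preprocess, batched=True,
                     remove_columns=data["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="afrihg-mt5",
                                  per_device_train_batch_size=8,
                                  num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```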
The design and technology development of the JUNO central detector
Angel Abusleme
Thomas Adam
Shakeel Ahmad
Rizwan Ahmed
Sebastiano Aiello
Muhammad Akram
Abid Aleem
Tsagkarakis Alexandros
Fengpeng An
Qi An
Giuseppe Andronico
Nikolay Anfimov
Vito Antonelli
Tatiana Antoshkina
Burin Asavapibhop
João Pedro Athayde Marcondes de André
Didier Auguste
Weidong Bai
Nikita Balashov
Wander Baldini
Andrea Barresi
Davide Basilico
Eric Baussan
Marco Bellato
Marco Beretta
Antonio Bergnoli
Daniel Bick
Thilo Birkenfeld
David Blum
Simon Blyth
Anastasia Bolshakova
Mathieu Bongrand
Clément Bordereau
Dominique Breton
Augusto Brigatti
Riccardo Brugnera
Riccardo Bruno
Antonio Budano
Jose Busto
Anatael Cabrera
Barbara Caccianiga
Hao Cai
Xiao Cai
Yanke Cai
Zhiyan Cai
Stéphane Callier
Antonio Cammi
Agustin Campeny
Chuanya Cao
Guofu Cao
Jun Cao
Rossella Caruso
Cédric Cerna
Vanessa Cerrone
Chi Chan
Jinfan Chang
Yun Chang
Guoming Chen
Pingping Chen
Shaomin Chen
Yixue Chen
Yu Chen
Zhiyuan Chen
Zikang Chen
Jie Cheng
Yaping Cheng
Yu Chin Cheng
Yuanyuan Zhang
Alexander Chepurnov
Alexey Chetverikov
Davide Chiesa
Pietro Chimenti
Ziliang Chu
Artem Chukanov
Gérard Claverie
Catia Clementi
Barbara Clerbaux
Marta Colomer Molla
Selma Conforti Di Lorenzo
Alberto Coppi
Daniele Corti
Flavio Dal Corso
Olivia Dalager
Christophe De La Taille
Zhi Deng
Ziyan Deng
Wilfried Depnering
Marco Diaz
Xuefeng Ding
Yayun Ding
Bayu Dirgantara
Sergey Dmitrievsky
Tadeas Dohnal
Dmitry Dolzhikov
Georgy Donchenko
Jianmeng Dong
Evgeny Doroshkevich
Wei Dou
Marcos Dracos
Frédéric Druillole
Ran Du
Shuxian Du
Stefano Dusini
Hongyue Duyang
Timo Enqvist
Andrea Fabbri
Ulrike Fahrendholz
Lei Fan
Jian Fang
Wenxing Fang
Marco Fargetta
Dmitry Fedoseev
Zhengyong Fei
Li-Cheng Feng
Qichun Feng
Federico Ferraro
Amélie Fournier
Haonan Gan
Feng Gao
Alberto Garfagnini
Arsenii Gavrikov
Marco Giammarchi
Nunzio Giudice
Maxim Gonchar
Guanghua Gong
Hui Gong
Yuri Gornushkin
Alexandre Göttel
Marco Grassi
Maxim Gromov
Vasily Gromov
Minghao Gu
Xiaofei Gu
Yu Gu
Yunting Gu
Mengyun Guan
Yuduo Guan
Nunzio Guardone
Cong Guo
Wanlei Guo
Xinheng Guo
Yuhang Guo
Caren Hagner
Ran Han
Yang Han
Jiajun Hao
Miao He
Wei He
Tobias Heinz
Patrick Hellmuth
Yuekun Heng
Rafael Herrera
Yuenkeung Hor
Shaojing Hou
Yee Hsiung
Bei-Zhen Hu
Hang Hu
Jianrun Hu
Jun Hu
Shouyang Hu
Tao Hu
Yuxiang Hu
Zhuojun Hu
Guihong Huang
Hanxiong Huang
Kaixi Huang
Kaixuan Huang
Wenhao Huang
Xin Huang
Xingtao Huang
Yongbo Huang
Jiaqi Hui
Lei Huo
Wenju Huo
Cédric Huss
Safeer Hussain
Ara Ioannisian
Roberto Isocrate
Beatrice Jelmini
Ignacio Jeria
Xiaolu Ji
Huihui Jia
Junji Jia
Siyu Jian
Di Jiang
Wei Jiang
Xiaoshan Jiang
Xiaoping Jing
Cécile Jollet
Philipp Kampmann
Li Kang
Rebin Karaparambil
Narine Kazarian
Ali Khan
Amina Khatun
Khanchai Khosonthongkee
Denis Korablev
Konstantin Kouzakov
Alexey Krasnoperov
Nikolay Kutovskiy
Pasi Kuusiniemi
Tobias Lachenmaier
Cecilia Landini
Sébastien Leblanc
Victor Lebrin
Frederic Lefevre
Rui Li
Rupert Leitner
Jason Leung
Daozheng Li
Demin Li
Fei Li
Yi Wang
Fule Li
Gaosong Li
Huiling Li
Mengzhao Li
Min Li
Nan Li
Qingjiang Li
Ruhui Li
Ruiting Lei
Shanfeng Li
Tao Li
Teng Li
Weidong Li
Weiguo Li
Xiaomei Li
Xiaonan Li
Xinglong Li
Xiwen Li
Yi Li
Yichen Li
Yufeng Li
Zepeng Li
Zhaohan Li
Zhibing Li
Ziyuan Li
Zonghai Li
Hao Liang
Jiaming Yan
Jiajun Liao
Ayut Limphirat
Guey-Lin Lin
Shengxin Lin
Tao Lin
Jiajie Ling
Ivano Lippi
Caimei Liu
Yang Liu
Haidong Liu
Haotian Liu
Hongbang Liu
Hongjuan Liu
Hongtao Liu
Hui Liu
Jianglai Liu
Jinchang Liu
Min Liu
Qian Liu
Qin Liu
Runxuan Liu
Shubin Liu
Shulin Liu
Xiaowei Liu
Xiwen Liu
Yong Liu
Yunzhe Liu
Alexey Lokhov
Paolo Lombardi
Claudio Lombardo
Kai Loo
Chuan Lu
Haoqi Lu
Jingbin Lu
Junguang Lu
Peizhi Lu
Shuxiang Lu
Bayarto Lubsandorzhiev
Sultim Lubsandorzhiev
Livia Ludhova
Arslan Lukanov
Daibin Luo
Fengjiao Luo
Guang Luo
Jianyi Luo
Shu Luo
Wuming Luo
Xiaojie Luo
Xiaolan Luo
Vladimir Lyashuk
Bing Ma
Bangzheng Ma
R. Q. Ma
Si Ma
Qiumei Ma
Xiaoyan Ma
Xubo Ma
Jihane Maalmi
Marco Magoni
Jingyu Mai
Yury Malyshkin
Roberto Carlos Mandujano
Fabio Mantovani
Xin Mao
Yajun Mao
Stefano M. Mari
Filippo Marini
Agnese Martini
Matthias Mayer
Davit Mayilyan
Ints Mednieks
Yu Meng
Anita Meraviglia
Anselmo Meregaglia
Emanuela Meroni
David Meyhöfer
Mauro Mezzetto
Lino Miramonti
Paolo Montini
Michele Montuschi
Axel Muller
Massimiliano Nastasi
Dmitry V. Naumov
Elena Naumova
Diana Navas-Nicolas
Igor Nemchenok
Minh Thuan Nguyen Thi
Alexey Nikolaev
Feipeng Ning
Zhe Ning
Hiroshi Nunokawa
Lothar Oberauer
Juan Pedro Ochoa-Ricoux
Alexander Olshevskiy
Domizia Orestano
Fausto Ortica
Rainer Othegraven
Alessandro Paoloni
Sergio Parmeggiano
Yatian Pei
Luca Pelicci
Anguo Peng
Yu Peng
Haiping Peng
Yuefeng Peng
Z-R Peng
Frédéric Perrot
Zhaoyuan Peng
Pierre-Alexandre Petitjean
Fabrizio Petrucci
Oliver Pilarczyk
Luis Felipe Piñeres Rico
Artyom Popov
Pascal Poussot
Ezio Previtali
Fazhi Qi
Ming Qi
Sen Qian
Xiaohui Qian
Zhen Qian
Hao Qiao
Zhonghua Qin
Shoukang Qiu
Gioacchino Ranucci
Reem Rasheed
Alessandra Re
Abdel Rebii
Mariia Redchuk
Bin Ren
Jie Ren
Barbara Ricci
Mariam Rifai
Mathieu Roche
Narongkiat Rodphai
Aldo Romani
Bedřich Roskovec
Xichao Ruan
Arseniy Rybnikov
Andrey Sadovsky
Paolo Saggese
Simone Sanfilippo
Anut Sangka
Utane Sawangwit
Julia Sawatzki
Michaela Schever
Cédric Schwab
Konstantin Schweizer
Alexandr Selyunin
Andrea Serafini
Giulio Settanta
Zhuang Shao
Mariangela Settimo
Arina Shaydurova
Vladislav Sharov
Jingyan Shi
Yanan Shi
Vitaly Shutov
Andrey Sidorenkov
Fedor Šimkovic
Chiara Sirignano
Jaruchit Siripak
Monica Sisti
Maciej Slupecki
Mikhail Smirnov
Oleg Smirnov
Thiago Sogo-Bezerra
Sergey Sokolov
Wuying Song
Julanan Songwadhana
Boonrucksar Soonthornthum
Albert Sotnikov
Ondřej Šrámek
Warintorn Sreethawong
Achim Stahl
Luca Stanco
Konstantin Stankevich
Dušan Štefánik
Hans Steiger
Jochen Steinmann
Tobias Sterr
Virginia Strati
Matthias Raphael Stock
Alexander Studenikin
Jun Su
Shifeng Sun
Xilei Sun
Yongjie Sun
Yongzhao Sun
Zhengyang Sun
Narumon Suwonjandee
Michal Szelezniak
Akira Takenaka
Qiang Tang
Quan Tang
Xiao Tang
Vidhya Thara Hariharan
Eric Theisen
Alexander Tietzsch
Igor Tkachev
Tomas Tmej
Francesco Tortorici
Marco Danilo Claudio Torri
Andrea Triossi
Konstantin Treskov
Riccardo Triozzi
Giancarlo Troni
Wladyslaw Trzaska
Cristina Tuve
Nikita Ushakov
Yu-Chen Tung
Vadim Vedin
Giuseppe Verde
Maxim Vialkov
Benoit Viaud
Cornelius Moritz Vollbrecht
Katharina von Sturm
Vit Vorobel
Dmitriy Voronin
Lucia Votano
Pablo Walker
Caishen Wang
Chung-Hsiang Wang
Derun Wang
En Wang
Guoli Wang
Jian Wang
Jun Wang
Lucinda W. Wang
Meng Wang
Ruiguang Wang
Lu Wang
Siguang Wang
Wei Wang
Wenshuai Wang
Xi Wang
Xiangyue Wang
Yangfu Wang
Yaoguang Wang
Yi Xing Wang
Yifang Wang
Yuanqing Wang
Yuman Wang
Zhe Wang
Zheng Wang
Zhimin Wang
Apimook Watcharangkool
Wei Wei
Wenlu Wei
Yadong Wei
Kaile Wen
Jun Weng
Christopher Wiebusch
Rosmarie Wirth
Bjoern Wonsak
Liangjian Wen
Diru Wu
Qun Wu
Shuai Wu
Zhi Wu
Michael Wurm
Jacques Wurtz
Christian Wysotzki
Yufei Xi
Dongmei Xia
Xiang Xiao
Xiaochuan Xie
Yuguang Xie
Zhangquan Xie
Zhao Xin
Zhizhong Xing
Benda Xu
Cheng Xu
Donglian Xu
Fanrong Xu
Hangkun Xu
Jilei Xu
Jing Xu
Meihang Xu
Yin Xu
Yu Xu
Baojun Yan
Qiyu Yan
Taylor Yan
Wenqi Yan
Xiongbo Yan
Yupeng Yan
Changgen Yang
Chengfeng Yang
Jie Yang
Lei Yang
Xiaoyu Yang
Yifan Yang
Haifeng Yao
Jiaxuan Ye
Mei Ye
Ziping Ye
Frédéric Yermia
Zhengyun You
Boxiang Yu
Chiye Yu
Chunxu Yu
Guojun Yu
Hongzhao Yu
Miao Yu
Xianghui Yu
Zeyuan Yu
Zezhong Yu
Cenxi Yuan
Chengzhuo Yuan
Jing-Yu Tang
Zhenxiong Yuan
Baobiao Yue
Noman Zafar
Vitalii Zavadskyi
Shan Zeng
Tingxuan Zeng
Yuda Zeng
Liang Zhan
Aiqiang Zhang
Bin Zhang
Binting Zhang
Feiyang Zhang
Honghao Zhang
Jiawen Zhang
Jie Zhang
Jin Zhang
Jingbo Zhang
Jinnan Zhang
Mohan Zhang
Peng Zhang
Qingmin Zhang
Shiqi Zhang
Shu Zhang
Tao Zhang
Xiaomei Zhang
Xin Zhang
Xuantong Zhang
Yinhong Zhang
Yiyu Zhang
Yongpeng Zhang
Yu Zhang
Yumei Zhang
Zhenyu Zhang
Zhijian Zhang
Jie Zhao
Rong Zhao
Runze Zhao
Shujun Zhao
Dongqin Zheng
Hua Zheng
Yangheng Zheng
Weirong Zhong
Jing Zhou
Li Zhou
Nan Zhou
Shun Zhou
Tong Zhou
Xiang Zhou
Jingsen Zhu
Kangfu Zhu
Kejun Zhu
Zhihang Zhu
Bo Zhuang
Honglin Zhuang
Liang Zong
Jiaheng Zou
Sebastian Zwickel
Torque-Aware Momentum
Pranshu Malviya
Goncalo Mordido
Aristide Baratin
Reza Babanezhad Harikandeh
Efficiently exploring complex loss landscapes is key to the performance of deep neural networks. While momentum-based optimizers are widely used in state-of-the-art setups, classical momentum can still struggle with large, misaligned gradients, leading to oscillations. To address this, we propose Torque-Aware Momentum (TAM), which introduces a damping factor based on the angle between the new gradients and previous momentum, stabilizing the update direction during training. Empirical results show that TAM, which can be combined with both SGD and Adam, enhances exploration, handles distribution shifts more effectively, and improves generalization performance across various tasks, including image classification and large language model fine-tuning, when compared to classical momentum-based optimizers.
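The core idea, as the abstract describes it, is to damp momentum updates by the angle between the incoming gradient and the accumulated momentum. A minimal sketch on top of SGD momentum follows; the class name and the particular damping formula are illustrative assumptions, not the paper's exact rule.

```python
# Hedged sketch: angle-based damping on top of SGD momentum. The damping
# formula (1 + cos)/2 is an illustrative guess, not the paper's rule.
import torch

class TorqueAwareSGD(torch.optim.Optimizer):
    def __init__(self, params, lr=0.01, momentum=0.9):
        super().__init__(params, dict(lr=lr, momentum=momentum))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "momentum_buffer" not in state:
                    state["momentum_buffer"] = torch.zeros_like(p)
                buf = state["momentum_buffer"]
                # Angle between the new gradient and the previous momentum:
                # aligned updates pass through, strongly misaligned
                # ("high-torque") ones are attenuated.
                cos = torch.nn.functional.cosine_similarity(
                    p.grad.flatten(), buf.flatten(), dim=0)
                damping = 0.5 * (1.0 + cos.item())  # in [0, 1]
                buf.mul_(group["momentum"]).add_(p.grad, alpha=damping)
                p.add_(buf, alpha=-group["lr"])
```

Such an optimizer drops into a standard training loop in place of torch.optim.SGD, e.g. opt = TorqueAwareSGD(model.parameters(), lr=0.05).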
Divergent Perception: Framing Creative Cognition Through the Lens of Sensory Flexibility
Antoine Bellemare‐Pepin
Creativity is a cornerstone of human evolution and is typically defined as the multifaceted ability to produce novel and useful artifacts. Although much research has focused on divergent thinking, growing evidence underscores the importance of perceptual processing in fostering creativity, particularly through perceptual flexibility. The present work aims to offer a framework that relates creativity to perception, showing how sensory affordances, especially in ambiguous stimuli, can contribute to the generation of novel ideas. In doing so, we contextualize the phenomenon of pareidolia, which involves seeing familiar patterns in noisy or ambiguous stimuli, as a key perceptual mechanism of idea generation—one of the central stages of the creative process. We introduce “divergent perception” to describe the process by which individuals actively engage with the perceptual affordances provided by ambiguous sensory information, and illustrate how this concept could account for the heightened creativity observed in psychedelic and psychotic states. Moreover, we explore how divergent perception relates to cognitive mechanisms crucial in creative thinking, particularly focusing on the role of attention. Finally, we discuss future paths for the exploration of divergent perception, including targeted manipulation of stimulus characteristics and the investigation of the intricate interplay between bottom‐up and top‐down cognitive processes.
Polaris: a universal tool for chromatin loop annotation in bulk and single-cell Hi-C data
Yusen Hou
Audrey Baguette
Yanlin Zhang
Annotating chromatin loops is essential for understanding the 3D genome’s role in gene regulation, but current methods struggle with low coverage, particularly in single-cell datasets. Chromatin loops are kilo- to mega-range structures that exhibit broader features, such as co-occurring loops, stripes, and domain boundaries along axial directions of Hi-C contact maps. However, existing tools primarily focus on detecting localized, highly concentrated interactions. Furthermore, the wide variety of available chromatin conformation datasets is rarely utilized in developing effective loop callers. Here, we present Polaris, a universal tool that integrates axial attention with a U-shaped backbone to accurately detect loops across different 3D genome assays. By leveraging extensive Hi-C contact maps in a pretrain-finetune paradigm, Polaris achieves consistent performance across various datasets. We compare Polaris against existing tools in loop annotation from both bulk and single-cell data and find that Polaris outperforms other programs across different cell types, species, sequencing depths, and assays.
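For intuition about the architectural ingredient named above, here is a generic axial-attention block in PyTorch that attends along the rows and then the columns of a contact-map feature grid; the layer choices and residual wiring are illustrative, not Polaris's actual design.

```python
# Hedged sketch: a generic axial-attention block for 2D contact maps;
# layer sizes and residual wiring are illustrative, not Polaris's design.
import torch
import torch.nn as nn

class AxialAttention2D(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, channels) features of a Hi-C patch.
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)            # attend along each row
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)  # then each column
        cols, _ = self.col_attn(cols, cols, cols)
        out = cols.reshape(b, w, h, c).permute(0, 2, 1, 3)
        return out + x                           # residual connection
```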
Path-of-Thoughts: Extracting and Following Paths for Robust Relational Reasoning with Large Language Models
Ge Zhang
Mohammad Alomrani
Hongjian Gu
Jiaming Zhou
Yaochen Hu
Bin Wang
Qun Liu
Yingxue Zhang
Jianye Hao
Large language models (LLMs) possess vast semantic knowledge but often struggle with complex reasoning tasks, particularly in relational reasoning problems such as kinship or spatial reasoning. In this paper, we present Path-of-Thoughts (PoT), a novel framework designed to tackle relational reasoning by decomposing the task into three key stages: graph extraction, path identification, and reasoning. Unlike previous approaches, PoT efficiently extracts a task-agnostic graph that identifies crucial entities, relations, and attributes within the problem context. Subsequently, PoT identifies relevant reasoning chains within the graph corresponding to the posed question, facilitating inference of potential answers. Experimental evaluations on four benchmark datasets demanding long reasoning chains demonstrate that PoT surpasses state-of-the-art baselines by a significant margin (maximum 21.3%) without necessitating fine-tuning or extensive LLM calls. Furthermore, as opposed to prior neuro-symbolic methods, PoT exhibits improved resilience against LLM errors by leveraging the compositional nature of graphs.
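The three stages can be pictured as a small pipeline; in this sketch the prompts, the call_llm helper, and the triple format are hypothetical stand-ins rather than the authors' implementation.

```python
# Hedged sketch of the three PoT stages; prompts, call_llm, and the
# "head | relation | tail" format are hypothetical stand-ins.
import networkx as nx

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def extract_graph(context: str) -> nx.DiGraph:
    """Stage 1: ask the LLM for relation triples and build a graph."""
    reply = call_llm("List the entity relations in the text as "
                     f"'head | relation | tail' lines:\n{context}")
    g = nx.DiGraph()
    for line in reply.splitlines():
        if line.count("|") != 2:
            continue  # skip malformed lines
        head, rel, tail = (part.strip() for part in line.split("|"))
        g.add_edge(head, tail, relation=rel)
    return g

def identify_path(g: nx.DiGraph, source: str, target: str) -> list:
    """Stage 2: find a reasoning chain linking the queried entities."""
    return nx.shortest_path(g.to_undirected(), source, target)

def reason_over_path(g: nx.DiGraph, path: list, question: str) -> str:
    """Stage 3: let the LLM compose the relations along the chain."""
    hops = []
    for a, b in zip(path, path[1:]):
        data = g.get_edge_data(a, b) or g.get_edge_data(b, a)
        hops.append(f"{a} -[{data['relation']}]-> {b}")
    return call_llm("Given the chain:\n" + "\n".join(hops) +
                    f"\nAnswer the question: {question}")
```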
The Superposition of Diffusion Models Using the Itô Density Estimator
Marta Skreta
Lazar Atanackovic
Alexander Tong
The Cambrian explosion of easily accessible pre-trained diffusion models suggests a demand for methods that combine multiple different pre-trained diffusion models without incurring the significant computational burden of re-training a larger combined model. In this paper, we cast the problem of combining multiple pre-trained diffusion models at the generation stage under a novel proposed framework termed superposition. Theoretically, we derive superposition from rigorous first principles stemming from the celebrated continuity equation and design two novel algorithms tailor-made for combining diffusion models in SuperDiff. SuperDiff leverages a new scalable Itô density estimator for the log likelihood of the diffusion SDE which incurs no additional overhead compared to the well-known Hutchinson's estimator needed for divergence calculations. We demonstrate that SuperDiff is scalable to large pre-trained diffusion models as superposition is performed solely through composition during inference, and also enjoys painless implementation as it combines different pre-trained vector fields through an automated re-weighting scheme. Notably, we show that SuperDiff is efficient during inference time, and mimics traditional composition operators such as the logical OR and the logical AND. We empirically demonstrate the utility of using SuperDiff for generating more diverse images on CIFAR-10, more faithful prompt conditioned image editing using Stable Diffusion, as well as improved conditional molecule generation and unconditional de novo structure design of proteins. https://github.com/necludov/super-diffusion
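As a loose sketch of the inference-time idea, two pre-trained models' predictions can be mixed with weights derived from running per-model log-density estimates; the eps(x, t) interface, the softmax weighting, and the temperature knob below are assumptions, and the paper's Itô estimator is only referenced, not implemented.

```python
# Hedged sketch: mixing two pre-trained diffusion models at sampling time.
# The eps(x, t) interface and softmax weighting are assumptions; the
# paper's Itô log-density estimator is referenced, not implemented.
import torch

@torch.no_grad()
def superposed_eps(model_a, model_b, x, t, log_q_a, log_q_b, temperature=1.0):
    """Mix the two noise predictions with weights from running per-model
    log-density estimates tracked along the trajectory. Softmax weighting
    of densities behaves like a soft logical OR over the two models."""
    logits = torch.stack([torch.as_tensor(log_q_a),
                          torch.as_tensor(log_q_b)]) / temperature
    w = torch.softmax(logits, dim=0)
    return w[0] * model_a.eps(x, t) + w[1] * model_b.eps(x, t)
```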