Publications

Tri-process model of interpersonal mindfulness: theoretical framework and study protocol
Bassam Khoury
Viktoriya Manova
Lena Adel
Michael Lifshitz
Rodrigo C. Vergara
Harmehr Sekhon
Soham Rej
According to the Centers for Disease Control and Prevention, over 14% of the US population practice mindfulness meditation. The effects of mindfulness training on physical and mental health have been consistently documented, but its effects on interpersonal relationships are not yet fully understood or investigated. Interpersonal relationships play a crucial role in the wellbeing of individuals and society, and therefore warrant further study. The aim of this paper is to present a tri-process theoretical model of interpersonal mindfulness and a study protocol to validate the proposed model. Specifically, according to the proposed model, mindfulness meditation training increases the self-awareness, self-regulation, and prosociality of those receiving the training, which improves the quality of their interpersonal interactions and the socioemotional support they provide to other individuals. Finally, better socioemotional support increases the support receiver's ability to regulate their emotions. Using a multiphasic longitudinal design involving 640 participants randomized into 320 dyads, the proposed protocol aims to validate the tri-process model and to investigate its mechanisms of action. The proposed study has important theoretical and social implications and will allow devising new and more effective interpersonal mindfulness programs with applications in multiple fields.
Understanding the normative leadership of the World Health Organization (WHO): a mixed-method approach
Miriam Cohen
Jean-Louis Denis
Pierre Larouche
Gaëlle Foucault
Marie-Andrée Girard
Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation
Junde Wu
Rao Fu
Huihui Fang
Yuanpei Liu
Zhao-Yang Wang
Yanwu Xu
Yueming Jin
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation due to its impressive capabilities in various segmentation tasks and its prompt-based interface. However, recent studies and individual experiments have shown that SAM underperforms in medical image segmentation owing to its lack of medical-specific knowledge. This raises the question of how to enhance SAM's segmentation capability for medical images. In this paper, instead of fine-tuning the SAM model, we propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model using a lightweight yet effective adaptation technique. In Med-SA, we propose Space-Depth Transpose (SD-Trans) to adapt 2D SAM to 3D medical images and Hyper-Prompting Adapter (HyP-Adpt) to achieve prompt-conditioned adaptation. We conduct comprehensive evaluation experiments on 17 medical image segmentation tasks across various image modalities. Med-SA outperforms several state-of-the-art (SOTA) medical image segmentation methods while updating only 2% of the parameters. Our code is released at https://github.com/KidsWithTokens/Medical-SAM-Adapter.
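As an illustration of the adapter approach described above, here is a minimal PyTorch sketch of a bottleneck adapter attached to a frozen backbone. The names (`Adapter`, `bottleneck`, `freeze_backbone`) are illustrative assumptions, not the released Med-SA code, which additionally implements SD-Trans and HyP-Adpt.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def freeze_backbone(backbone: nn.Module) -> None:
    """Freeze pretrained weights so only adapter parameters are trained."""
    for p in backbone.parameters():
        p.requires_grad = False
```

Freezing the backbone and training only the small adapters is what keeps the share of updated parameters small, on the order of the 2% reported above.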
Ranking code clones to support maintenance activities
Osama Ehsan
Ying Zou
Dong Qiu
Rhythmic Information Sampling in the Brain during Visual Recognition
Laurent Caplette
Frédéric Gosselin
Towards Compute-Optimal Transfer Learning
Massimo Caccia
Alexandre Galashov
Arthur Douillard
Amal Rannen-Triki
Dushyant Rao
Michela Paganini
Marc'Aurelio Ranzato
Razvan Pascanu
When Do Graph Neural Networks Help with Node Classification? Investigating the Impact of Homophily Principle on Node Distinguishability
Sitao Luan
Chenqing Hua
Minkai Xu
Qincheng Lu
Jiaqi Zhu
Xiao-Wen Chang
Jie Fu
Jure Leskovec
Better Training of GFlowNets with Local Credit and Incomplete Trajectories
Ling Pan
Nikolay Malkin
Dinghuai Zhang
Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?
Boris Knyazev
Doha Hwang
Pretraining a neural network on a large dataset is becoming a cornerstone of machine learning that is within the reach of only a few communities with large resources. We aim at the ambitious goal of democratizing pretraining. Towards that goal, we train and release a single neural network that can predict high-quality ImageNet parameters of other neural networks. By using predicted parameters for initialization, we are able to boost the training of diverse ImageNet models available in PyTorch. When transferred to other datasets, models initialized with predicted parameters also converge faster and reach competitive final performance.
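To make the initialization-by-prediction idea concrete, here is a hedged PyTorch sketch: a hypothetical `parameter_predictor` produces a name-to-tensor mapping, and only shape-compatible entries are copied into a target model before standard training. This is an assumption-laden illustration, not the authors' released code.

```python
import torch
import torchvision.models as models

def init_from_predicted(model: torch.nn.Module,
                        predicted: dict[str, torch.Tensor]) -> None:
    """Copy predicted tensors into matching entries of the model's state dict."""
    state = model.state_dict()
    compatible = {k: v for k, v in predicted.items()
                  if k in state and state[k].shape == v.shape}
    model.load_state_dict(compatible, strict=False)

resnet = models.resnet50()
# predicted = parameter_predictor(describe(resnet))  # hypothetical predictor call
# init_from_predicted(resnet, predicted)             # then fine-tune as usual
```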
Equivariance With Learned Canonicalization Functions
Sékou-Oumar Kaba
Arnab Kumar Mondal
Yan Zhang
Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations. In this paper, we propose an alternative that avoids this architectural constraint by learning to produce a canonical representation of the data. These canonicalization functions can readily be plugged into non-equivariant backbone architectures. We offer explicit ways to implement them for many groups of interest. We show that this approach enjoys universality while providing interpretable insights. Our main hypothesis is that learning a neural network to perform canonicalization is better than doing so with predefined heuristics. Our results show that learning the canonicalization function indeed leads to better results and that the approach achieves strong performance in practice.
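A simplified sketch of the plug-in mechanics for 2D rotations may help: a small network predicts an angle, the input is rotated back to a canonical pose, and any non-equivariant backbone then processes the result. For exact invariance the canonicalization function itself must satisfy an equivariance property; this illustration (the class name `Canonicalizer` and the MLP angle estimator are assumptions) omits that detail for brevity.

```python
import torch
import torch.nn as nn

class Canonicalizer(nn.Module):
    """Predict a rotation angle and un-rotate the input,
    so a non-equivariant backbone sees a canonical pose."""
    def __init__(self):
        super().__init__()
        self.angle_net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 2); pool per-point predictions into one angle estimate
        theta = self.angle_net(points).mean()
        c, s = torch.cos(-theta), torch.sin(-theta)
        rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
        return points @ rot.T  # canonicalized coordinates

# Usage: backbone(Canonicalizer()(points)) for any standard backbone.
```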
Graphically Structured Diffusion Models
Christian Dietrich Weilbach
William Harvey
High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance
Abdurakhmon Sadiev
Marina Danilova
Eduard Gorbunov
Samuel Horváth
Pavel Dvurechensky
Alexander Gasnikov
Peter Richtárik
During recent years, the interest of the optimization and machine learning communities in high-probability convergence of stochastic optimization methods has been growing. One of the main reasons for this is that high-probability complexity bounds are more accurate and less studied than in-expectation ones. However, SOTA high-probability non-asymptotic convergence results are derived under strong assumptions such as the boundedness of the gradient noise variance or of the objective's gradient itself. In this paper, we propose several algorithms with high-probability convergence results under less restrictive assumptions. In particular, we derive new high-probability convergence results under the assumption that the gradient/operator noise has bounded central α-th moment for α ∈ (1, 2], which allows the variance to be unbounded.
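For reference, the bounded central α-th moment condition mentioned above can be written as follows (a standard formulation of this assumption; the notation here is chosen for illustration):

```latex
% Bounded central \alpha-th moment (heavy-tailed noise) assumption:
\mathbb{E}_{\xi}\!\left[ \left\| \nabla f(x,\xi) - \nabla f(x) \right\|^{\alpha} \right]
  \le \sigma^{\alpha},
\qquad \alpha \in (1, 2].
% \alpha = 2 recovers the usual bounded-variance assumption;
% \alpha < 2 covers noise whose variance may be unbounded.
```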