Publications
Adaptation, Comparison and Practical Implementation of Fairness Schemes in Kidney Exchange Programs
In Kidney Exchange Programs (KEPs), each participating patient is registered together with an incompatible donor. Donors without an incompatible patient can also register. Then, KEPs typically maximize overall patient benefit through donor exchanges. This aggregation of benefits calls into question potential individual patient disparities in terms of access to transplantation in KEPs. Considering solely this utilitarian objective may become an issue in the case where multiple exchange plans are optimal or near-optimal. In fact, current KEP policies are all-or-nothing, meaning that only one exchange plan is determined. Each patient is either selected or not as part of that unique solution. In this work, we seek instead to find a policy that contemplates the probability of each patient being part of a solution. To guide the determination of our policy, we adapt popular fairness schemes to KEPs to balance the usual approach of maximizing the utilitarian objective. Different combinations of fairness and utilitarian objectives are modelled as conic programs with an exponential number of variables. We propose a column generation approach to solve them effectively in practice. Finally, we make an extensive comparison of the different schemes in terms of the balance of utility and fairness score, and validate the scalability of our methodology on benchmark instances from the literature.
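The shift the abstract describes, from a single all-or-nothing exchange plan to a policy over plans, can be illustrated with a toy sketch (all plans and patient names below are hypothetical, not from the paper): a lottery over several optimal exchange plans induces a per-patient selection probability, which is the kind of quantity fairness schemes can then balance against total utility.

```python
# Hypothetical toy instance: three equally good exchange plans, each listing
# the patients it transplants. Data are illustrative only.
plans = [
    {"p1", "p2", "p3"},
    {"p1", "p2", "p4"},
    {"p2", "p3", "p4"},
]

# A uniform lottery over the optimal plans induces per-patient selection
# probabilities, instead of the all-or-nothing outcome of picking one plan.
patients = sorted(set().union(*plans))
prob = {p: sum(p in plan for plan in plans) / len(plans) for p in patients}

for p in patients:
    print(p, round(prob[p], 2))
```

Under a single all-or-nothing plan, some patient here would get probability 0; under the uniform lottery, every patient has probability at least 2/3.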
2025-08-01
European Journal of Operational Research (published)
Inverter-based resources (IBRs) can cause instability in weak AC grids. While supplementary damping controllers (SDCs) effectively mitigate this instability, they are typically designed for specific resonance frequencies but struggle with large shifts caused by changing grid conditions. This paper proposes a deep reinforcement learning-based agent (DRL Agent) as an adaptive SDC to handle shifted resonance frequencies. To address the time-consuming nature of training DRL Agents in electromagnetic transient (EMT) simulations, we coordinate fast root mean square (RMS) and EMT simulations. Resonance frequencies of the weak grid instability are accurately reproduced by RMS simulations to support the training process. The DRL Agent’s efficacy is tested in unseen scenarios outside the training dataset. We then iteratively improve the DRL Agent’s performance by modifying the reward function and hyper-parameters.
Many causal systems such as biological processes in cells can only be observed indirectly via measurements, such as gene expression. Causal representation learning---the task of correctly mapping low-level observations to latent causal variables---could advance scientific understanding by enabling inference of latent variables such as pathway activation. In this paper, we develop methods for inferring latent variables from multiple related datasets (environments) and tasks. As a running example, we consider the task of predicting a phenotype from gene expression, where we often collect data from multiple cell types or organisms that are related in known ways. The key insight is that the mapping from latent variables driven by gene expression to the phenotype of interest changes sparsely across closely related environments. To model sparse changes, we introduce Tree-Based Regularization (TBR), an objective that minimizes prediction error while regularizing closely related environments to learn similar predictors. We prove that under assumptions about the degree of sparse changes, TBR identifies the true latent variables up to some simple transformations. We evaluate the theory empirically with both simulations and ground-truth gene expression data. We find that TBR recovers the latent causal variables better than related methods across these settings, even in settings that violate some assumptions of the theory.
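The core of the TBR objective, as described above, is a prediction loss plus a penalty tying each environment's predictor to its neighbor in a known tree. A minimal sketch of that idea follows; the tree, the linear predictors, and the choice of an L1 difference penalty are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 environments (e.g. cell types) related by a known
# tree rooted at environment 0; each environment has a linear predictor w_e.
n_env, n_feat = 3, 5
X = [rng.normal(size=(40, n_feat)) for _ in range(n_env)]
w_true = rng.normal(size=n_feat)
y = [x @ w_true + rng.normal(scale=0.1, size=40) for x in X]

parent = {0: None, 1: 0, 2: 0}  # toy tree: environments 1 and 2 hang off 0

def tbr_loss(W, lam=1.0):
    """Prediction error plus a penalty on predictor differences along tree
    edges, encouraging sparse changes between closely related environments."""
    fit = sum(np.mean((X[e] @ W[e] - y[e]) ** 2) for e in range(n_env))
    reg = sum(np.sum(np.abs(W[e] - W[parent[e]]))
              for e in range(n_env) if parent[e] is not None)
    return fit + lam * reg

# Identical predictors across environments incur zero tree penalty.
W = np.stack([w_true] * n_env)
print(tbr_loss(W, lam=1.0))
```

With identical predictors the penalty term vanishes, so the loss reduces to pure prediction error; predictors that differ across a tree edge pay a cost proportional to how much they differ.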
The PA7-clade (or group 3) of Pseudomonas aeruginosa is now recognized as a distinct species, Pseudomonas paraeruginosa. We report here the genomic sequences of six new strains of P. paraeruginosa: Zw26 (the first complete genome of a cystic fibrosis isolate of P. paraeruginosa), draft genomes of four burn and wound strains from Argentina very closely related to PA7, and of Pa5196, the strain in which arabinosylation of type IV pili was documented. We compared the genomes of 82 strains of P. paraeruginosa and confirmed that the species is divided into two sub-clades. Core genomes are very similar, while most differences are found in "regions of genomic plasticity" (RGPs). Several genomic deletions were identified, and most are common to the CR1 sub-clade that includes Zw26 and Pa5196. All strains lack the type 3 secretion system (T3SS) and instead use an alternative virulence strategy involving an exolysin, a characteristic shared with group 5 P. aeruginosa. All strains tend to be multiresistant like PA7, with a significant proportion of carbapenem-resistant strains, either oprD mutants or carrying carbapenemase genes. Although P. paraeruginosa is still relatively rare, it has a worldwide distribution. Its multiresistance and its alternative virulence strategy need to be considered in future therapeutic development. IMPORTANCE: Pseudomonas aeruginosa is an important opportunistic pathogen causing respiratory infections, notably in cystic fibrosis, and burn and wound infections. Our study reports six new genomes of Pseudomonas paraeruginosa, a new species recently reported as distinct from P. aeruginosa. The number of sequenced genomes of P. paraeruginosa is only about 1% that of P. aeruginosa. We compare the genomic content of nearly all strains of P. paraeruginosa in GenBank, highlighting the differences in core and accessory genomes, antimicrobial resistance genes, and virulence factors. This novel species is very similar in environmental spectrum to P. aeruginosa but is notably resistant to last-line antibiotics and uses an alternative virulence strategy based on exolysin, a strategy shared with some P. aeruginosa outliers.
Accurate modeling of physical systems governed by partial differential equations is a central challenge in scientific computing. In oceanography, high-resolution current data are critical for coastal management, environmental monitoring, and maritime safety. However, available satellite products, such as Copernicus data for sea water velocity at ~0.08 degrees spatial resolution and global ocean models, often lack the spatial granularity required for detailed local analyses. In this work, we (a) introduce a supervised deep learning framework based on neural operators for solving PDEs and providing arbitrary resolution solutions, and (b) propose downscaling models with an application to Copernicus ocean current data. Additionally, our method can model surrogate PDEs and predict solutions at arbitrary resolution, regardless of the input resolution. We evaluated our model on real-world Copernicus ocean current data and synthetic Navier-Stokes simulation datasets.
Obeticholic acid (OCA) is the second line therapy for primary biliary cholangitis. While efficient in promoting BA detoxification and limiting liver fibrosis, its clinical use is restricted by severe dose-dependent side effects. We tested the hypothesis that adding n-3 polyunsaturated fatty acids, eicosapentaenoic (EPA) and docosahexaenoic (DHA) acids, to OCA may improve the therapeutic effect of the low drug dosage. Several liver cell lines were exposed to vehicle, low or high OCA dose (1-20μM) in the presence or absence of EPA/DHA for 24 h. To induce ER stress, apoptosis, and fibrosis, HepG2 cells were exposed to a 400μM BA mixture or to 2ng/mL TGF-β. For inflammation analyses, THP-1 cells were activated with 100ng/mL LPS. The impact of OCA+EPA/DHA was assessed using transcriptomic (qRT-PCR), proteomic (ELISA, caspase-3), and metabolomic (LC-MS/MS) approaches. The addition of EPA/DHA reinforced the ability of the low OCA dose to down-regulate the expression of genes involved in BA synthesis (CYP7A1, CYP8B1) and uptake (NTCP) and to up-regulate MRP2 and MRP3 gene expression. EPA/DHA also enhanced the anti-inflammatory response of the drug by reducing the expression of the LPS-induced cytokines TNFα, IL-6, IL-1β and MCP-1 in THP-1 macrophages. OCA+EPA/DHA decreased the expression of BIP, CHOP and COL1A1 genes and the caspase-3 activity. EPA+DHA potentiate the response to low OCA doses on BA toxicity, and provide additional benefits on ER stress, apoptosis, inflammation and fibrosis. These observations support the idea that adding n-3 polyunsaturated fatty acids to the drug may reduce the risk of dose-related side effects in patients treated with OCA.
Choice of immunoassay influences population seroprevalence estimates. Post-hoc adjustments for assay performance could improve comparability of estimates across studies and enable pooled analyses. We assessed post-hoc adjustment methods using data from 2021–2023 SARS-CoV-2 serosurveillance studies in Alberta, Canada: one that tested 124,008 blood donations using Roche immunoassays (SARS-CoV-2 nucleocapsid total antibody and anti-SARS-CoV-2 S) and another that tested 214,780 patient samples using Abbott immunoassays (SARS-CoV-2 IgG and anti-SARS-CoV-2 S). Comparing datasets, seropositivity for antibodies against nucleocapsid (anti-N) diverged after May 2022 due to differential loss of sensitivity as a function of time since infection. The commonly used Rogan-Gladen adjustment did not reduce this divergence. Regression-based adjustments using the assays’ semi-quantitative results produced more similar estimates of anti-N seroprevalence and rolling incidence proportion (proportion of individuals infected in recent months). Seropositivity for antibodies targeting SARS-CoV-2 spike protein was similar without adjustment, and concordance was not improved when applying an alternative, functional threshold. These findings suggest that assay performance substantially impacted population inferences from SARS-CoV-2 serosurveillance studies in the Omicron period. Unlike methods that ignore time-varying assay sensitivity, regression-based methods using the semi-quantitative assay resulted in increased concordance in estimated anti-N seropositivity and rolling incidence between cohorts using different assays.
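The Rogan-Gladen adjustment mentioned above corrects apparent seroprevalence for fixed assay sensitivity and specificity. A minimal sketch follows; the numeric inputs are illustrative, not values from the study.

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Rogan-Gladen correction: (p_hat + sp - 1) / (se + sp - 1), clipped
    to [0, 1]. Using a single static sensitivity is exactly the limitation
    the abstract notes: it cannot capture sensitivity that decays with time
    since infection."""
    adjusted = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(adjusted, 0.0), 1.0)

# Illustrative numbers: 30% apparent anti-N positivity on an assay with
# 85% sensitivity and 99% specificity.
print(rogan_gladen(0.30, 0.85, 0.99))  # → 0.29 / 0.84 ≈ 0.345
```

When the true sensitivity has drifted below the assumed value, this formula under-corrects, which is consistent with the divergence the study observed after May 2022.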
Corrigendum to "Child- and Proxy-reported Differences in Patient-reported Outcome and Experience Measures in Pediatric Surgery: Systematic Review and Meta-analysis" [Journal of Pediatric Surgery 60 (2025) 162172].
Corrigendum to "Virtual Reality for Pediatric Trauma Education - A Preliminary Face and Content Validation Study" [Journal of Pediatric Surgery 60 (2025) 161951].
The majority of signal data captured in the real world uses numerous sensors with different resolutions. In practice, most deep learning architectures are fixed-resolution; they consider a single resolution at training and inference time. This is convenient to implement but fails to fully take advantage of the diverse signal data that exists. In contrast, other deep learning architectures are adaptive-resolution; they directly allow various resolutions to be processed at training and inference time. This provides computational adaptivity but either sacrifices robustness or compatibility with mainstream layers, which hinders their use. In this work, we introduce Adaptive Resolution Residual Networks (ARRNs) to surpass this tradeoff. We construct ARRNs from Laplacian residuals, which serve as generic adaptive-resolution adapters for fixed-resolution layers. We use smoothing filters within Laplacian residuals to linearly separate input signals over a series of resolution steps. We can thereby skip Laplacian residuals to cast high-resolution ARRNs into low-resolution ARRNs that are computationally cheaper yet numerically identical over low-resolution signals. We guarantee this result when Laplacian residuals are implemented with perfect smoothing kernels. We complement this novel component with Laplacian dropout, which randomly omits Laplacian residuals during training. This regularizes for robustness to a distribution of lower resolutions. This also regularizes for numerical errors that may occur when Laplacian residuals are implemented with approximate smoothing kernels. We provide a solid grounding for the advantageous properties of ARRNs through a theoretical analysis based on neural operators, and empirically show that ARRNs embrace the challenge posed by diverse resolutions with computational adaptivity, robustness, and compatibility with mainstream layers.
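The idea of smoothing filters linearly separating a signal over resolution steps resembles a classic Laplacian pyramid decomposition. A 1-D numpy sketch under that reading follows; the binomial kernel and the absence of downsampling are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

def smooth(x):
    # Simple binomial low-pass filter as a stand-in for the smoothing
    # kernel (the exact kernel used in ARRNs is an assumption here).
    k = np.array([0.25, 0.5, 0.25])
    return np.convolve(np.pad(x, 1, mode="edge"), k, mode="valid")

def laplacian_split(x, steps=3):
    """Split x into per-step residuals (the detail each resolution step
    adds) plus a smooth base; summing them reconstructs x exactly."""
    residuals = []
    for _ in range(steps):
        low = smooth(x)
        residuals.append(x - low)  # detail lost by one smoothing step
        x = low
    return residuals, x

x = np.random.default_rng(0).normal(size=64)
residuals, base = laplacian_split(x)
recon = base + sum(residuals)
print(np.allclose(recon, x))  # the linear separation is exactly invertible
```

Because the split telescopes (each residual cancels against the next level's input), dropping the finest residuals leaves a cheaper model that still reconstructs the smoothed signal, loosely mirroring how ARRNs skip Laplacian residuals for low-resolution inputs.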
Data augmentation is a widely used and effective technique to improve the generalization performance of deep neural networks. Yet, despite often facing limited data availability when working with medical images, it is frequently underutilized. This appears to come from a gap in our collective understanding of the efficacy of different augmentation techniques across different tasks and modalities. One modality where this is especially true is ultrasound imaging. This work addresses this gap by analyzing the effectiveness of different augmentation techniques at improving model performance across a wide range of ultrasound image analysis tasks. To achieve this, we introduce a new standardized benchmark of 14 ultrasound image classification and semantic segmentation tasks from 10 different sources and covering 11 body regions. Our results demonstrate that many of the augmentations commonly used for tasks on natural images are also effective on ultrasound images, even more so than augmentations developed specifically for ultrasound images in some cases. We also show that diverse augmentation using TrivialAugment, which is widely used for natural images, is also effective for ultrasound images. Moreover, our proposed methodology represents a structured approach for assessing various data augmentations that can be applied to other contexts and modalities.
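TrivialAugment's core recipe (pick one augmentation uniformly at random, with a uniformly random strength, per sample) is simple enough to sketch. The operation list below is a hypothetical ultrasound-friendly set chosen for illustration, not the benchmark's actual augmentation pool.

```python
import random
import numpy as np

# TrivialAugment-style policy: one op, one random magnitude, per sample.
def gamma(img, m):
    # Gamma shift in [0.5, 1.5): a brightness/contrast-like perturbation.
    return np.clip(img, 1e-6, 1.0) ** (0.5 + m)

def gauss_noise(img, m):
    # Additive Gaussian noise, a plausible stand-in for speckle-like noise.
    return np.clip(img + np.random.normal(scale=0.1 * m, size=img.shape), 0, 1)

def hflip(img, m):
    return img[:, ::-1]  # magnitude is ignored for flips

OPS = [gamma, gauss_noise, hflip]

def trivial_augment(img):
    op = random.choice(OPS)  # uniform over the op set
    m = random.random()      # uniform magnitude in [0, 1)
    return op(img, m)

img = np.random.default_rng(0).random((8, 8))
out = trivial_augment(img)
print(out.shape)
```

The appeal of this scheme, and presumably part of why it transfers to ultrasound, is that it has no tuned policy at all: the search over augmentation strategies is replaced by per-sample randomness.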