Publications

Noisy Pairing and Partial Supervision for Stylized Opinion Summarization
Reinald Kim Amplayo
Mirella Lapata
Emmanuel Bengio
Maxinder S. Kanwal
Asja Fischer
Somnath Basu Roy Chowdhury
Chao Zhao
Tanya Goyal
Jiacheng Xu
Junyi Jessy Li
Ivor Wai-hung Tsang
James T. Kwok
Neil Houlsby
Andrei Giurgiu
Stanisław Jastrzębski
Bruna Morrone
Quentin de Laroussilhe
Andrea Gesmundo
Mona Attariyan
Sylvain Gelly
Thomas Wolf
Lysandre Debut
Victor Sanh
Julien Chaumond
Clement Delangue
Anthony Moi
Pierric Cistac
Tim Rault
Rémi Louf
Morgan Funtowicz
Joe Davison
Sam Shleifer
Patrick von Platen
Clara Ma
Yacine Jernite
Julien Plu
Canwen Xu
Opinion summarization research has primarily focused on generating summaries reflecting important opinions from customer reviews without paying much attention to the writing style. In this paper, we propose the stylized opinion summarization task, which aims to generate a summary of customer reviews in the desired (e.g., professional) writing style. To tackle the difficulty in collecting customer and professional review pairs, we develop a non-parallel training framework, Noisy Pairing and Partial Supervision (NAPA), which trains a stylized opinion summarization system from non-parallel customer and professional review sets. We create a benchmark ProSum by collecting customer and professional reviews from Yelp and Michelin. Experimental results on ProSum and FewSum demonstrate that our non-parallel training framework consistently improves both automatic and human evaluations, successfully building a stylized opinion summarization model that can generate professionally-written summaries from customer reviews.
Normalization Layers Are All That Sharpness-Aware Minimization Needs
Maximilian Mueller
Tiffany Joyce Vlaar
Matthias Hein
Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima and has been shown to enhance generalization performance in various settings. In this work we show that perturbing only the affine normalization parameters (typically comprising 0.1% of the total parameters) in the adversarial step of SAM can outperform perturbing all of the parameters. This finding generalizes to different SAM variants and both ResNet (Batch Normalization) and Vision Transformer (Layer Normalization) architectures. We consider alternative sparse perturbation approaches and find that these do not achieve similar performance enhancement at such extreme sparsity levels, showing that this behaviour is unique to the normalization layers. Although our findings reaffirm the effectiveness of SAM in improving generalization performance, they cast doubt on whether this is solely caused by reduced sharpness.
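As a rough sketch of the mechanism described above (not code from the paper), the following PyTorch-style snippet restricts SAM's adversarial perturbation to the affine parameters of normalization layers; the helper names, the rho value, and the training-loop details are illustrative assumptions.

```python
# Minimal sketch: SAM where only normalization-layer affine parameters are
# perturbed in the adversarial step. `model`, `loss_fn`, `base_optimizer`,
# and `rho` are illustrative placeholders.
import torch

def norm_params(model):
    # Collect affine weights/biases of BatchNorm / LayerNorm / GroupNorm layers.
    norm_types = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d,
                  torch.nn.LayerNorm, torch.nn.GroupNorm)
    return [p for m in model.modules() if isinstance(m, norm_types)
            for p in m.parameters() if p.requires_grad]

def sam_on_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    params = norm_params(model)

    # 1) First forward/backward pass: gradients at the current point.
    loss = loss_fn(model(x), y)
    loss.backward()

    # 2) Adversarial step: ascend only along the normalization parameters.
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm(p=2) for p in params if p.grad is not None]), p=2)
    eps = []
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)          # perturb in place
            eps.append(e)

    # 3) Second forward/backward at the perturbed point.
    model.zero_grad()
    loss_fn(model(x), y).backward()

    # 4) Undo the perturbation and take the usual descent step on all parameters.
    with torch.no_grad():
        for p, e in zip(params, eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    model.zero_grad()
    return loss.detach()
```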
A Novel Deep Multi-head Attentive Vulnerable Line Detector
Miles Q. Li
Ashita Diwan
An Online Newton’s Method for Time-Varying Linear Equality Constraints
Jean-Luc Lupien
We consider online optimization problems with time-varying linear equality constraints. In this framework, an agent makes sequential decisions using only prior information. At every round, the agent suffers an environment-determined loss and must satisfy time-varying constraints. Both the loss functions and the constraints can be chosen adversarially. We propose the Online Projected Equality-constrained Newton Method (OPEN-M) to tackle this family of problems. We obtain sublinear dynamic regret and constraint violation bounds for OPEN-M under mild conditions. Namely, smoothness of the loss function and boundedness of the inverse Hessian at the optimum are required, but not convexity. Finally, we show OPEN-M outperforms state-of-the-art online constrained optimization algorithms in a numerical network flow application.
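To make the constrained Newton mechanism concrete, here is a small NumPy sketch of one equality-constrained Newton step that lands on a time-varying constraint set A_t x = b_t by solving the corresponding KKT system. It illustrates the kind of update OPEN-M builds on, not the paper's exact algorithm; the function names and toy problem are assumptions.

```python
# Hypothetical sketch of a Newton-type update that stays feasible for a
# time-varying linear equality constraint A_t x = b_t.
import numpy as np

def constrained_newton_step(x, grad, hess, A, b):
    """One equality-constrained Newton step at x.

    Solves the KKT system
        [H  A^T] [dx    ]   [-grad    ]
        [A  0  ] [lambda] = [b - A @ x]
    so that the new iterate satisfies A @ (x + dx) = b.
    """
    n, m = x.size, b.size
    kkt = np.block([[hess, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([-grad, b - A @ x])
    sol = np.linalg.solve(kkt, rhs)
    return x + sol[:n]

# Toy usage: quadratic loss f(x) = 0.5 * ||x - c||^2 with a single constraint.
c = np.array([3.0, -1.0, 2.0])
A = np.array([[1.0, 1.0, 1.0]])   # constraint at this round: sum(x) = 1
b = np.array([1.0])
x = np.zeros(3)
x = constrained_newton_step(x, grad=x - c, hess=np.eye(3), A=A, b=b)
print(A @ x, b)  # the updated iterate satisfies the current constraint
```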
Optimising Electric Vehicle Charging Station Placement Using Advanced Discrete Choice Models
Steven Lamontagne
Bernard Gendron
Miguel F. Anjos
Ribal Atallah
Département d'informatique et de recherche opérationnelle
U. Montréal
School of Mathematics
U. Edinburgh
Institut de Recherche d'Hydro-Québec
We present a new model for finding the optimal placement of electric vehicle charging stations across a multiperiod time frame so as to maximise electric vehicle adoption. Via the use of stochastic discrete choice models and user classes, this work allows for a granular modelling of user attributes and their preferences in regard to charging station characteristics. We adopt a simulation approach and precompute error terms for each option available to users for a given number of scenarios. This results in a bilevel optimisation model that is, however, intractable for all but the simplest instances. Our major contribution is a reformulation into a maximum covering model, which uses the precomputed error terms to calculate the users covered by each charging station. This allows solutions to be found more efficiently than for the bilevel formulation. The maximum covering formulation remains intractable in some instances, so we propose rolling horizon, greedy, and greedy randomised adaptive search procedure heuristics to obtain good-quality solutions more efficiently. Extensive computational results are provided, and they compare the maximum covering formulation with the current state of the art for both exact solutions and the heuristic methods. History: Accepted by Andrea Lodi, Area Editor for Design & Analysis of Algorithms–Discrete. Funding: This work was supported by Hydro-Québec and the Natural Sciences and Engineering Research Council of Canada [Discovery grant 2017-06054; Collaborative Research and Development Grant CRDPJ 536757–19]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/ijoc.2022.0185.
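The simulation-and-covering idea can be illustrated with a toy NumPy sketch: error terms are precomputed per user, option, and scenario, and a station "covers" a user in a scenario when its simulated utility beats the opt-out alternative. The utilities, the Gumbel errors, and the covering rule below are illustrative placeholders, not the paper's calibrated model.

```python
# Toy illustration of precomputed error terms feeding a covering formulation.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_stations, n_scenarios = 4, 3, 5

# Deterministic utility each user assigns to each candidate station, plus an
# opt-out (no-charging) utility per user -- all made-up numbers.
station_utility = rng.normal(size=(n_users, n_stations))
opt_out_utility = rng.normal(size=n_users)

# Precompute error terms once, per (user, option, scenario); options are the
# stations plus the opt-out alternative.
errors = rng.gumbel(size=(n_users, n_stations + 1, n_scenarios))

# covered[u, s, k] is True when station s alone would beat opting out for
# user u in scenario k -- the coefficient a covering model would use.
covered = (station_utility[:, :, None] + errors[:, :n_stations, :]
           > opt_out_utility[:, None, None] + errors[:, n_stations:, :])

# Expected adoption for a candidate set of open stations, averaged over scenarios.
open_stations = [0, 2]
adoption = covered[:, open_stations, :].any(axis=1).mean()
print(f"expected share of users covered: {adoption:.2f}")
```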
Optimism and Adaptivity in Policy Optimization
Veronica Chelu
Tom Zahavy
Arthur Guez
Sebastian Flennerhag
Optimizing Fairness over Time with Homogeneous Workers (Short Paper).
Bart-jan Van Rossum
Rui Chen
PAC-Bayesian Generalization Bounds for Adversarial Generative Models
Sokhna Diarra Mbacke
Florence Clerc
We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance. Our first result on the Wasserstein distance assumes the instance space is bounded, while our second result takes advantage of dimensionality reduction. Our results naturally apply to Wasserstein GANs and Energy-Based GANs, and our bounds provide new training objectives for these two. Although our work is mainly theoretical, we perform numerical experiments showing non-vacuous generalization bounds for Wasserstein GANs on synthetic datasets.
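For orientation only, and not as the paper's result: the classical PAC-Bayesian bound for supervised losses has the form

\[ \mathbb{E}_{h \sim Q}[R(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{R}(h)] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}, \]

holding with probability at least 1 − δ over an i.i.d. sample of size n, simultaneously for all posteriors Q and a fixed data-independent prior P. The abstract above describes bounds of an analogous flavour for generative models, with the discrepancy measured by the Wasserstein or total variation distance rather than a supervised risk.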
Patient experience or patient satisfaction? A systematic review of child- and family-reported experience measures in pediatric surgery.
Julia Ferreira
Prachikumari Patel
Elena Guadagno
Nikki Ow
Jo Wray
Sherif Emil
Performative Prediction with Neural Networks
Performative prediction is a framework for learning models that influence the data they intend to predict. We focus on finding classifiers that are performatively stable, i.e. optimal for the data distribution they induce. Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous with respect to the model's parameters. Under this assumption, the loss must be strongly convex and smooth in these parameters; otherwise, the method will diverge for some problems. In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems. As a result, we are able to significantly relax the assumptions on the loss function. In particular, we do not need to assume convexity with respect to the model's parameters. As an illustration, we introduce a resampling procedure that models realistic distribution shifts and show that it satisfies our assumptions. We support our theory by showing that one can learn performatively stable classifiers with neural networks making predictions about real data that shift according to our proposed procedure.
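As a hedged illustration of repeated risk minimization under performativity (not the paper's experimental setup), the sketch below retrains a small neural classifier, lets the data distribution react to its predictions through a toy resampling rule, and repeats; the resampling rule, model, and data are made up.

```python
# Hypothetical sketch of repeated risk minimization: retrain, let the data
# distribution react to the new model's *predictions* via resampling, repeat.
import numpy as np
from sklearn.neural_network import MLPClassifier

def resample(X, y, scores, rng, strength=2.0):
    # Toy performative shift: examples the model confidently flags (high score)
    # are more likely to drop out of the next round's data.
    keep_prob = np.exp(-strength * scores)
    keep = rng.random(len(X)) < keep_prob / keep_prob.max()
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
X_t, y_t = X, y
for t in range(5):                           # repeated risk minimization loop
    model.fit(X_t, y_t)                      # minimize risk on the induced data
    scores = model.predict_proba(X)[:, 1]
    X_t, y_t = resample(X, y, scores, rng)   # distribution reacts to predictions
    print(f"round {t}: {len(X_t)} samples, train acc {model.score(X_t, y_t):.3f}")
```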
Physics-Guided Adversarial Machine Learning for Aircraft Systems Simulation
Houssem Ben Braiek
Thomas Reid
In the context of aircraft system performance assessment, deep learning technologies allow us to quickly infer models from experimental measurements, with less detailed system knowledge than usually required by physics-based modeling. However, this inexpensive model development also comes with new challenges regarding model trustworthiness. This article presents a novel approach, physics-guided adversarial machine learning (ML), which improves the confidence over the physics consistency of the model. The approach performs, first, a physics-guided adversarial testing phase to search for test inputs revealing behavioral system inconsistencies, while still falling within the range of foreseeable operational conditions. Then, it proceeds with a physics-informed adversarial training to teach the model the system-related physics domain foreknowledge through iteratively reducing the unwanted output deviations on the previously uncovered counterexamples. Empirical evaluation on two aircraft system performance models shows the effectiveness of our adversarial ML approach in exposing physical inconsistencies of both models and in improving their propensity to be consistent with physics domain knowledge.
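A toy sketch of the two-phase idea described above, assuming a made-up physics property (the output should not decrease as the first input grows): phase one searches the operational box for inputs where the model violates the property, and phase two penalizes those violations during retraining. The model, bounds, and penalty weight are illustrative placeholders, not the paper's setup.

```python
# Toy two-phase sketch: physics-guided adversarial testing, then
# physics-informed adversarial training on the uncovered counterexamples.
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))

def monotonicity_violation(x, delta=0.05):
    # Made-up physics check: increasing input 0 should not lower the output.
    x_plus = x.clone()
    x_plus[:, 0] = (x_plus[:, 0] + delta).clamp(max=1.0)
    return (model(x) - model(x_plus)).clamp(min=0.0)   # > 0 means inconsistency

# Phase 1: adversarial testing -- ascend the violation, projected back into
# the foreseeable operating box [0, 1]^4 after every step.
x = torch.rand(256, 4, requires_grad=True)
for _ in range(50):
    viol = monotonicity_violation(x).sum()
    (grad,) = torch.autograd.grad(viol, x)
    with torch.no_grad():
        x += 0.01 * grad.sign()
        x.clamp_(min=0.0, max=1.0)
counterexamples = x.detach()   # points with zero violation contribute nothing below

# Phase 2: physics-informed adversarial training -- penalize the violation on
# the counterexamples alongside the usual data-fitting loss (omitted here).
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    penalty = monotonicity_violation(counterexamples).mean()
    opt.zero_grad()
    penalty.backward()
    opt.step()
```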