10 Jun 2019

Mila’s Best Performance Awards on the eve of ICML

Mila’s Mile-Ex premises will be in the doldrums this week, as the researchers flock to the sunny shores of Long Beach, California (a mere 40 minutes from Hollywood) for the Thirty-Sixth International Conference on Machine Learning (ICML). Our students and professors had twenty-two papers accepted to the main conference, not counting the thirty-one workshops held the following weekend. On the final night before they depart, we’ve assembled a panel of unbiased experts to recognize some global extrema of their collective achievements.

The r-Strategist Award for most papers

Named for the evolutionary theory of r/K selection, which juxtaposes “r-strategist” species that produce many offspring with low survival against “K-strategist” species that produce few offspring with high survival, this award recognizes the Mila professor with the highest total number of papers at the conference.

It came as no surprise to our panel, and it will doubtless fail to surprise our readers, that this honor belongs to Mila scientific director Yoshua Bengio, who out-spawned the rest of the field with four papers.

What may interest some readers is the second-place finish of the comparatively new Mila professor Ioannis Mitliagkas, who joined the Mila faculty only two years ago at the tender age of 32 and, with three papers, edged out more seasoned competitors like Aaron Courville, Joelle Pineau, Marc Bellemare, and Doina Precup, who finished with two apiece. Bettors would do well to consider the implications for a potential upset at NeurIPS.

The K-strategist Award for most citations on Google Scholar

At this stage of the game, expectations in this category are low: most papers don’t get cited in other papers before the conference that publishes them has even taken place.

That’s why our panelists were so gobsmacked to discover that this honor would be shared by two papers, both of which somehow managed to stimulate enough interest in the community to rack up five citations apiece before their formal debut this week.

Those two papers are “Safe Policy Improvement with Baseline Bootstrapping” by Microsoft research scientist Romain Laroche, Microsoft associate researcher Remi Tachet des Combes, and Mila PhD student Paul Trichelair, and “Stochastic Gradient Push for Distributed Deep Learning” by McGill PhD student Mahmoud Assran, University of Edinburgh PhD student Nicolas Loizou, current Facebook research scientist and former Mila postdoc Nicolas Ballas, and Mila professor Michael Rabbat.

The Thousand Words Award for prettiest figure

Our panel selected Figure 2 from the paper “Hierarchical Importance Weighted Autoencoders,” whose delicately colored pastel sprays of z_j’s called to mind the later work of nineteenth-century pointillist Georges Seurat.

Left: Figure 2 from Huang et al.’s “Hierarchical Importance Weighted Autoencoders” (2019). Right: Georges Seurat’s “Le Bec du Hoc, Grandcamp” (1885).

The paper containing the winning figure was written by Mila PhD student Chin-Wei Huang, Mila postdoc Kris Sankaran, Mila master’s student Eeshan Dhekane, Element AI research scientist Alexandre Lacoste, and Mila professor Aaron Courville.

A close runner-up in this category was Figure 1 in “State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations,” whose three-panel cartoon strip of the state reification method compared favorably, in our panel’s opinion, with the narrative art of the great Bill Watterson.

Above: Figure 1 from Lamb et al.’s “State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations” (2019). Below: Excerpt from Bill Watterson’s “Calvin and Hobbes” (1993).

The authors of the state reification paper are Mila PhD students Alex Lamb, Anirudh Goyal, and Sandeep Subramanian; Mila postdoc Jonathan Binas; Mila professors Ioannis Mitliagkas and Yoshua Bengio; and, from the University of Colorado Boulder, master’s student Denis Kazakov and professor Michael Mozer.

The Visual Intimidation Award for most equations

The clear winner in this category was “The Value Function Polytope in Reinforcement Learning,” which contains over 90 equations, spread across three theorems, ten lemmas, and three corollaries.

Excerpt from page 14 of the supplementary materials in Dadashi et al.’s “The Value Function Polytope in Reinforcement Learning” (2019).

Our panel congratulates Google AI resident Robert Dadashi, Mila professor Marc Bellemare, Mila PhD student Adrien Ali Taïga, Mila professor Nicolas Le Roux, and Google research scientist / University of Alberta professor Dale Schuurmans on this distinction.

The Apocalypse Prevention Award (sponsored by Elon Musk) for most uses of the word “safety”

Our panelists found themselves in a sticky situation here, since it transpires that Mila’s twenty-two ICML papers don’t actually use the word “safety.” After sustained negotiations, they decided to give this award to “Fairwashing: The Risk of Rationalization” for its 165 uses of “fairness” (including close variants such as “fair,” “fairer,” and “unfair”).

Excerpt from page 2 of Aïvodji et al.’s “Fairwashing: The Risk of Rationalization” (2019).

The paper was written by Université du Québec à Montréal postdoc Ulrich Aïvodji and professor Sébastien Gambs, ENSTA ParisTech master’s student Olivier Fortineau, RIKEN Center for Advanced Intelligence Project researcher Hiromi Arai, Osaka University professor Satoshi Hara, and Mila professor Alain Tapp.

It far outstripped the second-place finisher, “Compositional Fairness Constraints for Graph Embeddings” by Mila PhD student Avishek Joey Bose and Mila professor Will Hamilton, which used the word “fairness” a mere 64 times.

The Dove of Peace Award for most international coauthors

This award goes to the team of seven researchers responsible for “Manifold Mixup: Better Representations by Interpolating Hidden States.”

They are citizens of India (Mila intern Vikas Verma), the United States (Mila PhD student Alex Lamb), New Zealand (Mila PhD student Christopher Beckham), Iran (Sharif University of Technology PhD student Amir Najafi), Greece (Mila professor Ioannis Mitliagkas), Canada and France (Mila scientific director Yoshua Bengio), and, finally, Spain (Facebook research scientist David Lopez-Paz).

The Unjaundiced Innocent (“Rookie”) Award for an outstanding contribution by a first-time ICML attendee

Our panel has chosen to honor the efforts of Mila PhD student Avishek Joey Bose, who flies to Long Beach next week as the lead author on “Compositional Fairness Constraints for Graph Embeddings,” written with Mila professor Will Hamilton. With rapturous eyes and the candor of youth, Joey tells our reporter, “I’m super thrilled that my first big paper during my PhD is with Will, who was so instrumental and helpful in this process. It’s even better to know that it’s both my first paper and his first paper as a supervisor, and ergo the first for our group.”

May his smile endure through the end of the week.

Our panel wishes good luck to all of Mila’s members as they present their work, and entreats them to remember that they’re all maximal, in one dimension or more.

For the full list of Mila’s ICML papers, go here.