Publications

Two Families of Indexable Partially Observable Restless Bandits and Whittle Index Computation
Nima Akbarzadeh
Understanding the Evolution of Linear Regions in Deep Reinforcement Learning
Setareh Cohan
Nam Hee Gordon Kim
Michiel van de Panne
Policies produced by deep reinforcement learning are typically characterised by their learning curves, but they remain poorly understood in many other respects. ReLU-based policies result in a partitioning of the input space into piecewise linear regions. We seek to understand how observed region counts and their densities evolve during deep reinforcement learning using empirical results that span a range of continuous control tasks and policy network dimensions. Intuitively, we may expect that during training, the region density increases in the areas that are frequently visited by the policy, thereby affording fine-grained control. We use recent theoretical and empirical results for the linear regions induced by neural networks in supervised learning settings for grounding and comparison of our results. Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy. However, the trajectories themselves also increase in length during training, and thus the region densities decrease as seen from the perspective of the current trajectory. Our findings suggest that the complexity of deep reinforcement learning policies does not principally emerge from a significant growth in the complexity of functions observed on-and-around trajectories of the policy.
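The region-counting analysis can be made concrete with a small, hypothetical sketch (not the authors' code): along a trajectory of observations, each distinct ReLU activation pattern of the policy network identifies one linear region, so counting unique patterns gives an empirical region count and, divided by the trajectory length, a region density. The network dimensions and trajectory below are placeholders.

```python
# A minimal sketch (not the authors' code) of counting the distinct linear
# regions a small ReLU policy network induces along a trajectory: each unique
# ReLU activation pattern corresponds to one region.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical policy network dimensions; the paper studies a range of sizes.
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                       nn.Linear(64, 64), nn.ReLU(),
                       nn.Linear(64, 2))

def activation_pattern(obs: torch.Tensor) -> bytes:
    """Return a hashable sign pattern of all hidden ReLU units for one observation."""
    bits = []
    x = obs
    for layer in policy:
        x = layer(x)
        if isinstance(layer, nn.ReLU):
            bits.append((x > 0).to(torch.uint8))
    return torch.cat(bits).numpy().tobytes()

# Stand-in trajectory of observations (in practice, states visited by the policy).
trajectory = torch.randn(1000, 8)

with torch.no_grad():
    patterns = {activation_pattern(obs) for obs in trajectory}

# Region density along the trajectory: distinct regions per observation.
print(f"{len(patterns)} distinct linear regions over {len(trajectory)} states "
      f"(density = {len(patterns) / len(trajectory):.2f})")
```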
Unsupervised Dependency Graph Network
Yikang Shen
Shawn Tan
Peng Li
Jie Zhou
Recent work has identified properties of pretrained self-attention models that mirror those of dependency parse structures. In particular, some self-attention heads correspond well to individual dependency types. Inspired by these developments, we propose a new competitive mechanism that encourages these attention heads to model different dependency relations. We introduce a new model, the Unsupervised Dependency Graph Network (UDGN), that can induce dependency structures from raw corpora and the masked language modeling task. Experimental results show that UDGN achieves very strong unsupervised dependency parsing performance without gold POS tags or any other external information. The competitive gated heads show a strong correlation with human-annotated dependency types. Furthermore, the UDGN can also achieve competitive performance on masked language modeling and sentence textual similarity tasks.
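The competitive mechanism can be illustrated with a minimal, hypothetical sketch (not the exact UDGN formulation): attention scores are normalised across heads as well as across key positions, so heads compete to claim each token-pair relation rather than all modelling the same one.

```python
# A minimal sketch of head competition for dependency-like attention: each
# (head, query, key) score is normalised across heads in addition to across
# keys, so heads compete to "claim" each token-pair relation. Illustrative
# only; this is not the exact UDGN formulation.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, heads, seq, dim = 2, 4, 6, 16

q = torch.randn(batch, heads, seq, dim)
k = torch.randn(batch, heads, seq, dim)

scores = torch.einsum("bhqd,bhkd->bhqk", q, k) / dim ** 0.5

# Standard attention: normalise over key positions within each head.
attn_over_keys = F.softmax(scores, dim=-1)

# Competition: additionally normalise over heads, so for each (query, key) pair
# the heads share a unit budget and different heads specialise in different arcs.
head_competition = F.softmax(scores, dim=1)

# Gating combines the two, giving each head a soft, near-exclusive share of
# every potential dependency arc.
gated_attn = attn_over_keys * head_competition
print(gated_attn.shape)  # torch.Size([2, 4, 6, 6])
```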
Usefulness of School Absenteeism Data for Predicting Influenza Outbreaks
Joseph R. Egger
A. Hoen
John S. Brownstein
Donald R. Olson
Kevin James Konty
and second-round PCR were 94°C for 3 min, followed by 40 cycles of 94°C for 30 s, 55°C for 30 s, and 72°C for 2 min. Expected amplification products were 458 bp (PCR-1) and 304 bp (PCR-2). Using dilutions of a synthetic template corresponding to the target sequence, we estimated the sensitivity of the amplification assay to be 5 copies of target sequence by limiting-dilution assay. Negative (sterile water) and positive controls (synthetic template dilutions) were
Vision-Language Pretraining: Current Trends and the Future
Damien Teney
Aida Nematzadeh
In the last few years, there has been an increased interest in building multimodal (vision-language) models that are pretrained on larger but noisier datasets where the two modalities (e.g., image and text) loosely correspond to each other (e.g., Lu et al., 2019; Radford et al., 2021). Given a task (such as visual question answering), these models are then often fine-tuned on task-specific supervised datasets (e.g., Lu et al., 2019; Chen et al., 2020; Tan and Bansal, 2019; Li et al., 2020a,b). In addition to the larger pretraining datasets, the transformer architecture (Vaswani et al., 2017), and in particular self-attention applied to two modalities, is responsible for the impressive performance of the recent pretrained models on downstream tasks (Hendricks et al., 2021). In this tutorial, we focus on recent vision-language pretraining paradigms. Our goal is to first provide the background on image–language datasets, benchmarks, and modeling innovations before the multimodal pretraining era. Next, we discuss the different families of models used for vision-language pretraining, highlighting their strengths and shortcomings. Finally, we discuss the limits of vision-language pretraining through statistical learning, and the need for alternative approaches such as causal representation learning.
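As a rough illustration of the pretraining recipe discussed in the tutorial, the hypothetical sketch below shows the common single-stream setup: image-region and text-token features are projected into a shared space, concatenated, and processed by a transformer encoder whose self-attention spans both modalities. All dimensions and layer counts are illustrative and do not correspond to any specific model cited above.

```python
# A minimal sketch of a single-stream vision-language encoder: project image
# and text features into a shared space, concatenate, and apply self-attention
# over both modalities. Illustrative only.
import torch
import torch.nn as nn

d_model = 256
image_regions = torch.randn(1, 36, 2048)        # e.g. detector features per region
text_tokens = torch.randint(0, 30000, (1, 20))  # token ids of a caption or question

image_proj = nn.Linear(2048, d_model)
text_embed = nn.Embedding(30000, d_model)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=4,
)

# Self-attention over the concatenated sequence lets every text token attend to
# every image region and vice versa.
joint = torch.cat([image_proj(image_regions), text_embed(text_tokens)], dim=1)
contextualised = encoder(joint)
print(contextualised.shape)  # torch.Size([1, 56, 256])
```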
Washing The Unwashable : On The (Im)possibility of Fairwashing Detection
A. Shamsabadi
Mohammad Yaghini
Natalie Dullerud
Sierra Calanda Wyllie
Aisha Alaagib
Sébastien Gambs
Nicolas Papernot
What does it mean to be an AI Ethicist: An ontology of existing roles
Shalaleh Rismani
With the increasing adoption of Artificial Intelligence systems (AIS) in various applications and the growing efforts to regulate such systems, a new set of occupations has emerged in the industry. These new roles take different titles and hold varying responsibilities. However, the individuals in these roles are tasked with interpreting and operationalizing best practices for developing ethical and safe AI systems. We broadly refer to this new set of occupations as AI ethicists and recognize that they often hold a specific role at the intersection of technology development, business needs, and societal implications. In this work, we examine what it means to be an AI ethicist in the industry and propose an ontology of existing roles under this broad title, along with their required competencies. We create this ontology by examining job postings for such roles over the past two years and by conducting expert interviews with fourteen individuals who currently hold such a role in the industry. The proposed ontology will inform executives and leaders who are looking to build responsible AI teams and will provide educators with the information needed to create new learning objectives and curricula.
Machine learning application development: practitioners’ insights
Md. Saidur Rahman
Alaleh Hamidi
Jinghui Cheng
Giuliano Antoniol
Hironori Washizaki
Heterogeneous Crowd Simulation Using Parametric Reinforcement Learning
Kaidong Hu
Michael Brandon Haworth
Vladimir Pavlovic
Petros Faloutsos
Mubbasir T. Kapadia
Agent-based synthetic crowd simulation affords the cost-effective large-scale simulation and animation of interacting digital humans. Model-based approaches have successfully generated a plethora of simulators with a variety of foundations. However, prior approaches have been based on statically defined models predicated on simplifying assumptions, limited video-based datasets, or homogeneous policies. Recent works have applied reinforcement learning to learn policies for navigation. However, these approaches may learn static homogeneous rules, are typically limited in their generalization to trained scenarios, and are limited in their usability in synthetic crowd domains. In this article, we present a multi-agent reinforcement learning-based approach that learns a parametric predictive collision avoidance and steering policy. We show that training over a parameter space produces a flexible model across crowd configurations. That is, our goal-conditioned approach learns a parametric policy that affords heterogeneous synthetic crowds. We propose a model-free approach without centralization of internal agent information, control signals, or agent communication. The model is extensively evaluated. The results show policy generalization across unseen scenarios, agent parameters, and out-of-distribution parameterizations. The learned model has comparable computational performance to traditional methods. Qualitatively, the model produces both expected (laminar flow, shuffling, bottleneck) and unexpected (side-stepping) emergent behaviours, and quantitatively, the approach is performant across measures of movement quality.
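The parametric, goal-conditioned policy described above can be sketched as follows; the feature sizes, parameter vector, and network layout are hypothetical and are not the paper's architecture. The point of the sketch is the conditioning: feeding per-agent behaviour parameters into a single trained policy is what lets one model produce heterogeneous crowd behaviour.

```python
# A minimal sketch of a goal- and parameter-conditioned steering policy: the
# network maps an agent's local observation, its goal, and its behaviour
# parameters (e.g. preferred speed, radius) to a steering action.
# Dimensions and parameterisation are illustrative only.
import torch
import torch.nn as nn

class ParametricSteeringPolicy(nn.Module):
    def __init__(self, obs_dim=18, goal_dim=2, param_dim=3, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim + param_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # bounded steering command
        )

    def forward(self, obs, goal, params):
        # Conditioning on per-agent parameters allows one trained policy to
        # behave differently for differently parameterised agents.
        return self.net(torch.cat([obs, goal, params], dim=-1))

policy = ParametricSteeringPolicy()
obs = torch.randn(64, 18)     # e.g. neighbour and obstacle sensing for 64 agents
goal = torch.randn(64, 2)     # relative goal position
params = torch.rand(64, 3)    # sampled per-agent behaviour parameters
action = policy(obs, goal, params)
print(action.shape)  # torch.Size([64, 2])
```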