Publications

Multiscale PHATE Exploration of SARS-CoV-2 Data Reveals Multimodal Signatures of Disease
Manik Kuchroo
Jessie Huang
Patrick Wong
Jean-Christophe Grenier
Dennis Shung
Alexander Tong
Carolina Lucas
Jon Klein
Daniel B. Burkhardt
Scott Gigante
Abhinav Godavarthi
Benjamin Israelow
Tianyang Mao
Ji Eun Oh
Julio Silva
Takehiro Takahashi
Camila D. Odio
Arnau Casanovas-Massana
John Fournier
Shelli Farhadian … (see 7 more)
Charles S. Dela Cruz
Albert I. Ko
F. Perry Wilson
Akiko Iwasaki
Smita Krishnaswamy
Learning Inter-Modal Correspondence and Phenotypes From Multi-Modal Electronic Health Records
Kejing Yin
William K. Cheung
Jonathan Poon
Non-negative tensor factorization has been shown to be a practical solution for automatically discovering phenotypes from electronic health records (EHR) with minimal human supervision. Such methods generally require an input tensor describing the inter-modal interactions to be pre-established; however, the correspondence between different modalities (e.g., between medications and diagnoses) is often missing in practice. Although heuristic methods can be applied to estimate it, they inevitably introduce errors and lead to sub-optimal phenotype quality. This is particularly important for patients with complex health conditions (e.g., in critical care), as multiple diagnoses and medications are simultaneously present in the records. To alleviate this problem and discover phenotypes from EHR with unobserved inter-modal correspondence, we propose the collective hidden interaction tensor factorization (cHITF), which infers the correspondence between multiple modalities jointly with the phenotype discovery. We assume that the observed matrix for each modality is a marginalization of the unobserved inter-modal correspondence, which is reconstructed by maximizing the likelihood of the observed matrices. Extensive experiments conducted on the real-world MIMIC-III dataset demonstrate that cHITF effectively infers clinically meaningful inter-modal correspondence, discovers phenotypes that are more clinically relevant and diverse, and achieves better predictive performance than a number of state-of-the-art computational phenotyping models.
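The marginalization assumption at the heart of cHITF can be illustrated with a toy example. The sketch below is a rough numpy illustration under assumed tensor shapes and variable names (not the authors' code): the observed per-modality matrices arise as marginal sums of a hidden patient-by-diagnosis-by-medication interaction tensor, and cHITF-style inference works in the reverse direction.

```python
# A minimal numpy sketch of the marginalization assumption described above.
# Tensor shapes and variable names are illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unobserved inter-modal interaction tensor:
# patients x diagnoses x medications, non-negative counts.
hidden = rng.poisson(lam=0.3, size=(5, 4, 3))

# Under the modelling assumption, each observed modality matrix is a
# marginalization of the hidden tensor over the other modality.
obs_diagnoses = hidden.sum(axis=2)    # patients x diagnoses
obs_medications = hidden.sum(axis=1)  # patients x medications

# cHITF-style inference goes the other way: recover a hidden tensor (and its
# low-rank phenotype factors) that maximizes the likelihood of the observed
# matrices; here we only verify the forward (marginalization) direction.
assert obs_diagnoses.shape == (5, 4) and obs_medications.shape == (5, 3)
```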
Using Open Source Licensing to Regulate the Assembly of LAWS: A Preliminary Analysis
Cheng Lin
Lethal autonomous weapons (LAWS) are an emerging technology capable of automatically targeting and exercising lethal force. Many scholars and advocates have petitioned to ban the technology internationally for a myriad of reasons. However, there are practical challenges to implementing a ban. One such challenge is posed by the "intangible" nature of the software that LAWS depends on, which is incompatible with implementation mechanisms such as export control. Given the dual-use nature of software, and the fact that software is developed by teams of individuals, a number of soft governance mechanisms have been proposed to regulate this technology. In this paper, we investigate the feasibility of one particular approach: leveraging open source licenses as a means to prohibit the use of certain software in LAWS. This approach is largely motivated by the fact that open source software underpins all of technology, especially AI. Through a review of recent tech activism and open source activism, we evaluate whether open source licenses can feasibly limit the use of open source software to only non-LAWS applications. We distill the current challenges facing "ethics-driven" open source licensing efforts into three main obstacles: the need for clarity of licensing language, the lack of enforceability of licenses, and the lack of cohesiveness of the open source community. We propose addressing these factors as success criteria for future anti-LAWS open source initiatives. We find that open source licenses hold more theoretical than practical promise for regulating LAWS, and conclude that cohesion in the open source community is the key to their potential practical success in the future.
Global Surveillance of COVID-19 by mining news media using a multi-source dynamic embedded topic model
Pratheeksha Nair
Zhi Wen
Imane Chafi
Anya Okhmatovskaia
Guido Powell
Yannan Shen
On Posterior Collapse and Encoder Feature Dispersion in Sequence VAEs
Teng Long
Yanshuai Cao
Variational autoencoders (VAEs) hold great potential for modelling text, as they could in theory separate high-level semantic and syntactic properties from local regularities of natural language. Practically, however, VAEs with autoregressive decoders often suffer from posterior collapse, a phenomenon where the model learns to ignore the latent variables, causing the sequence VAE to degenerate into a language model. In this paper, we argue that posterior collapse is in part caused by the lack of dispersion in encoder features. We provide empirical evidence to verify this hypothesis, and propose a straightforward fix using pooling. This simple technique effectively prevents posterior collapse, allowing the model to achieve significantly better data log-likelihood than standard sequence VAEs. Compared to existing work, our proposed method achieves comparable or superior performance while being more computationally efficient.
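A rough sketch of the pooling idea follows, assuming a GRU encoder whose latent posterior is derived from mean-pooled hidden states rather than a single final state; module names, shapes, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# A minimal PyTorch sketch: compute the latent posterior from pooled encoder
# states. Shapes and module names are illustrative assumptions.
import torch
import torch.nn as nn

class PooledSeqVAEEncoder(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, latent_dim)
        self.to_logvar = nn.Linear(hid_dim, latent_dim)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        states, _ = self.rnn(self.embed(tokens))  # (batch, seq_len, hid_dim)
        pooled = states.mean(dim=1)               # mean-pool over time steps
        return self.to_mu(pooled), self.to_logvar(pooled)

enc = PooledSeqVAEEncoder()
mu, logvar = enc(torch.randint(0, 1000, (2, 10)))
print(mu.shape, logvar.shape)  # torch.Size([2, 32]) torch.Size([2, 32])
```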
Approximate Planning and Learning for Partially Observed Systems
Effectiveness of quarantine and testing to prevent COVID-19 transmission from arriving travelers
Russell Wa
Explainability and Interpretability: Keys to Deep Medicine
Arash Shaban-Nejad
Martin Michalowski
Bisimulation metrics and norms for real-weighted automata
Borja Balle
Pascale Gourdeau
ComplexDataLab at W-NUT 2020 Task 2: Detecting Informative COVID-19 Tweets by Attending over Linked Documents
Kellin Pelrine
Jacob Danovitch
Albert Orozco Camacho
Given the global scale of COVID-19 and the flood of social media content related to it, how can we find informative discussions? We present Gapformer, which effectively classifies content as informative or not. It reformulates the problem as graph classification, drawing on not only the tweet but connected webpages and entities. We leverage a pre-trained language model as well as the connections between nodes to learn a pooled representation for each document network. We show it outperforms several competitive baselines and present ablation studies supporting the benefit of the linked information. Code is available on GitHub.
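The sketch below gives a rough picture of pooling a small document network (tweet node plus linked webpage and entity nodes) into a single representation for classification; the node encoder is a stand-in for the pre-trained language model, and all names and dimensions are illustrative assumptions rather than the Gapformer architecture itself.

```python
# A minimal PyTorch sketch of graph-level pooling over a document network.
# Names and dimensions are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class DocNetworkClassifier(nn.Module):
    def __init__(self, node_dim=768, hidden=256, num_classes=2):
        super().__init__()
        self.project = nn.Linear(node_dim, hidden)
        self.classify = nn.Linear(hidden, num_classes)

    def forward(self, node_embeddings):
        # node_embeddings: (num_nodes, node_dim), one row per tweet/webpage/entity,
        # e.g. produced by any pre-trained sentence encoder.
        h = torch.relu(self.project(node_embeddings))
        pooled = h.mean(dim=0)        # pool the whole document network
        return self.classify(pooled)  # informative vs. not-informative logits

# Example: a tweet with two linked documents.
nodes = torch.randn(3, 768)
logits = DocNetworkClassifier()(nodes)
print(logits.shape)  # torch.Size([2])
```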
Deconstructing Word Embedding Algorithms
Kian Kenyon-Dean
Edward Daniel Newell
Factual Error Correction for Abstractive Summarization Models
Meng Cao
Yue Dong
Jiapeng Wu
Neural abstractive summarization systems have achieved promising progress, thanks to the availability of large-scale datasets and models pre-trained with self-supervised methods. However, ensuring the factual consistency of the generated summaries for abstractive summarization systems is a challenge. We propose a post-editing corrector module to address this issue by identifying and correcting factual errors in generated summaries. The neural corrector model is pre-trained on artificial examples that are created by applying a series of heuristic transformations on reference summaries. These transformations are inspired by an error analysis of state-of-the-art summarization model outputs. Experimental results show that our model is able to correct factual errors in summaries generated by other neural summarization models and outperforms previous models on factual consistency evaluation on the CNN/DailyMail dataset. We also find that transferring from artificial error correction to downstream settings is still very challenging.
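One way such artificial training pairs could be constructed is with a simple entity-swap corruption, sketched below; the transformation and the helper name `entity_swap` are illustrative assumptions, not the paper's exact heuristics.

```python
# A minimal sketch of building (corrupted, reference) training pairs by
# applying a heuristic entity-swap corruption to a reference summary.
# The corruption rule is an illustrative assumption.
import random

def entity_swap(summary: str, entities: list[str], seed: int = 0) -> str:
    """Replace one entity mention in the summary with a different entity."""
    rng = random.Random(seed)
    present = [e for e in entities if e in summary]
    if not present or len(entities) < 2:
        return summary
    old = rng.choice(present)
    new = rng.choice([e for e in entities if e != old])
    return summary.replace(old, new, 1)

reference = "Apple acquired the startup in 2019 for an undisclosed sum."
doc_entities = ["Apple", "Google", "2019", "2021"]
corrupted = entity_swap(reference, doc_entities)
# (corrupted, reference) becomes a training pair: the corrector learns to map
# the corrupted summary back to the factually consistent reference.
print(corrupted)
```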