Publications
Estimating treatment effect for individuals with progressive multiple sclerosis using deep learning
Background: The opioid epidemic in North America has been driven by an increase in the use and potency of prescription opioids, with ensuing excessive opioid-related deaths. Internationally, there are lower rates of opioid-related mortality, possibly because of differences in prescribing and health system policies. Our aim was to compare opioid prescribing rates in patients without cancer across 5 centers in 4 countries. In addition, we evaluated differences in the type, strength, and starting dose of medication and whether these characteristics changed over time.
Methods and findings: We conducted a retrospective multicenter cohort study of adults who were new users of opioids without prior cancer. Electronic health records and administrative health records from Boston (United States), Quebec and Alberta (Canada), the United Kingdom, and Taiwan were used to identify patients between 2006 and 2015. Standard dosages in morphine milligram equivalents (MMEs) were calculated according to the Centers for Disease Control and Prevention. Age- and sex-standardized opioid prescribing rates were calculated for each jurisdiction. Of the 2,542,890 patients included, 44,690 were from Boston (US), 1,420,136 from Alberta and 26,871 from Quebec (Canada), 1,012,939 from the UK, and 38,254 from Taiwan. The highest standardized opioid prescribing rate in 2014 was observed in Alberta, at 66/1,000 persons, compared to 52, 51, and 18/1,000 in the UK, US, and Quebec, respectively. The median MME/day (IQR) at initiation was highest in Boston at 38 (20 to 45), followed by Quebec at 27 (18 to 43), Alberta at 23 (9 to 38), the UK at 12 (7 to 20), and Taiwan at 8 (4 to 11). Oxycodone was the first prescribed opioid for 65% of patients in the US cohort, compared to 14% in Quebec, 4% in Alberta, 0.1% in the UK, and none in Taiwan. One limitation was that data were not available from all centers for the entire 10-year period.
Conclusions: In this study, we observed substantial differences in opioid prescribing practices for non-cancer pain between jurisdictions. The preference in North America for starting patients on higher MME/day and more potent opioids may be a contributing cause of the opioid epidemic.
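To make the standardization step concrete, here is a minimal sketch of an MME/day calculation in Python. The conversion factors follow the CDC's published MME table; the example prescription is illustrative, not drawn from the study data.

```python
# Minimal sketch of the MME standardization step described above.
# Conversion factors follow the CDC's published MME table; the example
# prescription below is illustrative, not data from the study.
CDC_MME_FACTORS = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "hydromorphone": 4.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def mme_per_day(drug: str, dose_mg: float, doses_per_day: int) -> float:
    """Convert a daily opioid regimen to morphine milligram equivalents."""
    return dose_mg * doses_per_day * CDC_MME_FACTORS[drug]

# Example: oxycodone 5 mg taken 4 times per day -> 30 MME/day.
print(mme_per_day("oxycodone", 5, 4))  # 30.0
```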
Existing unsupervised document hashing methods are mostly established on generative models. Because of the difficulty of capturing long dependency structures, these methods rarely model the raw documents directly, but instead model features extracted from them (e.g., bag-of-words (BOW), TFIDF). In this paper, we propose to learn hash codes from BERT embeddings, after observing their tremendous success on downstream tasks. As a first attempt, we modify existing generative hashing models to accommodate the BERT embeddings. However, little improvement is observed over the codes learned from the old BOW or TFIDF features. We attribute this to the reconstruction requirement in generative hashing, which forces the irrelevant information that is abundant in BERT embeddings to also be compressed into the codes. To remedy this issue, a new unsupervised hashing paradigm is further proposed based on the mutual information (MI) maximization principle. Specifically, the method first constructs appropriate global and local codes from the documents and then seeks to maximize their mutual information. Experimental results on three benchmark datasets demonstrate that the proposed method generates hash codes that outperform existing ones learned from BOW features by a substantial margin.
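The paper's exact objective is not reproduced here; below is a minimal sketch of MI maximization between a global code and a local code via the InfoNCE lower bound, a standard estimator for this principle. The tensor shapes, batch-negatives scheme, and temperature are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def infonce_mi_loss(global_codes, local_codes, temperature=0.1):
    """InfoNCE lower bound on MI between paired global/local codes.

    global_codes, local_codes: (batch, dim) tensors; row i of each is
    derived from the same document, so matched rows are positives and
    all other rows in the batch serve as negatives.
    """
    g = F.normalize(global_codes, dim=-1)
    l = F.normalize(local_codes, dim=-1)
    logits = g @ l.t() / temperature       # (batch, batch) similarities
    targets = torch.arange(g.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for learned codes.
g = torch.randn(8, 64, requires_grad=True)
l = torch.randn(8, 64, requires_grad=True)
loss = infonce_mi_loss(g, l)
loss.backward()
```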
2021-11-01
Findings of the Association for Computational Linguistics: EMNLP 2021 (published)
The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet. While one can hardly overestimate how much this benchmark contributed to progress in computer vision, it is mostly derived from lexical databases and image queries in English, resulting in source material with a North American or Western European bias. Therefore, we devise a new protocol to construct an ImageNet-style hierarchy representative of more languages and cultures. In particular, we let the selection of both concepts and images be entirely driven by native speakers, rather than scraping them automatically. Specifically, we focus on a typologically diverse set of languages, namely, Indonesian, Mandarin Chinese, Swahili, Tamil, and Turkish. On top of the concepts and images obtained through this new protocol, we create a multilingual dataset for Multicultural Reasoning over Vision and Language (MaRVL) by eliciting statements from native speaker annotators about pairs of images. The task consists of discriminating whether each grounded statement is true or false. We establish a series of baselines using state-of-the-art models and find that their cross-lingual transfer performance lags dramatically behind supervised performance in English. These results not only invite us to reassess the robustness and accuracy of current state-of-the-art models beyond a narrow domain, but also open up exciting new challenges for the development of truly multilingual and multicultural systems.
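For concreteness, the sketch below shows the shape of a MaRVL-style instance and a toy late-fusion baseline over precomputed features. The field names, feature dimensions, and architecture are placeholders for illustration, not the state-of-the-art models evaluated in the paper.

```python
import torch
import torch.nn as nn

# A MaRVL-style instance: two images, one native-language statement,
# and a boolean label saying whether the grounded statement is true.
example = {
    "image_left": "img_0001.jpg",   # placeholder paths
    "image_right": "img_0002.jpg",
    "statement": "...",             # statement in e.g. Swahili or Tamil
    "label": True,
}

class LateFusionBaseline(nn.Module):
    """Toy baseline: classify concatenated image and text features."""
    def __init__(self, img_dim=512, txt_dim=512):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * img_dim + txt_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),  # single logit for true/false
        )

    def forward(self, img_left, img_right, text):
        return self.classifier(torch.cat([img_left, img_right, text], dim=-1))

# Toy usage with random features standing in for encoder outputs.
model = LateFusionBaseline()
logit = model(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 512))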
2021-11-01
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (published)
Failure Mode and Effect Analysis (FMEA) has been used as an engineering risk assessment tool since 1949. FMEAs are effective in preemptively identifying and addressing how a device or process might fail in operation and are often used in the design of high-risk technology applications such as military, automotive, and medical devices. In this work, we explore whether FMEAs can serve as a risk assessment tool for machine learning practitioners, especially when deploying systems for high-risk applications (e.g., algorithms for recidivism assessment). In particular, we discuss how FMEAs can be used to identify social and ethical failures of Artificial Intelligent Systems (AISs), recognizing that FMEAs have the potential to uncover a broader range of failures. We first propose a process for developing Social FMEAs (So-FMEAs) by building on the existing FMEA framework and a recently published definition of Social Failure Modes by Millar. We then demonstrate a simple proof-of-concept So-FMEA for the COMPAS algorithm, a risk assessment tool used by judges to make recidivism-related decisions for convicted individuals. Through this preliminary investigation, we illustrate how a traditional engineering risk management tool can be adapted to analyze the social and ethical failures of AISs. Engineers and designers of AISs can use this new approach to improve their systems' design and perform due diligence with respect to potential ethical and social failures.
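For readers unfamiliar with FMEA mechanics, here is a minimal sketch of the classic scoring that So-FMEAs build on: each failure mode gets severity, occurrence, and detection ratings (conventionally 1-10), and their product, the Risk Priority Number (RPN), ranks the modes. The COMPAS-related failure modes listed are illustrative assumptions, not taken from the paper's worked example.

```python
# Minimal sketch of classic FMEA scoring, which So-FMEAs extend to
# social/ethical failures. Ratings use the conventional 1-10 scale;
# the example failure modes are illustrative, not from the paper.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("Model encodes racial bias in risk scores", 9, 6, 7),
    ("Judge over-relies on an opaque score", 8, 5, 8),
    ("Data entry error changes a defendant's score", 6, 3, 4),
]

# Rank failure modes by Risk Priority Number (RPN = S * O * D).
for desc, sev, occ, det in sorted(
    failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True
):
    rpn = sev * occ * det
    print(f"RPN={rpn:4d}  {desc}")
```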
2021-10-28
2021 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) (published)
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim at learning representations from unlabeled data which transfer well to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches on few-shot and self-supervised object detection. Then, we give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/.
Ultrareliable low-latency communication (URLLC), a major 5G new-radio (NR) use case, is the key enabler for applications with strict reliability and latency requirements. These applications necessitate the use of short-length and high-rate channel codes. Guessing random additive noise decoding (GRAND) is a recently proposed maximum likelihood (ML) decoding technique for these short-length and high-rate codes. Rather than decoding the received vector, GRAND tries to infer the noise that corrupted the transmitted codeword during transmission through the communication channel. As a result, GRAND can decode any code, structured or unstructured. GRAND has hard-input as well as soft-input variants. Among these variants, ordered reliability bits GRAND (ORBGRAND) is a soft-input variant that outperforms hard-input GRAND and is suitable for parallel hardware implementation. This work reports the first hardware architecture for ORBGRAND, which achieves an average throughput of up to 42.5 Gb/s for a code length of 128 at a target frame error rate (FER) of 10⁻⁷. Furthermore, the proposed hardware can be used to decode any code as long as the length and rate constraints are met. In comparison to the GRAND with ABandonment (GRANDAB), a hard-input variant of GRAND, the proposed architecture enhances decoding performance by at least 2 dB. When compared to the state-of-the-art fast dynamic successive cancellation flip decoder (Fast-DSCF) using a 5G polar code (PC) (128, 105), the proposed ORBGRAND VLSI implementation has …
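To illustrate the noise-guessing principle the abstract describes, here is a minimal sketch of hard-input GRAND in Python: noise patterns are tested in decreasing likelihood order (approximated here by increasing Hamming weight, as for a binary symmetric channel) until removing one from the received word yields a valid codeword. The tiny Hamming-code example is illustrative; ORBGRAND's soft-input ordering and the hardware architecture are not modeled.

```python
import itertools
import numpy as np

def grand_decode(y, H, max_weight=3):
    """Hard-input GRAND: guess noise patterns in decreasing likelihood
    order (here, by increasing Hamming weight) and stop at the first one
    whose removal yields a codeword, i.e. H @ c = 0 (mod 2)."""
    n = len(y)
    for w in range(max_weight + 1):
        for flips in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            c = y ^ e                        # candidate codeword
            if not ((H @ c) % 2).any():      # zero syndrome -> valid
                return c, e
    return None, None                        # abandon (GRANDAB behaviour)

# Toy example: (7,4) Hamming code parity-check matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.array([1, 1, 1, 0, 0, 0, 0])  # satisfies H @ c = 0 (mod 2)
y = codeword.copy()
y[4] ^= 1                                   # channel flips one bit
decoded, noise = grand_decode(y, H)
print(decoded, noise)  # recovers the codeword and the single-bit noise
```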
Random projections (RPs) have recently emerged as popular techniques in the machine learning community for their ability to reduce the dimension of very high-dimensional tensors. Following the work in [30], we consider a tensorized random projection relying on the Tensor Train (TT) decomposition, where each element of the core tensors is drawn from a Rademacher distribution. Our theoretical results reveal that the Gaussian low-rank tensor represented in compressed TT form in [30] can be replaced by a TT tensor with core elements drawn from a Rademacher distribution with the same embedding size. Experiments on synthetic data demonstrate that the tensorized Rademacher RP can outperform the tensorized Gaussian RP studied in [30]. In addition, we show, both theoretically and experimentally, that the tensorized RP in the Matrix Product Operator (MPO) format is not a Johnson-Lindenstrauss transform (JLT) and therefore not a well-suited random projection map.
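Below is a minimal sketch of a TT-format Rademacher random projection in the spirit of the construction above: each output coordinate is the inner product of the input tensor with a TT tensor whose cores hold iid ±1 entries. The ranks, shapes, and normalization are simplified illustrative choices, not the exact scaling derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rademacher(*shape):
    """IID ±1 entries."""
    return rng.choice([-1.0, 1.0], size=shape)

def tt_full(cores):
    """Contract TT cores of shape (r_{k-1}, n_k, r_k) into a full tensor."""
    t = cores[0]                            # (1, n_1, r_1)
    for core in cores[1:]:
        t = np.tensordot(t, core, axes=([-1], [0]))
    return t.squeeze(axis=(0, -1))          # drop boundary ranks of size 1

def tt_rademacher_rp(x, embed_dim, rank=2):
    """Project tensor x to R^embed_dim using TT tensors with Rademacher
    cores (replacing the Gaussian cores of the compressed map in [30])."""
    dims = x.shape
    y = np.empty(embed_dim)
    for i in range(embed_dim):
        cores = [rademacher(1 if k == 0 else rank,
                            n,
                            1 if k == len(dims) - 1 else rank)
                 for k, n in enumerate(dims)]
        a = tt_full(cores)                  # one random "row" of the map
        y[i] = np.tensordot(a, x, axes=x.ndim)
    return y / np.sqrt(embed_dim)           # illustrative normalization only

x = rng.standard_normal((4, 5, 6))          # toy high-order input
print(tt_rademacher_rp(x, embed_dim=8).shape)  # (8,)
```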
Given the vast number of repositories hosted on GitHub, project discovery and retrieval have become increasingly important for GitHub users. Repository descriptions serve as one of the first points of contact for users who are accessing a repository. However, repository owners often fail to provide a high-quality description; instead, they use vague terms, explain the purpose of the repository poorly, or omit the description entirely. In this work, we examine the current practice of writing GitHub repository descriptions. Our investigation leads to the proposal of the LSP (Language, Software technology, and Purpose) template for formulating GitHub repository descriptions that are clear, concise, and informative. To understand the extent to which current automated techniques can support generating repository descriptions, we compare the performance of state-of-the-art text summarization methods on this task. Finally, our user study with GitHub users reveals that automated summarization can adequately be used for default description generation for GitHub repositories, while descriptions that follow the LSP template offer the most effective instrument for communicating with GitHub users.
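For illustration, a description following the LSP template names the implementation Language, the key Software technology, and the Purpose in a single sentence, e.g. (a hypothetical example, not drawn from the study's dataset): "A Python library built on PyTorch for few-shot object detection on COCO-style datasets."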