Publications
GraphCG: Unsupervised Discovery of Steerable Factors in Graphs
Deep generative models have been extensively explored recently, especially for graph data such as molecular graphs and point clouds. Yet much less investigation has been carried out on understanding the learned latent space of deep graph generative models. Such understanding can open up a unified perspective and provide guidelines for essential tasks like controllable generation. In this paper, we first examine the representation space of recent deep generative models trained on graph data, observing that the learned representation space is not perfectly disentangled. Based on this observation, we then propose an unsupervised method called GraphCG, which is model-agnostic and task-agnostic, for discovering steerable factors in graph data. Specifically, GraphCG learns semantically rich directions by maximizing the corresponding mutual information, such that graphs edited along the same direction share certain steerable factors. We conduct experiments on two types of graph data, molecular graphs and point clouds. Both the quantitative and qualitative results show the effectiveness of GraphCG for discovering steerable factors. The code will be made public in the near future.
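The abstract does not describe implementation details, but its central operation (editing a latent code along a learned direction, and training the directions so that edits along the same direction agree) can be illustrated with a small sketch. Everything below is an assumption for illustration: random latents stand in for a frozen pretrained graph encoder, and an InfoNCE-style contrastive loss stands in for the mutual-information objective; this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): learning steerable latent directions
# for a pretrained graph generative model by maximizing agreement between
# edits made along the same direction. All names and sizes are illustrative.
import torch
import torch.nn.functional as F

latent_dim, num_directions, batch = 64, 8, 32

# Learnable direction vectors; the pretrained encoder/decoder are assumed frozen.
directions = torch.nn.Parameter(torch.randn(num_directions, latent_dim))
proj = torch.nn.Linear(latent_dim, latent_dim)           # shared projection head
optim = torch.optim.Adam([directions, *proj.parameters()], lr=1e-3)

def edit(z, i, alpha):
    """Move latent codes z along direction i with step size alpha."""
    d = F.normalize(directions[i], dim=-1)
    return z + alpha * d

for step in range(200):
    z = torch.randn(batch, latent_dim)                   # stand-in for encoded graphs
    i = torch.randint(num_directions, (1,)).item()
    a1, a2 = torch.rand(2) * 2 - 1                       # two random step sizes
    h1, h2 = proj(edit(z, i, a1)), proj(edit(z, i, a2))

    # InfoNCE-style objective: two edits of the *same* graph along the *same*
    # direction should agree more than edits of different graphs.
    logits = F.normalize(h1, dim=-1) @ F.normalize(h2, dim=-1).T
    loss = F.cross_entropy(logits / 0.1, torch.arange(batch))
    optim.zero_grad(); loss.backward(); optim.step()
```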
Mitigating the climate crisis requires a rapid transition towards lower-carbon energy. Catalyst materials play a crucial role in the electrochemical reactions involved in numerous industrial processes key to this transition, such as renewable energy storage and electrofuel synthesis. To reduce the energy spent on such activities, we must quickly discover more efficient catalysts to drive electrochemical reactions. Machine learning (ML) holds the potential to efficiently model materials properties from large amounts of data, accelerating electrocatalyst design. The Open Catalyst Project OC20 dataset was constructed to that end. However, ML models trained on OC20 are still neither scalable nor accurate enough for practical applications. In this paper, we propose task-specific innovations applicable to most architectures, enhancing both computational efficiency and accuracy. This includes improvements in (1) the graph creation step, (2) atom representations, (3) the energy prediction head, and (4) the force prediction head. We describe these contributions, referred to as PhAST, and evaluate them thoroughly on multiple architectures. Overall, PhAST improves energy MAE by 4 to 42
INTRODUCTION
There is increasing interest in patient-to-provider telemedicine in pediatric acute care. The suitability of telemedicine (virtualizability) for visits in this setting has not been formally assessed. We estimated the proportion of in-person pediatric emergency department (PED) visits that were potentially virtualizable and identified factors associated with virtualizable care.
METHODS
This was a retrospective analysis of in-person visits at the PED of a Canadian tertiary pediatric hospital (02/2018-12/2019). Three definitions of virtualizable care were developed: (1) a definition based on "resource use" classifying visits as virtualizable if they resulted in a home discharge, no diagnostic testing, and no return visit within 72 h; (2) a "diagnostic definition" based on primary ED diagnosis; and (3) a stringent "combined definition" by which visits were classified as virtualizable if they met both the resource use and diagnostic definitions. Multivariable logistic regression was used to identify factors associated with telemedicine suitability.
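The resource-use definition above is a deterministic rule over visit records, so it can be expressed directly as a small classification function. The field names below are hypothetical and do not reflect the study's actual data schema; this is only a sketch of the stated criteria.

```python
# Minimal sketch of the "resource use" rule described above: a visit is
# classified as virtualizable if it ended in a home discharge, involved no
# diagnostic testing, and had no return visit within 72 hours.
# Field names are hypothetical; they do not reflect the study's data schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Visit:
    discharged_home: bool
    diagnostic_tests: int
    hours_to_return_visit: Optional[float]  # None if no return visit

def is_virtualizable_resource_use(v: Visit) -> bool:
    no_return_within_72h = v.hours_to_return_visit is None or v.hours_to_return_visit > 72
    return v.discharged_home and v.diagnostic_tests == 0 and no_return_within_72h

# Example: discharged home, no tests, returned after 5 days -> virtualizable
print(is_virtualizable_resource_use(Visit(True, 0, 120.0)))  # True
```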
RESULTS
There were 130,535 eligible visits from 80,727 individual patients during the study period. Using the most stringent combined definition of telemedicine suitability, 37.9% (95% confidence interval (CI) 37.6%-38.2%) of in-person visits were virtualizable. Overnight visits (adjusted odds ratio (aOR) 1.16-1.37), non-Canadian citizenship (aOR 1.10-1.18), ethnocultural vulnerability (aOR 1.14-1.22), and a consultation for head trauma (aOR 3.50-4.60) were associated with higher telemedicine suitability across definitions.
DISCUSSION
There is a high potential for patient-to-provider telemedicine in the PED setting. Local patient and visit-level characteristics must be considered in the design of safe and inclusive telemedicine models for pediatric acute care.
With neural networks applied to safety-critical applications, it has become increasingly important to understand the defining features of decision-making. There is therefore a clear need to open up these black boxes and expose the rational representational space of such networks. Concept bottleneck models (CBMs) encourage interpretability by predicting human-understandable concepts: they predict concepts from input images and then labels from those concepts. Test-time intervention, a salient feature of CBMs, allows for human-model interaction. However, these interactions are prone to information leakage and can often be ineffective due to inappropriate communication with humans. We propose a novel uncertainty-based strategy, SIUL: Single Interventional Uncertainty Learning, to select the interventions. Additionally, we empirically test the robustness of CBMs and the effect of SIUL interventions under adversarial attack and distributional shift. Using SIUL, we observe that the suggested interventions lead to meaningful corrections along with mitigation of concept leakage. Extensive experiments on three vision datasets along with a histopathology dataset validate the effectiveness of our interventional learning.
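As an illustration of the mechanism being discussed (not the paper's SIUL implementation), the following sketch shows a concept bottleneck forward pass with a test-time intervention that replaces the most uncertain predicted concept by a human-provided value. All layer sizes and the uncertainty proxy are assumptions.

```python
# Minimal sketch (hypothetical, not the paper's SIUL method): a concept
# bottleneck model predicts concepts from the input and a label from concepts;
# at test time, the most uncertain concept is replaced by a human-provided value.
import torch
import torch.nn.functional as F

n_features, n_concepts, n_classes = 128, 10, 5
concept_net = torch.nn.Linear(n_features, n_concepts)   # x -> concept logits
label_net = torch.nn.Linear(n_concepts, n_classes)      # concepts -> label logits

def predict_with_intervention(x, human_concepts=None):
    c_prob = torch.sigmoid(concept_net(x))               # predicted concept probabilities
    if human_concepts is not None:
        # Uncertainty proxy: distance from a confident 0/1 prediction.
        uncertainty = 1.0 - (c_prob - 0.5).abs() * 2
        idx = uncertainty.argmax(dim=-1)                  # most uncertain concept per sample
        rows = torch.arange(x.shape[0])
        c_prob = c_prob.clone()
        c_prob[rows, idx] = human_concepts[rows, idx]     # human overrides that concept
    return F.softmax(label_net(c_prob), dim=-1)

x = torch.randn(4, n_features)
oracle = torch.randint(0, 2, (4, n_concepts)).float()    # stand-in for human answers
print(predict_with_intervention(x, oracle).shape)         # torch.Size([4, 5])
```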
With the latest advances in deep learning, several methods have been investigated for optimal learning settings in scenarios where the data stream is continuous over time. However, training sparse networks in such settings has often been overlooked. In this paper, we explore the problem of training a neural network with a target sparsity in a particular case of online learning: the anytime learning at macroscale paradigm (ALMA). We propose a novel form of progressive pruning, referred to as Anytime Progressive Pruning (APP); the proposed approach significantly outperforms the baseline dense and Anytime OSP models across multiple architectures and datasets under short, moderate, and long-sequence training. Our method, for example, shows an improvement in accuracy of
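As a rough illustration of progressive pruning toward a target sparsity over a sequence of mega-batches, here is a small sketch; the schedule and the global magnitude-pruning criterion are assumptions, not the paper's exact APP procedure.

```python
# Minimal sketch (assumed schedule, not the paper's exact APP procedure):
# progressively prune a network toward a target sparsity as successive
# mega-batches of the data stream arrive, using global magnitude pruning.
import torch
import torch.nn.utils.prune as prune

model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 10))
params = [(m, "weight") for m in model if isinstance(m, torch.nn.Linear)]
target_sparsity, num_megabatches = 0.9, 5

prev = 0.0
for t in range(1, num_megabatches + 1):
    # ... train on mega-batch t here ...
    goal = target_sparsity * t / num_megabatches          # cumulative sparsity after step t
    # Each call prunes a fraction of the *remaining* weights, so convert the
    # cumulative goal into an incremental amount for this step.
    amount = 1.0 - (1.0 - goal) / (1.0 - prev)
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)
    prev = goal
```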
We propose a hierarchically structured variational inference model for accurately disentangling observable evidence of disease (e.g. brain lesions or atrophy) from subject-specific anatomy in brain MRIs. With flexible, partially autoregressive priors, our model (1) addresses the subtle and fine-grained dependencies that typically exist between anatomical and pathological generating factors of an MRI to ensure the clinical validity of generated samples; (2) preserves and disentangles finer pathological details pertaining to a patient's disease state. Additionally, we experiment with an alternative training configuration where we provide supervision to a subset of latent units. It is shown that (1) a partially supervised latent space achieves a higher degree of disentanglement between evidence of disease and subject-specific anatomy; (2) when the prior is formulated with an autoregressive structure, knowledge from the supervision can propagate to the unsupervised latent units, resulting in more informative latent representations capable of modelling anatomy-pathology interdependencies.
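A partially autoregressive prior of the kind described above can be sketched as a conditional prior in which the pathology code depends on the anatomy code. The sketch below is purely illustrative; the dimensions and parameterization are assumptions, not the paper's model.

```python
# Minimal sketch (illustrative only, not the paper's model): a hierarchical
# latent space where the prior over the pathology code is conditioned on the
# anatomy code, i.e. p(z_anat) p(z_path | z_anat), so anatomy-pathology
# dependencies can be captured while the two codes remain separate.
import torch
import torch.nn as nn

anat_dim, path_dim = 32, 16

class PartiallyAutoregressivePrior(nn.Module):
    def __init__(self):
        super().__init__()
        self.cond = nn.Linear(anat_dim, 2 * path_dim)   # maps z_anat to (mu, log_var)

    def forward(self, z_anat):
        mu, log_var = self.cond(z_anat).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_var.mul(0.5).exp())

prior = PartiallyAutoregressivePrior()
z_anat = torch.randn(8, anat_dim)                        # stand-in for anatomy codes
p_path = prior(z_anat)                                   # conditional prior over pathology codes
print(p_path.rsample().shape)                            # torch.Size([8, 16])
```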
Pretrained large generative language models have shown great performance on many tasks, but exhibit low compositional generalization abilities. Scaling such models has been shown to improve their performance on various NLP tasks, even when they are only conditioned on a few examples of the task without any fine-tuning (also known as in-context learning). In this work, we look at the gap between the in-distribution (ID) and out-of-distribution (OOD) performance of such models in semantic parsing tasks with in-context learning. In the ID settings, the demonstrations are from the same split (test or train) that the model is being evaluated on, and in the OOD settings, they are from the other split. We look at how the relative generalization gap of in-context learning evolves as models are scaled up. We evaluate four model families, OPT, BLOOM, CodeGen and Codex, on three semantic parsing datasets, CFQ, SCAN and GeoQuery, with different numbers of exemplars, and observe a trend of decreasing relative generalization gap as models are scaled up.
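One plausible way to express the relative generalization gap discussed above is the fraction of ID accuracy lost in the OOD setting. The definition and the accuracy values below are illustrative assumptions, not the paper's exact metric or results.

```python
# Minimal sketch of the evaluation quantity discussed above: the relative gap
# between in-distribution (ID) and out-of-distribution (OOD) in-context
# accuracy. The numbers are placeholders, not the paper's results.
def relative_generalization_gap(id_acc: float, ood_acc: float) -> float:
    """Fraction of ID performance lost when demonstrations come from the other split."""
    return (id_acc - ood_acc) / id_acc

# Hypothetical numbers for two model sizes, showing the gap shrinking with scale.
print(relative_generalization_gap(0.60, 0.30))  # ~0.5
print(relative_generalization_gap(0.80, 0.56))  # ~0.3
```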