Publications
PaReco: patched clones and missed patches among the divergent variants of a software family
Re-using whole repositories as a starting point for new projects is often done by maintaining a variant fork parallel to the original. However, the common artifacts between both are not always kept up to date. As a result, patches are not optimally integrated across the two repositories, which may lead to sub-optimal maintenance between the variant and the original project. A bug existing in both repositories can be patched in one but not the other (we see this as a missed opportunity) or it can be manually patched in both probably by different developers (we see this as effort duplication). In this paper we present a tool (named PaReCo) which relies on clone detection to mine cases of missed opportunity and effort duplication from a pool of patches. We analyzed 364 (source to target) variant pairs with 8,323 patches resulting in a curated dataset containing 1,116 cases of effort duplication and 1,008 cases of missed opportunities. We achieve a precision of 91%, recall of 80%, accuracy of 88%, and F1-score of 85%. Furthermore, we investigated the time interval between patches and found that, on average, missed patches in the target variants have been introduced in the source variants 52 weeks earlier. Consequently, PaReCo can be used to manage variability in “time” by automatically identifying interesting patches in later project releases to be backported to supported earlier releases.
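The missed-opportunity/effort-duplication classification described in the abstract can be illustrated with a toy sketch. This is not the actual PaReCo implementation, which relies on clone detection; the function names, token-set similarity, and threshold below are hypothetical stand-ins:

```python
# Toy sketch of mining missed patches across variant forks (hypothetical,
# not the actual PaReCo implementation). A patch applied in the source
# variant is compared against the target variant's code: if the target
# still matches the buggy pre-patch code, the patch was missed; if it
# already matches the patched code, the fix was duplicated.

def jaccard(a_tokens, b_tokens):
    """Jaccard similarity between two token sequences, treated as sets."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def classify_patch(pre_hunk, post_hunk, variant_code, threshold=0.8):
    """Classify a source-variant patch against target-variant code."""
    tokens = variant_code.split()
    if jaccard(post_hunk.split(), tokens) >= threshold:
        return "effort_duplication"      # fix already present in the target
    if jaccard(pre_hunk.split(), tokens) >= threshold:
        return "missed_opportunity"      # buggy code still present
    return "not_applicable"              # code diverged too far to decide

pre = "if x == None : return 0"   # buggy code removed by the patch
post = "if x is None : return 0"  # fixed code added by the patch
print(classify_patch(pre, post, "if x == None : return 0"))  # missed_opportunity
print(classify_patch(pre, post, "if x is None : return 0"))  # effort_duplication
```

Real clone detection works at the level of normalized code fragments rather than whitespace tokens, but the decision structure is the same.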
Bayesian causal structure learning aims to learn a posterior distribution over directed acyclic graphs (DAGs), and the mechanisms that define the relationship between parent and child variables. By taking a Bayesian approach, it is possible to reason about the uncertainty of the causal model. The notion of modelling the uncertainty over models is particularly crucial for causal structure learning since the model could be unidentifiable when given only a finite amount of observational data. In this paper, we introduce a novel method to jointly learn the structure and mechanisms of the causal model using Variational Bayes, which we call Variational Bayes-DAG-GFlowNet (VBG). We extend the method of Bayesian causal structure learning using GFlowNets to learn not only the posterior distribution over the structure, but also the parameters of a linear-Gaussian model. Our results on simulated data suggest that VBG is competitive against several baselines in modelling the posterior over DAGs and mechanisms, while offering several advantages over existing methods, including the guarantee to sample acyclic graphs, and the flexibility to generalize to non-linear causal mechanisms.
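The linear-Gaussian mechanism and the acyclicity guarantee mentioned in the abstract can be illustrated with a small numpy sketch (an illustrative simplification, not the VBG code):

```python
# Small numpy sketch of a linear-Gaussian causal model over a DAG
# (an illustrative simplification, not the VBG code). W[i, j] is the
# weight of edge i -> j, so each variable is a noisy linear function
# of its parents: x = W^T x + eps, hence x = (I - W^T)^{-1} eps.
import numpy as np

def is_acyclic(adj):
    """A d x d adjacency matrix describes a DAG iff it is nilpotent: adj^d == 0."""
    d = adj.shape[0]
    return not np.any(np.linalg.matrix_power(adj, d))

def sample_linear_gaussian(weights, noise_std, n, rng):
    """Draw n samples from the linear-Gaussian model defined by `weights`."""
    d = weights.shape[0]
    eps = rng.normal(size=(n, d)) * noise_std
    # Row-vector form of x = (I - W^T)^{-1} eps.
    return eps @ np.linalg.inv(np.eye(d) - weights)

chain = np.array([[0.0, 2.0],
                  [0.0, 0.0]])  # x0 -> x1 with weight 2
assert is_acyclic(chain)
rng = np.random.default_rng(0)
data = sample_linear_gaussian(chain, noise_std=1.0, n=50_000, rng=rng)
# x1 = 2*x0 + eps1, so Var(x1) is approximately 2**2 * 1 + 1 = 5
```

Sampling only from matrices that pass the acyclicity check is what makes the guarantee of sampling acyclic graphs meaningful: a cyclic weight matrix would make (I - W^T) potentially singular and the model ill-defined.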
Existing eHealth Solutions for Older Adults Living With Neurocognitive Disorders (Mild and Major) or Dementia and Their Informal Caregivers: Protocol for an Environmental Scan
Background Dementia is one of the main public health priorities for current and future societies worldwide. In recent years, numerous promising eHealth solutions have emerged to enhance the health and wellness of people living with dementia-related cognitive problems and their primary caregivers. Previous studies have shown that an environmental scan identifies the knowledge-to-action gap meaningfully. This paper presents the protocol of an environmental scan to monitor the currently available eHealth solutions targeting dementia and other neurocognitive disorders against selected attributes. Objective This study aims to identify the characteristics of currently available eHealth solutions recommended for older adults with cognitive problems and their informal caregivers. To inform the recommendations regarding eHealth solutions for these people, it is important to obtain a comprehensive view of currently available technologies and document their outcomes and conditions of success. Methods We will perform an environmental scan of available eHealth solutions for older adults with cognitive impairment or dementia and their informal caregivers. Potential solutions will be initially identified from a previous systematic review. We will also conduct targeted searches for gray literature on Google and specialized websites covering the regions of Canada and Europe. Technological tools will be scanned based on a preformatted extraction grid. The relevance and efficiency based on the selected attributes will be assessed. Results We will prioritize relevant solutions based on the needs and preferences identified from a qualitative study among older adults with cognitive impairment or dementia and their informal caregivers. Conclusions This environmental scan will identify eHealth solutions that are currently available and scientifically appraised for older adults with cognitive impairment or dementia and their informal caregivers.
This knowledge will inform the development of a decision support tool to assist older adults and their informal caregivers in their search for adequate eHealth solutions according to their needs and preferences, based on trustworthy information. International Registered Report Identifier (IRRID) DERR1-10.2196/41015
A novel permuted fast successive-cancellation list decoding algorithm with fast Hadamard transform (FHT-FSCL) is presented. The proposed decoder initializes …
2022-11-01
IEEE Transactions on Vehicular Technology (published)
Generalization is an important attribute of machine learning models, particularly for those that are to be deployed in a medical context, where unreliable predictions can have real-world consequences. While the failure of models to generalize across datasets is typically attributed to a mismatch in the data distributions, performance gaps are often a consequence of biases in the "ground-truth" label annotations. This is particularly important in the context of medical image segmentation of pathological structures (e.g. lesions), where the annotation process is much more subjective, and affected by a number of underlying factors, including the annotation protocol, rater education/experience, and clinical aims, among others. In this paper, we show that modeling annotation biases, rather than ignoring them, poses a promising way of accounting for differences in annotation style across datasets. To this end, we propose a generalized conditioning framework to (1) learn and account for different annotation styles across multiple datasets using a single model, (2) identify similar annotation styles across different datasets in order to permit their effective aggregation, and (3) fine-tune a fully trained model to a new annotation style with just a few samples. Next, we present an image-conditioning approach to model annotation styles that correlate with specific image features, potentially enabling detection biases to be more easily identified.
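The conditioning idea (a single model that receives an annotation-style identifier alongside the image) can be sketched in a few lines of numpy. This is a hypothetical simplification, not the authors' architecture; the embedding size and concatenation scheme are illustrative:

```python
# Minimal numpy sketch of annotation-style conditioning (hypothetical;
# not the authors' architecture). One shared model receives an extra
# "annotation style" input: a per-dataset embedding vector tiled over
# the spatial grid and concatenated to the image features, letting a
# single network reproduce dataset-specific annotation styles.
import numpy as np

rng = np.random.default_rng(0)
n_styles, style_dim, feat_dim = 3, 4, 16
style_embed = rng.normal(size=(n_styles, style_dim))  # learned jointly in practice

def conditioned_features(image_feats, style_id):
    """Tile the style embedding over the (h, w) grid and concatenate it."""
    h, w, _ = image_feats.shape
    style = np.broadcast_to(style_embed[style_id], (h, w, style_dim))
    return np.concatenate([image_feats, style], axis=-1)

feats = rng.normal(size=(8, 8, feat_dim))  # stand-in for encoder features
out = conditioned_features(feats, style_id=1)
print(out.shape)  # (8, 8, 20)
```

Because the style enters only through this auxiliary input, the same weights serve every dataset, and fine-tuning to a new annotation style can amount to learning one new embedding row.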
Notational Programming for Notebook Environments: A Case Study with Quantum Circuits
Ian A. Arawjo
Anthony DeArmas
Michael Roberts
Shrutarshi Basu
Tapan S. Parikh
We articulate a vision for computer programming that includes pen-based computing, a paradigm we term notational programming. Notational programming blurs contexts: certain typewritten variables can be referenced in handwritten notation and vice-versa. To illustrate this paradigm, we developed an extension, Notate, to computational notebooks which allows users to open drawing canvases within lines of code. As a case study, we explore quantum programming and designed a notation, Qaw, that extends quantum circuit notation with abstraction features, such as variable-sized wire bundles and recursion. Results from a usability study with novices suggest that users find our core interaction of implicit cross-context references intuitive, but suggest further improvements to debugging infrastructure, interface design, and recognition rates. Throughout, we discuss questions raised by the notational paradigm, including a shift from ‘recognition’ of notations to ‘reconfiguration’ of practices and values around programming, and from ‘sketching’ to writing and drawing, or what we call ‘notating.’
2022-10-28
ACM Symposium on User Interface Software and Technology (published)
We propose a general framework for policy representation for reinforcement learning tasks. This framework involves finding a low-dimensional embedding of the policy on a reproducing kernel Hilbert space (RKHS). The usage of RKHS based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but are very desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that the policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
2022-10-27
Journal of Artificial Intelligence Research (published)
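The RKHS policy-embedding idea in the abstract above can be illustrated with a toy kernel ridge sketch. The details here are hypothetical: the landmark states, RBF kernel, and regularizer are illustrative choices, not the paper's construction:

```python
# Toy sketch of embedding a policy in an RKHS (hypothetical details).
# The policy is represented as pi_hat(s) = sum_i alpha_i * k(s, s_i)
# over a few landmark states; the coefficient vector alpha is its
# low-dimensional embedding, fit by kernel ridge regression.
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between state sets a (n, d) and b (m, d)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def embed_policy(policy, landmarks, reg=1e-6):
    """Kernel ridge fit of the policy's actions at landmark states."""
    K = rbf(landmarks, landmarks)
    y = np.array([policy(s) for s in landmarks])
    return np.linalg.solve(K + reg * np.eye(len(landmarks)), y)

def reconstruct(alpha, landmarks, states):
    """Evaluate the embedded policy at new states."""
    return rbf(states, landmarks) @ alpha

def policy(s):
    return np.sin(s[0])                    # stand-in for a learned policy

landmarks = np.linspace(-3, 3, 25)[:, None]
alpha = embed_policy(policy, landmarks)    # the 25-dimensional policy embedding
approx = reconstruct(alpha, landmarks, np.array([[0.5], [1.0]]))
# approx is close to [sin(0.5), sin(1.0)]
```

Because the reconstruction lives in the RKHS spanned by the kernel, smoothness of the kernel translates into bounds on how far the reconstructed policy's return can drift from the original, which is the kind of guarantee the abstract contrasts with black-box models.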