Publications
The Secret to Better AI and Better Software (Is Requirements Engineering)
Nelly Bencomo
Jin L.C. Guo
Rachel Harrison
Hans-Martin Heyn
Tim Menzies
Recently, practitioners and researchers met to discuss the role of requirements engineering in AI and SE. We offer here notes on that fascinating discussion. Also, have you considered writing for this column? This “SE for AI” column publishes commentaries on the growing field of SE for AI. Submissions are welcomed and encouraged (1,000–2,400 words, each figure and table counts as 250 words, try to use fewer than 12 references, and keep the discussion practitioner focused). Please submit your ideas to me at timm@ieee.org. —Tim Menzies
There is significant interest in using neuroimaging data to predict behavior. The predictive models are often interpreted by the computation of feature importance, which quantifies the predictive relevance of an imaging feature. Tian and Zalesky (2021) suggest that feature importance estimates exhibit low test-retest reliability, pointing to a potential trade-off between prediction accuracy and feature importance reliability. This trade-off is counter-intuitive because both prediction accuracy and test-retest reliability reflect the reliability of brain-behavior relationships across independent samples. Here, we revisit the relationship between prediction accuracy and feature importance reliability in a large, well-powered dataset across a wide range of behavioral measures. We demonstrate that, with a sufficient sample size, feature importance (operationalized as Haufe-transformed weights) can achieve fair to excellent test-retest reliability. More specifically, with a sample size of about 2600 participants, Haufe-transformed weights achieve average intra-class correlation coefficients of 0.75, 0.57 and 0.53 for cognitive, personality and mental health measures respectively. Haufe-transformed weights are much more reliable than original regression weights and univariate FC-behavior correlations. Intriguingly, feature importance reliability is strongly positively correlated with prediction accuracy across phenotypes. Within a particular behavioral domain, there was no clear relationship between prediction performance and feature importance reliability across regression algorithms. Finally, we show mathematically that feature importance reliability is necessary, but not sufficient, for low feature importance error. In the case of linear models, lower feature importance error leads to lower prediction error (up to a scaling by the feature covariance matrix).
Overall, we find no fundamental trade-off between feature importance reliability and prediction accuracy.
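As a rough illustration of the quantity studied above: for a linear model, the Haufe transform maps regression weights to interpretable activation patterns via a = Cov(X) w / Var(ŷ) (Haufe et al., 2014). The sketch below is a minimal pure-Python implementation of that formula; the function names are illustrative, not from the paper's code.

```python
# Minimal sketch: Haufe-transformed weights for a linear model y_hat = X @ w.
# The activation pattern is a = Cov(X) @ w / Var(y_hat), which the abstract
# uses as its operationalization of feature importance.

def covariance_matrix(X):
    # Sample covariance (ddof=1) of the columns of X, given as a list of rows.
    n, d = len(X), len(X[0])
    means = [sum(col) / n for col in zip(*X)]
    C = [[0.0] * d for _ in range(d)]
    for row in X:
        c = [x - m for x, m in zip(row, means)]
        for i in range(d):
            for j in range(d):
                C[i][j] += c[i] * c[j] / (n - 1)
    return C

def haufe_transform(X, w):
    # Map backward-model weights w to forward-model activation patterns.
    C = covariance_matrix(X)
    y_hat = [sum(x * wi for x, wi in zip(row, w)) for row in X]
    mean_y = sum(y_hat) / len(y_hat)
    var_y = sum((y - mean_y) ** 2 for y in y_hat) / (len(y_hat) - 1)
    d = len(w)
    return [sum(C[i][j] * w[j] for j in range(d)) / var_y for i in range(d)]
```

With whitened (uncorrelated, equal-variance) features, the pattern simply recovers the weights up to scale; the transform matters precisely when features are correlated.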
Labeled datasets for agriculture are extremely spatially imbalanced. When developing algorithms for data-sparse regions, a previously explored approach is to use transfer learning from data-rich regions. While standard transfer learning approaches typically leverage only direct inputs and outputs, geospatial imagery and agricultural data are rich in metadata that can inform transfer learning algorithms, such as the spatial coordinates of data-points. We build on previous work exploring the use of meta-learning for crop type mapping in data-sparse regions and introduce task-informed meta-learning (TIML), an augmentation to model-agnostic meta-learning which takes advantage of this metadata. We apply TIML to the CropHarvest dataset, a global dataset of agricultural class labels paired with remote sensing data. In addition, we introduce the concept of forgetfulness when training meta-learning models on many similar tasks to mitigate memorization of training tasks. We find that TIML significantly improves average performance across the CropHarvest evaluation tasks compared to a range of benchmark models, measured using AUC ROC and F1 scores.
The intrinsic functional connectome can reveal how a lifetime of learning and lived experience is represented in the functional architecture of the aging brain. We investigated whether network dedifferentiation, a hallmark of brain aging, reflects a global shift in network dynamics, or comprises network-specific changes that reflect the changing landscape of aging cognition. We implemented a novel multi-faceted strategy involving multi-echo fMRI acquisition and de-noising, individualized cortical parcellation, and multivariate (gradient and edge-level) functional connectivity methods. Twenty minutes of resting-state fMRI data and cognitive assessments were collected in younger (n=181) and older (n=120) adults. Dimensionality in the BOLD signal was lower for older adults, consistent with global network dedifferentiation. Functional connectivity gradients were largely age-invariant. In contrast, edge-level connectivity showed widespread changes with age, revealing discrete, network-specific dedifferentiation patterns. Visual and somatosensory regions were more integrated within the functional connectome; default and frontoparietal regions showed greater coupling; and the dorsal attention network was less differentiated from transmodal regions. Associations with cognition suggest that the formation and preservation of integrated, large-scale brain networks supports complex cognitive abilities. However, into older adulthood, the connectome is dominated by large-scale network disintegration, global dedifferentiation and network-specific dedifferentiation associated with age-related cognitive change.
Myelin is a dielectric material that wraps around the axons of nerve fibers to enable fast conduction of signals throughout the nervous system. Loss of myelin can cause anywhere from minor interruption to complete disruption of nerve impulses in a range of neurodegenerative diseases such as multiple sclerosis and Parkinson’s disease. There is an ongoing debate in the myelin imaging community about which biomarker based on Magnetic Resonance Imaging (MRI) is more correlated with myelin. In this work, we implemented and compared several MRI-based myelin imaging techniques (quantitative magnetization transfer imaging, myelin water imaging, and proton density imaging) by evaluating their repeatability and their relation to large-scale histology in the ex vivo spinal cords of a rat, a dog, and a human. While there are studies investigating the relationship between pairs of them as well as with histology, to the best of our knowledge, this is the first study that implemented and compared all those methods at the same time to evaluate their reproducibility and their correlation with myelin. Qualitatively the contrasts were similar, and all techniques had comparable scan-rescan and correlations with histology. Surprisingly, the voxel-wise correlations between the various myelin measures were almost as high as the scan-rescan correlations. The correlations decreased when only white matter was considered, which could be due to the small dynamic range of the measurement, or due to artifacts related to the preparation and panoramic scanning of the tissue. We conclude that the myelin imaging techniques explored in this thesis exhibit similar specificity to myelin, yet the histological correlations suggest that more work is needed to determine the optimal myelin imaging protocol. The study also pointed out some potential miscalibrations during acquisitions as well as data processing that may lead to anywhere from minor to major impact on the accuracy of the results.
These include B1 mapping, insufficient spoiling and variation of the predelay time. We have also standardized the data processing routines by upgrading qMTLab to qMRLab which adds several quantitative MR methods to the toolbox, such as standard T1 mapping and field mapping. In addition, the data of the dog spinal cord in this study will be published together with the analysis scripts to help the interested reader to reproduce the findings from this thesis.
We propose a new family of specifications called neural representation as specification, which uses the intrinsic information of neural networks — neural activation patterns (NAPs) — rather than input data to specify the correctness and/or robustness of neural network predictions. We present a simple statistical approach to mining dominant neural activation patterns. Analyzing NAPs from a statistical point of view, we find that a single NAP can cover a large number of training and testing data points, whereas ad hoc data-as-specification covers only the given reference data point. To show the effectiveness of the discovered NAPs, we formally verify several important properties, such as that various types of misclassifications never happen for a given NAP and that there is no ambiguity between different NAPs. We show that by leveraging NAPs we can verify predictions over a significant region of the input space, abstracting the state of each neuron to only activated and deactivated. We would like to explore more refined abstractions, such as { (−∞, 0], (0, 1], (1, ∞] }, in future work.
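One way to picture the "simple statistical approach to mining dominant neural activation patterns" described above: binarize each neuron's post-ReLU state and keep only neurons whose state agrees on a large fraction of a class's examples. The sketch below is a hypothetical reading of that idea, not the paper's actual mining procedure; the threshold name `delta` is illustrative.

```python
# Hedged sketch: mining a dominant neural activation pattern (NAP) for a class.
# Each neuron is abstracted to activated (>0) / deactivated; a neuron enters
# the pattern only if its state is consistent on >= delta of the examples.

def mine_nap(activations, delta=0.95):
    # activations: list of per-example post-ReLU vectors for one class
    n = len(activations)
    d = len(activations[0])
    nap = {}
    for j in range(d):
        active = sum(1 for a in activations if a[j] > 0)
        if active >= delta * n:
            nap[j] = 1          # reliably activated on this class
        elif n - active >= delta * n:
            nap[j] = 0          # reliably deactivated on this class
    return nap                  # neurons absent from the dict are unconstrained
```

Leaving inconsistent neurons unconstrained is what lets a single pattern cover many data points rather than one reference input.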
Despite the prevalence of recent success in learning from static graphs, learning from time-evolving graphs remains an open challenge. In this work, we design new, more stringent evaluation procedures for link prediction specific to dynamic graphs, which reflect real-world considerations, to better compare the strengths and weaknesses of methods. First, we create two visualization techniques to understand the reoccurring patterns of edges over time and show that many edges reoccur at later time steps. Based on this observation, we propose a pure memorization-based baseline called EdgeBank. EdgeBank achieves surprisingly strong performance across multiple settings which highlights that the negative edges used in the current evaluation are easy. To sample more challenging negative edges, we introduce two novel negative sampling strategies that improve robustness and better match real-world applications. Lastly, we introduce six new dynamic graph datasets from a diverse set of domains missing from current benchmarks, providing new challenges and opportunities for future research. Our code repository is accessible at https://github.com/fpour/DGB.git.
2021-12-31
Advances in Neural Information Processing Systems 35 (NeurIPS 2022) (published)
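The abstract above describes EdgeBank as a pure memorization baseline: remember every edge seen so far and predict "link" iff the queried pair was observed before. A minimal sketch of that idea (class and method names are illustrative; see the linked repository for the authors' implementation):

```python
# Minimal sketch of a memorization-only link predictor in the spirit of EdgeBank.
# No parameters are learned: the "model" is just the set of edges seen so far.

class EdgeBank:
    def __init__(self):
        self.memory = set()

    def update(self, edges):
        # edges: iterable of (src, dst) pairs observed up to the current time
        for src, dst in edges:
            self.memory.add((src, dst))

    def predict(self, src, dst):
        # Score 1.0 if this edge reoccurred, else 0.0
        return 1.0 if (src, dst) in self.memory else 0.0
```

That such a trivial baseline scores well is exactly the point: randomly sampled negative edges rarely collide with previously seen edges, so memorization suffices unless harder negatives are drawn.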
In recent years, a growing number of deep model-based reinforcement learning (RL) methods have been introduced. The interest in deep model-based RL is not surprising, given its many potential benefits, such as higher sample efficiency and the potential for fast adaptation to changes in the environment. However, we demonstrate, using an improved version of the recently introduced Local Change Adaptation (LoCA) setup, that well-known model-based methods such as PlaNet and DreamerV2 perform poorly in their ability to adapt to local environmental changes. Combined with prior work that made a similar observation about the other popular model-based method, MuZero, a trend appears to emerge, suggesting that current deep model-based methods have serious limitations. We dive deeper into the causes of this poor performance, by identifying elements that hurt adaptive behavior and linking these to underlying techniques frequently used in deep model-based RL. We empirically validate these insights in the case of linear function approximation by demonstrating that a modified version of linear Dyna achieves effective adaptation to local changes. Furthermore, we provide detailed insights into the challenges of building an adaptive nonlinear model-based method, by experimenting with a nonlinear version of Dyna.
In this work we propose a principled evaluation framework for model-based optimisation to measure how well a generative model can extrapolate. We achieve this by interpreting the training and validation splits as draws from their respective ‘truncated’ ground truth distributions, where examples in the validation set contain scores much larger than those in the training set. Model selection is performed on the validation set for some prescribed validation metric. A major research question, however, is determining what validation metric correlates best with the expected value of generated candidates with respect to the ground truth oracle; work towards answering this question can translate to large economic gains, since it is expensive to evaluate the ground truth oracle in the real world. We compare various validation metrics for generative adversarial networks using our framework. We also discuss limitations of our framework with respect to existing datasets and how progress can be made to mitigate them.
Generative flow networks (GFlowNets) are a method for learning a stochastic policy for generating compositional objects, such as graphs or strings, from a given unnormalized density by sequences of actions, where many possible action sequences may lead to the same object. We find previously proposed learning objectives for GFlowNets, flow matching and detailed balance, which are analogous to temporal difference learning, to be prone to inefficient credit propagation across long action sequences. We thus propose a new learning objective for GFlowNets, trajectory balance, as a more efficient alternative to previously used objectives. We prove that any global minimizer of the trajectory balance objective can define a policy that samples exactly from the target distribution. In experiments on four distinct domains, we empirically demonstrate the benefits of the trajectory balance objective for GFlowNet convergence, diversity of generated samples, and robustness to long action sequences and large action spaces.
2021-12-31
Advances in Neural Information Processing Systems 35 (NeurIPS 2022) (published)
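For a trajectory τ = (s₀ → … → sₙ = x), the trajectory balance objective named above penalizes the squared mismatch L(τ) = (log Z + Σₜ log P_F(sₜ₊₁|sₜ) − log R(x) − Σₜ log P_B(sₜ|sₜ₊₁))², so that at the optimum the forward flow through every trajectory matches the reward. A minimal numerical sketch of that loss (argument names are illustrative, not from the authors' code):

```python
import math

def trajectory_balance_loss(log_Z, log_pf, log_pb, reward):
    # log_pf / log_pb: per-step log forward / backward policy probabilities
    # along one trajectory ending in object x with reward R(x) = `reward`.
    # L(tau) = (log Z + sum log P_F - log R(x) - sum log P_B)^2
    delta = log_Z + sum(log_pf) - math.log(reward) - sum(log_pb)
    return delta * delta
```

Because the loss couples the whole trajectory to a single scalar mismatch, credit reaches early actions in one step, unlike the per-transition flow matching and detailed balance objectives.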