Publications
The Secret to Better AI and Better Software (Is Requirements Engineering)
Recently, practitioners and researchers met to discuss the role of requirements engineering in AI and SE. We offer here notes on that fascinating discussion. Also, have you considered writing for this column? This “SE for AI” column publishes commentaries on the growing field of SE for AI. Submissions are welcomed and encouraged (1,000–2,400 words; each figure and table counts as 250 words; try to use fewer than 12 references; and keep the discussion practitioner focused). Please submit your ideas to me at timm@ieee.org. —Tim Menzies
There is significant interest in using neuroimaging data to predict behavior. The predictive models are often interpreted by the computation of feature importance, which quantifies the predictive relevance of an imaging feature. Tian and Zalesky (2021) suggest that feature importance estimates exhibit low test-retest reliability, pointing to a potential trade-off between prediction accuracy and feature importance reliability. This trade-off is counter-intuitive because both prediction accuracy and test-retest reliability reflect the reliability of brain-behavior relationships across independent samples. Here, we revisit the relationship between prediction accuracy and feature importance reliability in a large, well-powered dataset across a wide range of behavioral measures. We demonstrate that, with a sufficient sample size, feature importance (operationalized as Haufe-transformed weights) can achieve fair to excellent test-retest reliability. More specifically, with a sample size of about 2600 participants, Haufe-transformed weights achieve average intra-class correlation coefficients of 0.75, 0.57 and 0.53 for cognitive, personality and mental health measures respectively. Haufe-transformed weights are much more reliable than original regression weights and univariate functional connectivity (FC)-behavior correlations. Intriguingly, feature importance reliability is strongly positively correlated with prediction accuracy across phenotypes. Within a particular behavioral domain, there is no clear relationship between prediction performance and feature importance reliability across regression algorithms. Finally, we show mathematically that feature importance reliability is necessary, but not sufficient, for low feature importance error. In the case of linear models, lower feature importance error leads to lower prediction error (up to a scaling by the feature covariance matrix). Overall, we find no fundamental trade-off between feature importance reliability and prediction accuracy.
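A minimal sketch of the Haufe (2014) transformation mentioned above may help: for a linear model, feature importance is taken as the covariance between each feature and the model's predictions, rather than the raw regression weights. The synthetic data, variable names, and the Ridge model here are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: Haufe-transformed weights for a linear prediction model.
# Assumes toy data; a real analysis would use FC edges and behavioral scores.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_subjects, n_features = 500, 1000                    # e.g. subjects x FC edges
X = rng.standard_normal((n_subjects, n_features))
y = 0.5 * X[:, 0] + rng.standard_normal(n_subjects)   # toy behavioral measure

model = Ridge(alpha=1.0).fit(X, y)
y_hat = model.predict(X)

# Haufe transformation: cov(feature, predicted score) for each feature.
Xc = X - X.mean(axis=0)
haufe_weights = Xc.T @ (y_hat - y_hat.mean()) / (n_subjects - 1)
```

Test-retest reliability, as studied above, would then be quantified by computing these weights in two independent samples and summarizing their agreement with an intraclass correlation coefficient.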
The intrinsic functional connectome can reveal how a lifetime of learning and lived experience is represented in the functional architecture of the aging brain. We investigated whether network dedifferentiation, a hallmark of brain aging, reflects a global shift in network dynamics, or comprises network-specific changes that reflect the changing landscape of aging cognition. We implemented a novel multi-faceted strategy involving multi-echo fMRI acquisition and de-noising, individualized cortical parcellation, and multivariate (gradient and edge-level) functional connectivity methods. Twenty minutes of resting-state fMRI data and cognitive assessments were collected in younger (n=181) and older (n=120) adults. Dimensionality in the BOLD signal was lower for older adults, consistent with global network dedifferentiation. Functional connectivity gradients were largely age-invariant. In contrast, edge-level connectivity showed widespread changes with age, revealing discrete, network-specific dedifferentiation patterns. Visual and somatosensory regions were more integrated within the functional connectome; default and frontoparietal regions showed greater coupling; and the dorsal attention network was less differentiated from transmodal regions. Associations with cognition suggest that the formation and preservation of integrated, large-scale brain networks supports complex cognitive abilities. However, into older adulthood, the connectome is dominated by large-scale network disintegration, global dedifferentiation and network-specific dedifferentiation associated with age-related cognitive change.
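To make the gradient versus edge-level distinction concrete, here is a toy sketch, not the authors' pipeline, using simulated parcel time series. Published gradient analyses typically use diffusion-map embedding (e.g. BrainSpace); a leading eigenvector stands in for that here.

```python
# Sketch: edge-level FC vs. a gradient summary, on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n_parcels, n_timepoints = 200, 600
ts = rng.standard_normal((n_timepoints, n_parcels))   # parcellated BOLD series

# Edge-level functional connectivity: parcel-by-parcel correlation matrix;
# each off-diagonal entry is one "edge".
fc = np.corrcoef(ts, rowvar=False)

# A simple connectivity "gradient": principal axis of the FC matrix.
eigvals, eigvecs = np.linalg.eigh(fc)
gradient_1 = eigvecs[:, -1]

# Age comparisons would then be run separately on fc entries (edges)
# and on gradient_1 (the low-dimensional summary).
```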
Comparison of Myelin Imaging Techniques in Ex Vivo Spinal Cord
Myelin is a dielectric material that wraps around the axons of nerve fibers to enable fast conduction of signals throughout the nervous system. Loss of myelin can cause anywhere from minor interruption to complete disruption of nerve impulses in a range of neurodegenerative diseases such as multiple sclerosis and Parkinson's disease. There is an ongoing debate in the myelin imaging community about which biomarker based on Magnetic Resonance Imaging (MRI) is most correlated with myelin. In this work, we implemented and compared several MRI-based myelin imaging techniques (quantitative magnetization transfer imaging, myelin water imaging, and proton density imaging) by evaluating their repeatability and their relation to large-scale histology in the ex vivo spinal cords of a rat, a dog, and a human. While there are studies investigating the relationships between pairs of these techniques, as well as with histology, to the best of our knowledge this is the first study to implement and compare all of them at the same time to evaluate their reproducibility and their correlation with myelin. Qualitatively, the contrasts were similar, and all techniques had comparable scan-rescan repeatability and correlations with histology. Surprisingly, the voxel-wise correlations between the various myelin measures were almost as high as the scan-rescan correlations. The correlations decreased when only white matter was considered, which could be due to the small dynamic range of the measurement, or to artifacts related to the preparation and panoramic scanning of the tissue. We conclude that the myelin imaging techniques explored in this thesis exhibit similar specificity to myelin, yet the histological correlations suggest that more work is needed to determine the optimal myelin imaging protocol. The study also pointed out some potential miscalibrations during acquisition, as well as in data processing, that may have anywhere from a minor to a major impact on the accuracy of the results. These include B1 mapping, insufficient spoiling, and variation of the predelay time. We have also standardized the data processing routines by upgrading qMTLab to qMRLab, which adds several quantitative MR methods to the toolbox, such as standard T1 mapping and field mapping. In addition, the data of the dog spinal cord in this study will be published together with the analysis scripts to help the interested reader reproduce the findings from this thesis.
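The two comparisons at the heart of this abstract, scan-rescan repeatability and cross-method voxel-wise correlation, can be illustrated with a small sketch on simulated myelin maps. The data, noise level, and method names below are placeholders, not values from the thesis.

```python
# Sketch: scan-rescan vs. cross-method voxel-wise correlation.
import numpy as np

rng = np.random.default_rng(2)
true_myelin = rng.random(5000)                    # voxel-wise "ground truth"
noise = lambda: 0.05 * rng.standard_normal(5000)  # measurement noise

mt_scan1 = true_myelin + noise()                  # technique A, scan 1
mt_scan2 = true_myelin + noise()                  # technique A, rescan
mwf_scan1 = 0.8 * true_myelin + 0.1 + noise()     # technique B (different scale)

scan_rescan_r = np.corrcoef(mt_scan1, mt_scan2)[0, 1]
cross_method_r = np.corrcoef(mt_scan1, mwf_scan1)[0, 1]
print(f"scan-rescan r = {scan_rescan_r:.3f}, cross-method r = {cross_method_r:.3f}")
```

Under this toy model the two correlations come out similar, which mirrors the abstract's surprising observation that cross-method agreement approached scan-rescan agreement.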
Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution
Despite the prevalence of recent success in learning from static graphs, learning from time-evolving graphs remains an open challenge. In this work, we design new, more stringent evaluation procedures for link prediction specific to dynamic graphs, which reflect real-world considerations, to better compare the strengths and weaknesses of methods. First, we create two visualization techniques to understand the reoccurring patterns of edges over time and show that many edges reoccur at later time steps. Based on this observation, we propose a pure memorization-based baseline called EdgeBank. EdgeBank achieves surprisingly strong performance across multiple settings which highlights that the negative edges used in the current evaluation are easy. To sample more challenging negative edges, we introduce two novel negative sampling strategies that improve robustness and better match real-world applications. Lastly, we introduce six new dynamic graph datasets from a diverse set of domains missing from current benchmarks, providing new challenges and opportunities for future research. Our code repository is accessible at https://github.com/fpour/DGB.git.
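A minimal sketch in the spirit of the EdgeBank baseline described above: score a candidate edge 1 if it has been observed at an earlier time step, 0 otherwise. The class and method names here are illustrative; the authors' implementation is in the linked repository.

```python
# Sketch: pure memorization baseline for dynamic link prediction.
class EdgeBank:
    def __init__(self):
        self.seen = set()

    def update(self, edges):
        """Record (source, destination) pairs observed in past batches."""
        self.seen.update(edges)

    def predict(self, edge):
        """Predict link existence purely from memorization."""
        return 1.0 if edge in self.seen else 0.0

bank = EdgeBank()
bank.update([(0, 1), (2, 3)])   # edges seen at earlier time steps
print(bank.predict((0, 1)))     # 1.0 -- reoccurring edge
print(bank.predict((0, 3)))     # 0.0 -- unseen edge
```

That such a memorization-only scorer performs strongly is exactly the paper's evidence that the standard negative edges are too easy.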
In this work we propose a principled evaluation framework for model-based optimisation to measure how well a generative model can extrapolate. We achieve this by interpreting the training and validation splits as draws from their respective ‘truncated’ ground truth distributions, where examples in the validation set contain scores much larger than those in the training set. Model selection is performed on the validation set for some prescribed validation metric. A major research question, however, is determining which validation metric correlates best with the expected value of generated candidates with respect to the ground truth oracle; work towards answering this question can translate to large economic gains, since it is expensive to evaluate the ground truth oracle in the real world. We compare various validation metrics for generative adversarial networks using our framework. We also discuss limitations of our framework with respect to existing datasets and how progress can be made to mitigate them.
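A hedged sketch of the split construction described above: the validation set contains only examples whose oracle scores exceed everything seen in training, so extrapolation is measured explicitly. The quantile threshold is an illustrative assumption, not the paper's setting.

```python
# Sketch: "truncated" train/validation splits for extrapolation evaluation.
import numpy as np

rng = np.random.default_rng(3)
scores = rng.normal(size=10_000)           # oracle scores for a dataset
gamma = np.quantile(scores, 0.9)           # truncation point (assumed)

train_idx = np.where(scores < gamma)[0]    # truncated training draw
valid_idx = np.where(scores >= gamma)[0]   # strictly higher-scoring validation draw

# Validation scores all exceed training scores, so any validation metric
# computed here probes behavior beyond the training distribution.
print(scores[valid_idx].min() > scores[train_idx].max())   # True
```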
Generative flow networks (GFlowNets) are a method for learning a stochastic policy for generating compositional objects, such as graphs or strings, from a given unnormalized density by sequences of actions, where many possible action sequences may lead to the same object. We find previously proposed learning objectives for GFlowNets, flow matching and detailed balance, which are analogous to temporal difference learning, to be prone to inefficient credit propagation across long action sequences. We thus propose a new learning objective for GFlowNets, trajectory balance, as a more efficient alternative to previously used objectives. We prove that any global minimizer of the trajectory balance objective can define a policy that samples exactly from the target distribution. In experiments on four distinct domains, we empirically demonstrate the benefits of the trajectory balance objective for GFlowNet convergence, diversity of generated samples, and robustness to long action sequences and large action spaces.
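A compact sketch of the trajectory balance loss for a single trajectory, following its published form L(tau) = (log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x) - sum_t log P_B(s_t|s_{t+1}))^2. The tensors below are placeholders; a real GFlowNet would produce the per-step log-probabilities from learned forward and backward policies.

```python
# Sketch: trajectory balance objective for one trajectory, in PyTorch.
import torch

log_Z = torch.tensor(0.0, requires_grad=True)                   # learned log-partition
log_pf = torch.tensor([-0.5, -1.2, -0.3], requires_grad=True)   # log P_F per step
log_pb = torch.tensor([-0.7, -0.9, -0.4])                       # log P_B per step
log_reward = torch.tensor(1.5)                                  # log R(x) at terminal state

tb_loss = (log_Z + log_pf.sum() - log_reward - log_pb.sum()) ** 2
tb_loss.backward()   # gradients flow into log_Z and the policy log-probs
```

Because the loss couples the whole trajectory to the terminal reward in one term, credit reaches every action directly, which is the paper's argument for faster propagation than flow matching or detailed balance.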