PURPOSE
Advancing the development of 7 T MRI for spinal cord imaging is crucial for the enhanced diagnosis and monitoring of various neurodegenerative diseases and traumas. However, a significant challenge at this field strength is the transmit field inhomogeneity. Such inhomogeneity is particularly problematic for imaging the small, deep anatomical structures of the cervical spinal cord, as it can cause uneven signal intensity and elevate the local specific absorption rate, compromising image quality. This multisite study explores several RF shimming techniques in the cervical spinal cord.
METHODS
Data were collected from 5 participants across two 7 T sites with a custom 8Tx/20Rx parallel transmission coil. We explored two radiofrequency (RF) shimming approaches from an MRI vendor and four from an open-source toolbox, showcasing their ability to enhance transmit field and signal homogeneity along the cervical spinal cord.
RESULTS
The circularly polarized (CP), coefficient of variation (CoV), and specific absorption rate (SAR) efficiency shim modes showed the highest B1+ efficiency, whereas the vendor-based "patient" and "volume" modes showed the lowest B1+ efficiency. The CoV method produced the highest CSF/spinal cord contrast on T2*-weighted scans (ratio of 1.27 ± 0.03) and the lowest variation of that contrast along the superior-inferior axis.
CONCLUSION
The study's findings highlight the potential of RF shimming to advance 7 T MRI's clinical utility for central nervous system imaging by enabling more homogeneous and efficient spinal cord imaging. Additionally, the research incorporates a reproducible Jupyter Notebook, enhancing the study's transparency and facilitating peer verification.
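For readers unfamiliar with the two metrics named in the results, the sketch below shows how a coefficient of variation and a slice-wise CSF/spinal cord contrast ratio might be computed from signal profiles along the cord. It is a minimal illustration with hypothetical values, not code taken from the study's Jupyter Notebook.

```python
import numpy as np

def coefficient_of_variation(values):
    """CoV = standard deviation / mean; lower values indicate a more
    homogeneous profile (e.g., B1+ or contrast) along the cord."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

def csf_cord_contrast(csf_signal, cord_signal):
    """Slice-wise CSF/spinal cord contrast ratio from mean signals
    extracted within CSF and cord masks on a T2*-weighted image."""
    return np.asarray(csf_signal, dtype=float) / np.asarray(cord_signal, dtype=float)

# Hypothetical per-slice mean signals along the superior-inferior axis
csf = np.array([520.0, 515.0, 530.0, 510.0])
cord = np.array([410.0, 405.0, 418.0, 400.0])

ratios = csf_cord_contrast(csf, cord)
print(f"mean contrast ratio: {ratios.mean():.2f}")
print(f"CoV of contrast along S-I axis: {coefficient_of_variation(ratios):.3f}")
```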
The availability of performant pre-trained models has led to a proliferation of fine-tuned expert models that are specialized to a particular domain or task. Model MoErging methods aim to recycle expert models to create an aggregate system with improved performance or generalization. A key component of MoErging methods is the creation of a router that decides which expert model(s) to use for a particular input or application. The promise, effectiveness, and large design space of MoErging have spurred the development of many new methods over the past few years. This rapid pace of development has made it challenging to compare different MoErging methods, which are rarely compared to one another and are often validated in different experimental setups. To remedy such gaps, we present a comprehensive survey of MoErging methods that includes a novel taxonomy for cataloging key design choices and clarifying suitable applications for each method. Apart from surveying MoErging research, we inventory software tools and applications that make use of MoErging. We additionally discuss related fields of study such as model merging, multitask learning, and mixture-of-experts models. Taken as a whole, our survey provides a unified overview of existing MoErging methods and creates a solid foundation for future work in this burgeoning field.
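As a concrete illustration of what a router can look like, the sketch below implements one simple point in the MoErging design space: routing an input to the expert whose stored task embedding is most similar to the input's embedding. The class, embeddings, and expert names are illustrative assumptions, not a method endorsed by the survey.

```python
import numpy as np

class SimilarityRouter:
    """Toy router: picks the expert whose stored task embedding is most
    similar (cosine similarity) to the embedding of the incoming input."""

    def __init__(self, expert_embeddings):
        # expert_embeddings: dict mapping expert name -> 1-D embedding vector
        self.names = list(expert_embeddings)
        self.matrix = np.stack([expert_embeddings[n] for n in self.names])

    def route(self, query_embedding, top_k=1):
        q = query_embedding / np.linalg.norm(query_embedding)
        e = self.matrix / np.linalg.norm(self.matrix, axis=1, keepdims=True)
        scores = e @ q
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.names[i], float(scores[i])) for i in best]

# Hypothetical expert embeddings (e.g., mean embedding of each expert's training data)
router = SimilarityRouter({
    "legal-expert": np.array([0.9, 0.1, 0.0]),
    "medical-expert": np.array([0.1, 0.9, 0.2]),
})
print(router.route(np.array([0.2, 0.8, 0.1])))  # -> [('medical-expert', ...)]
```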
Deep learning for time-series anomaly detection (TSAD) has gained significant attention over the past decade. Despite the reported improvements in several papers, the practical application of these models remains limited. Recent studies have cast doubt on these models, attributing their results to flawed evaluation techniques. However, the impact of initialization has largely been overlooked. This paper provides a critical analysis of the initialization effects on TSAD model performance. Our extensive experiments reveal that TSAD models are highly sensitive to hyperparameters such as window size, seed number, and normalization. This sensitivity often leads to significant variability in performance, which can be exploited to artificially inflate the reported efficacy of these models. We demonstrate that even minor changes in initialization parameters can result in performance variations that overshadow the claimed improvements from novel model architectures. Our findings highlight the need for rigorous evaluation protocols and transparent reporting of preprocessing steps to ensure the reliability and fairness of anomaly detection methods. This paper calls for a more cautious interpretation of TSAD advancements and encourages the development of more robust and transparent evaluation practices to advance the field and its practical applications.
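A minimal sketch of the kind of evaluation protocol this argues for: sweep seeds and window sizes and report the spread of the metric rather than a single best run. Here `train_fn` and `score_fn` are hypothetical placeholders for a TSAD model trainer and an evaluation metric (e.g., F1); they are not from the paper.

```python
import numpy as np

def evaluate_tsad(train_fn, score_fn, data, labels, seeds, window_sizes):
    """Sweep seeds and window sizes and report mean +/- std of the metric,
    so variability due to initialization is visible alongside the mean."""
    results = {}
    for w in window_sizes:
        scores = []
        for seed in seeds:
            np.random.seed(seed)
            model = train_fn(data, window_size=w, seed=seed)  # hypothetical trainer
            scores.append(score_fn(model, data, labels))      # hypothetical metric
        results[w] = (float(np.mean(scores)), float(np.std(scores)))
    return results

# Usage (with your own trainer and metric):
# results = evaluate_tsad(train_my_model, f1_score_fn, X, y,
#                         seeds=range(5), window_sizes=[32, 64, 128])
# for w, (m, s) in results.items():
#     print(f"window={w}: F1 = {m:.3f} +/- {s:.3f}")
```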
High-content phenotypic screening, including high-content imaging (HCI), has gained popularity in the last few years for its ability to characterize novel therapeutics without prior knowledge of the protein target. When combined with deep learning techniques to predict and represent molecular-phenotype interactions, these advancements hold the potential to significantly accelerate and enhance drug discovery applications. This work focuses on the novel task of HCI-guided molecular design. Generative models for molecule design could be guided by HCI data, for example with a supervised model that links molecules to phenotypes of interest as a reward function. However, limited labeled data, combined with the high-dimensional readouts, can make training these methods challenging and impractical. We consider an alternative approach in which we leverage an unsupervised multimodal joint embedding to define a latent similarity as a reward for GFlowNets. The proposed model learns to generate new molecules that could produce phenotypic effects similar to those of the given image target, without relying on pre-annotated phenotypic labels. We demonstrate that the proposed method generates molecules with high morphological and structural similarity to the target, increasing the likelihood of similar biological activity, as confirmed by an independent oracle model.
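One way such a latent-similarity reward could be wired up is sketched below: cosine similarity between a molecule embedding and the target image embedding in a joint space, mapped to a positive reward usable by a GFlowNet. The encoders, the embedding values, and the exponential scaling are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def latent_similarity_reward(mol_embedding, target_image_embedding, beta=4.0):
    """Reward = exp(beta * cosine_similarity): larger when the generated
    molecule's embedding lies close to the target phenotype's embedding
    in the joint molecule-image space; strictly positive, as a GFlowNet
    reward should be."""
    m = mol_embedding / np.linalg.norm(mol_embedding)
    t = target_image_embedding / np.linalg.norm(target_image_embedding)
    cosine = float(m @ t)
    return float(np.exp(beta * cosine))

# Hypothetical embeddings produced by pretrained molecule/image encoders
mol_z = np.array([0.3, 0.5, 0.1])
img_z = np.array([0.25, 0.55, 0.05])
print(f"reward: {latent_similarity_reward(mol_z, img_z):.3f}")
```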