Publications
Two-stage Multiple-Model Compression Approach for Sampled Electrical Signals
This paper presents a two-stage Multiple-Model Compression (MMC) approach for sampled electrical waveforms. To limit latency, the processing is window-based, with a window length commensurate with the electrical period. For each window, the first stage compares several parametric models to obtain a coarse representation of the samples. The second stage then compares different residual compression techniques to minimize the norm of the reconstruction error. The allocation of the rate budget between the two stages is optimized. The proposed MMC approach provides better signal-to-noise ratios than state-of-the-art solutions on periodic and transient waveforms.
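The abstract describes the two-stage structure only at a high level. The following is a minimal, hypothetical Python/NumPy sketch of such a window-based pipeline; the candidate models (a sinusoid at the fundamental and a low-order polynomial), the uniform residual quantizer, and the fixed residual bit budget are illustrative assumptions, not the codecs actually compared in the paper.

    import numpy as np

    def fit_sinusoid(x, f0, fs):
        """Stage-1 candidate: least-squares fit of a sinusoid at the known fundamental f0."""
        n = np.arange(len(x)) / fs
        A = np.column_stack([np.cos(2 * np.pi * f0 * n),
                             np.sin(2 * np.pi * f0 * n),
                             np.ones_like(n)])
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        return A @ coef

    def fit_polynomial(x, degree=3):
        """Stage-1 candidate: low-order polynomial trend fit."""
        n = np.arange(len(x))
        return np.polyval(np.polyfit(n, x, degree), n)

    def quantize(residual, n_bits):
        """Stage-2 candidate: uniform scalar quantization of the residual with n_bits per sample."""
        if n_bits == 0:
            return np.zeros_like(residual)
        scale = float(np.max(np.abs(residual))) or 1.0
        levels = 2 ** n_bits
        step = 2 * scale / levels
        q = np.clip(np.round(residual / step), -levels // 2, levels // 2 - 1)
        return q * step

    def compress_window(x, fs=6400.0, f0=50.0, residual_bits=4):
        """Try each stage-1 model, quantize its residual (stage 2), keep the best reconstruction.

        A real MMC codec would also code the model parameters and search the split of a
        fixed bit budget between the two stages; this sketch only compares reconstruction
        error for a fixed residual budget.
        """
        best_err, best_rec = np.inf, None
        for model in (fit_sinusoid(x, f0, fs), fit_polynomial(x)):
            x_hat = model + quantize(x - model, residual_bits)
            err = np.sum((x - x_hat) ** 2)
            if err < best_err:
                best_err, best_rec = err, x_hat
        return best_rec

    # Example: one 50 Hz period sampled at 6.4 kHz with additive noise.
    rng = np.random.default_rng(0)
    fs, f0 = 6400.0, 50.0
    t = np.arange(int(fs / f0)) / fs
    x = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)
    x_hat = compress_window(x, fs=fs, f0=f0)
    snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))
    print(f"reconstruction SNR: {snr:.1f} dB")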
Objective: Automatic and robust characterization of spinal cord shape from MRI scans is relevant to assess the severity of spinal cord compression in degenerative cervical myelopathy (DCM) and to guide therapeutic strategy. Despite its popularity, the maximum spinal cord compression (MSCC) index has practical limitations for objectively assessing the severity of cord compression. Firstly, it is computed by normalizing the anteroposterior cord diameter by that above and below the level of compression, but it does not account for the fact that the spinal cord itself varies in size along the superior-inferior axis, making the MSCC sensitive to the level of compression. Secondly, spinal cord shape varies across individuals, making the MSCC also sensitive to each individual's cord size and shape. Thirdly, the MSCC is typically computed by an expert rater on a single sagittal slice, which is time-consuming and prone to inter-rater variability. In this study, we propose a fully automatic pipeline to compute the MSCC.
Methods: We extended the traditional MSCC (based on the anteroposterior diameter) to other shape metrics (transverse diameter, area, eccentricity, and solidity), and proposed a normalization strategy using a database of healthy adults (n=203) to address the variability of spinal cord anatomy across individuals. We validated the proposed method in a cohort of DCM patients (n=120) with manually derived morphometric measures and predicted the therapeutic decision (operative/conservative) using a stepwise binary logistic regression including demographics, clinical scores, and electrophysiological assessment.
Results: The automatic and normalized MSCC measures significantly correlated with clinical scores and predicted the therapeutic decision with higher accuracy than the manual MSCC. Results show that the sensory dysfunction of the upper extremities (mJOA subscore), the presence of myelopathy, and the proposed MRI-based normalized morphometric measures were significant predictors of the therapeutic decision. The model yielded an area under the receiver operating characteristic curve of 80%.
Conclusion: The study introduced an automatic method for computing normalized MSCC measures of cord compression from MRI scans, which is an important step towards better-informed therapeutic decisions in DCM patients. The method is open-source and available in the Spinal Cord Toolbox v6.0.
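For context on the limitations listed above, the traditional MSCC described in the abstract is conventionally computed from the anteroposterior (AP) diameter as

    MSCC = (1 - d_i / ((d_a + d_b) / 2)) x 100%

where d_i is the AP diameter at the level of maximal compression and d_a, d_b are the AP diameters at the nearest uncompressed levels above and below (notation assumed here for illustration, not taken from the paper). The proposed extension replaces the AP diameter in this ratio with other shape metrics (transverse diameter, area, eccentricity, solidity) and adds a normalization against the healthy-adult database rather than relying only on the patient's own adjacent levels.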
As AI systems become more advanced, companies and regulators will make difficult decisions about whether it is safe to train and deploy them. To prepare for these decisions, we investigate how developers could make a 'safety case,' which is a structured rationale that AI systems are unlikely to cause a catastrophe. We propose a framework for organizing a safety case and discuss four categories of arguments to justify safety: total inability to cause a catastrophe, sufficiently strong control measures, trustworthiness despite capability to cause harm, and -- if AI systems become much more powerful -- deference to credible AI advisors. We evaluate concrete examples of arguments in each category and outline how arguments could be combined to justify that AI systems are safe to deploy.
Disentanglement aims to recover meaningful latent ground-truth factors solely from the observed distribution, and is formalized through the theory of identifiability. The identifiability of independent latent factors has been proven impossible in the unsupervised i.i.d. setting under a general nonlinear map from factors to observations. In this work, however, we demonstrate that it is possible to recover quantized latent factors under a generic nonlinear diffeomorphism. We only assume that the latent factors have independent discontinuities in their density, without requiring the factors to be statistically independent. We introduce this novel form of identifiability, termed quantized factor identifiability, and provide a comprehensive proof of the recovery of the quantized factors.
2024-03-15
Proceedings of the Third Conference on Causal Learning and Reasoning (published)
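To make the assumptions of the last abstract above (on quantized factor identifiability) concrete, here is a small, hypothetical Python sketch of the kind of data-generating process the result concerns: the latent factors have density jumps at fixed boundaries (the "independent discontinuities") while being statistically dependent, and only a nonlinearly mixed observation is available. The grid of discontinuities, the cell probabilities, and the mixing map are illustrative choices, not taken from the paper, and the recovery procedure itself is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    # Latent factors: the joint density jumps at integer boundaries along each axis
    # (independent discontinuities), but the two factors are statistically dependent
    # because the unit cells are chosen from a non-product distribution.
    n = 5000
    cells = np.array([[0, 0], [1, 1], [2, 0], [0, 2]])    # lower corners of unit cells
    probs = np.array([0.4, 0.3, 0.2, 0.1])                # dependent joint cell distribution
    idx = rng.choice(len(cells), size=n, p=probs)
    z = cells[idx] + rng.uniform(0.0, 1.0, size=(n, 2))   # uniform inside each chosen cell

    # Quantized ground-truth factors: which side of each discontinuity a sample falls on.
    q_true = np.floor(z).astype(int)

    # Generic nonlinear diffeomorphism: composition of two smooth triangular shears,
    # each invertible, so the composition is a diffeomorphism.
    def mix(z):
        y1 = z[:, 0] + 0.5 * np.sin(z[:, 1])
        y2 = z[:, 1] + 0.5 * np.sin(y1)
        return np.stack([y1, y2], axis=1)

    x = mix(z)   # only x is observed; the claim is that q_true is recoverable from x alone
    print(x.shape, np.unique(q_true, axis=0))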