GPAI Report & Policy Guide: Towards Substantive Equality in AI
Join us at Mila on November 26 for the launch of the report and policy guide that outlines actionable recommendations for building inclusive AI ecosystems.
Publications
Language Models Can Reduce Asymmetry in Information Markets
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
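The notion of population dimensionality used above can be illustrated with a toy computation. The sketch below simulates population activity with a few embedded latent modes and estimates dimensionality via the participation ratio of the covariance eigenvalues, (&Sigma;&lambda;)&sup2;/&Sigma;&lambda;&sup2; — one common estimator, chosen here as an assumption, not necessarily the one used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated neural population activity: trials x neurons (illustrative).
n_trials, n_neurons = 200, 30
latent = rng.normal(size=(n_trials, 3))            # 3 true latent modes
mixing = rng.normal(size=(3, n_neurons))
activity = latent @ mixing + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Participation ratio of the covariance eigenvalues: (sum l)^2 / sum l^2.
# Values near the number of embedded modes indicate low-dimensional activity.
eigvals = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
pr = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(round(pr, 2))
```

With three dominant latent modes plus weak noise, the estimate lands between 1 and roughly 3, reflecting the low-dimensional structure.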
The integration of diverse clinical modalities such as medical imaging and the tabular data obtained from the patients' Electronic Health Records (EHRs) is a crucial aspect of modern healthcare. The integrative analysis of multiple sources can provide a comprehensive understanding of a patient's condition and can enhance diagnoses and treatment decisions. Deep Neural Networks (DNNs) consistently showcase outstanding performance in a wide range of multimodal tasks in the medical domain. However, the complex endeavor of effectively merging medical imaging with clinical, demographic and genetic information represented as numerical tabular data remains a highly active and ongoing research pursuit. We present a novel framework based on hypernetworks to fuse clinical imaging and tabular data by conditioning the image processing on the EHR's values and measurements. This approach aims to leverage the complementary information present in these modalities to enhance the accuracy of various medical applications. We demonstrate the strength and the generality of our method on two different brain Magnetic Resonance Imaging (MRI) analysis tasks, namely, brain age prediction conditioned by subject's sex, and multiclass Alzheimer's Disease (AD) classification conditioned by tabular data. We show that our framework outperforms both single-modality models and state-of-the-art MRI-tabular data fusion methods. The code accompanying this manuscript will be made publicly available.
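The core idea — conditioning image processing on tabular EHR values via a hypernetwork — can be sketched in a few lines. This is a minimal toy illustration with made-up dimensions and untrained random parameters, not the paper's architecture: a linear hypernetwork maps the tabular vector to the weights of the layer that transforms the image features.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, TAB_DIM, OUT_DIM = 8, 3, 4

def hypernetwork(tabular, W_h, b_h):
    """Map tabular EHR features to the weights of an image-processing
    layer (toy single-linear-layer version)."""
    flat = tabular @ W_h + b_h                  # (IMG_DIM * OUT_DIM,)
    return flat.reshape(IMG_DIM, OUT_DIM)

# Hypernetwork parameters (would be learned in practice).
W_h = rng.normal(size=(TAB_DIM, IMG_DIM * OUT_DIM)) * 0.1
b_h = np.zeros(IMG_DIM * OUT_DIM)

image_features = rng.normal(size=IMG_DIM)       # e.g. pooled MRI features
tabular = np.array([1.0, 0.0, 63.0])            # e.g. sex one-hot + age

# The image branch is conditioned on the EHR values: its weights are
# generated by the hypernetwork rather than being fixed.
W_img = hypernetwork(tabular, W_h, b_h)
fused = image_features @ W_img                  # fused representation
print(fused.shape)
```

The design point is that the same image branch computes a different function for each patient, because its weights depend on that patient's tabular record.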
The combinatorial pricing problem (CPP) is a bilevel problem in which the leader maximizes their revenue by imposing tolls on certain items that they can control. Based on the tolls set by the leader, the follower selects a subset of items corresponding to an optimal solution of a combinatorial optimization problem. To accomplish the leader's goal, the tolls need to be sufficiently low to discourage the follower from choosing the items offered by the competitors. In this paper, we derive a single-level reformulation for the CPP by rewriting the follower's problem as a longest path problem using a dynamic programming model, and then taking its dual and applying strong duality. We proceed to solve the reformulation in a dynamic fashion with a cutting plane method. We apply this methodology to two distinct dynamic programming models, namely, a novel formulation designated as selection diagram and the well-known decision diagram. We also produce numerical results to evaluate their performance across three different specializations of the CPP and a closely related problem, the knapsack interdiction problem. Our results showcase the potential of the two proposed reformulations over the natural value function approach, expanding the set of tools to solve combinatorial bilevel programs.
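The bilevel structure of the CPP can be made concrete with a tiny invented instance. The sketch below brute-forces the leader's toll choice on a coarse grid (a stand-in for the paper's single-level reformulation and cutting-plane method), while the follower simply picks, per slot, the cheaper of the leader's tolled item and a competitor's offer. All item names, costs, and the grid are illustrative assumptions:

```python
from itertools import product

leader_items = {"a": 2.0, "b": 1.0}           # base costs of the leader's items
competitor_cost = {"slot1": 5.0, "slot2": 4.0}
slots = {"slot1": "a", "slot2": "b"}          # leader's candidate item per slot

def toll_revenue(tolls):
    """Lower level: the follower picks, per slot, the cheaper of the
    leader's tolled item and the competitor's offer (ties broken in the
    leader's favour); returns the leader's toll revenue."""
    revenue = 0.0
    for slot, item in slots.items():
        if leader_items[item] + tolls[item] <= competitor_cost[slot]:
            revenue += tolls[item]
    return revenue

# Upper level: brute-force the leader's problem on a coarse toll grid.
grid = [0.5 * k for k in range(13)]
best_rev, best_tolls = 0.0, None
for ts in product(grid, repeat=len(leader_items)):
    tolls = dict(zip(leader_items, ts))
    rev = toll_revenue(tolls)
    if rev > best_rev:
        best_rev, best_tolls = rev, tolls
print(best_rev, best_tolls)
```

The optimum illustrates the tension described above: the leader raises each toll exactly to the point where the follower is still (weakly) better off than with the competitor, and no further.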
This paper presents a two-stage Multiple-Model Compression (MMC) approach for sampled electrical waveforms. To limit latency, the processing is window-based, with a window length commensurate with the electrical period. For each window, the first stage compares several parametric models to get a coarse representation of the samples. The second stage then compares different residual compression techniques to minimize the norm of the reconstruction error. The allocation of the rate budget among the two stages is optimized. The proposed MMC approach provides better signal-to-noise ratios than state-of-the-art solutions on periodic and transient waveforms.
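The two-stage idea — a coarse parametric model followed by residual coding — can be sketched on a single window. This toy version uses one candidate model (a least-squares sinusoid fit at the known fundamental) and one residual coder (a uniform quantizer); the paper compares several of each and optimizes the rate split, which is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(1)

# One window of a 50 Hz waveform sampled at 1 kHz (illustrative values).
fs, f0, n = 1000, 50, 20                  # one electrical period per window
t = np.arange(n) / fs
x = 10.0 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.2 * rng.normal(size=n)

# Stage 1: coarse parametric model -- amplitude and phase of a sinusoid
# at the fundamental, via linear least squares on sin/cos components.
A = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
model = A @ coef

# Stage 2: compress the residual, here with a simple uniform quantizer
# standing in for the paper's residual compression techniques.
residual = x - model
step = 0.1
residual_q = np.round(residual / step) * step

x_hat = model + residual_q
snr_db = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat) ** 2))
print(round(snr_db, 1))
```

Because the parametric stage captures most of the signal energy, the residual has a small dynamic range and quantizes cheaply — which is the rationale for splitting the rate budget across the two stages.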
Objective: Automatic and robust characterization of spinal cord shape from MRI images is relevant to assess the severity of spinal cord compression in degenerative cervical myelopathy (DCM) and to guide therapeutic strategy. Despite its popularity, the maximum spinal cord compression (MSCC) index has practical limitations to objectively assess the severity of cord compression. Firstly, it is computed by normalizing the anteroposterior cord diameter by that above and below the level of compression, but it does not account for the fact that the spinal cord itself varies in size along the superior-inferior axis, making MSCC sensitive to the level of compression. Secondly, spinal cord shape varies across individuals, making MSCC also sensitive to the size and shape of each individual. Thirdly, MSCC is typically computed by the expert rater on a single sagittal slice, which is time-consuming and prone to inter-rater variability. In this study, we propose a fully automatic pipeline to compute MSCC. Methods: We extended the traditional MSCC (based on the anteroposterior diameter) to other shape metrics (transverse diameter, area, eccentricity, and solidity), and proposed a normalization strategy using a database of healthy adults (n=203) to address the variability of the spinal cord anatomy between individuals. We validated the proposed method in a cohort of DCM patients (n=120) with manually derived morphometric measures and predicted the therapeutic decision (operative/conservative) using a stepwise binary logistic regression including demographics, clinical scores, and electrophysiological assessment. Results: The automatic and normalized MSCC measures significantly correlated with clinical scores and predicted the therapeutic decision with higher accuracy than the manual MSCC.
Results show that the sensory dysfunction of the upper extremities (mJOA subscore), the presence of myelopathy and the proposed MRI-based normalized morphometric measures were significant predictors of the therapeutic decision. The model yielded an area under the receiver operating characteristic curve of 80%. Conclusion: The study introduced an automatic method for computing normalized MSCC measures of cord compression from MRI scans, which is an important step towards better informed therapeutic decisions in DCM patients. The method is open-source and available in the Spinal Cord Toolbox v6.0.
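The traditional MSCC described above has a simple closed form: the anteroposterior diameter at the compressed level, normalized by the mean of the diameters above and below it. The sketch below computes it, together with a normalization against a healthy-population mean in the spirit of the paper's strategy; the exact normalization used in the pipeline may differ, and all diameter values here are illustrative, not patient data:

```python
def mscc(d_i, d_a, d_b):
    """Traditional maximum spinal cord compression index (percent):
    d_i = anteroposterior diameter at the compressed level,
    d_a / d_b = diameters above / below the compression."""
    return (1 - d_i / ((d_a + d_b) / 2)) * 100

def normalized_metric(patient_value, healthy_mean):
    """Divide a patient's shape metric at a vertebral level by the
    healthy-population mean at that level, reducing sensitivity to
    individual anatomy (a sketch of the paper's normalization idea)."""
    return patient_value / healthy_mean

# Illustrative diameters in mm.
print(round(mscc(5.0, 8.0, 8.0), 1))          # 37.5 % compression
print(round(normalized_metric(5.0, 7.8), 3))
```

Normalizing against level-matched healthy values addresses the first two limitations listed above: the cord's natural taper along the superior-inferior axis and between-subject anatomical variability.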
As AI systems become more advanced, companies and regulators will make difficult decisions about whether it is safe to train and deploy them. To prepare for these decisions, we investigate how developers could make a 'safety case,' which is a structured rationale that AI systems are unlikely to cause a catastrophe. We propose a framework for organizing a safety case and discuss four categories of arguments to justify safety: total inability to cause a catastrophe, sufficiently strong control measures, trustworthiness despite capability to cause harm, and -- if AI systems become much more powerful -- deference to credible AI advisors. We evaluate concrete examples of arguments in each category and outline how arguments could be combined to justify that AI systems are safe to deploy.