GPAI Report & Policy Guide: Towards Substantive Equality in AI
Join us at Mila on November 26 for the launch of the report and policy guide that outlines actionable recommendations for building inclusive AI ecosystems.
Publications
Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles
Problem definition: In this paper, we study a predisaster relief network design problem with uncertain demands. The aim is to determine the prepositioning and reallocation of relief supplies. Motivated by the call of the International Federation of Red Cross and Red Crescent Societies (IFRC) to leave no one behind, we consider three important practical aspects of humanitarian operations: shortages, equity, and uncertainty. Methodology/results: We first employ a form of robust satisficing measure, which we call the shortage severity measure, to evaluate the severity of the shortage caused by uncertain demand in a context with limited distribution information. Because shortages often raise concerns about equity, we then formulate a mixed-integer lexicographic optimization problem with nonconvex objectives and design a new branch-and-bound algorithm to identify the exact solution. We also propose two approaches for identifying optimal postdisaster adaptable resource reallocation: an exact approach and a conservative approximation that is more computationally efficient. Our case study considers the 2010 Yushu earthquake, which occurred in northwestern China, and demonstrates the value of our methodology in mitigating geographical inequities and reducing shortages. Managerial implications: In our case study, we show that (i) incorporating equity in both predisaster deployment and postdisaster reallocation can produce substantially more equitable shortage prevention strategies while sacrificing only a reasonable amount of total shortage; (ii) increasing donations/budgets may not necessarily alleviate the shortage suffered by the most vulnerable individuals if equity is not fully considered; and (iii) exploiting disaster magnitude information when quantifying uncertainty can help alleviate geographical inequities caused by uncertain relief demands. Funding: This work was supported by the Natural Sciences and Engineering Research Council of Canada [Grant RGPIN-2016-05208], the National Natural Science Foundation of China [Grants 71971154, 72010107004, 72091214, and 72122015], and the Canada Research Chairs [Grant CRC-2018-00105]. Supplemental Material: The online appendices are available at https://doi.org/10.1287/msom.2023.1230.
2023-10-23
Manufacturing & Service Operations Management (published)
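The lexicographic treatment of equity and total shortage described in the abstract above can be illustrated with a small two-stage linear program: first minimize the worst regional shortage, then minimize total shortage while holding that equity level fixed. The sketch below is only a toy illustration of this general idea using SciPy, with made-up demands and budget; it is not the paper's shortage severity measure, robust formulation, or branch-and-bound algorithm.

```python
# Toy lexicographic allocation: stage 1 minimizes the maximum regional
# shortage (equity), stage 2 minimizes total shortage at that equity level.
# Demands and budget are hypothetical illustration values.
import numpy as np
from scipy.optimize import linprog

demand = np.array([30.0, 50.0, 20.0])   # hypothetical regional demands
budget = 80.0                           # total supply available
n = len(demand)

# Variable order: [x_1..x_n (allocations), s_1..s_n (shortages), z (max shortage)]
A_ub, b_ub = [], []
for i in range(n):
    row = np.zeros(2 * n + 1)
    row[i], row[n + i] = -1.0, -1.0      # demand_i - x_i <= s_i
    A_ub.append(row); b_ub.append(-demand[i])
    row = np.zeros(2 * n + 1)
    row[n + i], row[-1] = 1.0, -1.0      # s_i <= z
    A_ub.append(row); b_ub.append(0.0)
row = np.zeros(2 * n + 1); row[:n] = 1.0  # sum(x) <= budget
A_ub.append(row); b_ub.append(budget)
A_ub, b_ub = np.array(A_ub), np.array(b_ub)
bounds = [(0, None)] * (2 * n + 1)

# Stage 1: minimize the worst shortage z.
c1 = np.concatenate([np.zeros(2 * n), [1.0]])
res1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
z_star = res1.fun

# Stage 2: fix the equity level (z <= z*) and minimize total shortage.
c2 = np.concatenate([np.zeros(n), np.ones(n), [0.0]])
row = np.zeros(2 * n + 1); row[-1] = 1.0
res2 = linprog(c2, A_ub=np.vstack([A_ub, row]), b_ub=np.append(b_ub, z_star + 1e-6),
               bounds=bounds)
print("allocation:", np.round(res2.x[:n], 2), "shortages:", np.round(res2.x[n:2 * n], 2))
```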
Planning for many manipulation tasks, such as using tools or assembling parts, often requires both symbolic and geometric reasoning. Task and Motion Planning (TAMP) algorithms typically solve these problems by conducting a tree search over high-level task sequences while checking for kinematic and dynamic feasibility. While performant, most existing algorithms are highly inefficient as their time complexity grows exponentially with the number of possible actions and objects. Additionally, they only find a single solution to problems in which many feasible plans may exist. To address these limitations, we propose a novel algorithm called Stein Task and Motion Planning (STAMP) that leverages parallelization and differentiable simulation to efficiently search for multiple diverse plans. STAMP relaxes discrete-and-continuous TAMP problems into continuous optimization problems that can be solved using variational inference. Our algorithm builds upon Stein Variational Gradient Descent, a gradient-based variational inference algorithm, and parallelized differentiable physics simulators on the GPU to efficiently obtain gradients for inference. Further, we employ imitation learning to introduce action abstractions that reduce the inference problem to lower dimensions. We demonstrate our method on two TAMP problems and empirically show that STAMP is able to: 1) produce multiple diverse plans in parallel; and 2) search for plans more efficiently compared to existing TAMP baselines.
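Since STAMP builds on Stein Variational Gradient Descent (SVGD), a minimal sketch of the SVGD update may help make the parallel, diversity-preserving search concrete. The target below is a toy bimodal density standing in for a relaxed planning objective; the kernel, step size, and particle count are arbitrary illustration choices, not STAMP's actual setup, and no differentiable simulator is involved.

```python
# Minimal NumPy sketch of Stein Variational Gradient Descent on a toy
# two-mode target: particles (standing in for candidate plans) are pushed
# toward high density while a repulsive kernel term keeps them diverse.
import numpy as np

def grad_log_p(x):
    """Gradient of log density of an equal-weight mixture of N(-2,1) and N(2,1)."""
    w = np.exp(-0.5 * (x + 2) ** 2)
    v = np.exp(-0.5 * (x - 2) ** 2)
    return (-(x + 2) * w - (x - 2) * v) / (w + v)

def rbf_kernel(x, h=1.0):
    """RBF kernel matrix k[a, b] = k(x_a, x_b) and grad_k[a, b] = dk(x_a, x_b)/dx_a."""
    diff = x[:, None] - x[None, :]
    k = np.exp(-diff ** 2 / (2 * h ** 2))
    return k, -diff / h ** 2 * k

particles = np.random.default_rng(0).normal(size=50)   # parallel candidates
for _ in range(500):
    k, grad_k = rbf_kernel(particles)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + d k(x_j, x_i)/d x_j ]
    phi = (k @ grad_log_p(particles) + grad_k.sum(axis=0)) / len(particles)
    particles += 0.1 * phi

print(np.round(np.sort(particles), 1))  # particles settle around both modes
```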
Spinal cord segmentation is clinically relevant and is notably used to compute spinal cord cross-sectional area (CSA) for the diagnosis and monitoring of cord compression or neurodegenerative diseases such as multiple sclerosis. While several semi-automatic and automatic methods exist, one key limitation remains: the segmentation depends on the MRI contrast, resulting in different CSA across contrasts. This is partly due to the varying appearance of the boundary between the spinal cord and the cerebrospinal fluid that depends on the sequence and acquisition parameters. This contrast-sensitive CSA adds variability in multi-center studies where protocols can vary, reducing the sensitivity to detect subtle atrophies. Moreover, existing methods enhance the CSA variability by training one model per contrast, while also producing binary masks that do not account for partial volume effects. In this work, we present a deep learning-based method that produces soft segmentations of the spinal cord. Using the Spine Generic Public Database of healthy participants (…)
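As a rough illustration of how a network can be trained to emit soft rather than binary spinal cord masks, the snippet below uses a soft Dice loss on continuous-valued predictions and targets. This is a generic, hedged sketch: the loss, tensor shapes, and data here are assumptions for illustration and not the architecture, loss, or database setup used in the work described above.

```python
# Hedged sketch: a soft Dice loss lets both predictions and ground truth take
# values in [0, 1], so partial volume at the cord/CSF boundary is preserved
# instead of being thresholded into a binary mask.
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """pred, target: float tensors in [0, 1] with shape (batch, H, W)."""
    intersection = (pred * target).sum(dim=(1, 2))
    denominator = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    dice = (2 * intersection + eps) / (denominator + eps)
    return 1 - dice.mean()

# Toy usage with random tensors standing in for model output and soft labels.
pred = torch.sigmoid(torch.randn(2, 64, 64))   # model output in (0, 1)
target = torch.rand(2, 64, 64)                 # soft ground truth in [0, 1]
print(soft_dice_loss(pred, target).item())
```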
Synthetic biology holds great promise for bioengineering applications such as environmental bioremediation, probiotic formulation, and production of renewable biofuels. Humans’ capacity to design biological systems from scratch is limited by their sheer size and complexity. We introduce a framework for training a machine learning model to learn the basic genetic principles underlying the gene composition of bacterial genomes. Our variational autoencoder model, DeepGenomeVector, was trained to take as input corrupted bacterial genetic blueprints (i.e. complete gene sets, henceforth ‘genome vectors’) in which most genes had been “removed”, and re-create the original. The resulting model effectively captures the complex dependencies in genomic networks, as evaluated by both qualitative and quantitative metrics. An in-depth functional analysis of a generated gene vector shows that its encoded pathways are interconnected and nearly complete. On the test set, where the model’s ability to re-generate the original, uncorrupted genome vector was evaluated, an AUC score of 0.98 and an F1 score of 0.82 provide support for the model’s ability to generate diverse, high-quality genome vectors. This work showcases the power of machine learning approaches for synthetic biology and highlights the possibility that just as humans can design an AI that animates a robot, AIs may one day be able to design a genomic blueprint that animates a carbon-based cell.
SIGNIFICANCE STATEMENT: Genomes serve as the blueprints for life, encoding complex networks of genes whose products must seamlessly interact to result in living organisms. In this work, we develop a framework for training a machine learning algorithm to learn the basic genetic principles that underlie genome composition. This innovation may eventually lead to improvements in the genome design process, increasing the speed and reliability of designs while decreasing cost. It further suggests that AI agents may one day have the potential to design blueprints for carbon-based life.
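To make the corrupted-input training scheme concrete, here is a minimal denoising variational autoencoder over binary gene-presence vectors: the encoder sees a vector with most genes masked out and the decoder is trained to reconstruct the full set. Layer sizes, the corruption rate, and the gene-vocabulary size are illustrative assumptions, not DeepGenomeVector's actual configuration.

```python
# Hedged sketch of a denoising VAE over binary "genome vectors".
import torch
import torch.nn as nn

N_GENES, LATENT = 2000, 64   # hypothetical gene-family vocabulary / latent size

class GenomeVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(N_GENES, 512), nn.ReLU())
        self.mu, self.logvar = nn.Linear(512, LATENT), nn.Linear(512, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                                 nn.Linear(512, N_GENES))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

model = GenomeVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

genome = (torch.rand(32, N_GENES) < 0.1).float()              # toy gene-presence vectors
corrupted = genome * (torch.rand_like(genome) > 0.8).float()  # "remove" most genes
logits, mu, logvar = model(corrupted)

# Reconstruct the *uncorrupted* genome vector; standard VAE objective.
recon = nn.functional.binary_cross_entropy_with_logits(logits, genome, reduction="sum")
kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
(recon + kld).backward()
opt.step()
```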
Recent advances in using language models to obtain cross-modal audio-text representations have overcome the limitations of conventional training approaches that use predefined labels. This has allowed the community to make progress in tasks like zero-shot classification, which would otherwise not be possible. However, learning such representations requires a large amount of human-annotated audio-text pairs. In this paper, we study unsupervised approaches to improve the learning framework of such representations with unpaired text and audio. We explore domain-unspecific and domain-specific curation methods to create audio-text pairs that we use to further improve the model. We also show that when domain-specific curation is used in conjunction with a soft-labeled contrastive loss, we are able to obtain significant improvement in terms of zero-shot classification performance on downstream sound event classification or acoustic scene classification tasks.
2023-10-22
2023 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) (published)
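The soft-labeled contrastive loss mentioned in the abstract above can be sketched as a CLIP-style objective in which the targets form a soft distribution over captions instead of one-hot matches. The function below and the way its soft targets are constructed are illustrative assumptions rather than the paper's exact recipe.

```python
# Hedged sketch of a soft-labeled contrastive loss for audio-text embeddings.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(audio_emb, text_emb, soft_targets, temperature=0.07):
    """audio_emb, text_emb: (batch, dim); soft_targets: (batch, batch), rows sum to 1."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature
    # Cross-entropy against soft (non one-hot) targets, in both directions.
    loss_a2t = -(soft_targets * F.log_softmax(logits, dim=-1)).sum(-1).mean()
    loss_t2a = -(soft_targets.t() * F.log_softmax(logits.t(), dim=-1)).sum(-1).mean()
    return 0.5 * (loss_a2t + loss_t2a)

# Toy usage: random embeddings and targets smoothed toward the diagonal.
batch, dim = 8, 128
audio, text = torch.randn(batch, dim), torch.randn(batch, dim)
targets = 0.9 * torch.eye(batch) + 0.1 / batch   # mostly-diagonal soft labels
print(soft_contrastive_loss(audio, text, targets).item())
```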