Publications
Reproducibility and Evolution of Diffusion MRI Measurements Within the Cervical Spinal Cord in Multiple Sclerosis
In Multiple Sclerosis (MS), there is a large discrepancy between the clinical observations and how the pathology is exhibited on brain images; this is known as the clinical-radiological paradox. One hypothesis is that the clinical deficit may be more related to spinal cord damage than to the number or location of lesions in the brain. Investigating how the spinal cord is damaged therefore becomes an acute challenge for better understanding and overcoming this paradox. Diffusion MRI is known to provide quantitative measures of neuronal degeneration and axonal loss, in the brain as well as in the spinal cord. In this paper, we investigate how diffusion MRI metrics vary in the different cervical regions with the progression of the disease. We first study the reproducibility of diffusion MRI on healthy volunteers with a test-retest procedure, using both the standard diffusion tensor imaging (DTI) model and the multi-compartment Ball-and-Stick model. Then, based on this test-retest quantitative calibration, we provide quantitative figures of pathology evolution between M0 and M12 in the cervical spine for a set of 31 MS patients, showing how the pathological damage spans the cervical spinal cord.
2022-03-28
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI) (published)
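The reproducibility figures above concern standard DTI scalar metrics. As a rough illustration of what those metrics are (a minimal sketch under our own assumptions, not the paper's processing pipeline), the snippet below computes mean diffusivity (MD) and fractional anisotropy (FA) from the eigenvalues of a synthetic diffusion tensor; `dti_metrics` is a hypothetical helper name.

```python
import numpy as np

def dti_metrics(tensor: np.ndarray) -> tuple[float, float]:
    """Return (MD, FA) for a 3x3 diffusion tensor."""
    eigvals = np.linalg.eigvalsh(tensor)      # eigenvalues λ1, λ2, λ3
    md = eigvals.mean()                       # mean diffusivity
    # FA = sqrt(3/2) * ||λ - MD|| / ||λ||
    fa = float(np.sqrt(1.5 * np.sum((eigvals - md) ** 2) / np.sum(eigvals ** 2)))
    return float(md), fa

# Synthetic tensor mimicking a coherent white-matter voxel (units: mm^2/s);
# in practice the tensor is fitted voxel-wise from diffusion-weighted images.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
md, fa = dti_metrics(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```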
Forgetting is a normal process in healthy brains, and evidence suggests that the mammalian brain forgets more than is required based on limitations of mnemonic capacity. Episodic memories, in particular, are liable to be forgotten over time. Researchers have hypothesized that it may be beneficial for decision making to forget episodic memories over time. Reinforcement learning offers a normative framework in which to test such hypotheses. Here, we show that a reinforcement learning agent that uses an episodic memory cache to find rewards in maze environments can forget a large percentage of older memories without any performance impairment, provided it uses mnemonic representations that contain structural information about space. Moreover, we show that some forgetting can actually provide a benefit in performance compared to agents with unbounded memories. Our analyses of the agents show that forgetting reduces the influence of outdated information, and of states that are not frequently visited, on the policies produced by the episodic control system. These results support the hypothesis that some degree of forgetting can be beneficial for decision making, which can help to explain why the brain forgets more than is required by capacity limitations.
2022-03-25
Frontiers in Computational Neuroscience (published)
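To make the forgetting mechanism concrete, here is a minimal sketch (our own toy construction, not the paper's implementation) of an episodic control cache that evicts its oldest entries once a capacity limit is reached; the class and method names are hypothetical.

```python
from collections import OrderedDict

class EpisodicMemory:
    """Toy episodic control cache with age-based forgetting."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[tuple, float] = OrderedDict()  # state -> best return

    def write(self, state: tuple, value: float) -> None:
        if state in self.store:
            # Keep the highest return observed for this state.
            self.store[state] = max(self.store[state], value)
            self.store.move_to_end(state)            # refresh recency
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)       # forget the oldest memory
            self.store[state] = value

    def read(self, state: tuple, default: float = 0.0) -> float:
        return self.store.get(state, default)

mem = EpisodicMemory(capacity=2)
mem.write((0, 0), 1.0)
mem.write((1, 0), 0.5)
mem.write((2, 0), 2.0)   # evicts (0, 0)
print(mem.read((0, 0)), mem.read((2, 0)))  # 0.0 2.0
```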
Current deep learning approaches have shown good in-distribution performance but struggle in out-of-distribution settings. This is especially true in the case of tasks involving abstract relations like recognizing rules in sequences, as required in many intelligence tests. In contrast, our brains are remarkably flexible at such tasks, an attribute that is likely linked to anatomical constraints on computations. Inspired by this, recent work has explored how enforcing that relational representations remain distinct from sensory representations can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by "partitioned" representations of relations and sensory details. We investigate inductive biases that ensure abstract relations are learned and represented distinctly from sensory data across several neural network architectures, and show that they outperform existing architectures on out-of-distribution generalization for various relational tasks. These results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing relational computations.
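As a hedged sketch of what a partitioned architecture can look like (our own toy PyTorch construction, not the paper's model), the network below routes relational information through a stream that only ever sees differences between items, kept separate from a sensory stream with its own head:

```python
import torch
import torch.nn as nn

class PartitionedNet(nn.Module):
    """Toy network with separate relational and sensory streams."""
    def __init__(self, in_dim: int, n_classes: int, hid: int = 64):
        super().__init__()
        # Relational stream: sees only item differences, never raw features.
        self.relational = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.rel_head = nn.Linear(hid, 1)           # e.g. "same relation?" logit
        # Sensory stream: sees raw features, never relational comparisons.
        self.sensory = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.item_head = nn.Linear(hid, n_classes)  # e.g. item identity

    def forward(self, a: torch.Tensor, b: torch.Tensor):
        rel_logit = self.rel_head(self.relational(a - b))
        item_logits = self.item_head(self.sensory(a))
        return rel_logit, item_logits

net = PartitionedNet(in_dim=8, n_classes=5)
a, b = torch.randn(4, 8), torch.randn(4, 8)
rel, items = net(a, b)
print(rel.shape, items.shape)  # torch.Size([4, 1]) torch.Size([4, 5])
```

Because the relation head never touches absolute sensory features, it cannot shortcut by memorizing appearances, which is the flavor of inductive bias the abstract describes.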
We propose INFERNO, a method to infer object-centric representations of visual scenes without annotations. Our method decomposes a scene into multiple objects, with each object having a structured representation that disentangles its shape, appearance and pose. Each object representation defines a localized neural radiance field used to generate 2D views of the scene through differentiable rendering. Our model is subsequently trained by minimizing a reconstruction loss between inputs and corresponding rendered scenes. We empirically show that INFERNO discovers objects in a scene without supervision. We also validate the interpretability of the learned representations by manipulating inferred scenes and showing the corresponding effect in the rendered output. Finally, we demonstrate the usefulness of our 3D object representations in a visual reasoning task using the CATER dataset.
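To illustrate the kind of structured, disentangled object representation described above (a sketch under our own naming, not INFERNO's actual interfaces), each object can be held as separate shape, appearance and pose latents, with training driven by a pixel-space reconstruction loss:

```python
from dataclasses import dataclass
import torch

@dataclass
class ObjectLatents:
    shape: torch.Tensor        # drives the object's localized radiance field geometry
    appearance: torch.Tensor   # drives its colors/texture
    pose: torch.Tensor         # places the field in the scene (e.g. translation + rotation)

def reconstruction_loss(rendered: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # A differentiable renderer (omitted here) would produce `rendered`
    # by compositing one localized radiance field per object.
    return torch.mean((rendered - target) ** 2)

objs = [ObjectLatents(torch.randn(16), torch.randn(16), torch.randn(6)) for _ in range(3)]
fake_render, fake_target = torch.rand(3, 64, 64), torch.rand(3, 64, 64)
print(reconstruction_loss(fake_render, fake_target).item())
```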
Like humans devoid of imagination, current machine learning systems lack the ability to adapt to new, unexpected situations by foreseeing them, which makes them unable to solve new tasks by analogical reasoning. In this work, we introduce a new compositional imagination framework that improves a model's ability to generalize. One of the key components of our framework is object-centric inductive biases that enable models to perceive the environment as a series of objects, properties, and transformations. By composing these key ingredients, it is possible to generate new unseen tasks that, when used to train the model, improve generalization. Experiments on a simplified version of the Abstraction and Reasoning Corpus (ARC) demonstrate the effectiveness of our framework.
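A toy version of the compositional generation idea (our own construction; the transformations and names are illustrative, not the paper's task set) composes object-level transformations of a grid to "imagine" new training tasks:

```python
import random

def recolor(grid, src, dst):
    """Replace every cell of color `src` with color `dst`."""
    return [[dst if c == src else c for c in row] for row in grid]

def mirror(grid):
    """Flip the grid left-right."""
    return [list(reversed(row)) for row in grid]

TRANSFORMS = [lambda g: recolor(g, 1, 2), mirror]

def imagine_task(grid, depth=2):
    """Compose random transformations to generate a new input/output pair."""
    out = grid
    for t in random.sample(TRANSFORMS, k=min(depth, len(TRANSFORMS))):
        out = t(out)
    return grid, out

src = [[0, 1], [1, 0]]
inp, target = imagine_task(src)
print(inp, "->", target)
```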
We present a technique for zero-shot generation of a 3D model using only a target text prompt. Without any 3D supervision, our method deforms the control shape of a limit subdivided surface, along with its texture map and normal map, to obtain a 3D asset that corresponds to the input text prompt and can be easily deployed into games or modeling applications. We rely only on a pre-trained CLIP model that compares the input text prompt with differentiably rendered images of our 3D model. While previous works have focused on stylization or required training of generative models, we perform optimization on mesh parameters directly to generate shape, texture, or both. To constrain the optimization to produce plausible meshes and textures, we introduce a number of techniques using image augmentations and a pretrained prior that generates CLIP image embeddings given a text embedding.
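The core loop can be sketched as follows; note this is a heavily stubbed illustration (the linear "renderer" and random "CLIP" projections are placeholders for a real differentiable renderer and a pretrained CLIP model, and all names are ours):

```python
import torch

torch.manual_seed(0)
IMG_DIM, EMB_DIM, N_VERTS = 128, 32, 10

render_proj = torch.randn(IMG_DIM, N_VERTS * 3)    # stub differentiable renderer
image_proj = torch.randn(EMB_DIM, IMG_DIM)         # stub CLIP image encoder
text_embedding = torch.randn(EMB_DIM)              # stub CLIP embedding of the prompt

verts = torch.randn(N_VERTS, 3, requires_grad=True)  # control-shape vertices to optimize
opt = torch.optim.Adam([verts], lr=0.05)

for step in range(200):
    image = torch.tanh(render_proj @ verts.flatten())  # "render" the mesh
    img_emb = image_proj @ image                       # embed the render
    # Maximize cosine similarity between render and prompt embeddings.
    loss = -torch.nn.functional.cosine_similarity(img_emb, text_embedding, dim=0)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final similarity: {-loss.item():.3f}")
```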
Early T-cell development is precisely controlled by E proteins, which indistinguishably include the HEB/TCF12 and E2A/TCF3 transcription factors, together with NOTCH1 and pre-T cell receptor (TCR) signalling. Importantly, perturbations of early T-cell regulatory networks are implicated in leukemogenesis. NOTCH1 gain-of-function mutations invariably lead to T-cell acute lymphoblastic leukemia (T-ALL), whereas inhibition of E proteins accelerates leukemogenesis. Thus, NOTCH1, pre-TCR, E2A and HEB functions are intertwined, but how these pathways contribute individually or synergistically to leukemogenesis remains to be documented. To directly address these questions, we leveraged Cd3e-deficient mice, in which pre-TCR signaling and progression through β-selection are abrogated, to dissect and decouple the roles of pre-TCR, NOTCH1, E2A and HEB in SCL/TAL1-induced T-ALL, via the use of Notch1 gain-of-function transgenic (Notch1ICtg) and Tcf12+/- or Tcf3+/- heterozygous mice. As a result, we now provide evidence that both HEB and E2A restrain cell proliferation at the β-selection checkpoint, while the clonal expansion of SCL-LMO1-induced pre-leukemic stem cells in T-ALL is uniquely dependent on Tcf12 gene dosage. At the molecular level, HEB protein levels are decreased via proteasomal degradation at the leukemic stage, pointing to a reversible loss-of-function mechanism. Moreover, in SCL-LMO1-induced T-ALL, loss of one Tcf12 allele is sufficient to bypass pre-TCR signaling, which is required for Notch1 gain-of-function mutations and for progression to T-ALL. In contrast, Tcf12 monoallelic deletion does not accelerate Notch1IC-induced T-ALL, indicating that Tcf12 and Notch1 operate in the same pathway. Finally, we identify a tumor suppressor gene set downstream of HEB, exhibiting significantly lower expression levels in pediatric T-ALL compared to B-ALL and brain cancer samples, the three most frequent pediatric cancers. In summary, our results indicate a tumor suppressor function of HEB/TCF12 in T-ALL: mitigating the cell proliferation controlled by NOTCH1 in pre-leukemic stem cells and preventing NOTCH1-driven progression to T-ALL.
Continual Learning (CL) research typically focuses on tackling the phenomenon of catastrophic forgetting in neural networks. Catastrophic forgetting is associated with an abrupt loss of knowledge previously learned by a model when the task, or more broadly the data distribution, being trained on changes. In supervised learning problems this forgetting, resulting from a change in the model's representation, is typically measured or observed by evaluating the decrease in old-task performance. However, a model's representation can change without losing knowledge about prior tasks. In this work we consider the concept of representation forgetting, observed via the difference in performance of an optimal linear classifier before and after a new task is introduced. Using this tool, we revisit a number of standard continual learning benchmarks and observe that, through this lens, model representations trained without any explicit control for forgetting often experience only small representation forgetting, and can sometimes be comparable to methods which explicitly control for forgetting, especially in longer task sequences. We also show that representation forgetting can lead to new insights on the effect of model capacity and loss function used in continual learning. Based on our results, we show that a simple yet competitive approach is to learn representations continually with standard supervised contrastive learning while constructing prototypes of class samples when queried on old samples. The code to reproduce our results is publicly available at: https://github.com/rezazzr/Probing-Representation-Forgetting
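The probing measurement itself is simple to sketch (a toy version with synthetic features, not the authors' evaluation code): fit an optimal linear classifier on frozen features before and after a distribution shift, and report the accuracy drop:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def probe_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    # A real evaluation would fit on a train split and score on held-out data;
    # fitting and scoring on the same set keeps this toy example short.
    clf = LogisticRegression(max_iter=1000).fit(features, labels)
    return clf.score(features, labels)

labels = rng.integers(0, 2, size=200)
feats_before = rng.normal(size=(200, 16)) + labels[:, None]   # old-task features
drift = rng.normal(scale=0.5, size=(200, 16))
feats_after = feats_before + drift                            # features after a new task

forgetting = probe_accuracy(feats_before, labels) - probe_accuracy(feats_after, labels)
print(f"representation forgetting (probe accuracy drop): {forgetting:.3f}")
```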
Graphics Processing Units (GPUs) are notoriously hard to optimize for manually. What is needed are good automatic code generators and optimizers. Accelerate, Futhark and Lift demonstrated that a functional approach is well suited for this challenge. Lift, for instance, uses a system of rewrite rules with a multi-stage approach. Algorithmic optimizations are explored first, followed by hardware-specific optimizations such as using shared memory and mapping parallelism. While the algorithmic exploration leads to correct transformed programs by construction, this is not necessarily true for the latter phase. Exploiting shared memory and mapping parallelism while ensuring correct synchronization is a delicate balancing act, and is hard to encode in a rewrite system. Currently, Lift relies on heuristics with ad-hoc mechanisms to check for correctness. Although this practical approach eventually produces high-performance code, it is not an ideal state of affairs. This paper proposes to extract parallelization constraints automatically from a functional IR and use a solver to identify valid rewritings. Using a convolutional neural network on a mobile GPU as a use case, this approach matches the performance of the ARM Compute Library GEMM convolution and the TVM-generated kernel while consuming between 2.7x and 3.6x less memory on average. Furthermore, a speedup of 12x is achieved over the ARM Compute Library direct convolution implementation.
2022-03-18
Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction (published)
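As a flavor of how parallelization constraints can be handed to a solver (a toy encoding of our own, not Lift's actual constraint system; assumes the `z3-solver` package is installed), the snippet below asks Z3 for a mapping where a dependency-carrying loop stays sequential and a parallel shared-memory producer forces a barrier:

```python
from z3 import Bool, Solver, Implies, Not, sat

# Three loops in a kernel: loop2 carries a cross-iteration dependency,
# and loop1 writes shared memory that loop2 reads.
l0_par = Bool("l0_parallel")
l1_par = Bool("l1_parallel")
l2_par = Bool("l2_parallel")
barrier_1_2 = Bool("barrier_between_l1_l2")

s = Solver()
s.add(Not(l2_par))                       # dependency: loop2 must stay sequential
s.add(Implies(l1_par, barrier_1_2))      # parallel shared-memory producer needs a barrier
s.add(l0_par)                            # require outer-loop parallelism

if s.check() == sat:
    model = s.model()
    print({str(d): model[d] for d in model.decls()})
```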