Publications
Evolution of cell size control is canalized towards adders or sizers by cell cycle structure and selective pressures
Cell size is controlled to be within a specific range to support physiological function. To control their size, cells use diverse mechanisms, ranging from ‘sizers’, in which differences in cell size are compensated for in a single cell division cycle, to ‘adders’, in which a constant amount of cell growth occurs in each cell cycle. This diversity raises the question of why a particular cell would implement one mechanism rather than another. To address this question, we performed a series of simulations evolving cell size control networks. The size control mechanism that evolved was influenced by both cell cycle structure and specific selection pressures. Moreover, evolved networks recapitulated known size control properties of naturally occurring networks. If the mechanism is based on a G1 size control and an S/G2/M timer, as found for budding yeast and some human cells, adders likely evolve. But if the G1 phase is significantly longer than the S/G2/M phase, as is often the case in mammalian cells in vivo, sizers become more likely. Sizers also evolve when the cell cycle structure is inverted so that G1 is a timer while S/G2/M performs size control, as is the case for the fission yeast S. pombe. For some size control networks, cell size consistently decreases in each cycle until a burst of cell cycle inhibitor drives an extended G1 phase, much like the cell division cycle of the green alga Chlamydomonas. That these size control networks evolved such self-organized criticality shows how the evolution of complex systems can drive the emergence of critical processes.
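The contrast between adders and sizers can be made concrete with a toy birth-size simulation. The sketch below is illustrative only (parameter values, Gaussian growth noise, and the symmetric-division assumption are ours, not the paper's); it shows the key statistical signature: a sizer erases birth-size deviations within one cycle, while an adder only halves them at each division, so birth sizes are more variable under an adder.

```python
import random

def simulate(mode, n_cycles=200, growth=1.0, target=2.0, noise=0.1):
    """Track cell size at birth over successive division cycles.

    mode="sizer": the cell divides once it reaches a target size, so
    size deviations are corrected within a single cycle.
    mode="adder": the cell adds a fixed increment each cycle, so a
    birth-size deviation is only halved at each symmetric division.
    All parameter values are illustrative, not taken from the paper.
    """
    size = 1.0  # birth size, arbitrary units
    births = []
    for _ in range(n_cycles):
        eps = random.gauss(0.0, noise)  # growth noise this cycle
        if mode == "sizer":
            size_at_division = target + eps
        else:  # adder
            size_at_division = size + growth + eps
        size = size_at_division / 2.0  # symmetric division
        births.append(size)
    return births

random.seed(0)
for mode in ("sizer", "adder"):
    tail = simulate(mode)[100:]  # discard the initial transient
    mean = sum(tail) / len(tail)
    var = sum((s - mean) ** 2 for s in tail) / len(tail)
    print(mode, round(mean, 2), round(var, 4))
```

Both mechanisms converge to the same mean birth size here, but the adder's birth-size variance settles at roughly 4/3 the sizer's, since each division removes only half of the accumulated deviation.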
We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. Our code is publicly available.
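The masked-view matching step can be sketched in a few lines. The toy code below uses a random linear map as a stand-in for the ViT encoder and plain cosine similarity in place of MSN's prototype-based soft-assignment loss; shapes, names, and the shared-encoder simplification are all our assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p=4):
    """Split an HxW image into non-overlapping p x p patches, flattened."""
    h, w = img.shape
    return img.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p)

def encode(patches, W):
    """Toy stand-in for a ViT encoder: linear map + mean pool + L2 norm."""
    z = (patches @ W).mean(axis=0)
    return z / np.linalg.norm(z)

img = rng.standard_normal((16, 16))
patches = patchify(img)            # 16 patches of 16 pixels each
W = rng.standard_normal((16, 8))   # toy projection to 8-d embeddings

# Anchor view: keep only a random 50% of patches; the masked patches
# are never processed, which is what makes the approach scalable.
keep = rng.permutation(len(patches))[: len(patches) // 2]
z_anchor = encode(patches[keep], W)

# Target view: the full, unmasked image. In MSN this branch uses an
# EMA copy of the encoder; here both branches share W for simplicity.
z_target = encode(patches, W)

# Training would maximize agreement between the two embeddings
# (MSN actually matches soft cluster assignments over prototypes;
# cosine similarity is a simplified proxy for that objective).
similarity = float(z_anchor @ z_target)
print(round(similarity, 3))
```

Because the encoder only ever sees the kept patches of the anchor view, the compute cost of that branch scales with the masking ratio rather than the full image size.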
Background Mobile health tools can support shared decision-making. We developed a computer-based decision aid (DA) to help pregnant women and their partners make informed, value-congruent decisions regarding prenatal screening for trisomy. Objective This study aims to assess the usability and usefulness of the computer-based DA among pregnant women, clinicians, and policy makers. Methods For this mixed methods sequential explanatory study, we planned to recruit a convenience sample of 45 pregnant women, 45 clinicians from 3 clinical sites, and 15 policy makers. Eligible women were aged >18 years and >16 weeks pregnant or had recently given birth. Eligible clinicians and policy makers were involved in prenatal care. We asked the participants to navigate a computer-based DA. We asked the women about the usefulness of the DA and their self-confidence in decision-making. We asked all participants about usability, quality, acceptability, and satisfaction with the content of the DA, and we collected sociodemographic data. We explored participants’ reactions to the computer-based DA and solicited suggestions. Our interview guide was based on the Mobile App Rating Scale. We performed descriptive analyses of the quantitative data and thematic deductive and inductive analyses of the qualitative data for each participant category. Results A total of 45 pregnant women, 14 clinicians, and 8 policy makers participated. Most pregnant women were aged between 25 and 34 years (34/45, 75%) and White (42/45, 94%). The largest proportion of clinicians was aged between 35 and 44 years (5/14, 36%); most were women (11/14, 79%), and all were White (14/14, 100%). The largest proportion of policy makers was aged between 45 and 54 years (4/8, 50%); most were women (5/8, 62%), and all were White (8/8, 100%). The mean usefulness score for preparing for decision-making for women was 80/100 (SD 13), and the mean self-efficacy score was 88/100 (SD 11).
The mean usability score was 84/100 (SD 14) for pregnant women, 77/100 (SD 14) for clinicians, and 79/100 (SD 23) for policy makers. The mean global score for quality was 80/100 (SD 9) for pregnant women, 72/100 (SD 12) for clinicians, and 80/100 (SD 9) for policy makers. Regarding acceptability, participants found the amount of information just right (52/66, 79%), balanced (58/66, 88%), useful (38/66, 58%), and sufficient (50/66, 76%). The mean satisfaction score with the content was 84/100 (SD 13) for pregnant women, 73/100 (SD 16) for clinicians, and 73/100 (SD 20) for policy makers. Participants thought the DA could be more engaging (eg, more customizable) and suggested strategies for implementation, such as incorporating it into clinical guidelines. Conclusions Pregnant women, clinicians, and policy makers found the DA usable and useful. The next steps are to incorporate user suggestions for improving engagement and implementing the computer-based DA in clinical practice.
The documentation practice for machine-learned (ML) models often falls short of established practices for traditional software, which impedes model accountability and inadvertently abets inappropriate use or misuse of models. Recently, model cards, a proposal for model documentation, have attracted notable attention, but their impact on actual practice is unclear. In this work, we systematically study model documentation in the field and investigate how to encourage more responsible and accountable documentation practice. Our analysis of publicly available model cards reveals a substantial gap between the proposal and the practice. We then design a tool named DocML aiming to (1) nudge data scientists to comply with the model cards proposal during model development, especially the sections related to ethics, and (2) assess and manage documentation quality. A lab study reveals the benefits of our tool for long-term documentation quality and accountability.
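The "nudging" idea can be illustrated with a minimal section checker. The snippet below is a hypothetical sketch, not the actual DocML tool: it scans a markdown model card for an illustrative subset of the sections the model cards proposal recommends (including ethics) and reports what is still missing, which is the kind of signal a development-time nudge could surface.

```python
import re

# An illustrative subset of sections from the model cards proposal;
# both this list and the checker are a hypothetical sketch of the
# nudging idea, not the actual DocML tool.
REQUIRED_SECTIONS = [
    "Model Details",
    "Intended Use",
    "Training Data",
    "Evaluation Data",
    "Ethical Considerations",
]

def missing_sections(model_card_md):
    """Return the required sections absent from a markdown model card."""
    headings = {
        m.group(1).strip().lower()
        for m in re.finditer(r"^#+\s*(.+)$", model_card_md, re.MULTILINE)
    }
    return [s for s in REQUIRED_SECTIONS if s.lower() not in headings]

card = """# Model Details
A toy classifier.
# Intended Use
Demonstration only.
# Training Data
Synthetic.
"""
print(missing_sections(card))  # the sections the author still needs to write
```

A real tool would go further than heading presence, for example flagging empty or boilerplate section bodies, but even this coarse check captures the gap between proposal and practice described above.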
New neurons are continuously generated in the subgranular zone of the dentate gyrus throughout adulthood. These new neurons gradually integrate into hippocampal circuits, forming new naïve synapses. Viewed from this perspective, these new neurons may represent a significant source of ‘wiring’ noise in hippocampal networks. In machine learning, such noise injection is commonly used as a regularization technique. Regularization techniques help prevent overfitting to training data and allow models to generalize learning to new, unseen data. Using a computational modeling approach, here we ask whether a neurogenesis-like process similarly acts as a regularizer, facilitating generalization in a category learning task. In a convolutional neural network (CNN) trained on the CIFAR-10 object recognition dataset, we modeled neurogenesis as a replacement/turnover mechanism, where weights for a randomly chosen small subset of neurons in a chosen hidden layer were re-initialized to new values as the model learned to categorize 10 different classes of objects. We found that neurogenesis enhanced generalization on unseen test data compared to networks with no neurogenesis. Moreover, neurogenic networks either outperformed or performed similarly to networks with conventional noise injection (i.e., dropout, weight decay, and neural noise). These results suggest that neurogenesis can enhance generalization in hippocampal learning through noise injection, expanding on the roles that neurogenesis may have in cognition. Author Summary: In deep neural networks, various forms of noise injection are used as regularization techniques to prevent overfitting and promote generalization on unseen test data. Here, we were interested in whether adult neurogenesis (the lifelong production of new neurons in the hippocampus) might similarly function as a regularizer in the brain.
We explored this question computationally, assessing whether implementing a neurogenesis-like process in a hidden layer within a convolutional neural network trained on a category learning task would prevent overfitting and promote generalization. We found that neurogenesis regularization was at least as effective as, or more effective than, conventional regularizers (i.e., dropout, weight decay, and neural noise) in improving model performance. These results suggest that optimal levels of hippocampal neurogenesis may improve memory-guided decision making by preventing overfitting, thereby promoting the formation of more generalized memories that can be applied in a broader range of circumstances. We outline how these predictions may be evaluated behaviorally in rodents with altered hippocampal neurogenesis.
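The replacement/turnover mechanism can be sketched directly. In the illustrative code below (turnover rate, initialization scale, and layer sizes are our assumptions, not the paper's settings), a "turnover event" picks a random subset of hidden units and re-initializes both their incoming and outgoing weights, as if naive new neurons had been wired into the layer mid-training.

```python
import numpy as np

rng = np.random.default_rng(42)

def neurogenesis_turnover(W_in, W_out, turnover_rate=0.05):
    """Replace a random subset of hidden units, mimicking neurogenesis.

    For each replaced unit, both its incoming and outgoing weights are
    re-initialized to fresh small random values, as if a new naive
    neuron had been wired in. In training this would be called
    periodically; the rate and init scale here are illustrative.
    """
    n_hidden = W_in.shape[1]
    n_new = max(1, int(turnover_rate * n_hidden))
    new_units = rng.choice(n_hidden, size=n_new, replace=False)
    # Fresh, small random weights for the "newborn" units.
    W_in[:, new_units] = rng.standard_normal((W_in.shape[0], n_new)) * 0.01
    W_out[new_units, :] = rng.standard_normal((n_new, W_out.shape[1])) * 0.01
    return new_units

# Toy fully connected layer: 100 inputs -> 50 hidden units -> 10 outputs.
W_in = rng.standard_normal((100, 50))
W_out = rng.standard_normal((50, 10))
replaced = neurogenesis_turnover(W_in, W_out, turnover_rate=0.1)
print(len(replaced))  # number of units replaced in this turnover event
```

Unlike dropout, which transiently silences units at every step, this turnover permanently discards what the replaced units had learned, which is the 'wiring' noise the abstract argues can act as a regularizer.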