A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems
Alexandre AGM Duval
Simon V. Mathis
Chaitanya K. Joshi
Victor Schmidt
Santiago Miret
Fragkiskos D. Malliaros
Taco Cohen
Pietro Liò
Michael M. Bronstein
Efficient Graphics Representation with Differentiable Indirection
Sayantan Datta
Carl Marshall
Zhao Dong
Zhengqin Li
We introduce differentiable indirection – a novel learned primitive that employs differentiable multi-scale lookup tables as an effective substitute for traditional compute and data operations across the graphics pipeline. We demonstrate its flexibility on a number of graphics tasks, i.e., geometric and image representation, texture mapping, shading, and radiance field representation. In all cases, differentiable indirection seamlessly integrates into existing architectures, trains rapidly, and yields both versatile and efficient results.
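To make the idea concrete, below is a minimal sketch of a two-level differentiable lookup in PyTorch: a primary learned table produces coordinates that index a secondary (cascaded) table, and bilinear interpolation keeps the whole chain differentiable. The table resolutions, the sigmoid squashing, and the random training target are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableIndirection(nn.Module):
    """Sketch of a two-level learned lookup: a primary table outputs
    coordinates that index into a cascaded table. Both tables are
    trained end-to-end through bilinear interpolation."""
    def __init__(self, primary_res=64, cascaded_res=64, out_dim=3):
        super().__init__()
        # Primary table stores 2D coordinates into the cascaded table.
        self.primary = nn.Parameter(torch.rand(1, 2, primary_res, primary_res))
        # Cascaded table stores the actual output values (e.g. RGB).
        self.cascaded = nn.Parameter(torch.rand(1, out_dim, cascaded_res, cascaded_res))

    def _lookup(self, table, uv):
        # uv in [0, 1]^2, shape (N, 2); grid_sample expects [-1, 1].
        grid = (uv * 2.0 - 1.0).view(1, -1, 1, 2)
        out = F.grid_sample(table, grid, mode="bilinear", align_corners=True)
        return out.view(table.shape[1], -1).t()  # (N, C)

    def forward(self, uv):
        # First lookup yields coordinates (squashed to [0, 1]);
        # second lookup yields the value.
        inner_uv = torch.sigmoid(self._lookup(self.primary, uv))
        return self._lookup(self.cascaded, inner_uv)

# Fit the indirection to a target, e.g. sampled texture values.
model = DifferentiableIndirection()
uv = torch.rand(1024, 2)
target = torch.rand(1024, 3)  # stand-in for real training data
loss = F.mse_loss(model(uv), target)
loss.backward()  # gradients flow through both tables
```

Because every step is differentiable, gradients reach both tables, which is what allows a learned lookup chain to stand in for compute or data operations while being trained end-to-end.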
Explorable Mesh Deformation Subspaces from Unstructured 3D Generative Models
Arman Maesumi
Paul Guerrero
Vladimir Kim
Matthew Fisher
Siddhartha Chaudhuri
Daniel Ritchie
Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks
MohammadReza Davari
The rapid development of AI systems has been greatly influenced by the emergence of foundation models. A common approach for targeted problems involves fine-tuning these pre-trained foundation models for specific target tasks, resulting in a rapid spread of models fine-tuned across a diverse array of tasks. This work focuses on the problem of merging multiple fine-tunings of the same foundation model derived from a spectrum of auxiliary tasks. We introduce a simple new method, Model Breadcrumbs, which consists of a sparsely defined weight set that guides model adaptation within the weight space of a pre-trained model. These breadcrumbs are constructed by subtracting the weights of a pre-trained model before and after fine-tuning, followed by a sparsification process that eliminates weight outliers and negligible perturbations. Our experiments demonstrate the effectiveness of Model Breadcrumbs in simultaneously improving performance across multiple tasks. This contribution aligns with the evolving paradigm of updatable machine learning, reminiscent of the collaborative principles underlying open-source software development, fostering a community-driven effort to reliably update machine learning models. Our method is more efficient and, unlike previous proposals, does not require hyperparameter tuning for each new task added. Through extensive experimentation involving various models, tasks, and modalities, we establish that Model Breadcrumbs offers a simple, efficient, and highly effective approach for constructing multi-task models and facilitating updates to foundation models.
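A minimal sketch of how such breadcrumbs could be constructed and merged, assuming state-dict-style weight dictionaries; the percentile thresholds (top_frac, bottom_frac) and the scaling coefficient alpha are hypothetical stand-ins for the paper's hyperparameters.

```python
import torch

def breadcrumb_mask(delta: torch.Tensor, top_frac=0.01, bottom_frac=0.85):
    """Keep the band of weights between two magnitude percentiles:
    drop the largest outliers (top_frac) and the negligible
    perturbations (bottom_frac). Fractions here are illustrative."""
    mag = delta.abs().flatten()
    lo = torch.quantile(mag, bottom_frac)      # floor for 'negligible'
    hi = torch.quantile(mag, 1.0 - top_frac)   # ceiling for 'outlier'
    return (delta.abs() >= lo) & (delta.abs() <= hi)

def merge_breadcrumbs(pretrained, finetuned_list, alpha=0.3):
    """Merge several fine-tunings of the same pre-trained model by
    accumulating sparsified weight differences ('breadcrumbs')."""
    merged = {k: v.clone() for k, v in pretrained.items()}
    for ft in finetuned_list:
        for k in merged:
            delta = ft[k] - pretrained[k]      # task direction
            mask = breadcrumb_mask(delta)      # sparsify
            merged[k] += alpha * delta * mask  # guided update
    return merged
```

The masking is what distinguishes this from plain weight averaging: only the mid-magnitude band of each task's weight difference is applied, so the outliers and noise of any one fine-tuning do not dominate the merged model.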
Lagrangian Properties and Control of Soft Robots Modeled with Discrete Cosserat Rods
Lekan Molu
Shaoru Chen
The characteristic "in-plane" bending associated with soft robots' deformation makes them preferable to rigid robots in sophisticated manipulation and movement tasks. Executing such motion strategies with precision in soft deformable robots and structures is, however, fraught with modeling and control challenges given their infinite degrees of freedom. By imposing piecewise constant strains (PCS) across (discretized) Cosserat microsolids on the continuum material, however, the dynamics become amenable to tractable mathematical analysis. While this PCS model handles the characteristically difficult-to-model "in-plane" bending well, its Lagrangian properties have not been exploited for control in the literature, nor is there a rigorous study of the dynamic performance of multisection deformable materials under "in-plane" bending that guarantees steady-state convergence. In this spirit, we first establish the PCS model's structural Lagrangian properties. Second, we exploit these for control toward various strain goal states. Third, we benchmark our hypotheses on an Octopus-inspired robot arm under different constant tip loads. These induce non-constant "in-plane" deformation, and we regulate strain states throughout the continuum in these configurations. Our numerical results establish convergence to the desired equilibrium throughout the continuum in all of our tests. Within the bounds set here, we conjecture that our methods can find wide adoption in the control of cable- and fluid-driven multisection soft robotic arms, and may extend to the (learning-based) control of deformable agents employed in simulated, mixed, or augmented reality.
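As a hedged illustration of how Lagrangian structure can be exploited for strain regulation, the sketch below applies a standard computed-torque law to dynamics of the form M(q)q'' + C(q, q')q' + G(q) = tau, with q stacking the piecewise-constant strain coordinates. The callables M, C, G and the gains are placeholders, not the paper's controller.

```python
import numpy as np

def computed_torque(q, qdot, q_des, M, C, G, Kp, Kd):
    """Sketch of a setpoint controller for the Lagrangian form
        M(q) q'' + C(q, q') q' + G(q) = tau,
    where q stacks the piecewise-constant strain coordinates.
    M, C, G are callables assumed to come from a discrete Cosserat
    (PCS) model; the gains Kp, Kd are illustrative."""
    e = q_des - q
    # Inverse-dynamics (computed-torque) law: cancel the nonlinear
    # terms, then impose linear error dynamics e'' + Kd e' + Kp e = 0.
    v = Kp @ e - Kd @ qdot
    return M(q) @ v + C(q, qdot) @ qdot + G(q)

# Hypothetical single-section arm with 6 strain coordinates:
n = 6
Kp, Kd = 50.0 * np.eye(n), 10.0 * np.eye(n)
q, qdot, q_des = np.zeros(n), np.zeros(n), 0.1 * np.ones(n)
M = lambda q: np.eye(n)              # placeholder inertia matrix
C = lambda q, qd: np.zeros((n, n))   # placeholder Coriolis matrix
G = lambda q: np.zeros(n)            # placeholder gravity/tip load
tau = computed_torque(q, qdot, q_des, M, C, G, Kp, Kd)
```

With the nonlinear terms cancelled, convergence of the strain error follows from the standard linear analysis; this is the generic benefit of having the dynamics in Lagrangian form.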
Filtering Pixel Latent Variables for Unmixing Noisy and Undersampled Volumetric Images
Catherine Bouchard
Andréanne Deschênes
Vincent Boulanger
Jean-Michel Bellavance
Julia Chabbert
Alexy Pelletier-Rioux
Flavie Lavoie-Cardinal
Harnessing Predictive Modeling and Software Analytics in the Age of LLM-Powered Software Development (Invited Talk)
In the rapidly evolving landscape of software development, Large Language Models (LLMs) have emerged as powerful tools that can significantly impact the way software code is written, reviewed, and optimized, making them invaluable resources for programmers. They offer developers the ability to leverage pre-trained knowledge and tap into vast code repositories, enabling faster development cycles and reducing the time spent on repetitive or mundane coding tasks. However, while these models offer substantial benefits, their adoption also presents multiple challenges. For example, they might generate code snippets that are syntactically correct but functionally flawed, requiring human review and validation. Moreover, the ethical considerations surrounding these models, such as biases in the training data, must be carefully addressed to ensure fair and inclusive software development practices. This talk will provide an overview and reflection on some of these challenges, present some preliminary solutions, and discuss opportunities for predictive models and data analytics.
Unmixing Optical Signals from Undersampled Volumetric Measurements by Filtering the Pixel Latent Variables
Catherine Bouchard
Andréanne Deschênes
Vincent Boulanger
Jean-Michel Bellavance
Julia Chabbert
Alexy Pelletier-Rioux
Flavie Lavoie-Cardinal
The development of signal unmixing algorithms is essential for leveraging multimodal datasets acquired through a wide array of scientific imaging technologies, including hyperspectral or time-resolved acquisitions. In experimental physics, enhancing the spatio-temporal resolution or expanding the number of detection channels often leads to a diminished sampling rate and signal-to-noise ratio (SNR), significantly affecting the efficacy of signal unmixing algorithms. We propose Latent Unmixing, a new approach that applies band-pass filters to the latent space of a multi-dimensional convolutional neural network to disentangle overlapping signal components. It enables better isolation and quantification of individual signal contributions, especially in the context of undersampled distributions. Using multi-dimensional convolution kernels to process all dimensions simultaneously enhances the network's ability to extract information from adjacent pixels and time- or spectral-bins. This approach enables more effective separation of components in cases where individual pixels do not provide clear, well-resolved information. We showcase the method's practical use in experimental physics through two test cases that highlight the versatility of our approach: fluorescence lifetime microscopy and mode decomposition in optical fibers. The latent unmixing method extracts valuable information from complex signals that cannot be resolved by standard methods, opening new possibilities in optics and photonics for multichannel separations at increased sampling rates.
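A minimal sketch of the band-pass idea, assuming latent feature maps from a 3D CNN with a trailing time/spectral axis; the FFT-based filter and the cutoff values are illustrative, not the authors' exact implementation.

```python
import torch
import torch.fft as fft

def bandpass_latent(latent, low, high, dim=-1):
    """Sketch of the band-pass step: filter latent feature maps along
    the time/spectral axis in the Fourier domain, keeping only
    frequency bins in [low, high). `latent` has shape
    (batch, channels, ..., bins); cutoffs are in normalized
    frequency and are illustrative placeholders."""
    spec = fft.rfft(latent, dim=dim)
    freqs = fft.rfftfreq(latent.shape[dim], device=latent.device)
    mask = (freqs >= low) & (freqs < high)
    shape = [1] * spec.ndim
    shape[dim] = -1
    spec = spec * mask.view(shape)  # zero out-of-band frequencies
    return fft.irfft(spec, n=latent.shape[dim], dim=dim)

# e.g. isolate a mid-frequency band in the latent code of a 3D CNN:
latent = torch.randn(2, 16, 32, 32, 64)   # (B, C, H, W, time-bins)
filtered = bandpass_latent(latent, low=0.05, high=0.25)
```

Filtering the latent code rather than the raw pixels lets the multi-dimensional convolutions pool information from neighboring pixels and bins first, which is what makes component separation feasible even when individual pixels are undersampled or noisy.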