Lagrangian Properties and Control of Soft Robots Modeled with Discrete Cosserat Rods
Lekan Molu
Shaoru Chen
The characteristic "in-plane" bending associated with soft robots' deformation makes them preferable to rigid robots in sophisticated manipulation and movement tasks. Executing such motion strategies precisely in soft deformable robots and structures is, however, fraught with modeling and control challenges given their infinite degrees of freedom. By imposing piecewise constant strains (PCS) across discretized Cosserat microsolids on the continuum material, however, the dynamics become amenable to tractable mathematical analysis. While the PCS model handles the characteristic, difficult-to-model "in-plane" bending well, its Lagrangian properties have not been exploited for control in the literature, nor is there a rigorous study of the dynamic performance of multisection deformable materials under "in-plane" bending that guarantees steady-state convergence. In this spirit, we first establish the PCS model's structural Lagrangian properties. Second, we exploit these properties for control toward various strain goal states. Third, we benchmark our hypotheses on an octopus-inspired robot arm under different constant tip loads, which induce non-constant "in-plane" deformation, and regulate strain states throughout the continuum in these configurations. Our numerical results establish convergence to the desired equilibria throughout the continuum in all of our tests. Within the bounds set here, we conjecture that our methods can find wide adoption in the control of cable- and fluid-driven multisection soft robotic arms, and may extend to the (learning-based) control of deformable agents employed in simulated, mixed, or augmented reality.
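The sketch below illustrates set-point regulation for a system in standard Lagrangian form M(q)q'' + C(q, q')q' + G(q) = tau, with q collecting the strain states; the controller structure, gains, and placeholder load term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def setpoint_control(q, qd, q_ref, G, Kp, Kd):
    """PD control with static (gravity/tip-load) compensation for a Lagrangian
    system M(q) q'' + C(q, qd) qd + G(q) = tau.

    For a constant strain goal q_ref, tau = G(q) + Kp (q_ref - q) - Kd qd is a
    classical set-point regulator for such systems; the gains are illustrative.
    """
    return G(q) + Kp @ (q_ref - q) - Kd @ qd

# Toy usage with a 2-section strain state and a placeholder load term G.
n = 2
Kp, Kd = 50.0 * np.eye(n), 5.0 * np.eye(n)
q, qd = np.zeros(n), np.zeros(n)
q_ref = np.array([0.10, -0.05])
G = lambda q: np.zeros(n)   # placeholder: gravity/tip-load wrench in strain coordinates
tau = setpoint_control(q, qd, q_ref, G, Kp, Kd)
print(tau)
```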
Maximum flow-based formulation for the optimal location of electric vehicle charging stations
Pierre-Luc Parent
Miguel F. Anjos
Ribal Atallah
With the increasing effects of climate change, the urgency to step away from fossil fuels is greater than ever before. Electric vehicles (EVs) are one way to diminish these effects, but their widespread adoption is often limited by the insufficient availability of charging stations. In this work, our goal is to expand the infrastructure of EV charging stations in order to provide a better quality of service in terms of user satisfaction (and availability of charging stations). Our focus is directed specifically toward urban areas. We first propose a model for the assignment of EV charging demand to stations, framing it as a maximum flow problem. This model is the basis for evaluating user satisfaction with a given charging infrastructure. Second, we incorporate the maximum flow model into a mixed-integer linear program in which decisions on opening new stations and on expanding their capacity through additional outlets are accounted for. We showcase our methodology for the city of Montreal, demonstrating the scalability of our approach to real-world scenarios. We conclude that considering both spatial and temporal variations in charging demand is meaningful when solving realistic instances.
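A minimal sketch of the demand-assignment step, framed as a maximum flow problem on a source -> demand-zone -> station -> sink network; the graph, node names, capacities, and the use of networkx are illustrative assumptions, not the paper's Montreal instance.

```python
import networkx as nx

# Toy flow network: source -> demand zones -> reachable stations -> sink.
# Zone supplies are charging requests; station capacities reflect outlet counts.
G = nx.DiGraph()
zones = {"zone_a": 8, "zone_b": 5}            # requests per zone (illustrative)
stations = {"st_1": 6, "st_2": 10}            # outlet capacity per station (illustrative)
reachable = {"zone_a": ["st_1", "st_2"], "zone_b": ["st_2"]}

for z, demand in zones.items():
    G.add_edge("src", z, capacity=demand)
for s, cap in stations.items():
    G.add_edge(s, "sink", capacity=cap)
for z, sts in reachable.items():
    for s in sts:
        G.add_edge(z, s, capacity=zones[z])   # a zone may route all its demand to any reachable station

flow_value, flow_dict = nx.maximum_flow(G, "src", "sink")
satisfaction = flow_value / sum(zones.values())   # fraction of charging demand served
print(flow_value, satisfaction)
```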
Learning to combine top-down context and feed-forward representations under ambiguity with apical and basal dendrites
Nizar Islah
Guillaume Etter
Mashbayar Tugsbayar
Tugce Gurbuz
Filtering Pixel Latent Variables for Unmixing Noisy and Undersampled Volumetric Images
Catherine Bouchard
Andréanne Deschênes
Vincent Boulanger
Jean-Michel Bellavance
Julia Chabbert
Alexy Pelletier-Rioux
Flavie Lavoie-Cardinal
Harnessing Predictive Modeling and Software Analytics in the Age of LLM-Powered Software Development (Invited Talk)
In the rapidly evolving landscape of software development, Large Language Models (LLMs) have emerged as powerful tools that can significantly impact the way software code is written, reviewed, and optimized, making them invaluable resources for programmers. They offer developers the ability to leverage pre-trained knowledge and tap into vast code repositories, enabling faster development cycles and reducing the time spent on repetitive or mundane coding tasks. However, while these models offer substantial benefits, their adoption also presents multiple challenges. For example, they might generate code snippets that are syntactically correct but functionally flawed, requiring human review and validation. Moreover, the ethical considerations surrounding these models, such as biases in the training data, should be carefully addressed to ensure fair and inclusive software development practices. This talk will provide an overview and reflection on some of these challenges, present some preliminary solutions, and discuss opportunities for predictive models and data analytics.
Unmixing Optical Signals from Undersampled Volumetric Measurements by Filtering the Pixel Latent Variables
Catherine Bouchard
Andréanne Deschênes
Vincent Boulanger
Jean-Michel Bellavance
Julia Chabbert
Alexy Pelletier-Rioux
Flavie Lavoie-Cardinal
Pretrainable Geometric Graph Neural Network for Antibody Affinity Maturation
Huiyu Cai
Zuobai Zhang
Mingkai Wang
Bozitao Zhong
Yanling Wu
Tianlei Ying
In the realm of antibody therapeutics development, increasing the binding affinity of an antibody to its target antigen is a crucial task. This paper presents GearBind, a pretrainable deep neural network designed to be effective for in silico affinity maturation. Leveraging multi-level geometric message passing alongside contrastive pretraining on protein structural data, GearBind capably models the complex interplay of atom-level interactions within protein complexes, surpassing previous state-of-the-art approaches on SKEMPI v2 in terms of Pearson correlation, mean absolute error (MAE) and root mean square error (RMSE). In silico experiments elucidate that pretraining helps GearBind become sensitive to mutation-induced binding affinity changes and reflective of amino acid substitution tendency. Using an ensemble model based on pretrained GearBind, we successfully optimize the affinity of CR3022 to the spike (S) protein of the SARS-CoV-2 Omicron strain. Our strategy yields a high success rate with up to 17-fold affinity increase. GearBind proves to be an effective tool in narrowing the search space for in vitro antibody affinity maturation, underscoring the utility of geometric deep learning and adept pre-training in macromolecule interaction modeling.
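As a minimal sketch of one ingredient the abstract relies on, the layer below performs distance-based (E(3)-invariant) message passing over an atom graph; it is a generic illustration only, not the GearBind architecture or its multi-level and contrastive-pretraining components.

```python
import torch
import torch.nn as nn

class DistanceMessagePassing(nn.Module):
    """Minimal E(3)-invariant message-passing layer over atom graphs.

    Messages depend on node features and pairwise distances only; this is a
    generic geometric message-passing sketch, not the GearBind model.
    """
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, h, pos, edge_index):
        src, dst = edge_index                                     # (2, E) atom-pair indices
        dist = (pos[src] - pos[dst]).norm(dim=-1, keepdim=True)   # invariant edge feature
        m = self.msg(torch.cat([h[src], h[dst], dist], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)           # sum incoming messages per atom
        return h + self.upd(torch.cat([h, agg], dim=-1))          # residual node update

# Toy usage: 4 atoms, 3 edges, 16-dimensional features.
h, pos = torch.randn(4, 16), torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
out = DistanceMessagePassing(16)(h, pos, edge_index)
```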
Language Model Alignment with Elastic Reset
Michael Noukhovitch
Samuel Lavoie
Florian Strub
Finetuning language models with reinforcement learning (RL), e.g. from human feedback (HF), is a prominent method for alignment. But optimizing against a reward model can improve reward while degrading performance in other areas, a phenomenon known as reward hacking, alignment tax, or language drift. First, we argue that commonly used test metrics are insufficient and instead measure how different algorithms trade off between reward and drift. The standard method modifies the reward with a Kullback-Leibler (KL) penalty between the online and initial models. We propose Elastic Reset, a new algorithm that achieves higher reward with less drift without explicitly modifying the training objective. We periodically reset the online model to an exponentially moving average (EMA) of itself, then reset the EMA model to the initial model. Through the use of an EMA, our model recovers quickly after resets and achieves higher reward with less drift in the same number of steps. We demonstrate that fine-tuning language models with Elastic Reset leads to state-of-the-art performance on a small-scale pivot-translation benchmark, outperforms all baselines in a medium-scale RLHF-like IMDB mock sentiment task, and leads to a more performant and more aligned technical QA chatbot with LLaMA-7B. Code available at github.com/mnoukhov/elastic-reset.
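A minimal sketch of the reset schedule described above: maintain an EMA of the online policy, periodically reset the online model to the EMA, then reset the EMA to the initial model. The training step, decay rate, and reset interval are placeholders, not the paper's hyperparameters.

```python
import copy
import torch
import torch.nn as nn

def elastic_reset_schedule(online, initial, ema_decay=0.995, reset_every=1000,
                           total_steps=3000, rl_step=None):
    """Sketch of the Elastic Reset schedule: keep an EMA of the online policy;
    every `reset_every` steps, reset online -> EMA and EMA -> initial model.
    `rl_step` stands in for one RL(HF) policy update (placeholder)."""
    ema = copy.deepcopy(online)
    for step in range(1, total_steps + 1):
        if rl_step is not None:
            rl_step(online)                               # one policy update against the reward model
        with torch.no_grad():                             # update EMA of the online parameters
            for p_ema, p in zip(ema.parameters(), online.parameters()):
                p_ema.mul_(ema_decay).add_(p, alpha=1 - ema_decay)
        if step % reset_every == 0:
            online.load_state_dict(ema.state_dict())      # reset online model to its EMA
            ema.load_state_dict(initial.state_dict())     # reset EMA to the initial model
    return online

# Toy usage with a placeholder model and no actual RL updates.
model = nn.Linear(8, 8)
init = copy.deepcopy(model)
elastic_reset_schedule(model, init)
```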
Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers
Umberto Cappellazzo
Daniele Falavigna
Alessio Brutti
Parameter-efficient transfer learning (PETL) methods have emerged as a solid alternative to the standard full fine-tuning approach. They only train a few extra parameters for each downstream task, without sacrificing performance, and dispense with the need to store a copy of the pre-trained model for each task. For audio classification tasks, the Audio Spectrogram Transformer (AST) model shows impressive results. However, surprisingly, how to efficiently adapt it to several downstream tasks has not been tackled before. In this paper, we bridge this gap and present a detailed investigation of common PETL methods for the adaptation of the AST model to audio/speech tasks. Furthermore, we propose a new adapter design that exploits the convolution module of the Conformer model, leading to superior performance over the standard PETL approaches and surpassing or achieving performance parity with full fine-tuning by updating only 0.29% of the parameters. Finally, we provide ablation studies revealing that our proposed adapter: 1) proves to be effective in few-shot efficient transfer learning, 2) attains optimal results regardless of the amount of allocated parameters, and 3) can be applied to other pre-trained models. Our code is available at https://github.com/umbertocappellazzo/PETL_AST.
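A minimal sketch of a convolutional bottleneck adapter loosely inspired by the Conformer convolution module; the layer sizes, kernel width, and residual placement are illustrative assumptions, not the paper's exact adapter design.

```python
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    """Bottleneck adapter with a depthwise convolution over the token axis.

    In PETL, only these few adapter parameters would be trained per downstream
    task while the pre-trained AST backbone stays frozen (illustrative sketch).
    """
    def __init__(self, dim, bottleneck=32, kernel_size=3):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.dwconv = nn.Conv1d(bottleneck, bottleneck, kernel_size,
                                padding=kernel_size // 2, groups=bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):                                      # x: (batch, tokens, dim)
        h = self.down(x)
        h = self.dwconv(h.transpose(1, 2)).transpose(1, 2)     # depthwise conv along the token axis
        return x + self.up(self.act(h))                        # residual added to the frozen features

# Toy usage on AST-like token embeddings (2 clips, 100 patches, 768-dim).
tokens = torch.randn(2, 100, 768)
out = ConvAdapter(768)(tokens)
```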
Bug characterization in machine learning-based systems
Mohammad Mehdi Morovati
Amin Nikanjam
Florian Tambon
Z. Jiang