Existing Digital Health Technology Index Summary Report for Older Adults Living with Neurocognitive Disorders (Mild and Major) and Their Informal Caregivers: An Environmental Scan
Ambily Jose
Maxime Sasseville
Ellen Gorus
Anik Giguère
Anne Bourbonnais
Clémence Balley
Ronald Buyl
Marie-Pierre Gagnon
Digital health has added numerous promising solutions to enhance the health and wellness of people with neurocognitive disorders (NCDs) and their informal caregivers. (1) Background: It is important to obtain a comprehensive view of currently available technologies, their outcomes, and their conditions of success to inform recommendations regarding digital health solutions for people with NCDs and their caregivers. This environmental scan was performed to identify the features of existing digital health solutions relevant to the targeted population. It reviews currently available digital health solutions and their characteristics in order to develop a decision support tool that assists older adults living with mild or major neurocognitive disorders and their informal caregivers in finding digital health solutions suited to their needs and preferences, based on trustworthy information. (2) Methods: We conducted an environmental scan to identify digital health solutions from a systematic review and from targeted searches of the grey literature covering Canada and Europe. Technological tools were scanned using a preformatted extraction grid. We assessed their relevance based on selected attributes and summarized the findings. (3) Results: We identified 100 available digital health solutions. The majority (56%) were not specific to NCDs, and only 28% provided scientific evidence of their effectiveness. Remote patient care, movement tracking, and cognitive exercises were the most common purposes of the digital health solutions. Most solutions were presented as decision aid tools, pill dispensers, apps, web platforms, or a combination of these. (4) Conclusions: This environmental scan identified current digital health solutions for older adults with mild or major neurocognitive disorders and their informal caregivers. The findings highlight the need for additional approaches to strengthen digital health interventions for the well-being of older adults with mild and major NCDs and their informal and formal healthcare providers.
DASB -- Discrete Audio and Speech Benchmark
Pooneh Mousavi
Luca Della Libera
Jarod Duret
Artem Ploujnikov
Discrete audio tokens have recently gained considerable attention for their potential to connect audio and language processing, enabling the creation of modern multimodal large language models. Ideal audio tokens must effectively preserve phonetic and semantic content along with paralinguistic information, speaker identity, and other details. While several types of audio tokens have been recently proposed, identifying the optimal tokenizer for various tasks is challenging due to the inconsistent evaluation settings in existing studies. To address this gap, we release the Discrete Audio and Speech Benchmark (DASB), a comprehensive leaderboard for benchmarking discrete audio tokens across a wide range of discriminative tasks, including speech recognition, speaker identification and verification, emotion recognition, keyword spotting, and intent classification, as well as generative tasks such as speech enhancement, separation, and text-to-speech. Our results show that, on average, semantic tokens outperform compression tokens across most discriminative and generative tasks. However, the performance gap between semantic tokens and standard continuous representations remains substantial, highlighting the need for further research in this field.
Language Model-In-The-Loop: Data Optimal Approach to Recommend Actions in Text Games
Arjun V Sudhakar
Prasanna Parthasarathi
Janarthanan Rajendran
Large Language Models (LLMs) have demonstrated superior performance in language understanding benchmarks. A recent use case for LLMs involves training decision-making agents over textual information. The existing approach leverages an LLM's linguistic priors to recommend action candidates in text games, i.e., to operate without environment-provided actions. However, adapting LLMs to specific games or tasks requires a massive amount of annotated human gameplay. Moreover, in the existing approach, the language model is kept frozen during the agent's training process, which limits learning from in-game knowledge about the world. Hence, we explore strategies to adapt the language model for candidate recommendation using in-game transitions in an online learning fashion, mitigating reliance on human-annotated gameplays, which are costly to acquire. In this paper, we propose in-game transition selection methods to adapt the LLM in the loop, reducing the dependency on human-annotated gameplays while improving performance and convergence. Our method demonstrates a 53% relative improvement in average game score over the previous state-of-the-art model, achieving more than twice its convergence rate in the fully annotated dataset setting. Furthermore, even with only 10% of the human annotations, we surpass the state-of-the-art performance obtained with 100% of them.
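To make the in-the-loop setup concrete, the minimal sketch below shows one way an LLM can both propose candidate actions for a text-game observation and be periodically adapted on selected in-game transitions instead of human-annotated gameplay. The model name, prompt format, and reward-threshold selection rule are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: an LLM proposes candidate actions for a text game and is
# periodically adapted on selected in-game transitions instead of human gameplay.
# Model name, prompt format, and the selection heuristic are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")           # placeholder LM
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def propose_actions(observation: str, k: int = 5) -> list[str]:
    """Ask the LM for k candidate actions given the current game observation."""
    prompt = f"Observation: {observation}\nList {k} possible actions:\n-"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
    text = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:])
    return [a.strip() for a in text.split("-") if a.strip()][:k]

def adapt_on_transitions(transitions: list[tuple[str, str, float]]) -> None:
    """Fine-tune the LM on (observation, action) pairs whose in-game reward
    exceeds a threshold -- one simple stand-in for a transition-selection rule."""
    selected = [(o, a) for o, a, r in transitions if r > 0]
    for obs, action in selected:
        text = f"Observation: {obs}\nAction: {action}"
        batch = tokenizer(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```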
APPL: A Prompt Programming Language for Harmonious Integration of Programs and Large Language Model Prompts
Honghua Dong
Qidong Su
Yubo Gao
Zhaoyu Li
Yangjun Ruan
Gennady G. Pekhimenko
Chris J. Maddison
Large Language Models (LLMs) have become increasingly capable of handling diverse tasks with the aid of well-crafted prompts and the integration of external tools, but as task complexity rises, workflows involving LLMs can become complicated and thus challenging to implement and maintain. To address this challenge, we propose APPL, A Prompt Programming Language that acts as a bridge between computer programs and LLMs, allowing seamless embedding of prompts into Python functions, and vice versa. APPL provides an intuitive and Python-native syntax, an efficient parallelized runtime with asynchronous semantics, and a tracing module supporting effective failure diagnosis and replaying without extra costs. We demonstrate that APPL programs are intuitive, concise, and efficient through three representative scenarios: Chain-of-Thought with self-consistency (CoT-SC), a ReAct tool-use agent, and multi-agent chat. Experiments on three parallelizable workflows further show that APPL can effectively parallelize independent LLM calls, with a significant speedup ratio that almost matches the estimate.
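As a purely conceptual illustration of embedding prompts into Python functions (this is not APPL's actual API), one can imagine a decorator that treats a function's docstring as a prompt template, fills it with the call arguments, and returns the LLM completion. The `prompt_fn` decorator, model name, and OpenAI-client usage below are assumptions made for the sketch.

```python
# Conceptual illustration only -- not APPL's actual API. It shows the general
# idea of embedding a prompt in a Python function: the docstring acts as a
# prompt template and the decorator fills it with the call arguments.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def prompt_fn(func):
    def wrapper(**kwargs):
        prompt = func.__doc__.format(**kwargs)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    return wrapper

@prompt_fn
def summarize(text):
    """Summarize the following passage in one sentence:

    {text}"""

print(summarize(text="Large Language Models can follow prompts..."))
```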
Functional Acceleration for Policy Mirror Descent
Veronica Chelu
We apply functional acceleration to the general family of Policy Mirror Descent (PMD) algorithms, which covers a wide range of novel and fundamental methods in Reinforcement Learning (RL). Leveraging duality, we propose a momentum-based PMD update. By taking the functional route, our approach is independent of the policy parametrization and applicable to large-scale optimization, covering previous applications of momentum at the level of policy parameters as a special case. We theoretically analyze several properties of this approach and complement them with a numerical ablation study, which serves to illustrate the policy optimization dynamics on the value polytope relative to different algorithmic design choices in this space. We further characterize numerically several features of the problem setting relevant to functional acceleration, and lastly, we investigate the impact of approximation on their learning mechanics.
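For background, the standard PMD step is a mirror-ascent update on the policy regularized by a Bregman divergence; a momentum-style variant can be sketched by extrapolating the critic before the mirror step. The extrapolation below is a minimal illustration and may differ from the paper's exact accelerated update.

```latex
% Standard PMD step (mirror ascent with Bregman divergence D_Phi), followed by
% a simple momentum-style variant; the paper's exact accelerated update may differ.
\begin{align}
\pi_{k+1} &= \arg\max_{\pi}\; \langle Q^{\pi_k}, \pi\rangle
             - \tfrac{1}{\eta_k}\, D_{\Phi}(\pi, \pi_k)
  && \text{(standard PMD step)} \\
\tilde{Q}_k &= Q^{\pi_k} + \beta\,\bigl(Q^{\pi_k} - Q^{\pi_{k-1}}\bigr)
  && \text{(extrapolated critic, illustrative)} \\
\pi^{\mathrm{acc}}_{k+1} &= \arg\max_{\pi}\; \langle \tilde{Q}_k, \pi\rangle
             - \tfrac{1}{\eta_k}\, D_{\Phi}(\pi, \pi_k)
  && \text{(momentum-style PMD step)}
\end{align}
```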
GAPS phase II: development and pilot results of the global assessment in pediatric surgery, an evidence-based pediatric surgical capacity assessment tool for low-resource settings.
Yasmine Yousef
Sarah Cairo
Etienne St-Louis
Laura F. Goodman
Doulia M. Hamad
Robert Baird
Emily R. Smith
Sherif Emil
Jean Martin Laberge
Mohamed Abdelmalak
Zipporah Gathuy
Faye Evans
Maryam Ghavami Adel
Ki K. Bertille
Milind Chitnis
Leecarlo Millano
Peter Nthumba
Sergio d’Agostino
Bruno Cigliano
Luis Enrique Zea-Salazar … (4 more authors)
Emmanuel Ameh
Doruk Ozgediz
Elena Guadagno
Handling Delay in Reinforcement Learning Caused by Parallel Computations of Neurons
Ivan Anokhin
Rishav
Stephen Chung
Biological neural networks operate in parallel, a feature that sets them apart from artificial neural networks and can significantly enhance inference speed. However, this parallelism introduces challenges: when each neuron operates asynchronously with a fixed execution time, an […]
Realtime Reinforcement Learning: Towards Rapid Asynchronous Deployment of Large Models
Matthew D Riemer
Gopeshh Subbaraj
Realtime environments change even as agents perform action inference and learning, thus requiring high interaction frequencies to effectively minimize long-term regret. However, recent advances in machine learning involve larger neural networks with longer inference times, raising questions about their applicability in realtime systems where reaction time is crucial. We present an analysis of lower bounds on regret in realtime environments to show that minimizing long-term regret is generally impossible within the typical sequential interaction and learning paradigm, but often becomes possible when sufficient asynchronous compute is available. We propose novel algorithms for staggering asynchronous inference processes to ensure that actions are taken at consistent time intervals, and demonstrate that the use of models with high action inference times is constrained only by the environment's effective stochasticity over the inference horizon, and not by action frequency. Our analysis shows that the number of inference processes needed scales linearly with increasing inference times while enabling the use of models that are multiple orders of magnitude larger than existing approaches when learning from a realtime simulation of Game Boy games such as Pokemon and Tetris.
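The sketch below illustrates the staggering idea under simple assumptions (a fixed inference time `TAU`, a desired action interval `DELTA`, and thread-based workers): starting roughly ceil(TAU / DELTA) inference workers offset by `DELTA` yields one fresh action per interval, so the required number of workers grows linearly with inference time. All names and the timing scheme are illustrative, not the paper's implementation.

```python
# Minimal sketch of staggered asynchronous inference (names and timing scheme
# are illustrative, not the paper's implementation). With inference time TAU and
# a desired action interval DELTA, about ceil(TAU / DELTA) workers started DELTA
# apart ensure that one action becomes available every DELTA seconds.
import math
import threading
import time

TAU = 0.5      # seconds per forward pass of the (large) policy model
DELTA = 0.1    # desired interval between actions in the realtime environment

def infer(observation):
    time.sleep(TAU)                      # stand-in for a slow forward pass
    return f"action_for({observation})"

def worker(get_obs, act):
    while True:
        act(infer(get_obs()))            # observation is stale by ~TAU seconds

def launch_staggered(get_obs, act):
    n_workers = math.ceil(TAU / DELTA)   # scales linearly with inference time
    for _ in range(n_workers):
        threading.Thread(target=worker, args=(get_obs, act), daemon=True).start()
        time.sleep(DELTA)                # stagger worker starts by the action interval

if __name__ == "__main__":
    launch_staggered(get_obs=lambda: time.time(), act=print)
    time.sleep(2)                        # observe roughly one action per DELTA seconds
```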
A deeper look at depth pruning of LLMs
Shoaib Ahmed Siddiqui
Xin Dong
Greg Heinrich
Thomas Breuel
Jan Kautz
Pavlo Molchanov
Large Language Models (LLMs) are not only resource-intensive to train but even more costly to deploy in production. Therefore, recent work has attempted to prune blocks of LLMs based on cheap proxies for estimating block importance, effectively removing 10% of blocks in well-trained LLaMa-2 and Mistral 7b models without any significant degradation of downstream metrics. In this paper, we explore different block importance metrics by considering adaptive metrics such as Shapley value in addition to the static ones explored in prior work. We show that *adaptive metrics exhibit a trade-off in performance between tasks, i.e., improvement on one task may degrade performance on another due to differences in the computed block influences*. Furthermore, we extend this analysis from complete blocks to individual self-attention and feed-forward layers, highlighting the propensity of the self-attention layers to be more amenable to pruning, even allowing ***removal of up to 33% of the self-attention layers without incurring any performance degradation on MMLU for Mistral 7b*** (a significant reduction in the costly maintenance of the KV-cache). Finally, we look at simple performance recovery techniques to emulate the pruned layers by training lightweight additive biases or low-rank linear adapters. *Performance recovery using emulated updates avoids performance degradation for the initial blocks (up to 5% absolute improvement on MMLU)*, which is either competitive with or superior to the learning-based technique.
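As a hedged illustration of the kind of cheap, static block-importance proxy such work relies on, the sketch below scores each transformer block by how little it changes its hidden states on a small calibration set and drops the lowest-scoring blocks. `model.embed` and `model.blocks` are a hypothetical decoder API, and the cosine-based score is one common proxy rather than the paper's specific metric.

```python
# Sketch of one cheap, static block-importance proxy: score each transformer block
# by how much it changes its hidden states (1 - cosine similarity between block
# input and output) on calibration data, then drop the lowest-scoring blocks.
# Illustrative only; the paper compares several static and adaptive metrics.
import torch

@torch.no_grad()
def block_importance(model, calib_batches):
    """Average (1 - cosine similarity) between each block's input and output."""
    scores = torch.zeros(len(model.blocks))
    for batch in calib_batches:
        h = model.embed(batch)                        # hypothetical decoder API
        for i, block in enumerate(model.blocks):
            h_out = block(h)
            cos = torch.nn.functional.cosine_similarity(
                h.flatten(1), h_out.flatten(1), dim=-1).mean()
            scores[i] += 1.0 - cos                    # small change => low importance
            h = h_out
    return scores / len(calib_batches)

def prune_blocks(model, calib_batches, drop_fraction=0.1):
    scores = block_importance(model, calib_batches)
    n_drop = int(drop_fraction * len(model.blocks))
    keep = sorted(scores.argsort()[n_drop:].tolist())  # keep the most important blocks
    model.blocks = torch.nn.ModuleList([model.blocks[i] for i in keep])
    return model
```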
Insect Identification in the Wild: The AMI Dataset
Aditya Jain
Fagner Cunha
M. Bunsen
Juan Sebastián Cañas
L. Pasi
N. Pinoy
Flemming Helsing
JoAnne Russo
Marc Botham
Michael Sabourin
Jonathan Fréchette
Alexandre Anctil
Yacksecari Lopez
Eduardo Navarro
Filonila Perez Pimentel
Ana Cecilia Zamora
José Alejandro Ramirez Silva
Jonathan Gagnon
T. August
Kim Bjerge … (8 more authors)
Alba Gomez Segura
Marc Bélisle
Yves Basset
K. P. McFarland
David Roy
Toke Thomas Høye
Maxim Larrivée
Insects represent half of all global biodiversity, yet many of the world's insects are disappearing, with severe implications for ecosystems and agriculture. Despite this crisis, data on insect diversity and abundance remain woefully inadequate, due to the scarcity of human experts and the lack of scalable tools for monitoring. Ecologists have started to adopt camera traps to record and study insects, and have proposed computer vision algorithms as an answer for scalable data processing. However, insect monitoring in the wild poses unique challenges that have not yet been addressed within computer vision, including the combination of long-tailed data, extremely similar classes, and significant distribution shifts. We provide the first large-scale machine learning benchmarks for fine-grained insect recognition, designed to match real-world tasks faced by ecologists. Our contributions include a curated dataset of images from citizen science platforms and museums, and an expert-annotated dataset drawn from automated camera traps across multiple continents, designed to test out-of-distribution generalization under field conditions. We train and evaluate a variety of baseline algorithms and introduce a combination of data augmentation techniques that enhance generalization across geographies and hardware setups.
A machine learning pipeline for automated insect monitoring
Aditya Jain
Fagner Cunha
M. Bunsen
L. Pasi
Anna Viklund
Maxim Larrivée
Climate change and other anthropogenic factors have led to a catastrophic decline in insects, endangering both biodiversity and the ecosystem services on which human society depends. Data on insect abundance, however, remains woefully inadequate. Camera traps, conventionally used for monitoring terrestrial vertebrates, are now being modified for insects, especially moths. We describe a complete, open-source machine learning-based software pipeline for automated monitoring of moths via camera traps, including object detection, moth/non-moth classification, fine-grained identification of moth species, and tracking individuals. We believe that our tools, which are already in use across three continents, represent the future of massively scalable data collection in entomology.
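The outline below sketches how the described stages could be chained for each camera-trap frame (detection, moth/non-moth filtering, fine-grained species classification, tracking); all function and object names are placeholders rather than the project's actual modules.

```python
# Illustrative outline of the pipeline stages described above. Function and model
# names are placeholders, not the project's actual module names.
def process_frame(frame, detector, binary_clf, species_clf, tracker):
    detections = detector(frame)                      # bounding boxes of candidate insects
    crops = [frame.crop(box) for box in detections]   # hypothetical crop helper
    moths = [c for c in crops if binary_clf(c) == "moth"]
    labels = [species_clf(c) for c in moths]          # fine-grained species IDs
    tracks = tracker.update(detections, labels)       # link individuals across frames
    return tracks
```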