Implications of conscious AI in primary healthcare
The conversation about consciousness of artificial intelligence (AI) has been ongoing since the 1950s. Despite the numerous applications of AI identified in healthcare and primary healthcare, little is known about how a conscious AI would reshape its use in this domain. While there is a wide range of views on whether AI can or cannot possess consciousness, a prevailing theme in all arguments is uncertainty. Given this uncertainty and the high stakes associated with the use of AI in primary healthcare, it is imperative to be prepared for all scenarios, including conscious AI systems being used for medical diagnosis, shared decision-making and resource management in the future. This commentary provides an overview of some of the pertinent evidence supporting the use of AI in primary healthcare and proposes ideas as to how consciousness of AI could support or further complicate these applications. Given the scarcity of evidence on the association between consciousness of AI and its current state of use in primary healthcare, our commentary identifies directions for future research in this area, including assessing patients', healthcare workers' and policy-makers' attitudes towards consciousness of AI systems in primary healthcare settings.
Iterative Graph Self-Distillation
Hanlin Zhang
Shuai Lin
Weiyang Liu
Pan Zhou
Xiaodan Liang
Eric P. Xing
Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose a method called Iterative Graph Self-Distillation (IGSD), which learns graph-level representations in an unsupervised manner through instance discrimination using a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representations of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning IGSD-trained models with self-training can further improve graph representation learning. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, validating the effectiveness of IGSD.
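A minimal PyTorch sketch of the teacher-student self-distillation step described in the abstract, assuming a graph encoder that maps a batch of augmented graph views to embeddings; the encoder, the augmentation pipeline and hyperparameters such as `momentum` and `temperature` are illustrative assumptions, not the authors' released implementation.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """Initialise the teacher as a frozen copy of the student encoder."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, momentum=0.99):
    """Teacher weights track an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.data.mul_(momentum).add_(s.data, alpha=1.0 - momentum)

def distillation_step(student, teacher, optimizer, view_a, view_b, temperature=0.2):
    """One unsupervised step: the student predicts the teacher's representation
    of the other augmented (e.g. graph-diffusion) view of each graph."""
    z_s = F.normalize(student(view_a), dim=-1)              # (batch, dim)
    with torch.no_grad():
        z_t = F.normalize(teacher(view_b), dim=-1)          # (batch, dim)
    logits = z_s @ z_t.t() / temperature                    # pairwise similarities
    labels = torch.arange(z_s.size(0), device=z_s.device)   # positives on the diagonal
    loss = F.cross_entropy(logits, labels)                  # instance-discrimination loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                            # teacher follows the student
    return loss.item()
```

In the semi-supervised setting, a supervised contrastive term would simply be added to `loss` before the backward pass, as the abstract describes.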
Neural network prediction of the effect of thermomechanical controlled processing on mechanical properties
Sushant Sinha
Denzel Guye
Xiaoping Ma
Kashif Rehman
S. Yue
Novel community data in ecology-properties and prospects.
Florian Hartig
Nerea Abrego
Alex Bush
Jonathan M. Chase
G. Guillera‐Arroita
M. Leibold
Otso T. Ovaskainen
Loïc Pellissier
Maximilian Pichler
Giovanni Poggiato
Sara Si-moussi
Wilfried Thuiller
Duarte S Viana
D. Warton
Damaris Zurell
Douglas W. Yu
Reply to: Model uncertainty obscures major driver of soil carbon
Feng Tao
Benjamin Z. Houlton
Serita D. Frey
Johannes Lehmann
Stefano Manzoni
Yuanyuan Huang
Lifen Jiang
Umakant Mishra
Bruce A. Hungate
Michael W. I. Schmidt
Markus Reichstein
Nuno Carvalhais
Philippe Ciais
Ying-Ping Wang
Bernhard Ahrens
Gustaf Hugelius
Xingjie Lu
Zheng Shi
Kostiantyn Viatkin
Ronald Vargas
Yusuf Yigini
Christian Omuto
Ashish A. Malik
Guillermo Peralta
Rosa Cuevas-Corona
Luciano E. Di Paolo
Isabel Luotto
Cuijuan Liao
Yi-Shuang Liang
Yixin Liang
Vinisa S. Saynes
Xiaomeng Huang
Yiqi Luo
Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models
Amal Rannen-Triki
Jorg Bornschein
Marcus Hutter
András György
Alexandre Galashov
Yee Whye Teh
Michalis K. Titsias
We consider the problem of online fine-tuning the parameters of a language model at test time, also known as dynamic evaluation. While it is generally known that this approach improves overall predictive performance, especially under distributional shift between training and evaluation data, we here emphasize the perspective that online adaptation turns parameters into temporally changing states and provides a form of context-length extension with memory in weights, more in line with the concept of memory in neuroscience. We pay particular attention to the speed of adaptation (in terms of sample efficiency), sensitivity to the overall distributional drift, and the computational overhead of performing gradient computations and parameter updates. Our empirical study provides insights into when online adaptation is particularly interesting. We highlight that with online adaptation the conceptual distinction between in-context learning and fine-tuning blurs: both are methods to condition the model on previously observed tokens.
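A minimal sketch of dynamic evaluation as described above, assuming a Hugging Face-style causal language model: the text is scored chunk by chunk, and a gradient step after each chunk carries earlier context forward as "memory in weights". The model name, chunk length and learning rate are illustrative assumptions, not the authors' exact protocol.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def online_adapt(model, tokenizer, text, chunk_len=512, lr=1e-5):
    """Score a long text chunk by chunk, updating the weights after each chunk."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    model.eval()  # dropout off; gradients still flow
    losses = []
    for start in range(0, ids.size(0), chunk_len):
        chunk = ids[start:start + chunk_len].unsqueeze(0)
        if chunk.size(1) < 2:
            break
        out = model(input_ids=chunk, labels=chunk)  # labels are shifted internally
        losses.append(out.loss.item())               # log-loss before the update
        optimizer.zero_grad()
        out.loss.backward()                          # adapt on the chunk just scored
        optimizer.step()
    return losses

# Example usage (hypothetical):
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
# per_chunk_loss = online_adapt(model, tokenizer, long_document)
```

Because the loss is recorded before each update, the returned sequence reflects predictive performance with all earlier chunks already absorbed into the weights, which is the sense in which adaptation acts as a context-length extension.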
Socially Assistive Robots for patients with Alzheimer's Disease: A scoping review.
Vania Karami
Mark J. Yaffe
Genevieve Gore
Sources of richness and ineffability for phenomenally conscious states
Xu Ji
Eric Elmoznino
George Deane
Axel Constant
Jonathan Simon
Substitution of dietary monounsaturated fatty acids from olive oil for saturated fatty acids from lard increases LDL apolipoprotein B-100 fractional catabolic rate in subjects with dyslipidemia associated with insulin resistance: a randomized controlled trial.
Louis-Charles Desjardins
Francis Brière
André J Tremblay
Maryka Rancourt-Bouchard
Jean-Philippe Drouin-Chartier
Valéry Lemelin
Amélie Charest
Ernst J Schaefer
Benoit Lamarche
Patrick Couture