Combining Confidence Elicitation and Sample-based Methods for Uncertainty Quantification in Misinformation Mitigation
Mauricio Rivera
Kellin Pelrine
Comparing GPT-4 and Open-Source Language Models in Misinformation Mitigation
Tyler Vergho
Kellin Pelrine
Recent large language models (LLMs) have been shown to be effective for misinformation detection. However, the choice of LLMs for experiments varies widely, leading to uncertain conclusions. In particular, GPT-4 is known to be strong in this domain, but it is closed source, potentially expensive, and can show instability between different versions. Meanwhile, alternative LLMs have given mixed results. In this work, we show that Zephyr-7b presents a consistently viable alternative, overcoming key limitations of commonly used approaches like Llama-2 and GPT-3.5. This provides the research community with a solid open-source option and shows open-source models are gradually catching up on this task. We then highlight how GPT-3.5 exhibits unstable performance, such that this very widely used model could provide misleading results in misinformation detection. Finally, we validate new tools, including approaches to structured output and the latest version of GPT-4 (Turbo), showing they do not compromise performance, thus unlocking them for future research and potentially enabling more complex pipelines for misinformation mitigation.
Laplacian Change Point Detection for Single and Multi-view Dynamic Graphs
Shenyang Huang
Samy Coulombe
Yasmeen Hitti
Dynamic graphs are rich data structures that are used to model complex relationships between entities over time. In particular, anomaly detection in temporal graphs is crucial for many real-world applications such as intrusion identification in network systems, detection of ecosystem disturbances, and detection of epidemic outbreaks. In this article, we focus on change point detection in dynamic graphs and address three main challenges associated with this problem: (i) how to compare graph snapshots across time, (ii) how to capture temporal dependencies, and (iii) how to combine different views of a temporal graph. To solve these challenges, we first propose Laplacian Anomaly Detection (LAD), which uses the spectrum of the graph Laplacian as a low-dimensional embedding of the graph structure at each snapshot. LAD explicitly models short-term and long-term dependencies by applying two sliding windows. Next, we propose MultiLAD, a simple and effective generalization of LAD to multi-view graphs and the first change point detection method for multi-view dynamic graphs. It aggregates the singular values of the normalized graph Laplacian from different views through the scalar power mean operation. Through extensive synthetic experiments, we show that (i) LAD and MultiLAD are accurate and outperform state-of-the-art baselines and their multi-view extensions by a large margin, (ii) MultiLAD's advantage over contenders increases significantly when additional views are available, and (iii) MultiLAD is highly robust to noise from individual views. On five real-world dynamic graphs, we demonstrate that LAD and MultiLAD identify significant events as top anomalies, such as the implementation of government COVID-19 interventions that impacted population mobility in multi-view traffic networks.
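The core computations described in the abstract (a spectral embedding of each snapshot, a sliding-window comparison to produce an anomaly score, and power-mean aggregation across views) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the embedding dimension k, window size, and power p are placeholder choices, and the function names are hypothetical.

```python
import numpy as np

def laplacian_spectrum(adj, k):
    """Top-k singular values of the unnormalized Laplacian L = D - A,
    unit-normalized, as a low-dimensional embedding of one snapshot."""
    lap = np.diag(adj.sum(axis=1)) - adj
    s = np.linalg.svd(lap, compute_uv=False)  # sorted descending
    v = s[:k]
    return v / (np.linalg.norm(v) + 1e-12)

def lad_scores(snapshots, k=4, window=3):
    """Anomaly score per snapshot: 1 - cosine similarity between the current
    spectral embedding and the mean embedding over a short sliding window.
    (LAD itself uses two windows, short- and long-term; one is shown here.)"""
    embs = [laplacian_spectrum(a, k) for a in snapshots]
    scores = []
    for t, e in enumerate(embs):
        past = embs[max(0, t - window):t]
        if not past:
            scores.append(0.0)  # no context for the first snapshot
            continue
        ctx = np.mean(past, axis=0)
        ctx /= np.linalg.norm(ctx) + 1e-12
        scores.append(1.0 - float(e @ ctx))
    return scores

def power_mean(view_embeddings, p=-1):
    """MultiLAD-style aggregation: elementwise scalar power mean of the
    (positive) spectral embeddings from different views."""
    x = np.stack(view_embeddings)
    return np.mean(x**p, axis=0) ** (1.0 / p)
```

For example, scoring a sequence of three identical ring graphs followed by a complete graph assigns the largest anomaly score to the final snapshot, where the structure changes.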
Personalized inference for neurostimulation with meta-learning: a case study of vagus nerve stimulation
Ximeng Mao
Yao-Chuan Chang
Stavros Zanos
DINOv2: Learning Robust Visual Features without Supervision
Maxime Oquab
Timothée Darcet
Théo Moutakanni
Huy V. Vo
Marc Szafraniec
Vasil Khalidov
Pierre Fernandez
Daniel HAZIZA
Francisco Massa
Alaaeldin El-Nouby
Mahmoud Assran
Nicolas Ballas
Wojciech Galuba
Russell Howes
Po-Yao Huang
Shang-Wen Li
Ishan Misra
Vasu Sharma
Gabriel Synnaeve
Hu Xu
Huijiao Xu
Herve Jegou
Julien Mairal
Patrick Labatut
Armand Joulin
Piotr Bojanowski
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP, on most of the benchmarks at the image and pixel levels.
Signatures of Co-evolution and Co-regulation in the CYP3A and CYP4F Genes in Humans
Alex Richard-St-Hilaire
Isabel Gamache
Justin Pelletier
Jean-Christophe Grenier
Raphael Poujol
Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies
Sébastien Lachapelle
Pau Rodriguez
Yash Sharma
Katie Everett
Rémi LE PRIOL
Alexandre Lacoste
The Past, Present, and Future of the Brain Imaging Data Structure (BIDS)
Russell A. Poldrack
Christopher J. Markiewicz
Stefan Appelhoff
Yoni K. Ashar
Tibor Auer
Sylvain Baillet
Shashank Bansal
Leandro Beltrachini
Christian G. Benar
Giacomo Bertazzoli
Suyash Bhogawar
Ross W. Blair
Marta Bortoletto
Mathieu Boudreau
Teon L. Brooks
Vince D. Calhoun
Filippo Maria Castelli
Patricia Clement
Alexander L. Cohen
Sasha D’Ambrosio
Gilles de Hollander
María de la Iglesia-Vayá
Alejandro de la Vega
Arnaud Delorme
Orrin Devinsky
Dejan Draschkow
Eugene Paul Duff
Elizabeth DuPre
Eric Earl
Oscar Esteban
Franklin W. Feingold
Guillaume Flandin
Anthony Galassi
Giuseppe Gallitto
Melanie Ganz
Rémi Gau
James Gholam
Sulagna Dia Ghosh
Satrajit S. Ghosh
Alessio Giacomel
Ashley G. Gillman
Padraig Gleeson
Alexandre Gramfort
Samuel Guay
Giacomo Guidali
Yaroslav O. Halchenko
Daniel A. Handwerker
Nell Hardcastle
Peer Herholz
Dora Hermes
Christopher J. Honey
Robert B. Innis
Horea-Ioan Ioanas
Andrew Jahn
Agah Karakuzu
David B. Keator
Gregory Kiar
Balint Kincses
Angela R. Laird
Jonathan C. Lau
Alberto Lazari
Jon Haitz Legarreta
Adam Li
Xiangrui Li
Bradley C. Love
Hanzhang Lu
Eleonora Marcantoni
Camille Maumet
Giacomo Mazzamuto
Steven L. Meisler
Mark Mikkelsen
Henk Mutsaerts
Thomas E. Nichols
Aki Nikolaidis
Gustav Nilsonne
Guiomar Niso
Martin Norgaard
Thomas W. Okell
Robert Oostenveld
Eduard Ort
Patrick J. Park
Mateusz Pawlik
Cyril R. Pernet
Franco Pestilli
Jan Petr
Christophe Phillips
Jean-Baptiste Poline
Luca Pollonini
Pradeep Reddy Raamana
Petra Ritter
Gaia Rizzo
Kay A. Robbins
Alexander P. Rockhill
Christine Rogers
Ariel Rokem
Chris Rorden
Alexandre Routier
Jose Manuel Saborit-Torres
Taylor Salo
Michael Schirner
Robert E. Smith
Tamas Spisak
Julia Sprenger
Nicole C. Swann
Martin Szinte
Sylvain Takerkart
Bertrand Thirion
Adam G. Thomas
Sajjad Torabian
Gael Varoquaux
Bradley Voytek
Julius Welzel
Martin Wilson
Tal Yarkoni
Krzysztof J. Gorgolewski
DyG2Vec: Efficient Representation Learning for Dynamic Graphs
Mohammad Alomrani
Mahdi Biparva
Yingxue Zhang
Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns. However, previous works often rely on complex memory modules or inefficient random walk methods to construct temporal representations. To address these limitations, we present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings. Moreover, we propose a joint-embedding architecture using non-contrastive SSL to learn rich temporal embeddings without labels. Experimental results on 7 benchmark datasets indicate that, on average, our model outperforms SoTA baselines on the future link prediction task by 4.23% in the transductive setting and 3.30% in the inductive setting, while requiring 5-10x less training/inference time. Lastly, different aspects of the proposed framework are investigated through experimental analysis and ablation studies. The code is publicly available at https://github.com/huawei-noah/noah-research/tree/master/graph_atlas.
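The window-based subgraph sampling mentioned in the abstract can be illustrated with a minimal sketch. The edge-list format and function name here are hypothetical placeholders, not the paper's API: the idea is simply that, instead of maintaining per-node memory, the encoder is fed only the edges that fall inside a fixed recent time window before the query time.

```python
from bisect import bisect_left

def window_subgraph(edges, t_query, window):
    """Select the temporal subgraph fed to the encoder: all edges whose
    timestamp falls in [t_query - window, t_query). Assumes `edges` is a
    list of (src, dst, t) tuples sorted by timestamp t."""
    times = [e[2] for e in edges]
    lo = bisect_left(times, t_query - window)
    hi = bisect_left(times, t_query)
    return edges[lo:hi]
```

Because the edge list is sorted by time, each sample is two binary searches plus a slice, which is consistent with the efficiency argument above: the input to the attention encoder stays bounded regardless of how long the graph's history grows.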
CO emission predictions in municipal solid waste incineration based on reduced depth features and long short-term memory optimization
Runyu Zhang
Heng Xia
Xiaotong Pan
Wen Yu
JunFei Qiao