Publications
On the Challenges and Opportunities in Generative AI
We investigate the robustness of Neural Ratio Estimators (NREs) and Neural Posterior Estimators (NPEs) to distributional shifts in the context of measuring the abundance of dark matter subhalos using strong gravitational lensing data. While these data-driven inference frameworks can be accurate on test data from the same distribution as the training sets, in real applications, it is expected that simulated training data and true observational data will differ in their distributions. We explore the behavior of a trained NRE and trained sequential NPEs to estimate the population-level parameters of dark matter subhalos from a large sample of images of strongly lensed galaxies with test data presenting distributional shifts within and beyond the bounds of the training distribution in the nuisance parameters (e.g., the background source morphology). While our results show that NREs and NPEs perform well when tested perfectly in distribution, they exhibit significant biases when confronted with slight deviations from the examples seen in the training distribution. This indicates the necessity for caution when applying NREs and NPEs to real astrophysical data, where high-dimensional underlying distributions are not perfectly known.
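The failure mode described above can be illustrated with a deliberately simple stand-in for a neural estimator. In this hedged sketch (not the paper's code or model), a least-squares regressor plays the role of the trained estimator: it is fit on simulations where a nuisance parameter is centred at zero, then evaluated on test data whose nuisance distribution has shifted. The estimator stays nearly unbiased in distribution but acquires a systematic bias under the shift.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, nu_mean, n):
    # Observation depends on the parameter of interest (theta) and a
    # nuisance parameter nu; extra noise mimics measurement error.
    nu = rng.normal(nu_mean, 0.1, size=n)
    return theta + nu + rng.normal(0.0, 0.05, size=n)

# Training set: nuisance centred at 0, matching the simulator's assumptions.
theta_train = rng.uniform(-1.0, 1.0, size=5000)
x_train = simulate(theta_train, nu_mean=0.0, n=5000)

# Stand-in for the trained estimator: least-squares linear fit x -> theta.
A = np.vstack([x_train, np.ones_like(x_train)]).T
w, b = np.linalg.lstsq(A, theta_train, rcond=None)[0]

# In-distribution test (true theta = 0): estimates are nearly unbiased.
x_id = simulate(np.zeros(2000), nu_mean=0.0, n=2000)
bias_id = np.mean(w * x_id + b)

# Shifted nuisance (mean moved to 0.5): a systematic bias appears even
# though the estimator was "accurate" on data like its training set.
x_ood = simulate(np.zeros(2000), nu_mean=0.5, n=2000)
bias_ood = np.mean(w * x_ood + b)

print(f"in-distribution bias: {bias_id:+.3f}")
print(f"shifted-nuisance bias: {bias_ood:+.3f}")
```

The same mechanism, in far higher dimensions, is what makes unmodelled shifts in source morphology hazardous for NRE/NPE inference.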
RadiSeq: a single- and bulk-cell whole-genome DNA sequencing simulator for radiation-damaged cell models
Felix Mathew
Luc Galarneau
J. Kildea
Objective To build and validate a simulation framework to perform single-cell and bulk-cell whole genome sequencing simulation of radiation-exposed Monte Carlo cell models to assist radiation genomics studies. Approach Sequencing the genomes of radiation-damaged cells can provide useful insight into radiation action for radiobiology research. However, carrying out post-irradiation sequencing experiments can often be challenging, expensive, and time-consuming. Although computational simulations have the potential to provide solutions to these experimental challenges, and aid in designing optimal experiments, the absence of tools currently limits such application. Monte Carlo toolkits exist to simulate radiation exposures of cell models but there are no tools to simulate single- and bulk-cell sequencing of cell models containing radiation-damaged DNA. Therefore, we aimed to develop a Monte Carlo simulation framework to address this gap by designing a tool capable of simulating sequencing processes for radiation-damaged cells. Main Results We developed RadiSeq – a multi-threaded whole-genome DNA sequencing simulator written in C++. RadiSeq can be used to simulate Illumina sequencing of radiation-damaged cell models produced by Monte Carlo simulations. RadiSeq has been validated through comparative analysis, where simulated data were matched against experimentally obtained data, demonstrating reasonable agreement between the two. Additionally, it comes with numerous features designed to closely resemble actual whole-genome sequencing. RadiSeq is also highly customizable with a single input parameter file. Significance RadiSeq enables the research community to perform complex simulations of radiation-exposed DNA sequencing, supporting the optimization, planning, and validation of costly and time-intensive radiation biology experiments. This framework provides a powerful tool for advancing radiation genomics research.
Foundation models based on large language models (LLMs) have shown great success in handling various tasks and modalities. However, adapting these models for general-purpose audio-language tasks is challenging due to differences in acoustic environments and task variations. In this work, we introduce LiSTEN (Learning Soft Token Embeddings for Neural Audio LLMs), a framework for adapting LLMs to speech and audio tasks. LiSTEN uses a dynamic prompt selection strategy with learnable key-value pairs, allowing the model to balance general and task-specific knowledge while avoiding overfitting in a multitask setting. Our approach reduces dependence on large-scale ASR or captioning datasets, achieves competitive performance with fewer trainable parameters, and simplifies training by using a single-stage process. Additionally, LiSTEN enhances interpretability by analyzing the diversity and overlap of selected prompts across different tasks.
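The core mechanism, dynamic selection from a pool of learnable key-value prompt pairs, can be sketched as follows. This is an illustrative reconstruction under assumed shapes, not LiSTEN's implementation: keys and values are random here, whereas in training they would be learned jointly with the model, and the input embedding would come from an audio encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs, top_k = 16, 8, 3   # embedding dim, prompt-pool size, prompts kept

# Learnable pool of key-value pairs (randomly initialised in this sketch;
# gradient descent would update both during multitask training).
keys = rng.normal(size=(n_pairs, d))
values = rng.normal(size=(n_pairs, d))   # soft prompt embeddings

def select_prompts(input_embedding):
    """Pick the top-k soft prompts whose keys best match the input."""
    q = input_embedding / np.linalg.norm(input_embedding)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    scores = k @ q                       # cosine similarity per key
    chosen = np.argsort(scores)[-top_k:]
    return values[chosen]                # (top_k, d) prompt tokens to prepend

# Each input selects its own prompt subset, so general and task-specific
# knowledge can coexist in one shared pool without a task identifier.
emb = rng.normal(size=d)
prompts = select_prompts(emb)
print(prompts.shape)
```

Because different tasks retrieve overlapping but distinct subsets, inspecting which keys fire per task is what gives the interpretability analysis mentioned above.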
The online information ecosystem enables influence campaigns of unprecedented scale and impact. We urgently need empirically grounded approaches to counter the growing threat of malicious campaigns, now amplified by generative AI. But developing defenses in real-world settings is impractical. Social-system simulations with agents modelled using Large Language Models (LLMs) are a promising alternative approach and a growing area of research. However, existing simulators lack features needed to capture the complex information-sharing dynamics of platform-based social networks. To bridge this gap, we present SandboxSocial, a new simulator that includes several key innovations, notably: (1) a virtual social media platform (modelled as Mastodon and mirrored in an actual Mastodon server) that enables a realistic setting in which agents interact; (2) an adapter that uses real-world user data to create more grounded agents and social media content; and (3) multi-modal capabilities that enable our agents to interact using both text and images---just as humans do on social media. We make the simulator more useful to researchers by providing measurement and analysis tools that track simulation dynamics and compute evaluation metrics to compare experimental results.
2025-08-15
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (published)
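The simulate-then-distribute loop such platforms rely on can be sketched minimally. Everything below is a hypothetical skeleton, not SandboxSocial's code: the `Agent.act` method samples canned behaviour where a real system would query an LLM, and the "platform" is just a shared timeline with naive fan-out instead of Mastodon.

```python
import random

random.seed(0)

class Agent:
    """Toy stand-in for an LLM-backed agent."""
    def __init__(self, name):
        self.name = name
        self.feed = []                    # content received from others

    def act(self):
        # Either reshare something from the feed or post new content.
        if self.feed and random.random() < 0.5:
            return ("reshare", random.choice(self.feed))
        return ("post", f"update from {self.name}")

def step(agents, timeline):
    """One simulation tick: every agent acts, the platform distributes."""
    for agent in agents:
        kind, content = agent.act()
        timeline.append((agent.name, kind, content))
        for other in agents:              # naive broadcast fan-out
            if other is not agent:
                other.feed.append(content)

agents = [Agent(f"agent{i}") for i in range(3)]
timeline = []
for _ in range(5):
    step(agents, timeline)
print(len(timeline))                      # 3 agents x 5 ticks = 15 events
```

The measurement tools described in the abstract would operate on exactly this kind of event timeline, computing reach and spread metrics over it.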
The proliferation of misinformation poses a significant threat to society, exacerbated by the capabilities of generative AI. This demo paper introduces Veracity, an open-source AI system designed to empower individuals to combat misinformation through transparent and accessible fact-checking. Veracity leverages the synergy between Large Language Models (LLMs) and web retrieval agents to analyze user-submitted claims and provide grounded veracity assessments with intuitive explanations. Key features include multilingual support, numerical scoring of claim veracity, and an interactive interface inspired by familiar messaging applications. This paper will showcase Veracity's ability to not only detect misinformation but also explain its reasoning, fostering media literacy and promoting a more informed society.
2025-08-15
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (published)
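A claim-checking pipeline of this shape, retrieve evidence, then judge the claim against it and return a numerical score with an explanation, can be outlined as below. The `retrieve` and `judge` callables are hypothetical interfaces standing in for the web-retrieval agent and the LLM; none of this reflects Veracity's actual API.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    claim: str
    score: float        # 0.0 (false) .. 1.0 (true): the numerical veracity score
    explanation: str

def check_claim(claim, retrieve, judge):
    """Pipeline skeleton: gather evidence, then assess the claim against it."""
    evidence = retrieve(claim)
    score, explanation = judge(claim, evidence)
    return Assessment(claim, score, explanation)

# Toy stand-ins so the skeleton runs end to end. A real judge would be an
# LLM prompted with the claim and retrieved passages.
def toy_retrieve(claim):
    return ["The Eiffel Tower is in Paris, France."]

def toy_judge(claim, evidence):
    supported = any(claim.lower() in doc.lower() for doc in evidence)
    if supported:
        return 0.9, "supported by retrieved text"
    return 0.2, "no support found in retrieved text"

result = check_claim("the eiffel tower is in paris", toy_retrieve, toy_judge)
print(result.score, "-", result.explanation)
```

Returning the evidence-grounded explanation alongside the score is what makes the assessment transparent rather than a bare verdict.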
The leading AI companies are increasingly focused on building generalist AI agents -- systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. We discuss how these risks arise from current AI training methods. Indeed, various scenarios and experiments have demonstrated the possibility of AI agents engaging in deception or pursuing goals that were not specified by human operators and that conflict with human interests, such as self-preservation. Following the precautionary principle, we see a strong need for safer, yet still useful, alternatives to the current agency-driven trajectory. Accordingly, we propose as a core building block for further advances the development of a non-agentic AI system that is trustworthy and safe by design, which we call Scientist AI. This system is designed to explain the world from observations, as opposed to taking actions in it to imitate or please humans. It comprises a world model that generates theories to explain data and a question-answering inference machine. Both components operate with an explicit notion of uncertainty to mitigate the risks of overconfident predictions. In light of these considerations, a Scientist AI could be used to assist human researchers in accelerating scientific progress, including in AI safety. In particular, our system can be employed as a guardrail against AI agents that might be created despite the risks involved. Ultimately, focusing on non-agentic AI may enable the benefits of AI innovation while avoiding the risks associated with the current trajectory. We hope these arguments will motivate researchers, developers, and policymakers to favor this safer path.
It is increasingly recognized that many behavioral relationships are interwoven with inherent variations in human populations. Presently, there is no clarity in the biomedical community on which sources of population variation are most dominant. The recent advent of population-scale cohorts like the Adolescent Brain Cognitive Development℠ Study (ABCD Study®) is now offering unprecedented depth and width of phenotype profiling that potentially explains interfamily differences. Here, we leveraged a deep learning framework (conditional variational autoencoder) on the totality of the ABCD Study® phenome (8,902 candidate phenotypes in 11,875 participants) to identify and characterize major sources of population stratification. 80% of the top 5 sources of explanatory stratifications were driven by distinct combinations of 202 available socioeconomic status (SES) measures; each in conjunction with a unique set of non-overlapping social and environmental factors. Several sources of variation across this cohort flagged geographies marked by material poverty interlocked with mental health and behavioral correlates. Deprivation emerged in another top stratification in relation to urbanicity and its ties to immigrant and racial and ethnic minoritized groups. Conversely, two other major sources of population variation were both driven by indicators of privilege: one highlighted measures of access to educational opportunity and income tied to healthy home environments and good behavior, the other profiled individuals of European ancestry leading advantaged lifestyles in desirable neighborhoods in terms of location and air quality. Overall, the disclosed social stratifications underscore the importance of treating SES as a multidimensional construct and recognizing its ties into social determinants of health.