Publications
Abstract 6324: Antagonism-enforced braking system to enhance CAR T cell therapeutic specificity
Chimeric Antigen Receptor (CAR) T cell immunotherapy represents a breakthrough in the treatment of hematological malignancies. However, the rarity of cell surface protein targets that are specific to cancerous but not vital healthy tissue has hindered its broad application to solid tumor treatment. While new logic-gated CAR designs have shown reduced toxicity against healthy tissues, the generalizability of such approaches across tumors remains unclear. Here, we harness a universal characteristic of endogenous T cell receptors (TCRs), their ability to discriminate between self and non-self ligands through inhibition of responses against self (weak) antigens, to develop a broadly applicable method of enhancing immunotherapeutic precision. We hypothesized that this discriminatory mechanism, known as antagonism, would apply across receptors, allowing for a transfer of specificity from TCRs onto CARs. We therefore systematically mapped the responses of CAR T cells to joint TCR and CAR stimulation. We first engineered murine T cells with an ovalbumin-specific TCR to express a CAR targeting murine CD19 and discovered that the expression of a strong TCR antigen on CD19+ leukemia enhanced CAR T cell killing. Importantly, though, the presence of a weak TCR antigen antagonized CAR T cell responses, as assessed by in vitro multiplexed dynamic profiling as well as in vivo cytotoxicity. We developed a mathematical model based on cross-receptor inhibitory coupling that accurately predicted the extent of TCR/CAR antagonism across a wide range of immunological settings. This model was validated in a CD19+ B16 mouse melanoma model, showing that TCR/CAR antagonism decreased the infiltration of a tumor-reactive T cell cluster while TCR/CAR agonism enhanced it. We then applied our quantitative knowledge of TCR/CAR crosstalk to design an Antagonism-Enforced Braking System (AEBS) for CAR T cell therapy.
This was assessed in a model system using a CAR targeting the tyrosine-protein kinase erbB-2 (HER2) together with a hedgehog acyltransferase (HHAT) peptide-specific TCR that binds strongly to a mutated tumor neoantigen while retaining only weak affinity for the wild-type self-antigen on healthy tissue. We established a humanized in vivo model of CAR T cell function and found that AEBS CAR T cells maintained high anti-tumor activity against a human lung adenocarcinoma (PC9) while, notably, their anti-tissue cytotoxicity against human bronchial epithelial cells (BEAS-2B) was minimized. AEBS CAR T cells therefore sharpen the discriminatory power of synthetic anti-tumor lymphocytes. Our work highlights a novel mechanism by which TCRs can enforce CAR T cell specificity, with practical implications for the rational design of future anti-leukemia immunotherapies.
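The cross-receptor inhibitory coupling described above can be caricatured in a few lines of code. This is a deliberately simplified toy, not the authors' published model: the function `car_response`, its threshold `theta`, and the linear coupling term are all illustrative assumptions, chosen only to reproduce the qualitative behavior in the abstract (strong TCR antigens boost CAR killing, weak ones act as a brake).

```python
def car_response(car_drive, tcr_antigen_strength=None, theta=0.5, gain=1.0):
    """Toy cross-receptor coupling model (illustrative, not the published one).

    car_drive: CAR stimulation level (>= 0).
    tcr_antigen_strength: TCR ligand strength in [0, 1], or None if absent.
    theta: hypothetical threshold separating weak (antagonizing) from
           strong (agonizing) TCR antigens.
    """
    if tcr_antigen_strength is None:
        # CAR stimulation alone: baseline response
        return car_drive
    # Coupling is positive above threshold (agonism), negative below
    # it (antagonism) -- the "braking" regime exploited by AEBS.
    coupling = gain * (tcr_antigen_strength - theta)
    return max(0.0, car_drive * (1.0 + coupling))
```

Under these assumptions, a strong TCR antigen (0.9) enhances the response relative to CAR stimulation alone, while a weak self-like antigen (0.1) suppresses it, mirroring the enhancement/antagonism ordering reported in the abstract.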
Citation Format: Taisuke Kondo, François X. Bourassa, Sooraj Achar, Justyn DuSold, Pablo Cespedes, Madison Wahlsten, Audun Kvalvaag, Guillaume Gaud, Paul Love, Michael Dustin, Gregoire Altan-Bonnet, Paul François, Naomi Taylor. Antagonism-enforced braking system to enhance CAR T cell therapeutic specificity [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2024; Part 1 (Regular Abstracts); 2024 Apr 5-10; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2024;84(6_Suppl):Abstract nr 6324.
Text-to-image diffusion models have been shown to suffer from sample-level memorization, possibly reproducing near-perfect replicas of images that they are trained on, which may be undesirable. To remedy this issue, we develop the first differentially private (DP) retrieval-augmented generation algorithm that is capable of generating high-quality image samples while providing provable privacy guarantees. Specifically, we assume access to a text-to-image diffusion model trained on a small amount of public data, and design a DP retrieval mechanism to augment the text prompt with samples retrieved from a private retrieval dataset. Our differentially private retrieval-augmented diffusion model (DP-RDM) requires no fine-tuning on the retrieval dataset to adapt to another domain, and can use state-of-the-art generative models to generate high-quality image samples while satisfying rigorous DP guarantees. For instance, when evaluated on MS-COCO, our DP-RDM can generate samples with a privacy budget of
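A common pattern behind DP retrieval of this kind is clip-aggregate-noise: retrieve nearest neighbors from the private set, clip each embedding's norm to bound any single record's influence, average, and add Gaussian noise. The sketch below is a generic illustration of that pattern under stated assumptions (plain-list embeddings, hypothetical function name `dp_retrieve`, hand-picked `clip` and `sigma`), not the DP-RDM mechanism or its calibrated privacy accounting.

```python
import math
import random

def dp_retrieve(query_emb, private_db, k=4, clip=1.0, sigma=0.5):
    """Illustrative DP-style retrieval: average the k nearest private
    embeddings after L2-clipping, then add Gaussian noise.

    Clipping bounds each record's contribution to clip / k, so the
    Gaussian noise scale sigma * clip / k matches that sensitivity.
    (Real mechanisms calibrate sigma to a target privacy budget.)
    """
    nearest = sorted(private_db, key=lambda v: math.dist(query_emb, v))[:k]

    def clipped(v):
        norm = math.sqrt(sum(x * x for x in v))
        scale = min(1.0, clip / norm) if norm > 0 else 1.0
        return [x * scale for x in v]

    dim = len(query_emb)
    agg = [sum(clipped(v)[i] for v in nearest) / k for i in range(dim)]
    # Gaussian noise masks any individual record's contribution
    return [x + random.gauss(0.0, sigma * clip / k) for x in agg]
```

The noisy aggregate would then condition the public diffusion model in place of raw private samples; with `sigma=0.0` the function degenerates to a plain clipped k-NN average, which is useful for sanity checks.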
This work addresses the buyer's inspection paradox for information markets. The paradox is that buyers need to access information to determine its value, while sellers need to limit access to prevent theft. To study this, we introduce an open-source simulated digital marketplace where intelligent agents, powered by language models, buy and sell information on behalf of external participants. The central mechanism enabling this marketplace is the agents' dual capabilities: they not only have the capacity to assess the quality of privileged information but also come equipped with the ability to forget. This ability to induce amnesia allows vendors to grant temporary access to proprietary information, significantly reducing the risk of unauthorized retention while enabling agents to accurately gauge the information's relevance to specific queries or tasks. To perform well, agents must make rational decisions, strategically explore the marketplace through generated sub-queries, and synthesize answers from purchased information. Concretely, our experiments (a) uncover biases in language models leading to irrational behavior and evaluate techniques to mitigate these biases, (b) investigate how price affects demand in the context of informational goods, and (c) show that inspection and higher budgets both lead to higher quality outcomes.
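The inspect-then-forget mechanism can be sketched as a simple pattern: the privileged document lives only inside the scope of an inspection call, and only a derived appraisal survives into the agent's persistent memory. Everything here is illustrative — the class name `InspectionAgent`, the keyword-overlap scoring, and the linear valuation are placeholder assumptions, not the paper's LLM-based implementation.

```python
class InspectionAgent:
    """Toy 'amnesic' buyer: appraises a document it will not retain."""

    def __init__(self):
        # Persistent memory holds appraisals only, never raw goods.
        self.memory = []

    def inspect(self, query, document):
        # Scratch context: the document exists only within this call.
        # Stand-in for an LLM relevance judgment: count query words
        # that appear in the document.
        score = sum(word in document.lower() for word in query.lower().split())
        # "Forgetting": store the score, discard the document itself.
        self.memory.append({"query": query, "score": score})
        return score

    def decide(self, query, document, price, value_per_hit=1.0):
        # Buy iff the appraised value covers the asking price.
        return self.inspect(query, document) * value_per_hit >= price
```

The seller's risk of unauthorized retention is modeled by what `memory` can contain: appraisals are retained, the proprietary text is not, so temporary access leaks only the buyer's own valuation.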
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
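One standard way to quantify claims like "task information lives in a small fraction of the population variance" is to ask how much of the decoder's readout axis lies within the top principal components of population activity. The helper below is a generic sketch of that analysis — the function name `task_variance_fraction` and the single-axis decoder are simplifying assumptions, not the paper's actual methods.

```python
import numpy as np

def task_variance_fraction(activity, decoder_w, n_modes=2):
    """Fraction of a (unit-normalized) decoder axis captured by the
    top principal modes of population activity.

    activity: (n_samples, n_neurons) array of neural activity.
    decoder_w: (n_neurons,) decoder readout weights.
    Returns a value in [0, 1]: ~1 means the task-relevant axis lies
    in the high-variance modes; ~0 means it occupies low-variance modes.
    """
    X = activity - activity.mean(axis=0)
    # Right singular vectors = principal axes of the population
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    top = vt[:n_modes]                       # (n_modes, n_neurons)
    w = decoder_w / np.linalg.norm(decoder_w)
    proj = top @ w                           # projection onto each mode
    return float(proj @ proj)
```

The abstract's finding corresponds to this fraction being small for the task-relevant axis: the information is readable by the decoder yet carried by modes holding little of the total population variance.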
Recent progress in large language models (LLMs) has led to their widespread adoption in various domains. However, these advancements have also introduced additional safety risks and raised concerns regarding their detrimental impact on already marginalized populations. Despite growing mitigation efforts to develop safety safeguards, such as supervised safety-oriented fine-tuning and safe reinforcement learning from human feedback, multiple concerns regarding both safety and ingrained biases in these models remain. Furthermore, previous work has demonstrated that models optimized for safety often display exaggerated safety behaviors, such as a tendency to refrain from responding to certain requests as a precautionary measure. A clear trade-off between the helpfulness and safety of these models has thus been documented in the literature. In this paper, we further investigate the effectiveness of safety measures by evaluating models on already mitigated biases. Using the case of Llama 2 as an example, we illustrate how LLMs' safety responses can still encode harmful assumptions. To do so, we create a set of non-toxic prompts, which we then use to evaluate Llama models. Through our new taxonomy of LLMs' responses to users, we observe that the safety/helpfulness trade-offs are more pronounced for certain demographic groups, which can lead to quality-of-service harms for marginalized populations.