Publications
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting
Large pre-trained models have proven to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks. We propose MAPL, a simple and parameter-efficient method that reuses frozen pre-trained unimodal models and leverages their strong generalization capabilities in multimodal vision-language (VL) settings. MAPL learns a lightweight mapping between the representation spaces of unimodal models using aligned image-text data, and can generalize to unseen VL tasks from just a few in-context examples. The small number of trainable parameters makes MAPL effective at low-data and in-domain learning. Moreover, MAPL's modularity enables easy extension to other pre-trained models. Extensive experiments on several visual question answering and image captioning benchmarks show that MAPL achieves superior or competitive performance compared to similar methods while training orders of magnitude fewer parameters. MAPL can be trained in just a few hours using modest computational resources and public datasets. We release our code and pre-trained model weights at https://github.com/oscmansan/mapl.
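The core idea, a small trainable mapper that projects frozen vision-encoder features into the input embedding space of a frozen language model, can be sketched as follows. This is an illustrative assumption of the setup, not the paper's actual architecture or configuration; all dimensions, layer sizes, and the `MappingNetwork` name are hypothetical.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Hypothetical sketch of a lightweight mapper that turns frozen
    vision-encoder features into a sequence of soft prompt tokens for a
    frozen language model. Sizes are illustrative, not MAPL's actual ones."""

    def __init__(self, vision_dim=768, lm_dim=1024,
                 num_prefix_tokens=8, hidden_dim=256):
        super().__init__()
        self.num_prefix_tokens = num_prefix_tokens
        self.lm_dim = lm_dim
        # Down-project then up-project: keeps the trainable parameter count
        # tiny relative to the frozen vision and language backbones.
        self.mapper = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, num_prefix_tokens * lm_dim),
        )

    def forward(self, image_features):
        # image_features: (batch, vision_dim) pooled output of a frozen encoder
        prefix = self.mapper(image_features)
        # Reshape into a token sequence to prepend to the LM's text embeddings
        return prefix.view(-1, self.num_prefix_tokens, self.lm_dim)

# Usage: map pooled image features to prefix tokens for the language model
mapper = MappingNetwork()
image_feats = torch.randn(2, 768)          # stand-in for frozen encoder output
prefix_tokens = mapper(image_feats)
print(prefix_tokens.shape)                  # torch.Size([2, 8, 1024])
```

Only the mapper's parameters would be trained on aligned image-text data; both backbones stay frozen, which is what keeps the approach parameter-efficient.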
2023-05-01
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (published)
Protective effectiveness of previous SARS-CoV-2 infection and hybrid immunity against the omicron variant and severe disease: a systematic review and meta-regression
Testing and verification of autonomous systems is critically important. In the context of the SBFT 2023 CPS testing tool competition, we present our tool RIGAA for generating virtual roads to test an autonomous vehicle lane-keeping assist system. RIGAA combines reinforcement learning with evolutionary search to generate test scenarios. It achieved the second-highest final score among the six submitted tools.
2023-05-01
2023 IEEE/ACM International Workshop on Search-Based and Fuzz Testing (SBFT) (published)
Chimeric antigen receptor (CAR) T cells are created by extracting T cells from a cancer patient, engineering them to express a CAR targeting a tumor-specific molecule, and then reintroducing them into the patient. However, a patient's T cells contain their own endogenous T cell receptors (TCRs), which could potentially interact with the exogenous CAR inserted into the cell. In this study, we examine how TCR and CAR signals interact upon CAR-T activation. We show that weak TCR stimulation can reduce (antagonize) or increase the overall CAR-T response, both in vitro and in vivo, across multiple tumor models, in both mouse and human T cells. We further show that the behavior of these TCR/CAR interactions can be manipulated by changing various characteristics of the TCR, CAR, and associated ligands. While this behavior is complex, we show that it can be described by a single mathematical model based on the adaptive kinetic proofreading scheme of ligand discrimination. We conclude by presenting potential applications for cancer immunotherapy.
Intramural Research Program of the National Cancer Institute