
Linfeng Du

PhD - McGill
Principal supervisor
Research Topics
Large language models (LLM)
Retrieval-augmented LLMs
Time series forecasting

Publications

Support-Proximity Augmented Diffusion Estimation for Offline Black-Box Optimization
Yonghan Yang
Bowei He
Can Chen
Xue Liu
Offline black-box optimization aims to discover novel designs with high property scores using only a static dataset, a task fundamentally challenged by the out-of-distribution (OOD) extrapolation problem. Existing approaches typically bifurcate into inverse methods, which struggle with the ill-posed nature of mapping scores to designs, and forward methods, which often lack the distributional expressivity to quantify uncertainty effectively. In this work, we propose SPADE (Support-Proximity Augmented Diffusion Estimation), a novel framework that reimagines forward surrogate modeling through the lens of conditional generative modeling. SPADE models the forward likelihood…
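The OOD extrapolation problem the abstract refers to can be illustrated with a toy sketch (this is not the SPADE method itself; the quadratic-exponential score function and the linear surrogate below are purely hypothetical): a forward surrogate fit on a static dataset can assign inflated scores to designs far outside the data support, so naively maximizing it selects a poor design.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_score(x):
    # Hypothetical ground-truth property: peaks at x = 1, decays elsewhere.
    return np.exp(-(x - 1.0) ** 2)

# Static offline dataset covers only the region x in [0, 0.8],
# where the true score happens to be increasing.
x_train = rng.uniform(0.0, 0.8, size=50)
y_train = true_score(x_train)

# Forward surrogate: a simple linear regression on the offline data.
slope, intercept = np.polyfit(x_train, y_train, deg=1)
surrogate = np.poly1d([slope, intercept])

# Maximizing the surrogate over a wider design range picks an
# out-of-distribution design whose predicted score is inflated,
# even though its true score is near zero.
x_grid = np.linspace(0.0, 3.0, 301)
x_best = x_grid[np.argmax(surrogate(x_grid))]
print(x_best, surrogate(x_best), true_score(x_best))
```

Methods like the one described above must therefore penalize or down-weight candidates far from the data support rather than trust raw surrogate predictions.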
Optimizing User Profiles via Contextual Bandits for Retrieval-Augmented LLM Personalization
Zichen Zhao
Fuyuan Lyu
Xiuying Chen
Jikun Kang
Xue Liu
Large Language Models (LLMs) excel at general-purpose tasks, yet adapting their responses to individual users remains challenging. Retrieval augmentation provides a lightweight alternative to fine-tuning by conditioning LLMs on user history records, and existing approaches typically select these records based on semantic relevance. We argue that relevance serves as an unreliable proxy for utility: a record may be semantically similar to a query yet fail to improve generation quality or even degrade it due to redundancy or conflicting information. To bridge this gap, we propose PURPLE, a contextual bandit framework that oPtimizes UseR Profiles for Llm pErsonalization. In contrast to a greedy selection of the most relevant records, PURPLE treats profile construction as a set generation process and utilizes a Plackett-Luce ranking model to capture complex inter-record dependencies. By training with dense feedback provided by the likelihood of the reference response, our method aligns retrieval directly with generation quality. Extensive experiments on nine personalization tasks demonstrate that PURPLE consistently outperforms strong heuristic and retrieval-augmented baselines in both effectiveness and efficiency, establishing a principled and scalable solution for optimizing user profiles.
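The Plackett-Luce model mentioned in the abstract can be sketched concisely (a minimal illustration, not the released PURPLE code; the per-record utility scores below are hypothetical). Under Plackett-Luce, a ranking of records is built sequentially: at each step a record is chosen with probability proportional to the exponential of its score among the records not yet placed, and the log-likelihood of a full ranking is differentiable in the scores, which is what allows training against a downstream signal such as the likelihood of the reference response.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_plackett_luce(scores, rng):
    """Sample a full ranking (list of indices) from a Plackett-Luce model."""
    remaining = list(range(len(scores)))
    ranking = []
    while remaining:
        logits = np.array([scores[i] for i in remaining])
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        pick = rng.choice(len(remaining), p=probs)
        ranking.append(remaining.pop(pick))
    return ranking

def log_prob_plackett_luce(scores, ranking):
    """Log-likelihood of a given ranking under the Plackett-Luce model."""
    scores = np.asarray(scores, dtype=float)
    logp = 0.0
    remaining = list(ranking)
    while remaining:
        logits = scores[remaining]
        # log softmax of the chosen (first remaining) record, computed stably
        logp += logits[0] - logits.max() - np.log(np.sum(np.exp(logits - logits.max())))
        remaining.pop(0)
    return logp

scores = [2.0, 0.5, 1.0, -1.0]  # hypothetical learned per-record utilities
ranking = sample_plackett_luce(scores, rng)
print(ranking, log_prob_plackett_luce(scores, ranking))
```

The sequential construction is what lets the model capture inter-record dependencies: the probability of placing a record depends on which records have already been selected, unlike independent top-k selection by relevance.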