
Nour Shaheen

Lab Representative
Master's Research
Research Topics
Deep Learning
Foundation Models
Natural Language Processing

Biography

Nour is in her second year of her research Master's at Polytechnique Montreal, under the supervision of Prof. Amine Mhedhbi and Prof. Sarath Chandar. Her research focuses on tabular foundation models and model merging methods. She is very passionate about science, good coffee, and working in an environment where people genuinely enjoy being there.

Publications

Towards Optimizing SQL Generation via LLM Routing
Mohammadhossein Malekpour
Text-to-SQL enables users to interact with databases through natural language, simplifying access to structured data. Although highly capable large language models (LLMs) achieve strong accuracy for complex queries, they incur unnecessary latency and dollar cost for simpler ones. In this paper, we introduce the first LLM routing approach for Text-to-SQL, which dynamically selects the most cost-effective LLM capable of generating accurate SQL for each query. We present two routing strategies (score- and classification-based) that achieve accuracy comparable to the most capable LLM while reducing costs. We design the routers for ease of training and efficient inference. In our experiments, we highlight a practical and explainable accuracy-cost trade-off on the BIRD dataset.
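The classification-based routing idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model names, cost figures, and the toy difficulty heuristic (standing in for a trained classifier) are all assumptions for the sake of the example.

```python
# Sketch of classification-based LLM routing for Text-to-SQL.
# A router predicts each question's difficulty, then dispatches it to the
# cheapest model expected to still produce correct SQL.

def classify_difficulty(question: str) -> str:
    """Toy stand-in for a trained router: long or multi-clause questions
    are treated as 'hard', everything else as 'easy'."""
    hard_markers = ("join", "group", "average", "per ", "most")
    q = question.lower()
    if len(question.split()) > 15 or any(m in q for m in hard_markers):
        return "hard"
    return "easy"

# Illustrative (model, relative cost) table: easy queries go to a cheaper
# model, hard ones to the most capable (and most expensive) model.
MODELS = {
    "easy": ("small-llm", 1.0),
    "hard": ("large-llm", 10.0),
}

def route(question: str) -> tuple[str, float]:
    """Return the (model, relative cost) chosen for this question."""
    return MODELS[classify_difficulty(question)]

model, cost = route("List all customer names")          # cheap model
model2, cost2 = route("Show the average order value per region")  # capable model
```

In the paper's setting the router itself is a lightweight trained model, so the routing decision adds negligible latency compared with the savings from skipping the large LLM on simple queries.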