James Clark

Associate Academic Member
Full Professor, McGill University
Research Topics
Applied AI
Computational Photography
Computer Vision
Generative Models

Biography

James Clark is a professor in the Department of Electrical and Computer Engineering at McGill University and a member (and former director) of the McGill Research Centre for Intelligent Machines. He is also an associate member of the Bensadoun School of Retail Management, where he is co-director of the Retail Innovation Lab. He is an associate academic member of Mila – Quebec Artificial Intelligence Institute and a collaborating associate member of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). He is a co-principal investigator for the FQRNT Strategic Cluster REPARTI – “Systèmes cyberphysiques et intelligence machine matérialisée” (Cyberphysical Systems and Embodied Machine Intelligence), where he leads the “Perception” axis.

Clark’s current research activities include the development of efficient machine learning methods for edge devices, applications of AI to retail, image quality assessment for automotive display design, and the study of visual attention in 3D environments.

In 2014, in recognition of his accomplishments throughout his research career, he was presented with the Canadian Image Processing and Pattern Recognition Society (CIPPRS/ACTIRF) Award for Research Excellence. He was the lead General Chair of the 2021 International Conference on Computer Vision (ICCV) and currently serves on the ICCV/CVPR conference steering committee, which oversees the two flagship conferences in computer vision.

Current Students

PhD - McGill University
PhD - McGill University
Master's Research - McGill University
Master's Research - McGill University
PhD - McGill University
Master's Research - McGill University

Publications

Parameter Efficient Fine-tuning of Transformer-Based Language Models Using Dataset Pruning
Sayed Mohammadreza Tayaranian Hosseini
Seyyed Hasan Mozafari
Brett Meyer
The widespread use of transformer-based language models is in part owed to their ease of adaptation to various tasks. Fine-tuning is a method of adapting pre-trained language models to a downstream task. The resource requirements for fine-tuning, although still lower than those of pre-training, have been increasing due to the significant growth in the number of parameters of language models. Parameter-efficient fine-tuning methods limit the set of model parameters that are updated during fine-tuning, leading to reductions in both memory usage and fine-tuning time. Dataset pruning is another method of efficient fine-tuning, which removes training data points and thus reduces training time while maintaining the evaluation performance of the fine-tuned model. In this work, we apply dataset pruning on top of parameter-efficient fine-tuning to further reduce the hardware requirements of fine-tuning. Our approach benefits from the lower memory usage of parameter-efficient methods while addressing their long fine-tuning time with dataset pruning. On average, our proposed method uses 22% of the fine-tuning dataset while updating only 0.5% of model parameters. As a result, while achieving evaluation performance similar to full fine-tuning, our method reduces the peak memory usage of fine-tuning by 40% and its wall-clock time by 83%.
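The abstract does not specify which parameter-efficient method or pruning criterion the authors use, so the following is only a minimal sketch of the general recipe it describes: prune the training set to roughly 22% of its examples (here by random selection, a stand-in for whatever criterion the paper applies) and fine-tune with a parameter-efficient method (here LoRA adapters via the `peft` library) so that well under 1% of the weights are trainable. The model (`bert-base-uncased`), dataset (GLUE SST-2), and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Sketch: dataset pruning on top of parameter-efficient fine-tuning.
# Assumptions: LoRA as the PEFT method, random pruning as the criterion,
# BERT on SST-2 as the task. None of these are confirmed by the abstract.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"  # illustrative choice of base model

# --- Dataset pruning: keep ~22% of the training set (random stand-in) ---
raw = load_dataset("glue", "sst2")
keep_frac = 0.22
train = raw["train"].shuffle(seed=0)
train = train.select(range(int(keep_frac * len(train))))

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch directly.
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)
eval_set = raw["validation"].map(tokenize, batched=True)

# --- Parameter-efficient fine-tuning: LoRA adapters on attention layers ---
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)
lora = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16,
                  target_modules=["query", "value"], lora_dropout=0.1)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=train,
    eval_dataset=eval_set,
)
trainer.train()
print(trainer.evaluate())
```

Because only the small adapter matrices (and the classifier head) receive gradients, optimizer state shrinks accordingly, and training on roughly a fifth of the data cuts wall-clock time further, which is the combination of savings the abstract reports.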