The Next-Generation Computing Platforms for High-Performance and High-Efficiency Training of Machine Learning Models (Next ML Train) project is a multi-institution collaboration between ML and hardware researchers. Its goal is to develop novel combinations of software and hardware that achieve faster and/or more energy-efficient training of deep learning models, including transformers, recurrent neural networks, and graph neural networks. As part of this project, we are looking for a postdoctoral fellow to lead the development of novel hardware platforms that are co-designed with ML training algorithms.
Responsibilities:
– Analyze state-of-the-art ML algorithms from the perspective of hardware implementation.
– Using that knowledge, develop new VLSI accelerator architectures that address existing energy-efficiency bottlenecks.
– Collaborate with ML and hardware experts to achieve groundbreaking results.
– Help coordinate a major research project involving several professors and a top-tier industrial partner.
– Assist in supervising Ph.D. and Master’s students toward publishing results in top conferences and journals.
The ideal candidate should have:
– A Ph.D. degree in Electrical or Computer Engineering, or a related field,
– A strong background in digital system design for deep learning,
– Prior exposure to stochastic gradient descent (SGD) training,
– Publications in top academic conferences and journals,
– Good writing skills in English.
Duration: 1 year, with the possibility of extension.
Start date: Immediate.
How to Apply:
Interested applicants should submit a CV with a list of publications, a cover letter, and the names of at least two references to email@example.com and firstname.lastname@example.org with the subject “NextMLTrain postdoc application”.