no code implementations • 1 Apr 2024 • Achintya Kundu, Fabian Lim, Aaron Chew, Laura Wynter, Penny Chong, Rhui Dih Lee
Supernet training of LLMs is of great interest in industrial applications, as it confers the ability to produce a palette of smaller models at constant training cost, regardless of the number of models (of different sizes/latencies) produced.
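To make the weight-sharing idea concrete, here is a minimal, hypothetical sketch of supernet-style training in PyTorch: a single shared weight matrix serves sub-networks of several widths, and each step trains a few sampled widths, so the cost does not grow with the number of model sizes extracted later. The `SlimmableLinear` layer, the candidate widths, and the sandwich-style sampling are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of weight-shared supernet training (not the paper's code).
# A "slimmable" linear layer serves sub-networks of different widths by slicing
# one shared weight matrix; training a few sampled widths per step keeps the
# cost constant no matter how many model sizes are extracted afterwards.
import random
import torch
import torch.nn as nn

class SlimmableLinear(nn.Module):
    def __init__(self, max_in, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, out_features):
        # Use only the first `out_features` rows of the shared weights.
        w = self.weight[:out_features, : x.shape[-1]]
        b = self.bias[:out_features]
        return x @ w.t() + b

layer = SlimmableLinear(max_in=64, max_out=256)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
widths = [64, 128, 192, 256]  # candidate hidden sizes (assumed search space)

for step in range(100):
    x = torch.randn(8, 64)
    target = torch.randn(8, 1)
    # "Sandwich"-style sampling: always train smallest, largest, and one random width.
    for w_out in [widths[0], widths[-1], random.choice(widths)]:
        h = layer(x, w_out)
        loss = ((h.mean(dim=-1, keepdim=True) - target) ** 2).mean()
        loss.backward()  # gradients accumulate across the sampled widths
    opt.step()
    opt.zero_grad()
```

After training, any width in the search space can be served by simply slicing the shared weights, which is what makes the per-model marginal cost essentially zero.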
no code implementations • 27 Mar 2023 • Achintya Kundu, Laura Wynter, Rhui Dih Lee, Luis Angel Bathen
Hence, we propose Transfer-Once-For-All (TOFA) for supernet-style training on small data sets with constant computational training cost over any number of edge deployment scenarios.
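The appeal of a constant training cost over any number of edge deployment scenarios is that deployment reduces to selection rather than retraining. A trivial, hypothetical post-training step might look like the following (`pick_subnet`, the widths, and the latency numbers are all made up for illustration; this is not TOFA's API):

```python
# Hypothetical extraction step: once the supernet is trained, each edge
# scenario just selects the largest sub-network that fits its latency budget.
def pick_subnet(widths, latency_ms, budget_ms):
    """widths and latency_ms are parallel lists; returns the chosen width."""
    feasible = [w for w, t in zip(widths, latency_ms) if t <= budget_ms]
    if not feasible:
        raise ValueError("no sub-network meets the latency budget")
    return max(feasible)

widths = [64, 128, 192, 256]          # shared search space (assumed)
latency_ms = [3.1, 5.8, 8.4, 11.9]    # profiled on the target device (made up)
print(pick_subnet(widths, latency_ms, budget_ms=9.0))  # -> 192
```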
no code implementations • 14 Sep 2020 • Achintya Kundu, Pengqian Yu, Laura Wynter, Shiau Hong Lim
We present a class of methods for robust, personalized federated learning, called Fed+, that unifies many federated learning algorithms.
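As a rough illustration of how one algorithm in such a class can look, the numpy sketch below implements a Fed+-style local step under my own assumptions (not the authors' code): each client keeps a personalized model and is softly pulled toward a central aggregate `z` via a proximal term, where `z` may be a robust statistic such as the coordinate-wise median rather than the plain mean.

```python
# Minimal numpy sketch of a Fed+-style round (an illustrative reading of the
# idea, not the authors' implementation): personalized client models plus a
# soft proximal pull toward a robust central aggregate z.
import numpy as np

def local_step(w, grad_fn, z, lr=0.1, mu=0.5):
    # Gradient step on the client loss plus a proximal pull toward z.
    return w - lr * (grad_fn(w) + mu * (w - z))

rng = np.random.default_rng(0)
targets = [rng.normal(i, 1.0, size=4) for i in range(5)]  # toy client optima
clients = [rng.normal(size=4) for _ in range(5)]

for rnd in range(50):
    z = np.median(np.stack(clients), axis=0)  # robust aggregate (median)
    clients = [
        local_step(w, grad_fn=lambda w, t=t: w - t, z=z)  # grad of 0.5*||w - t||^2
        for w, t in zip(clients, targets)
    ]
print(z)  # client models stay personalized; z summarizes them robustly
```

Swapping the aggregation rule (mean, median, trimmed mean) and the strength of the pull `mu` is what lets one template cover several existing federated algorithms.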
no code implementations • NeurIPS 2010 • Achintya Kundu, Vikram Tankasali, Chiranjib Bhattacharyya, Aharon Ben-Tal
We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the case m > 1.
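The snippet below sketches what such an alternating scheme can look like, using a generic wrapper-style MKL heuristic rather than the paper's exact algorithm: fix the kernel combination weights, call an off-the-shelf SVM solver on the combined kernel, then re-weight each base kernel by its margin contribution alpha^T K_m alpha. The base kernels, the update rule, and all parameter values are illustrative assumptions.

```python
# Illustrative alternating scheme in the spirit described above: each outer
# iteration invokes a standard SVM solver, then updates the kernel weights.
# This is a generic wrapper-style MKL heuristic, not the paper's algorithm.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=60))

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]  # base kernels (assumed choices)
d = np.ones(len(kernels)) / len(kernels)          # kernel combination weights

for it in range(10):
    K = sum(dm * Km for dm, Km in zip(d, kernels))
    svm = SVC(kernel="precomputed", C=1.0).fit(K, y)  # inner SVM solve
    idx = svm.support_
    alpha = np.abs(svm.dual_coef_).ravel()            # alpha_i on support vectors
    # Re-weight each kernel by alpha^T K_m alpha, then renormalize to the simplex.
    scores = np.array([alpha @ Km[np.ix_(idx, idx)] @ alpha for Km in kernels])
    d = scores / scores.sum()
print(d)
```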