Search Results for author: Achintya Kundu

Found 4 papers, 0 papers with code

Efficiently Distilling LLMs for Edge Applications

no code implementations · 1 Apr 2024 · Achintya Kundu, Fabian Lim, Aaron Chew, Laura Wynter, Penny Chong, Rhui Dih Lee

Supernet training of LLMs is of great interest in industrial applications, as it confers the ability to produce a palette of smaller models at constant cost, regardless of how many models (of different sizes/latencies) are produced.
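The "palette at constant cost" idea rests on weight sharing: one over-parameterized network is trained once, and smaller models are carved out of it rather than retrained. A minimal sketch of the slicing mechanic (illustrative only; the layer shapes and `extract_subnet` helper are assumptions, not the paper's method):

```python
import numpy as np

# One "super" weight matrix is trained once; smaller models are extracted
# by slicing its leading columns, so producing N model sizes adds no
# training cost. This is a toy weight-sharing sketch, not the paper's code.

rng = np.random.default_rng(0)
super_w = rng.normal(size=(16, 64))  # hypothetical max-width layer: 16 -> 64

def extract_subnet(width):
    """Take the leading `width` output units of the shared super layer."""
    return super_w[:, :width]

x = rng.normal(size=(1, 16))
for width in (8, 32, 64):            # a palette of model sizes, one training run
    y = x @ extract_subnet(width)
    print(width, y.shape)            # (1, width)
```

Because every subnet is a view into the same tensor, training the largest width (with appropriate sampling of smaller widths) updates all model sizes simultaneously.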

Transfer-Once-For-All: AI Model Optimization for Edge

no code implementations · 27 Mar 2023 · Achintya Kundu, Laura Wynter, Rhui Dih Lee, Luis Angel Bathen

Hence, we propose Transfer-Once-For-All (TOFA) for supernet-style training on small data sets with constant computational training cost over any number of edge deployment scenarios.

Model Optimization · Neural Architecture Search

Robustness and Personalization in Federated Learning: A Unified Approach via Regularization

no code implementations · 14 Sep 2020 · Achintya Kundu, Pengqian Yu, Laura Wynter, Shiau Hong Lim

We present a class of methods for robust, personalized federated learning, called Fed+, that unifies many federated learning algorithms.

Personalized Federated Learning
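A common way to unify robust and personalized federated learning via regularization is to give each client its own weights but pull them toward the global average with a proximal penalty. The sketch below is a hedged illustration of that generic idea (the quadratic penalty, toy losses, and all names are assumptions, not the Fed+ formulation):

```python
import numpy as np

# Each client i minimizes  f_i(w) + (lam / 2) * ||w - w_bar||^2,
# where w_bar is the average of client weights. lam interpolates between
# fully personalized models (lam = 0) and a single shared model (lam large).
# Toy sketch only; not the paper's algorithm.

def local_step(w, grad_fi, w_bar, lam, lr=0.1):
    """One gradient step on the regularized local objective."""
    return w - lr * (grad_fi(w) + lam * (w - w_bar))

# Toy quadratic loss per client: f_i(w) = 0.5 * ||w - target_i||^2
def make_grad(target):
    return lambda w: w - target

targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ws = [np.zeros(2) for _ in targets]
for _ in range(200):
    w_bar = np.mean(ws, axis=0)          # server averages client weights
    ws = [local_step(w, make_grad(t), w_bar, lam=1.0)
          for w, t in zip(ws, targets)]
```

With `lam=1.0` each client converges halfway between its own optimum and the global average, which is the personalization/robustness trade-off the regularizer controls.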

Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions

no code implementations · NeurIPS 2010 · Achintya Kundu, Vikram Tankasali, Chiranjib Bhattacharyya, Aharon Ben-Tal

We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case.
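The setup here is learning a kernel as a convex combination of m given similarity matrices, K(mu) = Σ_j mu_j K_j with mu on the simplex. As a minimal sketch, the snippet below weights base kernels by their alignment with the labels; this simple one-shot heuristic is a stand-in for intuition, not the paper's provably convergent algorithms:

```python
import numpy as np

# Learn mixture weights mu over m similarity matrices so that the combined
# kernel K = sum_j mu_j * K_j matches the label structure Y = y y^T.
# Alignment-based weighting is an illustrative assumption, not the paper's method.

rng = np.random.default_rng(0)
y = np.array([1.0, 1.0, -1.0, -1.0])
Y = np.outer(y, y)                       # ideal target kernel y y^T

# m = 3 hypothetical symmetric PSD similarity matrices
Ks = []
for _ in range(3):
    A = rng.normal(size=(4, 4))
    Ks.append(A @ A.T)

# Score each base kernel by label alignment y^T K y, then project to the simplex
align = np.array([max(np.sum(K * Y), 0.0) for K in Ks])
mu = align / align.sum()

# Combined kernel: a convex combination of PSD matrices is again PSD
K = sum(m_j * K_j for m_j, K_j in zip(mu, Ks))
```

The iterative algorithms in the paper instead alternate between this kernel-weight update and an SVM/MKL subproblem under a general convex loss, rather than fixing mu in one shot.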
