no code implementations • 15 Mar 2024 • Ziteng Sun, Jae Hun Ro, Ahmad Beirami, Ananda Theertha Suresh
To the best of our knowledge, our work is the first to establish improvement over speculative decoding through a better draft verification algorithm.
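For context, the baseline this work improves on is standard token-level speculative decoding, where each draft token is accepted with probability $\min(1, p/q)$ and rejected tokens are resampled from the residual distribution. A minimal sketch of that standard verification rule (not the paper's improved algorithm) is:

```python
import numpy as np

def verify_draft(p, q, draft_token, rng):
    """Standard speculative-decoding verification for one draft token.

    p, q: target- and draft-model distributions over the vocabulary.
    Accepts the draft token with probability min(1, p/q); on rejection,
    resamples from the normalized residual max(p - q, 0).
    """
    if rng.random() < min(1.0, p[draft_token] / q[draft_token]):
        return draft_token, True
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p), p=residual), False
```

When the draft and target distributions agree, every draft token is accepted; the acceptance rate degrades as the two models diverge, which is what better verification algorithms aim to mitigate.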
no code implementations • 12 Mar 2024 • Jae Hun Ro, Srinadh Bhojanapalli, Zheng Xu, Yanxiang Zhang, Ananda Theertha Suresh
Cross-device federated learning (FL) is a technique that trains a model on data distributed across typically millions of edge devices without data leaving the devices.
no code implementations • NeurIPS 2023 • Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, Felix Yu
We show that the optimal draft selection algorithm (transport plan) can be computed via linear programming, whose best-known runtime is exponential in $k$.
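As a small-scale illustration of computing a transport plan by linear programming (a generic optimal-coupling LP, not the paper's exact formulation), one can maximize the probability mass a coupling places on the diagonal, i.e. the chance the draft token is accepted:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_coupling(q, p):
    """Max-acceptance coupling between draft dist q and target dist p via LP.

    Variables gamma[i, j] form a joint distribution with marginals q and p;
    the objective maximizes the diagonal mass (the probability that the
    draft token is accepted). Illustrative small-vocabulary sketch only.
    """
    n = len(q)
    # linprog minimizes, so negate the diagonal-mass objective.
    c = -np.eye(n).ravel()
    # Row sums of gamma must equal q; column sums must equal p.
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row-sum constraint for q[i]
        A_eq[n + i, i::n] = 1.0            # column-sum constraint for p[i]
    b_eq = np.concatenate([q, p])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(n, n), -res.fun   # coupling and acceptance prob
```

The optimal value here equals $\sum_i \min(q_i, p_i)$, i.e. one minus the total-variation distance between the two distributions; the LP formulation is what becomes expensive as the number of drafts $k$ grows.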
no code implementations • FL4NLP (ACL) 2022 • Jae Hun Ro, Theresa Breiner, Lara McConnaughey, Mingqing Chen, Ananda Theertha Suresh, Shankar Kumar, Rajiv Mathews
Most studies in cross-device federated learning focus on small models, due to the server-client communication and on-device computation bottlenecks.
no code implementations • 9 Mar 2022 • Ananda Theertha Suresh, Ziteng Sun, Jae Hun Ro, Felix Yu
We show that applying the proposed protocol as a subroutine in distributed optimization algorithms leads to better convergence rates.
no code implementations • 1 Feb 2022 • Jae Hun Ro, Felix Stahlberg, Ke Wu, Shankar Kumar
Text normalization, or the process of transforming text into a consistent, canonical form, is crucial for speech applications such as text-to-speech synthesis (TTS).
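A toy example makes the task concrete: "Dr." should be read as "Doctor" and digits should be verbalized before synthesis. The hand-written rules below are illustrative only (the paper studies neural models for this task):

```python
import re

# Toy rule-based normalizer for TTS-style verbalization; the rule table
# and digit vocabulary are hypothetical illustrations, not the paper's.
ABBREVIATIONS = {"Dr.": "Doctor", "St.": "Street"}
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def normalize(text):
    for abbr, expansion in ABBREVIATIONS.items():
        text = text.replace(abbr, expansion)
    # Read digits out one by one (a real system verbalizes full numbers,
    # dates, currencies, and so on, which is context-dependent).
    text = re.sub(r"\d", lambda m: " " + DIGITS[int(m.group())] + " ", text)
    return re.sub(r"\s+", " ", text).strip()
```

The context-dependence of such rules (e.g. "St." as "Street" vs. "Saint") is precisely why learned normalization models are attractive.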
1 code implementation • 4 Aug 2021 • Jae Hun Ro, Ananda Theertha Suresh, Ke Wu
Federated learning is a machine learning technique that enables training across decentralized data.
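The canonical federated training loop is federated averaging (FedAvg): each client computes a local update on its own data and the server averages the updates weighted by client dataset size. A minimal NumPy sketch on a toy linear model (a generic illustration of the algorithm, not any specific library's API):

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1):
    """One round of federated averaging on a toy linear least-squares model.

    Each client takes a single local gradient step on its own (X, y) data;
    the server averages the resulting weights, weighted by dataset size.
    """
    updates, sizes = [], []
    for X, y in client_data:
        grad = 2 * X.T @ (X @ global_w - y) / len(y)  # squared-loss gradient
        updates.append(global_w - lr * grad)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
```

Repeating such rounds drives the global model toward a fit of the pooled data while the raw examples never leave the clients, which is the property motivating the federated setting above.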