no code implementations • 7 Mar 2024 • Keshav Santhanam, Deepti Raghavan, Muhammad Shahir Rahman, Thejas Venkatesh, Neha Kunjal, Pratiksha Thaker, Philip Levis, Matei Zaharia
We present ALTO, a network orchestrator for efficiently serving compound AI systems such as pipelines of language models.
no code implementations • 5 Mar 2024 • Pratiksha Thaker, Yash Maurya, Virginia Smith
Recent work has demonstrated that fine-tuning is a promising approach to 'unlearn' concepts from large language models.
no code implementations • 24 Dec 2023 • Pratiksha Thaker, Amrith Setlur, Zhiwei Steven Wu, Virginia Smith
Motivated by the recent empirical success of incorporating public data into differentially private learning, we theoretically investigate how a shared representation learned from public data can improve private learning.
1 code implementation • 17 Dec 2022 • Kevin Kuo, Pratiksha Thaker, Mikhail Khodak, John Nguyen, Daniel Jiang, Ameet Talwalkar, Virginia Smith
In this work, we perform the first systematic study on the effect of noisy evaluation in federated hyperparameter tuning.