1 code implementation • NeurIPS 2017 • Young Hun Jung, Jack Goetz, Ambuj Tewari
Recent work has extended the theoretical analysis of boosting algorithms to multiclass problems and to online settings.
no code implementations • 23 Oct 2017 • Young Hun Jung, Ambuj Tewari
We consider the multi-label ranking approach to multi-label learning.
no code implementations • NeurIPS 2019 • Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari
In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings.
no code implementations • 11 Oct 2018 • Young Hun Jung, Ambuj Tewari
We propose a general algorithm template that captures random-perturbation-based algorithms and identify several perturbation distributions that lead to strong regret bounds.
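A canonical member of this family of random-perturbation-based algorithms is Follow the Perturbed Leader (FTPL) over a finite set of experts. The sketch below is a generic illustration only; the exponential perturbation, the scale parameter `eta`, and all function names are our choices, not necessarily the distributions analyzed in the paper:

```python
import random

def ftpl_pick(cum_losses, eta, rng):
    """One FTPL decision: perturb each expert's cumulative loss with a
    fresh exponential draw of mean eta, then follow the perturbed leader."""
    perturbed = [loss - rng.expovariate(1.0 / eta) for loss in cum_losses]
    return min(range(len(perturbed)), key=perturbed.__getitem__)

def ftpl_run(loss_rows, eta=1.0, seed=0):
    """Play FTPL over successive rows of per-expert losses; return the
    index picked in each round."""
    rng = random.Random(seed)
    cum = [0.0] * len(loss_rows[0])
    picks = []
    for row in loss_rows:
        picks.append(ftpl_pick(cum, eta, rng))
        cum = [c + l for c, l in zip(cum, row)]
    return picks
```

The regret analysis hinges on the perturbation distribution: heavier or lighter tails trade off stability of the leader against exploration, which is why identifying good distributions matters.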
1 code implementation • 11 Oct 2018 • Daniel T. Zhang, Young Hun Jung, Ambuj Tewari
We propose an unbiased estimate of the loss using a randomized prediction, allowing the model to update its weak learners with limited information.
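The standard way to build an unbiased loss estimate from a randomized prediction is inverse-probability weighting: observe feedback only for the label actually predicted, then scale by the inverse of its sampling probability. This is a generic sketch of that idea, not the paper's exact estimator:

```python
import random

def ipw_loss_estimate(losses, probs, rng):
    """Predict one label drawn from `probs`, observe only that label's
    loss, and return an inverse-probability-weighted estimate of the
    full loss vector.  Unbiased: E[est[k]] = probs[k] * losses[k]/probs[k]
    = losses[k] for every coordinate k."""
    k = rng.choices(range(len(probs)), weights=probs)[0]
    est = [0.0] * len(losses)
    est[k] = losses[k] / probs[k]
    return est
```

Averaged over many rounds, the estimates concentrate around the true loss vector, which is what lets weak learners update despite seeing only limited feedback.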
1 code implementation • NeurIPS 2019 • Young Hun Jung, Ambuj Tewari
These problems are well studied from the optimization perspective, where the goal is to efficiently find a near-optimal policy when system parameters are known.
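When the system parameters are known, the classical planning tool is dynamic programming. A minimal tabular value-iteration sketch (the tabular setup and all names are illustrative, not taken from the paper):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Tabular planning with known parameters.  P[a][s][t] is the
    probability of moving from state s to t under action a; R[s][a] is
    the reward.  Apply the Bellman optimality update until convergence
    and return the value function and a greedy policy."""
    n_states, n_actions = len(R), len(R[0])
    V = [0.0] * n_states
    while True:
        Q = [[R[s][a] + gamma * sum(P[a][s][t] * V[t] for t in range(n_states))
              for a in range(n_actions)] for s in range(n_states)]
        V_new = [max(row) for row in Q]
        if max(abs(v - w) for v, w in zip(V, V_new)) < tol:
            policy = [max(range(n_actions), key=row.__getitem__) for row in Q]
            return V_new, policy
        V = V_new
```

The learning problem studied in the paper is harder precisely because this update is unavailable when `P` and `R` are unknown.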
no code implementations • 12 Oct 2019 • Young Hun Jung, Marc Abeille, Ambuj Tewari
Restless bandit problems assume time-varying reward distributions of the arms, which adds flexibility to the model but makes the analysis more challenging.
no code implementations • 24 Oct 2019 • Vinod Raman, Daniel T. Zhang, Young Hun Jung, Ambuj Tewari
We present online boosting algorithms for multilabel ranking with top-k feedback, where the learner only receives information about the top k items from the ranking it provides.
no code implementations • NeurIPS 2020 • Young Hun Jung, Baekjin Kim, Ambuj Tewari
First, we show that private learnability implies online learnability in both settings.
no code implementations • 29 Sep 2021 • Jayanth Reddy Regatti, Aniket Anand Deshmukh, Young Hun Jung, Frank Cheng, Abhishek Gupta, Urun Dogan
We address this performance gap with a policy-transfer algorithm that first trains a teacher agent on the offline dataset, where features are fully available, and then transfers this knowledge to a student agent that uses only the resource-constrained features.
1 code implementation • 27 Oct 2021 • Joseph J. Pfeiffer III, Denis Charles, Davis Gilton, Young Hun Jung, Mehul Parsana, Erik Anderson
We introduce a secure multi-party computation (MPC) protocol that uses "helper" parties to train models, so that once data leaves the browser, no downstream system can individually reconstruct a complete picture of the user's activity.
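The "helper party" design typically rests on secret sharing: each party holds a uniformly random share, and only the combination of all shares reveals the value. A minimal additive-sharing sketch (purely illustrative; this is the textbook primitive, not the paper's full protocol):

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(value, n_parties=2):
    """Split `value` into n additive shares mod P.  Any proper subset
    of the shares is uniformly random and reveals nothing about `value`."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P
```

Additive shares are also linearly homomorphic: parties can sum their shares of two secrets locally and reconstruct the sum without ever seeing either input, which is the building block that lets helper parties aggregate model updates.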