no code implementations • 29 Feb 2024 • Zhe Feng, Christopher Liaw, Zixin Zhou
In this work, we investigate the online learning problem of revenue maximization in ad auctions, where the seller must learn the click-through rate (CTR) of each ad candidate and charge the winner on a pay-per-click basis.
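To fix ideas, here is a minimal, hypothetical sketch of the pay-per-click setting with a UCB-style CTR estimate; it is not the paper's algorithm, and all names and constants are illustrative.

```python
import math
import random

# Illustrative sketch of the pay-per-click setting (not the paper's
# algorithm). Each round: score ads by bid * optimistic CTR estimate,
# show the winner, and charge only if a click occurs.

class Ad:
    def __init__(self, bid, true_ctr):
        self.bid = bid            # advertiser's per-click bid
        self.true_ctr = true_ctr  # unknown to the seller
        self.clicks = 0
        self.impressions = 0

    def ucb(self, t):
        # Optimistic CTR estimate; untried ads get the maximal value 1.0.
        if self.impressions == 0:
            return 1.0
        mean = self.clicks / self.impressions
        return min(1.0, mean + math.sqrt(2 * math.log(t + 1) / self.impressions))

def run_round(ads, t):
    scores = [ad.bid * ad.ucb(t) for ad in ads]
    winner = max(range(len(ads)), key=scores.__getitem__)
    ad = ads[winner]
    ad.impressions += 1
    revenue = 0.0
    if random.random() < ad.true_ctr:  # user clicks
        ad.clicks += 1
        revenue = ad.bid  # pay-per-click charge; how to *set* this price
                          # for revenue maximization is the paper's subject
    return revenue

ads = [Ad(bid=1.0, true_ctr=0.05), Ad(bid=0.6, true_ctr=0.12)]
print(sum(run_round(ads, t) for t in range(10000)))
```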
no code implementations • 7 Sep 2023 • Mohammad Afzali, Hassan Ashtiani, Christopher Liaw
We study the problem of estimating mixtures of Gaussians under the constraint of differential privacy (DP).
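For intuition about the DP ingredient, here is a sketch of a much simpler subproblem: privately estimating a single mean via the standard Gaussian mechanism. The paper's mixture-learning machinery is substantially more involved; this only shows the basic privacy step.

```python
import numpy as np

# (eps, delta)-DP mean estimation via the Gaussian mechanism.
# Clipping samples to an L2 ball of radius R bounds the sensitivity
# of the empirical mean by 2R/n between neighboring datasets.

def private_mean(samples, eps, delta, radius):
    n = samples.shape[0]
    norms = np.linalg.norm(samples, axis=1, keepdims=True)
    clipped = samples * np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * radius / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + np.random.normal(0.0, sigma, size=samples.shape[1])

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=(10000, 2))
print(private_mean(data, eps=1.0, delta=1e-5, radius=10.0))  # approx [3., 3.]
```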
no code implementations • 7 Mar 2023 • Jamil Arbas, Hassan Ashtiani, Christopher Liaw
We study the problem of privately estimating the parameters of $d$-dimensional Gaussian Mixture Models (GMMs) with $k$ components.
no code implementations • 16 Feb 2023 • Yang Cai, Zhe Feng, Christopher Liaw, Aranyak Mehta
We propose a new Markov Decision Process (MDP) model for ad auctions to capture the user response to the quality of ads, with the objective of maximizing the long-term discounted revenue.
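As a toy illustration of the long-term discounted-revenue objective (not the paper's model), here is value iteration on a hypothetical two-state MDP where ad quality drives user engagement; all transition and reward numbers are made up.

```python
import numpy as np

# State 0 = engaged user, state 1 = disengaged user.
gamma = 0.95                          # discount factor
# P[a, s, s'] = transition probability; R[a, s] = expected revenue.
P = np.array([
    [[0.5, 0.5], [0.1, 0.9]],         # action 0: low-quality ad, drives users away
    [[0.95, 0.05], [0.6, 0.4]],       # action 1: high-quality ad, retains users
])
R = np.array([
    [2.0, 0.2],                       # action 0: more revenue now
    [1.0, 0.1],                       # action 1: less revenue now
])

V = np.zeros(2)
for _ in range(10000):
    Q = R + gamma * (P @ V)           # Q[a, s]
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-12:
        break
    V = V_new
print("values:", V, "greedy policy:", Q.argmax(axis=0))
```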
no code implementations • 1 Jun 2022 • Victor Sanches Portella, Christopher Liaw, Nicholas J. A. Harvey
Finally, we design an anytime continuous-time algorithm whose regret matches the optimal fixed-time rate when the gains are independent Brownian motions; in many settings, this is the most difficult case.
no code implementations • 22 Nov 2021 • Hassan Ashtiani, Christopher Liaw
As another application of our framework, we provide the first polynomial-time $(\varepsilon, \delta)$-DP algorithm for robust learning of (unrestricted) Gaussians with sample complexity $\widetilde{O}(d^{3.5})$.
no code implementations • NeurIPS 2021 • Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw
We show that if $\mathcal{F}$ is privately list-decodable, then we can privately learn mixtures of distributions in $\mathcal{F}$.
no code implementations • NeurIPS 2020 • Nicholas Harvey, Christopher Liaw, Tasuku Soma
- For monotone submodular maximization subject to a matroid constraint, we give an efficient algorithm which achieves a $(1 - c/e - \varepsilon)$-regret of $O(\sqrt{kT \ln(n/k)})$, where $n$ is the size of the ground set, $k$ is the rank of the matroid, $\varepsilon > 0$ is a constant, and $c$ is the average curvature.
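For context, the classic offline greedy baseline for the special case of a cardinality (uniform-matroid) constraint looks as follows; the paper's contribution is the online algorithm with the regret bound above.

```python
# Offline greedy for monotone submodular maximization under a
# cardinality constraint k, shown only to fix ideas. Example
# objective: set coverage, which is monotone submodular.

def greedy(ground_set, k, f):
    chosen = set()
    for _ in range(k):
        gains = {e: f(chosen | {e}) - f(chosen) for e in ground_set - chosen}
        if not gains:
            break
        chosen.add(max(gains, key=gains.get))  # largest marginal gain
    return chosen

covers = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4}}
f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
print(greedy(set(covers), k=2, f=f))  # within a (1 - 1/e) factor of optimal
```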
no code implementations • 20 Feb 2020 • Nicholas J. A. Harvey, Christopher Liaw, Edwin Perkins, Sikander Randhawa
In the fixed-time setting, where the time horizon is known in advance, algorithms that achieve the optimal regret are known when there are two, three, or four experts or when the number of experts is large.
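As a point of reference, the standard fixed-time multiplicative-weights baseline achieves regret $O(\sqrt{T \ln n})$; the paper asks the finer question of the optimal regret, which this baseline does not attain. A minimal sketch:

```python
import numpy as np

# Multiplicative weights for prediction with expert advice, with the
# fixed-time tuning that uses the known horizon T.

def multiplicative_weights(losses):
    T, n = losses.shape
    eta = np.sqrt(8.0 * np.log(n) / T)    # fixed-time step size
    w = np.ones(n)
    learner_loss = 0.0
    for t in range(T):
        p = w / w.sum()
        learner_loss += p @ losses[t]     # expected loss this round
        w *= np.exp(-eta * losses[t])
    return learner_loss - losses.sum(axis=0).min()  # regret to best expert

rng = np.random.default_rng(0)
losses = rng.random((10000, 4))           # losses in [0, 1]
print("regret:", multiplicative_weights(losses))
```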
no code implementations • 2 Sep 2019 • Nicholas J. A. Harvey, Christopher Liaw, Sikander Randhawa
We consider a simple non-uniform averaging strategy of Lacoste-Julien et al. (2012) and prove that it achieves the optimal $O(1/T)$ convergence rate with high probability.
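A minimal sketch of that averaging scheme on a toy strongly convex problem (the problem and constants are illustrative, not from the paper): each iterate $x_t$ is weighted proportionally to $t$, with the step size $2/(\lambda(t+1))$ that accompanies this scheme.

```python
import numpy as np

# SGD with non-uniform averaging on f(x) = (lambda/2)||x||^2
# observed through noisy gradients; the minimizer is x* = 0.

def sgd_weighted_average(grad, x0, lam, T, rng):
    x = np.array(x0, dtype=float)
    avg = np.zeros_like(x)
    weight_sum = 0.0
    for t in range(1, T + 1):
        x -= (2.0 / (lam * (t + 1))) * grad(x, rng)
        avg += t * x                      # non-uniform weight: t
        weight_sum += t
    return avg / weight_sum

lam = 1.0
grad = lambda x, rng: lam * x + rng.normal(0.0, 1.0, size=x.shape)
rng = np.random.default_rng(0)
x_bar = sgd_weighted_average(grad, x0=np.full(5, 10.0), lam=lam, T=100_000, rng=rng)
print("squared error:", float(x_bar @ x_bar))  # decays like O(1/T)
```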
no code implementations • ICLR 2019 • Weiwei Kong, Christopher Liaw, Aranyak Mehta, D. Sivakumar
This paper introduces a novel framework for learning algorithms to solve online combinatorial optimization problems.
no code implementations • 13 Dec 2018 • Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa
We prove that after $T$ steps of stochastic gradient descent, the error of the final iterate is $O(\log(T)/T)$ with high probability.
no code implementations • NeurIPS 2018 • Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
no code implementations • 14 Oct 2017 • Hassan Ashtiani, Shai Ben-David, Nick Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
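For a rough sense of scale, plugging illustrative numbers into the bound while ignoring the polylogarithmic factors hidden by the tilde:

```latex
\[
  \frac{k d^2}{\varepsilon^2}
  \;=\; \frac{10 \cdot 100^2}{(0.1)^2}
  \;=\; \frac{10^5}{10^{-2}}
  \;=\; 10^7
  \quad\text{samples for } k = 10,\ d = 100,\ \varepsilon = 0.1.
\]
```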