Search Results for author: Christopher Liaw

Found 14 papers, 0 papers with code

Improved Online Learning Algorithms for CTR Prediction in Ad Auctions

no code implementations • 29 Feb 2024 • Zhe Feng, Christopher Liaw, Zixin Zhou

In this work, we investigate the online learning problem of revenue maximization in ad auctions, where the seller needs to learn the click-through rate (CTR) of each ad candidate and charge the winner in a pay-per-click manner.

Click-Through Rate Prediction
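
This entry has no accompanying code, so the following is only a minimal illustrative sketch of the setting: an optimistic, UCB-style CTR estimate combined with pay-per-click charging of the winning ad. The class name, scoring rule, and constants are assumptions made for the example, not the algorithm from the paper.

```python
import math
import random

class UCBCTRLearner:
    """Illustrative UCB-style learner of per-ad click-through rates (CTRs).

    Each round, ads are ranked by bid times an optimistic CTR estimate;
    only the winner is shown, and it pays only when a click occurs.
    """

    def __init__(self, n_ads):
        self.clicks = [0] * n_ads       # observed clicks per ad
        self.impressions = [0] * n_ads  # times each ad was shown

    def optimistic_ctr(self, ad, t):
        if self.impressions[ad] == 0:
            return 1.0  # unexplored ads get the most optimistic estimate
        mean = self.clicks[ad] / self.impressions[ad]
        bonus = math.sqrt(2 * math.log(t) / self.impressions[ad])
        return min(1.0, mean + bonus)

    def select_winner(self, bids, t):
        # Score each ad by bid * optimistic CTR, a proxy for expected revenue.
        return max(range(len(bids)), key=lambda i: bids[i] * self.optimistic_ctr(i, t))

    def update(self, winner, clicked):
        self.impressions[winner] += 1
        self.clicks[winner] += int(clicked)

# Toy simulation: three ads with fixed bids and unknown true CTRs.
bids, true_ctrs = [1.0, 0.8, 0.5], [0.05, 0.10, 0.30]
learner = UCBCTRLearner(len(bids))
for t in range(1, 5001):
    winner = learner.select_winner(bids, t)
    learner.update(winner, random.random() < true_ctrs[winner])
```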

Mixtures of Gaussians are Privately Learnable with a Polynomial Number of Samples

no code implementations • 7 Sep 2023 • Mohammad Afzali, Hassan Ashtiani, Christopher Liaw

We study the problem of estimating mixtures of Gaussians under the constraint of differential privacy (DP).

Polynomial Time and Private Learning of Unbounded Gaussian Mixture Models

no code implementations • 7 Mar 2023 • Jamil Arbas, Hassan Ashtiani, Christopher Liaw

We study the problem of privately estimating the parameters of $d$-dimensional Gaussian Mixture Models (GMMs) with $k$ components.

User Response in Ad Auctions: An MDP Formulation of Long-Term Revenue Optimization

no code implementations • 16 Feb 2023 • Yang Cai, Zhe Feng, Christopher Liaw, Aranyak Mehta

We propose a new Markov Decision Process (MDP) model for ad auctions to capture the user response to the quality of ads, with the objective of maximizing the long-term discounted revenue.
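
No code accompanies this entry. As a rough illustration of this kind of model (not the paper's), the sketch below runs value iteration on a toy discounted MDP in which showing lower-quality ads earns more immediate revenue but degrades the user's future engagement state. All states, transition probabilities, and rewards are made-up assumptions.

```python
import numpy as np

# Toy discounted MDP: states are user "engagement" levels; the action is which
# ad-quality tier to show. Low-quality ads pay more now but push the user
# toward churn, hurting long-term discounted revenue. Numbers are illustrative.
n_states, gamma = 3, 0.95          # engaged / neutral / churned
reward = np.array([[1.0, 1.5],     # reward[s, a]: immediate expected revenue
                   [0.6, 1.0],     # actions: 0 = high-quality ad, 1 = low-quality ad
                   [0.0, 0.0]])

# P[a, s, s']: user engagement transitions under each action.
P = np.array([
    [[0.9, 0.1, 0.0], [0.8, 0.2, 0.0], [0.0, 0.0, 1.0]],  # high-quality ad
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.0, 0.0, 1.0]],  # low-quality ad
])

V = np.zeros(n_states)
for _ in range(500):  # value iteration to (near) convergence
    Q = reward + gamma * np.einsum('asp,p->sa', P, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)
print("Revenue-optimal ad-quality choice per engagement state:", policy)
```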

Continuous Prediction with Experts' Advice

no code implementations • 1 Jun 2022 • Victor Sanches Portella, Christopher Liaw, Nicholas J. A. Harvey

Finally, we design an anytime continuous-time algorithm with regret matching the optimal fixed-time rate when the gains are independent Brownian Motions; in many settings, this is the most difficult case.

Private and polynomial time algorithms for learning Gaussians and beyond

no code implementations • 22 Nov 2021 • Hassan Ashtiani, Christopher Liaw

As another application of our framework, we provide the first polynomial time $(\varepsilon, \delta)$-DP algorithm for robust learning of (unrestricted) Gaussians with sample complexity $\widetilde{O}(d^{3.5})$.

Privately Learning Mixtures of Axis-Aligned Gaussians

no code implementations • NeurIPS 2021 • Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw

We show that if $\mathcal{F}$ is privately list-decodable, then we can privately learn mixtures of distributions in $\mathcal{F}$.

Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds

no code implementations • NeurIPS 2020 • Nicholas Harvey, Christopher Liaw, Tasuku Soma

For monotone submodular maximization subject to a matroid constraint, we give an efficient algorithm which achieves a $(1 - c/e - \varepsilon)$-regret of $O(\sqrt{kT \ln(n/k)})$, where $n$ is the size of the ground set, $k$ is the rank of the matroid, $\varepsilon > 0$ is a constant, and $c$ is the average curvature.

Optimal anytime regret with two experts

no code implementations • 20 Feb 2020 • Nicholas J. A. Harvey, Christopher Liaw, Edwin Perkins, Sikander Randhawa

In the fixed-time setting, where the time horizon is known in advance, algorithms that achieve the optimal regret are known when there are two, three, or four experts or when the number of experts is large.

Simple and optimal high-probability bounds for strongly-convex stochastic gradient descent

no code implementations • 2 Sep 2019 • Nicholas J. A. Harvey, Christopher Liaw, Sikander Randhawa

We consider a simple, non-uniform averaging strategy of Lacoste-Julien et al. (2011) and prove that it achieves the optimal $O(1/T)$ convergence rate with high probability.
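
There is no code for this entry, but the non-uniform averaging idea is easy to sketch: weight iterate $x_t$ by $t$ and return the normalized weighted average. The toy strongly-convex objective, step-size schedule, and noise model below are assumptions chosen only to make the sketch runnable; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy strongly convex objective: f(x) = 0.5 * lam * ||x||^2 + <a, x>,
# observed through noisy gradients. The weighted average of the iterates
# (weight of iterate t proportional to t) is the non-uniform averaging scheme.
lam, d, T = 1.0, 10, 10_000
a = rng.normal(size=d)

x = np.zeros(d)
weighted_sum = np.zeros(d)

for t in range(1, T + 1):
    grad = lam * x + a + rng.normal(scale=0.1, size=d)  # stochastic gradient
    eta = 2.0 / (lam * (t + 1))                         # step size for lam-strong convexity
    x = x - eta * grad
    weighted_sum += t * x                               # weight iterate t by t

x_bar = 2.0 / (T * (T + 1)) * weighted_sum              # non-uniform average
x_star = -a / lam                                       # minimizer of the noiseless objective
print("distance of weighted average to optimum:", np.linalg.norm(x_bar - x_star))
```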

Tight Analyses for Non-Smooth Stochastic Gradient Descent

no code implementations • 13 Dec 2018 • Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa

We prove that after $T$ steps of stochastic gradient descent, the error of the final iterate is $O(\log(T)/T)$ with high probability.

Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes

no code implementations • NeurIPS 2018 • Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan

We prove that $\Theta(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.

Near-optimal Sample Complexity Bounds for Robust Learning of Gaussians Mixtures via Compression Schemes

no code implementations • 14 Oct 2017 • Hassan Ashtiani, Shai Ben-David, Nick Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan

We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
