Search Results for author: Huy L. Nguyen

Found 20 papers, 0 papers with code

Adaptive Accelerated (Extra-)Gradient Methods with Variance Reduction

no code implementations 28 Jan 2022 Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen

To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG).

Locally Private $k$-Means Clustering with Constant Multiplicative Approximation and Near-Optimal Additive Error

no code implementations 31 May 2021 Anamay Chaturvedi, Matthew Jones, Huy L. Nguyen

Recent work on this problem in the locally private setting achieves constant multiplicative approximation with additive error $\tilde{O} (n^{1/2 + a} \cdot k \cdot \max \{\sqrt{d}, \sqrt{k} \})$ and proves a lower bound of $\Omega(\sqrt{n})$ on the additive error for any solution with a constant number of rounds.

Projection-Free Bandit Optimization with Privacy Guarantees

no code implementations 22 Dec 2020 Alina Ene, Huy L. Nguyen, Adrian Vladu

We design differentially private algorithms for the bandit convex optimization problem in the projection-free setting.

Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence

no code implementations 15 Oct 2020 Alina Ene, Huy L. Nguyen

We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings.

A note on differentially private clustering with large additive error

no code implementations 28 Sep 2020 Huy L. Nguyen

In this note, we describe a simple approach to obtain a differentially private algorithm for k-clustering with nearly the same multiplicative factor as any non-private counterpart at the cost of a large polynomial additive error.

Fair and Useful Cohort Selection

no code implementations 4 Sep 2020 Konstantina Bairaktari, Paul Langton, Huy L. Nguyen, Niklas Smedemark-Margulies, Jonathan Ullman

A challenge in fair algorithm design is that, while there are compelling notions of individual fairness, these notions typically do not satisfy desirable composition properties, and downstream applications based on fair classifiers might not preserve fairness.

Fairness

Parallel Algorithm for Non-Monotone DR-Submodular Maximization

no code implementations ICML 2020 Alina Ene, Huy L. Nguyen

For any desired accuracy $\epsilon$, our algorithm achieves a $1/e - \epsilon$ approximation using $O(\log{n} \log(1/\epsilon) / \epsilon^3)$ parallel rounds of function evaluations.

Efficient Private Algorithms for Learning Large-Margin Halfspaces

no code implementations 24 Feb 2019 Huy L. Nguyen, Jonathan Ullman, Lydia Zakynthinou

We present new differentially private algorithms for learning a large-margin halfspace.

A Parallel Double Greedy Algorithm for Submodular Maximization

no code implementations 4 Dec 2018 Alina Ene, Huy L. Nguyen, Adrian Vladu

We study parallel algorithms for the problem of maximizing a non-negative submodular function.
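This paper parallelizes the double greedy method; as background, here is a minimal sketch of the *sequential* deterministic double greedy of Buchbinder et al., which gives a 1/3-approximation for unconstrained non-negative submodular maximization (the cut-function example is illustrative, not from the paper):

```python
def double_greedy(f, ground_set):
    """Deterministic double greedy: maintain a growing set X and a shrinking
    set Y; for each element, either add it to X or drop it from Y, whichever
    has the larger marginal gain. Returns a 1/3-approximate maximizer of a
    non-negative submodular f."""
    X, Y = set(), set(ground_set)
    for e in ground_set:
        a = f(X | {e}) - f(X)   # gain of adding e to the lower set
        b = f(Y - {e}) - f(Y)   # gain of removing e from the upper set
        if a >= b:
            X.add(e)
        else:
            Y.discard(e)
    return X  # X == Y at the end

# Toy example: the cut function of a 3-cycle is non-monotone submodular.
edges = [(0, 1), (1, 2), (0, 2)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
print(double_greedy(cut, range(3)))
```

On this toy instance the algorithm recovers a maximum cut of value 2.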

Improved Algorithms for Collaborative PAC Learning

no code implementations NeurIPS 2018 Huy L. Nguyen, Lydia Zakynthinou

We study a recent model of collaborative PAC learning where $k$ players with $k$ different tasks collaborate to learn a single classifier that works for all tasks.

PAC learning

Decomposable Submodular Function Minimization: Discrete and Continuous

no code implementations NeurIPS 2017 Alina Ene, Huy L. Nguyen, László A. Végh

This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization.

Approximate Near Neighbors for General Symmetric Norms

no code implementations 18 Nov 2016 Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, Erik Waingarten

We show that every symmetric normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation.

A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns

no code implementations 27 Jun 2016 Alina Ene, Huy L. Nguyen

A function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ is DR-submodular if it satisfies $f({\bf x} + \chi_i) -f ({\bf x}) \ge f({\bf y} + \chi_i) - f({\bf y})$ for all ${\bf x}\le {\bf y}, i\in E$.
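The diminishing-returns condition above can be verified by brute force on a small grid; a minimal sketch, where the checker and the example functions are illustrative assumptions rather than anything from the paper:

```python
import itertools
import math

def is_dr_submodular(f, n_coords, max_val):
    """Check f(x + e_i) - f(x) >= f(y + e_i) - f(y) for all x <= y
    (coordinate-wise) on the grid {0, ..., max_val}^n_coords."""
    points = list(itertools.product(range(max_val + 1), repeat=n_coords))
    for x in points:
        for y in points:
            if not all(xi <= yi for xi, yi in zip(x, y)):
                continue  # condition only applies when x <= y
            for i in range(n_coords):
                xp = list(x); xp[i] += 1
                yp = list(y); yp[i] += 1
                if f(tuple(xp)) - f(x) < f(tuple(yp)) - f(y) - 1e-12:
                    return False
    return True

# A separable concave function is DR-submodular on the integer lattice;
# a separable convex one is not.
f = lambda v: sum(math.sqrt(t) for t in v)
print(is_dr_submodular(f, 2, 3))  # True
```

The check is exponential in the grid size and only sanity-checks the definition; it is not part of any algorithm in the paper.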

Heavy hitters via cluster-preserving clustering

no code implementations 5 Apr 2016 Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, Mikkel Thorup

Our main innovation is an efficient reduction from the heavy hitters problem to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster.
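For context only: the classical Misra-Gries summary is a much simpler, streaming (non-graph-based) way to find heavy hitters, and is not the cluster-preserving construction this paper describes:

```python
def misra_gries(stream, k):
    """Misra-Gries summary: keep at most k-1 counters. Any item occurring
    more than len(stream)/k times is guaranteed to survive as a candidate."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement all counters; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = [1, 1, 1, 2, 3, 1, 2, 1]  # item 1 appears 5 of 8 times
print(misra_gries(stream, 2))
```

A second pass over the stream is needed to turn the surviving candidates into exact heavy hitters.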

A New Framework for Distributed Submodular Maximization

no code implementations 14 Jul 2015 Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems.
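Constrained submodular maximization of this kind is often illustrated with the classic sequential greedy algorithm, which achieves a (1 - 1/e)-approximation for monotone functions under a cardinality constraint. Below is that baseline sketch, not the paper's distributed framework; the toy "documents covering topics" data is made up:

```python
def greedy_max_cover(sets, k):
    """Greedy maximization of a coverage function (monotone submodular)
    under a cardinality constraint: pick k sets, each time the one with
    the largest marginal coverage gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy document-summarization flavour: pick k "documents" covering most "topics".
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1}}
print(greedy_max_cover(sets, 2))
```

The distributed algorithms in this line of work essentially run such greedy subroutines on partitions of the data and combine the results.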

Document Summarization

Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality

no code implementations 24 Jun 2015 Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyen, David P. Woodruff

We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions.

The Power of Randomization: Distributed Submodular Maximization on Massive Datasets

no code implementations 9 Feb 2015 Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems.

Document Summarization

Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

no code implementations 9 Feb 2015 Alina Ene, Huy L. Nguyen

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision.

On Communication Cost of Distributed Statistical Estimation and Dimensionality

no code implementations NeurIPS 2014 Ankit Garg, Tengyu Ma, Huy L. Nguyen

We conjecture that the tradeoff between communication and squared loss demonstrated by this protocol is essentially optimal up to logarithmic factors.
