no code implementations • 25 May 2023 • Fabian Spaeh, Alina Ene, Huy L. Nguyen

Constrained $k$-submodular maximization is a general framework that captures many discrete optimization problems such as ad allocation, influence maximization, personalized recommendation, and many others.

1 code implementation • 25 Mar 2023 • Dongyue Li, Huy L. Nguyen, Hongyang R. Zhang

This problem is computationally challenging since the number of subsets grows exponentially with the number of source tasks; efficient heuristics for subset selection do not always capture the relationship between task subsets and multitask learning performance.

no code implementations • 3 Oct 2022 • Alina Ene, Huy L. Nguyen

In this work, we describe a generic approach to show convergence with high probability for stochastic convex optimization.

no code implementations • 29 Sep 2022 • Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen

Finally, we give new accelerated adaptive algorithms and establish their convergence guarantees in the deterministic setting, with explicit dependence on the problem parameters, improving upon the asymptotic rates shown in previous works.

no code implementations • 29 Sep 2022 • Zijian Liu, Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, Huy L. Nguyen

There, STORM utilizes recursive momentum to achieve the variance-reduction (VR) effect; it was later made fully adaptive in STORM+ [Levy et al., '21]. Full adaptivity removes the need to know problem-specific parameters, such as the smoothness of the objective and bounds on the variance and norm of the stochastic gradients, in order to set the step size.
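
The recursive-momentum update can be illustrated in one dimension. This is a minimal sketch, not the papers' algorithms: the toy objective, fixed momentum weight `a`, and fixed step size `lr` are my own choices, whereas STORM/STORM+ set these quantities carefully (and, in STORM+, adaptively).

```python
import random

# Sketch of STORM's recursive momentum estimator on f(x) = x^2 / 2:
#   d_t = grad(x_t; xi_t) + (1 - a) * (d_{t-1} - grad(x_{t-1}; xi_t)),
# where the SAME sample xi_t is evaluated at both points.
random.seed(0)

def grad(x, noise):
    # noisy gradient of f(x) = x^2 / 2; the true gradient is x
    return x + noise

x, lr, a = 5.0, 0.1, 0.3
d = grad(x, random.gauss(0.0, 0.1))  # initial gradient estimate
for _ in range(200):
    x_prev, x = x, x - lr * d
    noise = random.gauss(0.0, 0.1)  # fresh sample xi_t, shared by both terms
    # correct the old estimate by the change in the sampled gradient
    d = grad(x, noise) + (1 - a) * (d - grad(x_prev, noise))

print(abs(x) < 1.0)  # the iterates settle near the minimizer x* = 0
```

Evaluating the same sample at consecutive iterates is what cancels most of the noise: when `x_t` and `x_{t-1}` are close, `grad(x, noise) - grad(x_prev, noise)` is nearly deterministic.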

1 code implementation • International Conference on Machine Learning 2022 • Alina Ene, Huy L. Nguyen

Maximizing a monotone k-submodular function subject to cardinality constraints is a general model for several applications ranging from influence maximization with multiple products to sensor placement with multiple sensor types and online ad allocation.
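
As background for this setting, the simple greedy rule for monotone k-submodular maximization repeatedly assigns the (item, type) pair with the largest marginal gain until the size budget is spent; this rule is known to give a 1/2-approximation under a total-size constraint [Ohsaka & Yoshida '15]. The toy modular objective and names below are illustrative, not taken from the paper.

```python
# Greedy for monotone k-submodular maximization under a total-size budget:
# each item may be assigned to at most one of k types.
def greedy_k_submodular(items, k_types, budget, gain):
    assignment = {}  # item -> chosen type
    for _ in range(budget):
        candidates = [(gain(assignment, e, t), e, t)
                      for e in items if e not in assignment
                      for t in range(k_types)]
        if not candidates:
            break
        _, e, t = max(candidates)  # best marginal (item, type) pair
        assignment[e] = t
    return assignment

# toy (modular) gains: value of assigning item e to type t
weights = {"a": [3, 1], "b": [2, 5], "c": [4, 4]}
gain = lambda assign, e, t: weights[e][t]

result = greedy_k_submodular(list(weights), 2, 2, gain)
print(result)  # {'b': 1, 'c': 1}
```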

no code implementations • 28 Jan 2022 • Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen

To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG).

no code implementations • 31 May 2021 • Anamay Chaturvedi, Matthew Jones, Huy L. Nguyen

Recent work on this problem in the locally private setting achieves constant multiplicative approximation with additive error $\tilde{O} (n^{1/2 + a} \cdot k \cdot \max \{\sqrt{d}, \sqrt{k} \})$ and proves a lower bound of $\Omega(\sqrt{n})$ on the additive error for any solution with a constant number of rounds.

no code implementations • 22 Dec 2020 • Alina Ene, Huy L. Nguyen, Adrian Vladu

We design differentially private algorithms for the bandit convex optimization problem in the projection-free setting.

no code implementations • 15 Oct 2020 • Alina Ene, Huy L. Nguyen

We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings.

no code implementations • 28 Sep 2020 • Huy L. Nguyen

In this note, we describe a simple approach to obtain a differentially private algorithm for k-clustering with nearly the same multiplicative factor as any non-private counterpart at the cost of a large polynomial additive error.
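
The note's construction is not reproduced in this snippet. As background, a standard differential-privacy primitive used throughout private clustering is the Laplace mechanism, which releases a numeric statistic with noise scaled to sensitivity / epsilon; the values below are arbitrary illustrations.

```python
import random

def laplace_mechanism(value, sensitivity, epsilon, rng=random):
    b = sensitivity / epsilon  # Laplace scale parameter
    # a Laplace(0, b) sample is the difference of two exponentials
    noise = rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)
    return value + noise

random.seed(0)
samples = [laplace_mechanism(100.0, 1.0, 0.5) for _ in range(10000)]
mean = sum(samples) / len(samples)
print(abs(mean - 100.0) < 0.2)  # noise is zero-mean, so the average is close
```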

no code implementations • 4 Sep 2020 • Konstantina Bairaktari, Paul Langton, Huy L. Nguyen, Niklas Smedemark-Margulies, Jonathan Ullman

A challenge in fair algorithm design is that, while there are compelling notions of individual fairness, these notions typically do not satisfy desirable composition properties, and downstream applications based on fair classifiers might not preserve fairness.

no code implementations • 17 Jul 2020 • Alina Ene, Huy L. Nguyen, Adrian Vladu

We provide new adaptive first-order methods for constrained convex optimization.
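
To illustrate the general idea of adaptivity (not the paper's specific methods), here is an AdaGrad-style projected gradient sketch: each coordinate's step size scales with the inverse square root of its accumulated squared gradients, and every iterate is projected back onto the feasible set. The box constraint and test function are my own toy example.

```python
import math

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^d
    return [min(max(v, lo), hi) for v in x]

def adagrad_projected(grad, x0, lo, hi, eta=1.0, steps=500):
    x = list(x0)
    acc = [0.0] * len(x)  # running sum of squared gradients per coordinate
    for _ in range(steps):
        g = grad(x)
        for i in range(len(x)):
            acc[i] += g[i] * g[i]
            x[i] -= eta * g[i] / (math.sqrt(acc[i]) + 1e-12)
        x = project_box(x, lo, hi)
    return x

# minimize f(x) = (x0 - 3)^2 + (x1 + 2)^2 over the box [-1, 1]^2
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 2)]
x = adagrad_projected(grad, [0.0, 0.0], -1.0, 1.0)
print([round(v, 2) for v in x])  # near the constrained optimum [1, -1]
```

The appeal of such methods is that `eta` needs no problem-specific tuning against the smoothness constant; the accumulated gradients calibrate the step sizes automatically.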

no code implementations • ICML 2020 • Alina Ene, Huy L. Nguyen

For any desired accuracy $\epsilon$, our algorithm achieves a $1/e - \epsilon$ approximation using $O(\log{n} \log(1/\epsilon) / \epsilon^3)$ parallel rounds of function evaluations.

no code implementations • 24 Feb 2019 • Huy L. Nguyen, Jonathan Ullman, Lydia Zakynthinou

We present new differentially private algorithms for learning a large-margin halfspace.

no code implementations • 4 Dec 2018 • Alina Ene, Huy L. Nguyen, Adrian Vladu

We study parallel algorithms for the problem of maximizing a non-negative submodular function.

no code implementations • NeurIPS 2018 • Huy L. Nguyen, Lydia Zakynthinou

We study a recent model of collaborative PAC learning where $k$ players with $k$ different tasks collaborate to learn a single classifier that works for all tasks.

no code implementations • NeurIPS 2017 • Alina Ene, Huy L. Nguyen, László A. Végh

This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization.

no code implementations • 18 Nov 2016 • Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, Erik Waingarten

We show that every symmetric normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation.

no code implementations • 27 Jun 2016 • Alina Ene, Huy L. Nguyen

A function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ is DR-submodular if it satisfies $f({\bf x} + \chi_i) -f ({\bf x}) \ge f({\bf y} + \chi_i) - f({\bf y})$ for all ${\bf x}\le {\bf y}, i\in E$.
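
The diminishing-returns inequality above can be checked by brute force on a small grid. The example functions below are my own: `f(x) = sqrt(sum(x))` satisfies the DR property (its marginal gains shrink as coordinates grow), while `g(x) = sum(x)^2` violates it.

```python
import itertools, math

def is_dr_submodular(func, n_coords, max_val):
    """Brute-force check of f(x + e_i) - f(x) >= f(y + e_i) - f(y)
    for all coordinate-wise x <= y on the grid {0, ..., max_val}^n."""
    grid = list(itertools.product(range(max_val + 1), repeat=n_coords))
    for x in grid:
        for y in grid:
            if not all(a <= b for a, b in zip(x, y)):
                continue  # only compare pairs with x <= y
            for i in range(n_coords):
                xi = tuple(v + (j == i) for j, v in enumerate(x))
                yi = tuple(v + (j == i) for j, v in enumerate(y))
                if func(xi) - func(x) < func(yi) - func(y) - 1e-12:
                    return False
    return True

def f(x):
    return math.sqrt(sum(x))  # concave in the total: DR-submodular

def g(x):
    return sum(x) ** 2  # increasing marginal gains: not DR-submodular

print(is_dr_submodular(f, 2, 3), is_dr_submodular(g, 2, 3))  # True False
```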

no code implementations • 5 Apr 2016 • Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, Mikkel Thorup

Our main innovation is an efficient reduction from the heavy hitters to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster.
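
For context on the heavy hitters problem itself (this is classic background, not the paper's cluster-identification algorithm), the Misra-Gries summary finds every item occurring more than n/k times in a stream using only k-1 counters:

```python
def misra_gries(stream, k):
    """Maintain at most k-1 counters; every item with true frequency
    above len(stream)/k survives, undercounted by at most n/k."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # all counters full: decrement everything, dropping zeros
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = [1, 1, 1, 2, 2, 3, 1, 2, 1]
summary = misra_gries(stream, 3)
print(summary)  # the frequent items 1 and 2 survive
```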

no code implementations • 14 Jul 2015 • Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems.
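
A canonical instance of such a problem is maximum coverage. As a sketch of the centralized baseline (not the paper's distributed algorithm), the classic greedy method achieves a (1 - 1/e)-approximation for monotone submodular maximization under a cardinality constraint [Nemhauser, Wolsey, Fisher '78]:

```python
def greedy_max_cover(sets, k):
    """Pick k sets greedily to maximize the number of covered elements,
    a canonical monotone submodular objective."""
    covered, chosen = set(), []
    for _ in range(k):
        # select the set with the largest marginal coverage gain
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, value = greedy_max_cover(sets, 2)
print(value)  # 6: picks {1,2,3}, then {4,5,6}
```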

no code implementations • 24 Jun 2015 • Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyen, David P. Woodruff

We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions.

no code implementations • 9 Feb 2015 • Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems.

no code implementations • 9 Feb 2015 • Alina Ene, Huy L. Nguyen

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision.

no code implementations • NeurIPS 2014 • Ankit Garg, Tengyu Ma, Huy L. Nguyen

We conjecture that the tradeoff between communication and squared loss demonstrated by this protocol is essentially optimal up to a logarithmic factor.

Papers With Code is a free resource with all data licensed under CC-BY-SA.