Search Results for author: Huy L. Nguyen

Found 26 papers, 2 papers with code

Online and Streaming Algorithms for Constrained $k$-Submodular Maximization

no code implementations25 May 2023 Fabian Spaeh, Alina Ene, Huy L. Nguyen

Constrained $k$-submodular maximization is a general framework that captures many discrete optimization problems such as ad allocation, influence maximization, personalized recommendation, and many others.
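The framework assigns each item one of $k$ types (or leaves it unassigned) to maximize a $k$-submodular objective. A minimal greedy sketch, not the paper's streaming algorithm, with a made-up coverage objective standing in for an ad-allocation instance:

```python
# Hedged sketch of the greedy baseline for constrained k-submodular
# maximization: each item is left unassigned or given one of k "types"
# (e.g. one of k ad slots); we repeatedly pick the item/type pair with
# the largest marginal gain, up to a total-size budget.

def greedy_k_submodular(items, k, f, budget):
    """Greedy: add the (item, type) pair with the best marginal gain."""
    assignment = {}  # item -> type in {0, ..., k-1}
    while len(assignment) < budget:
        best_gain, best_pair = 0.0, None
        for item in items:
            if item in assignment:
                continue
            for t in range(k):
                gain = f({**assignment, item: t}) - f(assignment)
                if gain > best_gain:
                    best_gain, best_pair = gain, (item, t)
        if best_pair is None:  # no positive marginal gain left
            break
        assignment[best_pair[0]] = best_pair[1]
    return assignment

# Toy objective (our assumption): each (item, type) pair covers a set of
# users, and f counts distinct users covered.
coverage = {(0, 0): {1, 2}, (0, 1): {3}, (1, 0): {2}, (1, 1): {4, 5}}
f = lambda a: len(set().union(*(coverage[(i, t)] for i, t in a.items())))
print(greedy_k_submodular([0, 1], 2, f, budget=2))  # {0: 0, 1: 1}
```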

Identification of Negative Transfers in Multitask Learning Using Surrogate Models

1 code implementation25 Mar 2023 Dongyue Li, Huy L. Nguyen, Hongyang R. Zhang

This problem is computationally challenging since the number of subsets grows exponentially with the number of source tasks; efficient heuristics for subset selection do not always capture the relationship between task subsets and multitask learning performance.
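An illustrative sketch of the blowup and of the surrogate idea (not the paper's model; the weights and the linear scoring rule below are assumptions): rather than training a multitask model per subset, a surrogate fitted on a few evaluated subsets scores the rest.

```python
from itertools import combinations

def all_subsets(n):
    """Yield every non-empty subset of n source tasks (2**n - 1 of them)."""
    for r in range(1, n + 1):
        yield from combinations(range(n), r)

print(2**20 - 1)  # 1048575 candidate subsets for just 20 source tasks

# Toy surrogate: score a subset by summing per-task "transfer" weights
# fitted from a handful of actually-evaluated subsets (weights here are
# made up; a negative weight models a source of negative transfer).
weights = [0.3, -0.2, 0.1]
score = lambda subset: sum(weights[i] for i in subset)
best = max(all_subsets(3), key=score)
print(best)  # (0, 2): the surrogate drops the negatively-transferring task
```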

Multi-Task Learning

High Probability Convergence for Accelerated Stochastic Mirror Descent

no code implementations3 Oct 2022 Alina Ene, Huy L. Nguyen

In this work, we describe a generic approach to show convergence with high probability for stochastic convex optimization.
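For context, one stochastic mirror descent step with the negative-entropy mirror map over the probability simplex, the classical setup the accelerated, high-probability analysis builds on (acceleration and step-size choices omitted; constants here are illustrative):

```python
import math

def entropic_md_step(x, stoch_grad, eta):
    """Mirror step in closed form: x_{t+1} proportional to x_t * exp(-eta * g_t)."""
    w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, stoch_grad)]
    z = sum(w)
    return [wi / z for wi in w]

x = [1/3, 1/3, 1/3]
g = [1.0, 0.0, -1.0]       # one stochastic gradient sample
x = entropic_md_step(x, g, eta=0.5)
print([round(v, 3) for v in x])  # mass shifts toward the low-gradient coordinate
```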


On the Convergence of AdaGrad(Norm) on $\mathbb{R}^d$: Beyond Convexity, Non-Asymptotic Rate and Acceleration

no code implementations29 Sep 2022 Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen

Finally, we give new accelerated adaptive algorithms and their convergence guarantee in the deterministic setting with explicit dependency on the problem parameters, improving upon the asymptotic rate shown in previous works.
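A minimal AdaGrad-Norm sketch for reference: a single scalar step size adapted from accumulated squared gradient norms, the variant the title refers to ("eta" and the test function are illustrative, not the paper's setting):

```python
def adagrad_norm(grad, x0, eta=1.0, b0=1e-8, steps=100):
    """AdaGrad-Norm: step size eta / sqrt(sum of ||g_t||^2), no smoothness constant."""
    x, acc = x0, b0
    for _ in range(steps):
        g = grad(x)
        acc += sum(gi * gi for gi in g)   # running sum of squared gradient norms
        step = eta / acc**0.5
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Quadratic f(x) = ||x||^2 / 2, so grad(x) = x; iterates shrink toward 0.
x = adagrad_norm(lambda x: x, [1.0, -2.0], steps=200)
print(max(abs(v) for v in x) < 0.1)  # True
```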

META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for Unbounded Functions

no code implementations29 Sep 2022 Zijian Liu, Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, Huy L. Nguyen

There, STORM uses recursive momentum to achieve the variance-reduction (VR) effect; it was later made fully adaptive in STORM+ [Levy et al., '21], where full adaptivity removes the need to know problem-specific parameters, such as the smoothness of the objective and bounds on the variance and norm of the stochastic gradients, in order to set the step size.
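A sketch of STORM's recursive momentum estimator as we read the cited line of work (names and constants are illustrative): the new gradient estimate reuses the previous one, corrected by the gradient difference evaluated at the same fresh sample, which shrinks the estimator's variance without large batches.

```python
def storm_estimator(d_prev, g_new, g_old, a):
    """d_t = grad f(x_t; xi_t) + (1 - a) * (d_{t-1} - grad f(x_{t-1}; xi_t))."""
    return [gn + (1 - a) * (dp - go)
            for gn, dp, go in zip(g_new, d_prev, g_old)]

# Sanity check: if the gradient is deterministic and the iterate has not
# moved, the estimator is a fixed point equal to the true gradient.
print(storm_estimator([1.0], [1.0], [1.0], a=0.1))  # [1.0]
```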

Stochastic Optimization

Streaming Algorithm for Monotone k-Submodular Maximization with Cardinality Constraints

1 code implementation ICML 2022 Alina Ene, Huy L. Nguyen

Maximizing a monotone k-submodular function subject to cardinality constraints is a general model for several applications ranging from influence maximization with multiple products to sensor placement with multiple sensor types and online ad allocation.


Adaptive Accelerated (Extra-)Gradient Methods with Variance Reduction

no code implementations28 Jan 2022 Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen

To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG).

Locally Private $k$-Means Clustering with Constant Multiplicative Approximation and Near-Optimal Additive Error

no code implementations31 May 2021 Anamay Chaturvedi, Matthew Jones, Huy L. Nguyen

Recent work on this problem in the locally private setting achieves constant multiplicative approximation with additive error $\tilde{O} (n^{1/2 + a} \cdot k \cdot \max \{\sqrt{d}, \sqrt{k} \})$ and proves a lower bound of $\Omega(\sqrt{n})$ on the additive error for any solution with a constant number of rounds.


Projection-Free Bandit Optimization with Privacy Guarantees

no code implementations22 Dec 2020 Alina Ene, Huy L. Nguyen, Adrian Vladu

We design differentially private algorithms for the bandit convex optimization problem in the projection-free setting.
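Projection-free methods replace projections with a linear optimization oracle over the feasible set. A hedged Frank-Wolfe sketch over the probability simplex for background (the paper's private bandit version additionally estimates gradients from bandit feedback and adds privacy noise, both omitted here):

```python
def frank_wolfe_step(x, grad, t):
    """One Frank-Wolfe step: linear oracle over the simplex returns a vertex."""
    i = min(range(len(grad)), key=lambda j: grad[j])  # best one-hot vertex
    gamma = 2.0 / (t + 2.0)                           # standard step schedule
    return [(1 - gamma) * xj + (gamma if j == i else 0.0)
            for j, xj in enumerate(x)]

x = [1/3, 1/3, 1/3]
for t in range(50):                               # minimize f(x) = x[0] over simplex
    x = frank_wolfe_step(x, [1.0, 0.0, 0.0], t)   # gradient of f is constant
print(round(x[0], 3))  # 0.0: all mass moved off the costly coordinate
```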

Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence

no code implementations15 Oct 2020 Alina Ene, Huy L. Nguyen

We show that our algorithms are universal and simultaneously achieve the optimal convergence rates in the non-smooth, smooth, and stochastic settings.
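For background, the classical extragradient method for variational inequalities, which the adaptive and universal algorithms build on (the fixed step size below is an illustrative choice; the paper's algorithms adapt it):

```python
def extragradient(F, x, eta, steps):
    """Extragradient: probe at x_half = x - eta*F(x), then step with F(x_half)."""
    for _ in range(steps):
        x_half = [xi - eta * fi for xi, fi in zip(x, F(x))]
        x = [xi - eta * fi for xi, fi in zip(x, F(x_half))]
    return x

# Bilinear saddle point min_u max_v u*v gives F(u, v) = (v, -u), a rotation
# on which plain gradient descent-ascent diverges; extragradient converges.
F = lambda z: [z[1], -z[0]]
z = extragradient(F, [1.0, 1.0], eta=0.5, steps=200)
print(max(abs(c) for c in z) < 1e-3)  # True
```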

A note on differentially private clustering with large additive error

no code implementations28 Sep 2020 Huy L. Nguyen

In this note, we describe a simple approach to obtain a differentially private algorithm for k-clustering with nearly the same multiplicative factor as any non-private counterpart at the cost of a large polynomial additive error.


Fair and Useful Cohort Selection

no code implementations4 Sep 2020 Konstantina Bairaktari, Paul Langton, Huy L. Nguyen, Niklas Smedemark-Margulies, Jonathan Ullman

A challenge in fair algorithm design is that, while there are compelling notions of individual fairness, these notions typically do not satisfy desirable composition properties, and downstream applications based on fair classifiers might not preserve fairness.


Parallel Algorithm for Non-Monotone DR-Submodular Maximization

no code implementations ICML 2020 Alina Ene, Huy L. Nguyen

For any desired accuracy $\epsilon$, our algorithm achieves a $1/e - \epsilon$ approximation using $O(\log{n} \log(1/\epsilon) / \epsilon^3)$ parallel rounds of function evaluations.

Efficient Private Algorithms for Learning Large-Margin Halfspaces

no code implementations24 Feb 2019 Huy L. Nguyen, Jonathan Ullman, Lydia Zakynthinou

We present new differentially private algorithms for learning a large-margin halfspace.

A Parallel Double Greedy Algorithm for Submodular Maximization

no code implementations4 Dec 2018 Alina Ene, Huy L. Nguyen, Adrian Vladu

We study parallel algorithms for the problem of maximizing a non-negative submodular function.
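The sequential double greedy of Buchbinder et al., which this paper parallelizes: scan the ground set once, and for each element either add it to a growing set or drop it from a shrinking set based on the two marginal gains (the deterministic version sketched here gives a 1/3-approximation; the toy objective is our assumption):

```python
def double_greedy(elements, f):
    """Deterministic double greedy for unconstrained submodular maximization."""
    X, Y = set(), set(elements)
    for e in elements:
        a = f(X | {e}) - f(X)    # gain of adding e to the growing set
        b = f(Y - {e}) - f(Y)    # gain of dropping e from the shrinking set
        if a >= b:
            X.add(e)
        else:
            Y.discard(e)
    return X                      # X == Y when the scan finishes

# Toy non-monotone submodular function: depends concavely on |S|.
def f(S):
    return len(S) * (4 - len(S))

print(double_greedy(range(4), f))  # {0, 2}, which attains the optimum value 4
```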

Improved Algorithms for Collaborative PAC Learning

no code implementations NeurIPS 2018 Huy L. Nguyen, Lydia Zakynthinou

We study a recent model of collaborative PAC learning where $k$ players with $k$ different tasks collaborate to learn a single classifier that works for all tasks.

PAC learning

Decomposable Submodular Function Minimization: Discrete and Continuous

no code implementations NeurIPS 2017 Alina Ene, Huy L. Nguyen, László A. Végh

This paper investigates connections between discrete and continuous approaches for decomposable submodular function minimization.

Approximate Near Neighbors for General Symmetric Norms

no code implementations18 Nov 2016 Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, Erik Waingarten

We show that every symmetric normed space admits an efficient nearest neighbor search data structure with doubly logarithmic approximation.

A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns

no code implementations27 Jun 2016 Alina Ene, Huy L. Nguyen

A function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ is DR-submodular if it satisfies $f({\bf x} + \chi_i) -f ({\bf x}) \ge f({\bf y} + \chi_i) - f({\bf y})$ for all ${\bf x}\le {\bf y}, i\in E$.
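The DR condition from the abstract can be checked numerically on a small integer lattice: $f(\mathbf{x} + \chi_i) - f(\mathbf{x}) \ge f(\mathbf{y} + \chi_i) - f(\mathbf{y})$ for all $\mathbf{x} \le \mathbf{y}$. A brute-force sketch with a toy separable concave $f$ (our choice, hence DR-submodular):

```python
from itertools import product

def is_dr_submodular(f, n_coords, max_val):
    """Brute-force DR check on the box {0, ..., max_val}^n_coords."""
    pts = list(product(range(max_val + 1), repeat=n_coords))
    for x in pts:
        for y in pts:
            if not all(xi <= yi for xi, yi in zip(x, y)):
                continue  # need x <= y coordinate-wise
            for i in range(n_coords):
                xi = tuple(v + (j == i) for j, v in enumerate(x))
                yi = tuple(v + (j == i) for j, v in enumerate(y))
                if f(xi) - f(x) < f(yi) - f(y) - 1e-12:
                    return False  # diminishing returns violated
    return True

f = lambda x: sum(v**0.5 for v in x)   # concave in each coordinate
print(is_dr_submodular(f, 2, 3))       # True
```

Replacing the concave per-coordinate term with a convex one (e.g. `v**2`) makes the check fail, as expected.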

Heavy hitters via cluster-preserving clustering

no code implementations5 Apr 2016 Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, Mikkel Thorup

Our main innovation is an efficient reduction from the heavy hitters to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster.
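As background on the problem being reduced (this is the classical Misra-Gries summary, not the paper's cluster-preserving reduction): with $k-1$ counters, any item occurring more than $n/k$ times in a length-$n$ stream survives in the summary.

```python
def misra_gries(stream, k):
    """Keep at most k-1 counters; decrement all when a new item finds them full."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for key in list(counters):  # decrement every counter by one
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = [1] * 6 + [2] * 5 + [3, 4, 5]          # n = 14, threshold n/3 ≈ 4.67
print(sorted(misra_gries(stream, 3)))           # [1, 2]: both heavy items survive
```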


A New Framework for Distributed Submodular Maximization

no code implementations14 Jul 2015 Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems.

BIG-bench Machine Learning, Clustering

Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality

no code implementations24 Jun 2015 Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyen, David P. Woodruff

We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions.

The Power of Randomization: Distributed Submodular Maximization on Massive Datasets

no code implementations9 Feb 2015 Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward

A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems.

BIG-bench Machine Learning, Clustering

Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

no code implementations9 Feb 2015 Alina Ene, Huy L. Nguyen

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision.

BIG-bench Machine Learning

On Communication Cost of Distributed Statistical Estimation and Dimensionality

no code implementations NeurIPS 2014 Ankit Garg, Tengyu Ma, Huy L. Nguyen

We conjecture that the tradeoff between communication and squared loss demonstrated by this protocol is essentially optimal up to a logarithmic factor.
