Search Results for author: Oren Mangoubi

Found 15 papers, 4 papers with code

Private Covariance Approximation and Eigenvalue-Gap Bounds for Complex Gaussian Perturbations

no code implementations 29 Jun 2023 Oren Mangoubi, Nisheeth K. Vishnoi

We present and analyze a complex variant of the Gaussian mechanism and show that the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-$k$ approximation to $M$ is bounded by roughly $\tilde{O}(\sqrt{kd})$, whenever there is an appropriately large gap between the $k$-th and $(k+1)$-th eigenvalues of $M$.
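For intuition, a minimal numerical sketch of this shape of mechanism (illustration only: the noise scale `sigma` and the Hermitization step below are assumptions, not the paper's calibrated mechanism):

```python
import numpy as np

def rank_k(M, k):
    """Best rank-k approximation (in Frobenius norm) of a Hermitian matrix."""
    vals, vecs = np.linalg.eigh(M)
    top = np.argsort(np.abs(vals))[-k:]          # k largest-magnitude eigenvalues
    return (vecs[:, top] * vals[top]) @ vecs[:, top].conj().T

def complex_gaussian_rank_k(M, k, sigma, seed=None):
    """Add Hermitized i.i.d. complex Gaussian noise to M, then take a rank-k approximation."""
    rng = np.random.default_rng(seed)
    d = M.shape[0]
    G = rng.normal(scale=sigma, size=(d, d)) + 1j * rng.normal(scale=sigma, size=(d, d))
    return rank_k(M + (G + G.conj().T) / 2, k)

rng = np.random.default_rng(0)
d, k = 50, 5
S = rng.normal(size=(d, d)); M = (S + S.T) / 2
err = np.linalg.norm(complex_gaussian_rank_k(M, k, sigma=0.1, seed=1) - rank_k(M, k))
print(f"Frobenius error vs. best rank-{k} approximation: {err:.3f}")
```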

Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion

no code implementations 11 Nov 2022 Oren Mangoubi, Nisheeth K. Vishnoi

These equations allow us to bound the utility as the square-root of a sum-of-squares of perturbations to the eigenvectors, as opposed to a sum of perturbation bounds obtained via Davis-Kahan-type theorems.
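Concretely, the gain comes from aggregating the eigenvector perturbations $\Delta u_1, \ldots, \Delta u_k$ in $\ell_2$ rather than $\ell_1$ (notation here is illustrative); by Cauchy-Schwarz,

$$\Big(\sum_{i=1}^{k} \|\Delta u_i\|^2\Big)^{1/2} \;\le\; \sum_{i=1}^{k} \|\Delta u_i\| \;\le\; \sqrt{k}\,\Big(\sum_{i=1}^{k} \|\Delta u_i\|^2\Big)^{1/2},$$

so the square-root-of-sum-of-squares bound can improve on a sum of per-eigenvector Davis-Kahan-type bounds by up to a factor of $\sqrt{k}$.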

Private Matrix Approximation and Geometry of Unitary Orbits

no code implementations 6 Jul 2022 Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Guha Thakurta, Nisheeth K. Vishnoi

Consider the following optimization problem: Given $n \times n$ matrices $A$ and $\Lambda$, maximize $\langle A, U\Lambda U^*\rangle$ where $U$ varies over the unitary group $\mathrm{U}(n)$.
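For Hermitian $A$ and $\Lambda$ under the Hilbert-Schmidt inner product $\langle A, B \rangle = \mathrm{Tr}(A^* B)$, this maximum has a classical closed form: it equals $\sum_i \lambda_i(A)\lambda_i(\Lambda)$ with both spectra sorted in the same order. A quick numerical check of that fact (illustration only, not the paper's private algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
S = rng.normal(size=(n, n)); A = (S + S.T) / 2
Lam = np.diag(np.sort(rng.normal(size=n)))   # ascending diagonal

# Closed form: pair the spectra of A and Lambda in the same (ascending) order.
closed_form = np.sort(np.linalg.eigvalsh(A)) @ np.diag(Lam)

# A maximizing U: eigenvectors of A, which eigh returns sorted ascending by
# eigenvalue, matching the ascending diagonal of Lambda.
_, U = np.linalg.eigh(A)
attained = np.trace(A @ U @ Lam @ U.conj().T).real
print(closed_form, attained)                 # the two values agree
```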

Sampling from Log-Concave Distributions over Polytopes via a Soft-Threshold Dikin Walk

no code implementations 19 Jun 2022 Oren Mangoubi, Nisheeth K. Vishnoi

Given a Lipschitz or smooth convex function $f:K \to \mathbb{R}$ for a bounded polytope $K \subseteq \mathbb{R}^d$ defined by $m$ inequalities, we consider the problem of sampling from the log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to $K$.
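As a point of reference, a plain Dikin walk with a Metropolis filter for $\pi \propto e^{-f}$ looks as follows (a minimal sketch using the standard log-barrier, without the paper's soft-threshold regularization; the step parameter `r` is an arbitrary choice):

```python
import numpy as np

def dikin_walk(f, A, b, theta0, steps, r=0.3, seed=0):
    """Metropolis-filtered Dikin walk targeting pi ∝ exp(-f) on {x : A x <= b}."""
    rng = np.random.default_rng(seed)
    d = len(theta0)

    def hessian(x):
        s = b - A @ x                       # constraint slacks (positive inside)
        return (A / s[:, None] ** 2).T @ A  # sum_i a_i a_i^T / s_i^2

    def log_prop(y, x, Hx):
        # log density (up to a shared constant) of proposing y from x
        diff = y - x
        return 0.5 * np.linalg.slogdet(Hx)[1] - (d / (2 * r**2)) * diff @ Hx @ diff

    theta = np.array(theta0, float)
    samples = []
    for _ in range(steps):
        H = hessian(theta)
        L = np.linalg.cholesky(np.linalg.inv(H))
        z = theta + (r / np.sqrt(d)) * L @ rng.normal(size=d)
        if np.all(A @ z < b):               # reject proposals outside the polytope
            log_acc = (f(theta) - f(z)
                       + log_prop(theta, z, hessian(z)) - log_prop(z, theta, H))
            if np.log(rng.random()) < log_acc:
                theta = z
        samples.append(theta.copy())
    return np.array(samples)

# Example: f(x) = ||x||_1 on the box [-1, 1]^2
A = np.vstack([np.eye(2), -np.eye(2)]); b = np.ones(4)
chain = dikin_walk(lambda x: np.sum(np.abs(x)), A, b, np.zeros(2), steps=2000)
```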

Sampling from Log-Concave Distributions with Infinity-Distance Guarantees

no code implementations 7 Nov 2021 Oren Mangoubi, Nisheeth K. Vishnoi

For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close in infinity-distance $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}|$ to $\pi$ arises in differentially private optimization.
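The infinity-distance is a much stronger requirement than total variation or KL; a toy grid evaluation for intuition (the two densities here are arbitrary examples, not from the paper):

```python
import numpy as np

# sup_theta |log(nu/pi)| on a grid, for two toy densities on K = [-1, 1]
theta = np.linspace(-1.0, 1.0, 1001)
dx = theta[1] - theta[0]
pi = np.exp(-theta**2);                    pi /= pi.sum() * dx
nu = np.exp(-theta**2 - 0.05 * theta**4);  nu /= nu.sum() * dx
print("infinity-distance on the grid:", np.max(np.abs(np.log(nu / pi))))
```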

Sync-Switch: Hybrid Parameter Synchronization for Distributed Deep Learning

1 code implementation 16 Apr 2021 Shijian Li, Oren Mangoubi, Lijie Xu, Tian Guo

Further, we observe that Sync-Switch achieves 3.8% higher converged accuracy with just 1.23X the training time compared to training with ASP.

A Provably Convergent and Practical Algorithm for Min-Max Optimization with Applications to GANs

no code implementations 28 Sep 2020 Oren Mangoubi, Sushant Sachdeva, Nisheeth K. Vishnoi

We present a first-order algorithm for nonconvex-nonconcave min-max optimization problems such as those that arise in training GANs.

A Convergent and Dimension-Independent Min-Max Optimization Algorithm

2 code implementations 22 Jun 2020 Vijay Keswani, Oren Mangoubi, Sushant Sachdeva, Nisheeth K. Vishnoi

The equilibrium point found by our algorithm depends on the proposal distribution, and when applying our algorithm to train GANs we choose the proposal distribution to be a distribution of stochastic gradients.

Greedy Adversarial Equilibrium: An Efficient Alternative to Nonconvex-Nonconcave Min-Max Optimization

no code implementations 22 Jun 2020 Oren Mangoubi, Nisheeth K. Vishnoi

We propose an optimization model, the $\varepsilon$-greedy adversarial equilibrium, and show that it can serve as a computationally tractable alternative to the min-max optimization model.

Faster polytope rounding, sampling, and volume computation via a sublinear "Ball Walk"

no code implementations 5 May 2019 Oren Mangoubi, Nisheeth K. Vishnoi

We achieve this improvement by a novel method of computing polytope membership, where one avoids checking inequalities estimated to have a very low probability of being violated.
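A deterministic toy version of that idea (the paper's method is probabilistic, estimating violation probabilities, rather than this worst-case filter): if the current point $x$ is feasible, then by Cauchy-Schwarz only constraints whose slack at $x$ is at most $\|a_i\| \cdot \|y - x\|$ can be violated at the proposed point $y$.

```python
import numpy as np

def lazy_membership(A, b, x, y):
    """Check whether y lies in {z : A z <= b}, given that x already does.
    Constraints with slack larger than ||a_i|| * ||y - x|| at x cannot be
    violated at y (Cauchy-Schwarz), so they are skipped entirely."""
    step = np.linalg.norm(y - x)
    slack = b - A @ x
    risky = slack <= np.linalg.norm(A, axis=1) * step
    return bool(np.all(A[risky] @ y <= b[risky]))
```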

Nonconvex sampling with the Metropolis-adjusted Langevin algorithm

no code implementations 22 Feb 2019 Oren Mangoubi, Nisheeth K. Vishnoi

Langevin Markov chain algorithms are widely deployed methods for sampling from distributions in challenging high-dimensional and non-convex statistics and machine learning applications.
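For reference, the Metropolis-adjusted Langevin algorithm (MALA) in its textbook form (the step size `eta` and the double-well example are illustrative choices):

```python
import numpy as np

def mala(f, grad_f, x0, steps, eta=1e-2, seed=0):
    """Metropolis-adjusted Langevin algorithm targeting pi ∝ exp(-f)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    out = []

    def log_q(a, b):  # log density (up to a constant) of proposing a from b
        return -np.sum((a - b + eta * grad_f(b))**2) / (4 * eta)

    for _ in range(steps):
        y = x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.normal(size=x.shape)
        log_acc = f(x) - f(y) + log_q(x, y) - log_q(y, x)
        if np.log(rng.random()) < log_acc:
            x = y
        out.append(x.copy())
    return np.array(out)

# Example: a non-convex double-well potential f(x) = (x^2 - 1)^2
chain = mala(lambda x: float(np.sum((x**2 - 1)**2)),
             lambda x: 4 * x * (x**2 - 1),
             np.array([0.0]), steps=5000)
```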

Online Sampling from Log-Concave Distributions

1 code implementation NeurIPS 2019 Holden Lee, Oren Mangoubi, Nisheeth K. Vishnoi

Given a sequence of convex functions $f_0, f_1, \ldots, f_T$, we study the problem of sampling from the Gibbs distribution $\pi_t \propto e^{-\sum_{k=0}^tf_k}$ for each epoch $t$ in an online manner.
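A sketch of the online setting (not the paper's algorithm: the Langevin inner loop, batch size, and step size below are all assumptions): at each epoch $t$, the sample is refreshed with a few stochastic-gradient Langevin steps whose gradient estimate uses a random batch of the functions seen so far.

```python
import numpy as np

def online_langevin(grads, theta, eta=1e-3, inner_steps=20, batch=8, seed=0):
    """At epoch t, take a few unadjusted Langevin steps toward
    pi_t ∝ exp(-sum_{k<=t} f_k), estimating the summed gradient from a
    random batch of the functions observed so far."""
    rng = np.random.default_rng(seed)
    samples = []
    for t in range(len(grads)):
        for _ in range(inner_steps):
            idx = rng.integers(0, t + 1, size=min(batch, t + 1))
            g = (t + 1) * np.mean([grads[k](theta) for k in idx], axis=0)
            theta = theta - eta * g + np.sqrt(2 * eta) * rng.normal(size=theta.shape)
        samples.append(theta.copy())
    return samples

# e.g. f_k(theta) = ||theta - y_k||^2 / 2 for a stream of observations y_k
ys = np.random.default_rng(1).normal(size=(100, 3))
samples = online_langevin([lambda th, y=y: th - y for y in ys], np.zeros(3))
```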

Does Hamiltonian Monte Carlo mix faster than a random walk on multimodal densities?

1 code implementation 9 Aug 2018 Oren Mangoubi, Natesh S. Pillai, Aaron Smith

In this paper, we investigate a different scaling question: does HMC beat RWM for highly multimodal targets?

Dimensionally Tight Bounds for Second-Order Hamiltonian Monte Carlo

no code implementations NeurIPS 2018 Oren Mangoubi, Nisheeth K. Vishnoi

Hamiltonian Monte Carlo (HMC) is a widely deployed method to sample from high-dimensional distributions in statistics and machine learning.
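For reference, textbook HMC built on the second-order (leapfrog) integrator (a sketch; the step size `eps` and path length `L` are arbitrary choices):

```python
import numpy as np

def hmc(f, grad_f, x0, steps, eps=0.1, L=10, seed=0):
    """Hamiltonian Monte Carlo with a leapfrog integrator, targeting pi ∝ exp(-f)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    out = []
    for _ in range(steps):
        p = rng.normal(size=x.shape)
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * eps * grad_f(x_new)   # initial half step in momentum
        for _ in range(L - 1):
            x_new += eps * p_new             # full step in position
            p_new -= eps * grad_f(x_new)     # full step in momentum
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_f(x_new)   # final half step in momentum
        # Metropolis correction with Hamiltonian H = f(x) + ||p||^2 / 2
        dH = (f(x_new) + p_new @ p_new / 2) - (f(x) + p @ p / 2)
        if np.log(rng.random()) < -dH:
            x = x_new
        out.append(x.copy())
    return np.array(out)

# Example: a standard Gaussian target in 10 dimensions
chain = hmc(lambda x: 0.5 * x @ x, lambda x: x, np.zeros(10), steps=1000)
```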

Convex Optimization with Unbounded Nonconvex Oracles using Simulated Annealing

no code implementations 7 Nov 2017 Oren Mangoubi, Nisheeth K. Vishnoi

In this paper we study the more general case when the noise has magnitude $\alpha F(x) + \beta$ for some $\alpha, \beta > 0$, and present a polynomial time algorithm that finds an approximate minimizer of $F$ for this noise model.
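A toy annealing loop under this noise model (illustrative only: the logarithmic cooling schedule, Gaussian proposal, and uniform noise below are assumptions, not the paper's construction):

```python
import numpy as np

def noisy_oracle(F, x, alpha, beta, rng):
    """Return F(x) corrupted by noise of magnitude at most alpha*F(x) + beta."""
    return F(x) + rng.uniform(-1, 1) * (alpha * F(x) + beta)

def anneal(F, x0, steps=5000, alpha=0.1, beta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    for t in range(1, steps + 1):
        T = 1.0 / np.log(t + 1)                  # slowly cooling temperature
        y = x + 0.1 * rng.normal(size=x.shape)   # local Gaussian proposal
        dE = (noisy_oracle(F, y, alpha, beta, rng)
              - noisy_oracle(F, x, alpha, beta, rng))
        if dE < 0 or rng.random() < np.exp(-dE / T):
            x = y
    return x

x_min = anneal(lambda x: float(x @ x), np.ones(5))  # approx. minimizer of ||x||^2
```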
