Search Results for author: Ankur Moitra

Found 56 papers, 7 papers with code

Distilling Model Failures as Directions in Latent Space

1 code implementation 29 Jun 2022 Saachi Jain, Hannah Lawrence, Ankur Moitra, Aleksander Madry

Moreover, by combining our framework with off-the-shelf diffusion models, we can generate images that are especially challenging for the analyzed model, and thus can be used to perform synthetic data augmentation that helps remedy the model's failure modes.

Data Augmentation

Being Robust (in High Dimensions) Can Be Practical

2 code implementations ICML 2017 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors.


Robust Estimators in High Dimensions without the Computational Intractability

2 code implementations 21 Apr 2016 Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart

We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples.

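The setting above (an adversary corrupts an $\varepsilon$-fraction of samples in high dimensions) can be illustrated with a minimal spectral-filtering sketch, much simpler than the paper's algorithms: while the empirical covariance has an unusually large top eigenvalue, drop the sample that deviates most along the top principal direction. All constants and thresholds below are illustrative choices, not taken from the paper.

```python
import numpy as np

def filtered_mean(X, eps, max_removals):
    """Sketch of spectral filtering for robust mean estimation:
    while the empirical covariance has a large top eigenvalue,
    remove the point deviating most along the top direction."""
    X = X.copy()
    for _ in range(max_removals):
        mu = X.mean(axis=0)
        w, V = np.linalg.eigh(np.cov(X, rowvar=False))
        if w[-1] <= 1 + 10 * eps:   # covariance close to identity: stop
            break
        v = V[:, -1]                 # top principal direction
        scores = ((X - mu) @ v) ** 2
        X = np.delete(X, np.argmax(scores), axis=0)
    return X.mean(axis=0)

rng = np.random.default_rng(0)
d, n, eps = 20, 2000, 0.1
inliers = rng.normal(size=(n - int(eps * n), d))   # true mean is 0
outliers = np.full((int(eps * n), d), 3.0)         # adversarial cluster
X = np.vstack([inliers, outliers])

naive = np.linalg.norm(X.mean(axis=0))             # pulled toward the outliers
robust = np.linalg.norm(filtered_mean(X, eps, max_removals=300))
```

On this toy instance the naive mean is dragged roughly $3\varepsilon\sqrt{d}$ away from zero, while the filtered mean stays close to the truth.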

A Practical Algorithm for Topic Modeling with Provable Guarantees

2 code implementations 19 Dec 2012 Sanjeev Arora, Rong Ge, Yoni Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, Michael Zhu

Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora.

Dimensionality Reduction Topic Models
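This paper's guarantees rest on separability: each topic has an "anchor" word that occurs only in that topic, so anchor rows are the vertices of the convex hull of the row-normalized word co-occurrence matrix. A toy sketch of the anchor-finding step (greedy farthest-point selection with deflation; the paper's actual procedure and scaling differ) on a synthetic separable instance:

```python
import numpy as np

def find_anchors(Q, k):
    """Greedy farthest-point selection: repeatedly take the row
    farthest from the span of those chosen so far, then deflate."""
    Qn = Q / Q.sum(axis=1, keepdims=True)   # row-normalize co-occurrences
    R = Qn - Qn.mean(axis=0)                # center
    anchors = []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        anchors.append(i)
        b = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ b, b)          # project out the new direction
    return anchors

rng = np.random.default_rng(1)
k, V = 3, 30
A = np.zeros((V, k))
A[0, 0] = A[1, 1] = A[2, 2] = 1.0                    # anchor words 0, 1, 2
A[3:] = rng.dirichlet(2.0 * np.ones(k), size=V - 3)  # mixed words
A /= A.sum(axis=0, keepdims=True)                     # column-stochastic topics
T = np.array([[0.5, 0.2, 0.1], [0.2, 0.4, 0.1], [0.1, 0.1, 0.3]])
Q = A @ T @ A.T                                       # word-word co-occurrence
anchors = find_anchors(Q, k)                          # recovers {0, 1, 2}
```

Once anchors are known, the remaining topic-recovery step reduces to convex regression of each word's co-occurrence row onto the anchor rows.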

Learning Topic Models - Going beyond SVD

2 code implementations 9 Apr 2012 Sanjeev Arora, Rong Ge, Ankur Moitra

Topic Modeling is an approach used for automatic comprehension and classification of data in a variety of settings, and perhaps the canonical application is in uncovering thematic structure in a corpus of documents.

Topic Models

Learning Structured Distributions From Untrusted Batches: Faster and Simpler

1 code implementation NeurIPS 2020 Sitan Chen, Jerry Li, Ankur Moitra

We revisit the problem of learning from untrusted batches introduced by Qiao and Valiant [QV17].

Learning Restricted Boltzmann Machines via Influence Maximization

no code implementations 25 May 2018 Guy Bresler, Frederic Koehler, Ankur Moitra, Elchanan Mossel

This hardness result is based on a sharp and surprising characterization of the representational power of bounded degree RBMs: the distribution on their observed variables can simulate any bounded order MRF.

Collaborative Filtering Dimensionality Reduction

Beyond the Low-Degree Algorithm: Mixtures of Subcubes and Their Applications

no code implementations 17 Mar 2018 Sitan Chen, Ankur Moitra

In contrast, as we will show, mixtures of $k$ subcubes are uniquely determined by their degree $2 \log k$ moments and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on $1/\epsilon$ of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in being able to accommodate stochastic transitions.

Learning Theory

Robustly Learning a Gaussian: Getting Optimal Error, Efficiently

no code implementations 12 Apr 2017 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart

We give robust estimators that achieve estimation error $O(\varepsilon)$ in the total variation distance, which is optimal up to a universal constant that is independent of the dimension.

Information Theoretic Properties of Markov Random Fields, and their Algorithmic Applications

no code implementations NeurIPS 2017 Linus Hamilton, Frederic Koehler, Ankur Moitra

As an application, we obtain algorithms for learning Markov random fields on bounded degree graphs on $n$ nodes with $r$-order interactions in $n^r$ time and $\log n$ sample complexity.

Approximate Counting, the Lovasz Local Lemma and Inference in Graphical Models

no code implementations 14 Oct 2016 Ankur Moitra

In this paper we introduce a new approach for approximately counting in bounded degree systems with higher-order constraints.


Optimality and Sub-optimality of PCA for Spiked Random Matrices and Synchronization

no code implementations 19 Sep 2016 Amelia Perry, Alexander S. Wein, Afonso S. Bandeira, Ankur Moitra

Our results include: I) For the Gaussian Wigner ensemble, we show that PCA achieves the optimal detection threshold for a variety of benign priors for the spike.

Message-passing algorithms for synchronization problems over compact groups

no code implementations 14 Oct 2016 Amelia Perry, Alexander S. Wein, Afonso S. Bandeira, Ankur Moitra

Various alignment problems arising in cryo-electron microscopy, community detection, time synchronization, computer vision, and other fields fall into a common framework of synchronization problems over compact groups such as Z/L, U(1), or SO(3).

Community Detection

Provable Algorithms for Inference in Topic Models

no code implementations 27 May 2016 Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra

But designing provable algorithms for inference has proven to be more challenging.

Topic Models

How Robust are Reconstruction Thresholds for Community Detection?

no code implementations 4 Nov 2015 Ankur Moitra, William Perry, Alexander S. Wein

The stochastic block model is one of the oldest and most ubiquitous models for studying clustering and community detection.

Clustering Community Detection +1

Noisy Tensor Completion via the Sum-of-Squares Hierarchy

no code implementations 26 Jan 2015 Boaz Barak, Ankur Moitra

This is also the first algorithm for tensor completion that works in the overcomplete case when $r > n$, and in fact it works all the way up to $r = n^{3/2-\epsilon}$.

Matrix Completion

Simple, Efficient, and Neural Algorithms for Sparse Coding

no code implementations 2 Mar 2015 Sanjeev Arora, Rong Ge, Tengyu Ma, Ankur Moitra

Its standard formulation is as a non-convex optimization problem which is solved in practice by heuristics based on alternating minimization.

New Algorithms for Learning Incoherent and Overcomplete Dictionaries

no code implementations 28 Aug 2013 Sanjeev Arora, Rong Ge, Ankur Moitra

In sparse recovery we are given a matrix $A$ (the dictionary) and a vector of the form $A X$ where $X$ is sparse, and the goal is to recover $X$.

Dictionary Learning Edge Detection +1
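For the sparse recovery subproblem stated above (given a dictionary $A$ and $y = AX$ with $X$ sparse, recover $X$), a classical baseline is orthogonal matching pursuit. This is not the paper's dictionary-learning algorithm, just a self-contained sketch of the recovery step on a random incoherent dictionary:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, s = 100, 200, 4
A = rng.normal(size=(n, m)) / np.sqrt(n)   # incoherent random dictionary
x_true = np.zeros(m)
sel = rng.choice(m, size=s, replace=False)
x_true[sel] = rng.uniform(1.0, 2.0, size=s) * rng.choice([-1.0, 1.0], size=s)
x_hat = omp(A, A @ x_true, s)
err = np.linalg.norm(x_hat - x_true)
```

With a random Gaussian dictionary and sparsity well below the number of measurements, OMP recovers the true support and then the least-squares refit is exact up to floating point.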

Smoothed Analysis of Tensor Decompositions

no code implementations 14 Nov 2013 Aditya Bhaskara, Moses Charikar, Ankur Moitra, Aravindan Vijayaraghavan

We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension).

Tensor Decomposition

Algorithms and Hardness for Robust Subspace Recovery

no code implementations 5 Nov 2012 Moritz Hardt, Ankur Moitra

We give an algorithm that finds $T$ when it contains more than a $\frac{d}{n}$ fraction of the points.

A Polynomial Time Algorithm for Lossy Population Recovery

no code implementations 6 Feb 2013 Ankur Moitra, Michael Saks

This improves on the algorithm of Wigderson and Yehudayoff, which runs in quasi-polynomial time for any $\mu > 0$, and the polynomial time algorithm of Dvir et al., which was shown to work for $\mu \gtrapprox 0.30$ by Batman et al.

Optimality and Sub-optimality of PCA I: Spiked Random Matrix Models

no code implementations 2 Jul 2018 Amelia Perry, Alexander S. Wein, Afonso S. Bandeira, Ankur Moitra

Our results leverage Le Cam's notion of contiguity, and include: i) For the Gaussian Wigner ensemble, we show that PCA achieves the optimal detection threshold for certain natural priors for the spike.

Efficiently Learning Mixtures of Mallows Models

no code implementations 17 Aug 2018 Allen Liu, Ankur Moitra

Mixtures of Mallows models are a popular generative model for ranking data coming from a heterogeneous population.

Recommendation Systems

Spectral Methods from Tensor Networks

no code implementations 2 Nov 2018 Ankur Moitra, Alexander S. Wein

Many existing algorithms for tensor problems (such as tensor decomposition and tensor PCA), although they are not presented this way, can be viewed as spectral methods on matrices built from simple tensor networks.

Tensor Decomposition Tensor Networks

Learning Determinantal Point Processes with Moments and Cycles

no code implementations ICML 2017 John Urschel, Victor-Emmanuel Brunel, Ankur Moitra, Philippe Rigollet

Determinantal Point Processes (DPPs) are a family of probabilistic models that have a repulsive behavior, and lend themselves naturally to many tasks in machine learning where returning a diverse set of objects is important.

Point Processes

Learning Some Popular Gaussian Graphical Models without Condition Number Bounds

no code implementations NeurIPS 2020 Jonathan Kelner, Frederic Koehler, Raghu Meka, Ankur Moitra

While there are a variety of algorithms (e.g., Graphical Lasso, CLIME) that provably recover the graph structure with a logarithmic number of samples, they assume various conditions that require the precision matrix to be in some sense well-conditioned.

Efficiently Learning Structured Distributions from Untrusted Batches

no code implementations 5 Nov 2019 Sitan Chen, Jerry Li, Ankur Moitra

When $k = 1$ this is the standard robust univariate density estimation setting and it is well-understood that $\Omega (\epsilon)$ error is unavoidable.

Density Estimation

Polynomial time guarantees for the Burer-Monteiro method

no code implementations 3 Dec 2019 Diego Cifuentes, Ankur Moitra

The basic idea is to solve a nonconvex program in $Y$, where $Y$ is an $n \times p$ matrix such that $X = Y Y^T$.
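The factorization $X = YY^T$ described above can be sketched on one small instance: the max-cut SDP $\max \langle C, X\rangle$ subject to $X \succeq 0$, $\mathrm{diag}(X) = 1$, where the diagonal constraint becomes "rows of $Y$ have unit norm". The projected gradient ascent below is an illustrative toy, not the solver or the general constraint sets analyzed in the paper:

```python
import numpy as np

def burer_monteiro(C, p, steps=500, lr=0.1, seed=0):
    """Burer-Monteiro sketch for max <C, X>, X PSD, diag(X) = 1:
    substitute X = Y Y^T and do gradient ascent over Y with
    unit-norm rows, restored by projection after each step."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    Y = rng.normal(size=(n, p))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(steps):
        Y += lr * 2 * C @ Y                            # gradient of <C, Y Y^T>
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # restore unit rows
    return Y @ Y.T

# Triangle graph: the max-cut SDP optimum places the three unit
# vectors 120 degrees apart, for relaxation value 9/4.
W = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = burer_monteiro(-W / 4, p=3)
sdp_cut = 0.25 * np.sum(W * (1 - X))
```

Here $p = 3$ satisfies $p(p+1)/2 > n$ for the $n = 3$ diagonal constraints, the regime in which the Burer-Monteiro landscape is known to be benign, so gradient ascent from a random start finds the SDP optimum.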

Fast Convergence for Langevin Diffusion with Manifold Structure

no code implementations 13 Feb 2020 Ankur Moitra, Andrej Risteski

In this paper, we focus on an aspect of nonconvexity relevant for modern machine learning applications: existence of invariances (symmetries) in the function f, as a result of which the distribution p will have manifolds of points with equal probability.

Bayesian Inference

Tensor Completion Made Practical

no code implementations NeurIPS 2020 Allen Liu, Ankur Moitra

We show strong provable guarantees, including showing that our algorithm converges linearly to the true tensors even when the factors are highly correlated and can be implemented in nearly linear time.

Matrix Completion

Online and Distribution-Free Robustness: Regression and Contextual Bandits with Huber Contamination

no code implementations 8 Oct 2020 Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

Our approach is based on a novel alternating minimization scheme that interleaves ordinary least-squares with a simple convex program that finds the optimal reweighting of the distribution under a spectral constraint.

Adversarial Robustness Multi-Armed Bandits +1
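The alternating-minimization idea can be illustrated with a deliberately simplified variant: alternate between OLS on the currently trusted samples and re-trusting the $(1-\epsilon)$-fraction with the smallest residuals. Note this hard trimming stands in for the paper's convex reweighting program under a spectral constraint; it is a sketch, not the paper's method.

```python
import numpy as np

def trimmed_ols(X, y, eps, iters=10):
    """Alternating minimization sketch: OLS on trusted points,
    then re-select the (1 - eps)-fraction with smallest residuals."""
    n = len(y)
    keep = np.arange(n)
    m = int((1 - eps) * n)
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        keep = np.argsort(np.abs(y - X @ beta))[:m]  # trust small residuals
    return beta

rng = np.random.default_rng(0)
n, d, eps = 500, 5, 0.1
X = rng.normal(size=(n, d))
beta_true = np.ones(d)
y = X @ beta_true + 0.1 * rng.normal(size=n)
bad = rng.choice(n, size=int(eps * n), replace=False)
y[bad] += 20.0                      # contaminated responses

beta_naive, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_robust = trimmed_ols(X, y, eps)
err_naive = np.linalg.norm(beta_naive - beta_true)
err_robust = np.linalg.norm(beta_robust - beta_true)
```

On this toy instance the contaminated responses bias plain OLS, while the trimmed refit discards them after one round and nearly recovers the true coefficients.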

Settling the Robust Learnability of Mixtures of Gaussians

no code implementations 6 Nov 2020 Allen Liu, Ankur Moitra

This work represents a natural coalescence of two important lines of work: learning mixtures of Gaussians and algorithmic robust statistics.

No-go Theorem for Acceleration in the Hyperbolic Plane

no code implementations 14 Jan 2021 Linus Hamilton, Ankur Moitra

In recent years there has been significant effort to adapt the key tools and ideas in convex optimization to the Riemannian setting.

Learning to Sample from Censored Markov Random Fields

no code implementations 15 Jan 2021 Ankur Moitra, Elchanan Mossel, Colin Sandon

These are Markov Random Fields where some of the nodes are censored (not observed).

Learning GMMs with Nearly Optimal Robustness Guarantees

no code implementations 19 Apr 2021 Allen Liu, Ankur Moitra

In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with $k$ components from $\epsilon$-corrupted samples up to accuracy $\widetilde{O}(\epsilon)$ in total variation distance for any constant $k$ and with mild assumptions on the mixture.

Robust Model Selection and Nearly-Proper Learning for GMMs

no code implementations 5 Jun 2021 Jerry Li, Allen Liu, Ankur Moitra

Given $\textsf{poly}(k/\epsilon)$ samples from a distribution that is $\epsilon$-close in TV distance to a GMM with $k$ components, we can construct a GMM with $\widetilde{O}(k)$ components that approximates the distribution to within $\widetilde{O}(\epsilon)$ in $\textsf{poly}(k/\epsilon)$ time.

Learning Theory Model Selection

Algorithms from Invariants: Smoothed Analysis of Orbit Recovery over $SO(3)$

no code implementations 4 Jun 2021 Allen Liu, Ankur Moitra

Our main result is a quasi-polynomial time algorithm for orbit recovery over $SO(3)$ in this model.

Electron Tomography Tensor Decomposition

Spoofing Generalization: When Can't You Trust Proprietary Models?

no code implementations 15 Jun 2021 Ankur Moitra, Elchanan Mossel, Colin Sandon

In this work, we study the computational complexity of determining whether a machine learning model that perfectly fits the training data will generalize to unseen data.

Dictionary Learning Under Generative Coefficient Priors with Applications to Compression

no code implementations 29 Sep 2021 Hannah Lawrence, Ankur Moitra

There is a rich literature on recovering data from limited measurements under the assumption of sparsity in some basis, whether known (compressed sensing) or unknown (dictionary learning).

Denoising Dictionary Learning +2

Can Q-Learning be Improved with Advice?

no code implementations 25 Oct 2021 Noah Golowich, Ankur Moitra

In this paper we address the question of whether worst-case lower bounds for regret in online learning of Markov decision processes (MDPs) can be circumvented when information about the MDP, in the form of predictions about its optimal $Q$-value function, is given to the algorithm.

Q-Learning reinforcement-learning +2

Kalman Filtering with Adversarial Corruptions

no code implementations 11 Nov 2021 Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

In a pioneering work, Schick and Mitter gave provable guarantees when the measurement noise is a known infinitesimal perturbation of a Gaussian and raised the important question of whether one can get similar guarantees for large and unknown perturbations.

A No-go Theorem for Robust Acceleration in the Hyperbolic Plane

no code implementations NeurIPS 2021 Linus Hamilton, Ankur Moitra

In recent years there has been significant effort to adapt the key tools and ideas in convex optimization to the Riemannian setting.

Fast Convergence for Langevin with Matrix Manifold Structure

no code implementations ICLR Workshop DeepDiffEq 2019 Ankur Moitra, Andrej Risteski

In this paper, we study one aspect of nonconvexity relevant for modern machine learning applications: existence of invariances (symmetries) in the function f, as a result of which the distribution p will have manifolds of points with equal probability.

Bayesian Inference

Robust Voting Rules from Algorithmic Robust Statistics

no code implementations 13 Dec 2021 Allen Liu, Ankur Moitra

Maximum likelihood estimation furnishes powerful insights into voting theory, and the design of voting rules.

Planning in Observable POMDPs in Quasipolynomial Time

no code implementations 12 Jan 2022 Noah Golowich, Ankur Moitra, Dhruv Rohatgi

Our main result is a quasipolynomial-time algorithm for planning in (one-step) observable POMDPs.

Provably Auditing Ordinary Least Squares in Low Dimensions

no code implementations 28 May 2022 Ankur Moitra, Dhruv Rohatgi

Measuring the stability of conclusions derived from Ordinary Least Squares linear regression is critically important, but most metrics either only measure local stability (i.e., against infinitesimal changes in the data), or are only interpretable under statistical assumptions.

regression

Learning in Observable POMDPs, without Computationally Intractable Oracles

no code implementations 7 Jun 2022 Noah Golowich, Ankur Moitra, Dhruv Rohatgi

Much of reinforcement learning theory is built on top of oracles that are computationally hard to implement.

Learning Theory Reinforcement Learning (RL)

Minimax Rates for Robust Community Detection

no code implementations 25 Jul 2022 Allen Liu, Ankur Moitra

In this work, we study the problem of community detection in the stochastic block model with adversarial node corruptions.

Community Detection Stochastic Block Model

A New Approach to Learning Linear Dynamical Systems

no code implementations 23 Jan 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

Linear dynamical systems are the foundational statistical model upon which control theory is built.

Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems

no code implementations 13 Jul 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

In this work we give a new approach to learning mixtures of linear dynamical systems that is based on tensor decompositions.

Tensor Decomposition Time Series

Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles

no code implementations 18 Sep 2023 Noah Golowich, Ankur Moitra, Dhruv Rohatgi

The key assumption underlying linear Markov Decision Processes (MDPs) is that the learner has access to a known feature map $\phi(x, a)$ that maps state-action pairs to $d$-dimensional vectors, and that the rewards and transitions are linear functions in this representation.

feature selection Learning Theory +1

Learning quantum Hamiltonians at any temperature in polynomial time

no code implementations 3 Oct 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang

Anshu, Arunachalam, Kuwahara, and Soleimanifar (arXiv:2004.07266) gave an algorithm to learn a Hamiltonian on $n$ qubits to precision $\epsilon$ with only polynomially many copies of the Gibbs state, but which takes exponential time.

Precise Error Rates for Computationally Efficient Testing

no code implementations 1 Nov 2023 Ankur Moitra, Alexander S. Wein

Our result shows that the spectrum is a sufficient statistic for computationally bounded tests (but not for all tests).
