Search Results for author: Aravindan Vijayaraghavan

Found 23 papers, 1 paper with code

Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations

no code implementations NeurIPS 2021 Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan

We present polynomial time and sample efficient algorithms for learning an unknown depth-2 feedforward neural network with general ReLU activations, under mild non-degeneracy assumptions.

Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

no code implementations 26 Feb 2021 Hunter Lang, Aravind Reddy, David Sontag, Aravindan Vijayaraghavan

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation.

Graph cuts always find a global optimum for Potts models (with a catch)

no code implementations 7 Nov 2020 Hunter Lang, David Sontag, Aravindan Vijayaraghavan

On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance.

Learning a mixture of two subspaces over finite fields

no code implementations 6 Oct 2020 Aidao Chen, Anindya De, Aravindan Vijayaraghavan

We study the problem of learning a mixture of two subspaces over $\mathbb{F}_2^n$.

Data Structures and Algorithms
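As a toy illustration of the generative model in this abstract, the sketch below draws samples from a uniform mixture of two subspaces of $\mathbb{F}_2^n$, each spanned by a hypothetical random basis; the dimensions are arbitrary choices, and the paper's learning algorithm is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Two subspaces of F_2^n, each spanned by a random 0/1 basis matrix
# (hypothetical instance; the subspace dimensions are arbitrary).
B1 = rng.integers(0, 2, size=(3, n))
B2 = rng.integers(0, 2, size=(4, n))

def sample_mixture(m):
    """Draw m points: pick one of the two subspaces uniformly, then a
    uniformly random F_2-linear combination of its basis rows."""
    points = []
    for _ in range(m):
        B = B1 if rng.random() < 0.5 else B2
        coeffs = rng.integers(0, 2, size=B.shape[0])
        points.append(coeffs @ B % 2)   # all arithmetic is mod 2
    return np.array(points)

X = sample_mixture(1000)
print(X.shape)
```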

Efficient Tensor Decomposition

no code implementations 30 Jul 2020 Aravindan Vijayaraghavan

This chapter studies the problem of decomposing a tensor into a sum of constituent rank one tensors.

Tensor Decomposition
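A classical route to such decompositions, for generic rank-$r$ tensors with $r$ at most the dimension, is simultaneous diagonalization of two random slice-contractions (Jennrich's algorithm). The sketch below recovers one factor matrix of a synthetic CP tensor; the sizes and seeds are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 6, 3
A, B, C = (rng.standard_normal((d, r)) for _ in range(3))

# A rank-r tensor T = sum_l a_l (x) b_l (x) c_l built from random factors.
T = np.einsum('il,jl,kl->ijk', A, B, C)

def jennrich_first_factor(T, r):
    """Recover the first CP factor of a generic rank-r tensor by
    diagonalizing the ratio of two random slice-contractions."""
    w, v = np.random.default_rng(1).standard_normal((2, T.shape[2]))
    Mw = np.einsum('ijk,k->ij', T, w)   # equals A diag(C^T w) B^T
    Mv = np.einsum('ijk,k->ij', T, v)   # equals A diag(C^T v) B^T
    # Eigenvectors of Mw Mv^+ with nonzero eigenvalues are columns of A
    # (up to scaling), since Mw Mv^+ = A diag(...) A^+.
    vals, vecs = np.linalg.eig(Mw @ np.linalg.pinv(Mv))
    top = np.argsort(-np.abs(vals))[:r]
    return np.real(vecs[:, top])

A_hat = jennrich_first_factor(T, r)
# Every true column of A should align with some recovered column.
cos = np.abs((A / np.linalg.norm(A, axis=0)).T
             @ (A_hat / np.linalg.norm(A_hat, axis=0)))
print(np.all(cos.max(axis=1) > 0.99))
```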

Adversarial robustness via robust low rank representations

no code implementations NeurIPS 2020 Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan

Adversarial robustness measures the susceptibility of a classifier to imperceptible perturbations made to the inputs at test time.

Adversarial Robustness
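A minimal sketch of why low rank representations can help with such perturbations: projecting an input onto a low-dimensional subspace before classification is non-expansive, so only the in-subspace part of a perturbation survives. The subspace and magnitudes below are hypothetical, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 50, 5
# A (hypothetical) low-dimensional representation: an orthonormal basis
# U for an r-dimensional subspace, with orthogonal projector P = U U^T.
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
P = U @ U.T

x = U @ rng.standard_normal(r)         # a clean input inside the subspace
delta = 0.5 * rng.standard_normal(d)   # an arbitrary test-time perturbation

# Projection shrinks the perturbation to its in-subspace component:
# ||P(x + delta) - x|| = ||P delta|| <= ||delta||.
shrunk = np.linalg.norm(P @ (x + delta) - x)
print(shrunk <= np.linalg.norm(delta))
```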

Estimating Principal Components under Adversarial Perturbations

no code implementations 31 May 2020 Pranjal Awasthi, Xue Chen, Aravindan Vijayaraghavan

We design a computationally efficient algorithm that given corrupted data, recovers an estimate of the top-$r$ principal subspace with error that depends on a robustness parameter $\kappa$ that we identify.
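For contrast with the robust estimator studied here, the plain (non-robust) top-$r$ principal subspace can be computed by an SVD of the data matrix. The sketch below does this on synthetic data with a planted subspace and small noise, and reports a subspace-distance error; all parameters are illustrative, and no adversarial corruptions are modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 500, 20, 3
# Synthetic data: points near a planted r-dimensional subspace U,
# plus small Gaussian noise (illustrative sizes and noise level).
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
X = rng.standard_normal((n, r)) @ U.T + 0.05 * rng.standard_normal((n, d))

# Top-r principal subspace of X via SVD of the data matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U_hat = Vt[:r].T

# Distance between subspaces: spectral norm of the projector difference
# (the sine of the largest principal angle).
err = np.linalg.norm(U @ U.T - U_hat @ U_hat.T, ord=2)
print(err < 0.2)
```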

Adversarially Robust Low Dimensional Representations

no code implementations 29 Nov 2019 Pranjal Awasthi, Vaggos Chatziafratis, Xue Chen, Aravindan Vijayaraghavan

In particular, our adversarially robust PCA primitive leads to computationally efficient and robust algorithms for both unsupervised and supervised learning problems such as clustering and learning adversarially robust classifiers.

On Robustness to Adversarial Examples and Polynomial Optimization

1 code implementation NeurIPS 2019 Pranjal Awasthi, Abhratanu Dutta, Aravindan Vijayaraghavan

In particular, we leverage this connection to (a) design computationally efficient robust algorithms with provable guarantees for a large class of hypothesis, namely linear classifiers and degree-2 polynomial threshold functions (PTFs), (b) give a precise characterization of the price of achieving robustness in a computationally efficient manner for these classes, (c) design efficient algorithms to certify robustness and generate adversarial attacks in a principled manner for 2-layer neural networks.

Smoothed Analysis in Unsupervised Learning via Decoupling

no code implementations 29 Nov 2018 Aditya Bhaskara, Aidao Chen, Aidan Perreault, Aravindan Vijayaraghavan

Smoothed analysis is a powerful paradigm in overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis.

Block Stability for MAP Inference

no code implementations 12 Oct 2018 Hunter Lang, David Sontag, Aravindan Vijayaraghavan

The simplest stability condition assumes that the MAP solution does not change at all when some of the pairwise potentials are (adversarially) perturbed.

Towards Learning Sparsely Used Dictionaries with Arbitrary Supports

no code implementations 23 Apr 2018 Pranjal Awasthi, Aravindan Vijayaraghavan

To address this question while circumventing the issue of non-identifiability, we study a natural semirandom model for dictionary learning where there are a large number of samples $y=Ax$ with arbitrary $k$-sparse supports for $x$, along with a few samples where the sparse supports are chosen uniformly at random.

Dictionary Learning

Clustering Stable Instances of Euclidean k-means

no code implementations 4 Dec 2017 Abhratanu Dutta, Aravindan Vijayaraghavan, Alex Wang

We design efficient algorithms that provably recover the optimal clustering for instances that are additive perturbation stable.

Clustering Stable Instances of Euclidean k-means

no code implementations NeurIPS 2017 Aravindan Vijayaraghavan, Abhratanu Dutta, Alex Wang

To address this disconnect, we study the following question: what properties of real-world instances will enable us to design efficient algorithms and prove guarantees for finding the optimal clustering?

Clustering Semi-Random Mixtures of Gaussians

no code implementations ICML 2018 Pranjal Awasthi, Aravindan Vijayaraghavan

Gaussian mixture models (GMMs) are the most widely used statistical model for the $k$-means clustering problem and form a popular framework for clustering in machine learning and data analysis.

Optimality of Approximate Inference Algorithms on Stable Instances

no code implementations 6 Nov 2017 Hunter Lang, David Sontag, Aravindan Vijayaraghavan

Approximate algorithms for structured prediction problems, such as LP relaxations and the popular alpha-expansion algorithm (Boykov et al. 2001), typically far exceed their theoretical performance guarantees on real-world instances.

Structured Prediction

On Learning Mixtures of Well-Separated Gaussians

no code implementations 31 Oct 2017 Oded Regev, Aravindan Vijayaraghavan

In the most basic form of this problem, we are given samples from a uniform mixture of $k$ standard spherical Gaussians, and the goal is to estimate the means up to accuracy $\delta$ using $\mathrm{poly}(k, d, 1/\delta)$ samples.
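When the means are very well separated, they can already be estimated by simple iterative clustering; the sketch below samples a uniform mixture of $k$ standard spherical Gaussians and runs Lloyd's iterations with farthest-point initialization. This is an illustrative baseline under a generous separation (all sizes are arbitrary), not the paper's algorithm, which targets much smaller separation.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 3, 10, 3000
# Uniform mixture of k standard spherical Gaussians with well-separated
# means (the separation scale 10 is an illustrative choice).
means = 10.0 * rng.standard_normal((k, d))
z = rng.integers(0, k, size=n)
X = means[z] + rng.standard_normal((n, d))

# Farthest-point initialization: with this much separation it picks
# roughly one sample from each component.
centers = [X[0]]
for _ in range(k - 1):
    dists = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
    centers.append(X[np.argmax(dists)])
centers = np.array(centers)

# Lloyd's iterations: assign each point to its nearest center, re-average.
for _ in range(25):
    labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

# Worst estimation error over centers, matched to the closest true mean.
err = max(min(np.linalg.norm(c - m) for m in means) for c in centers)
print(err < 1.0)
```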

Learning Communities in the Presence of Errors

no code implementations 10 Nov 2015 Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan

Many algorithms exist for learning communities in the Stochastic Block Model, but they do not work well in the presence of errors.

Community Detection, Graph Partitioning +1

Learning Mixtures of Ranking Models

no code implementations NeurIPS 2014 Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan

We present the first polynomial time algorithm which provably learns the parameters of a mixture of two Mallows models.

Tensor Decomposition

Correlation Clustering with Noisy Partial Information

no code implementations 22 Jun 2014 Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan

In this paper, we propose and study a semi-random model for the Correlation Clustering problem on arbitrary graphs G. We give two approximation algorithms for Correlation Clustering instances from this model.

General Classification

Smoothed Analysis of Tensor Decompositions

no code implementations 14 Nov 2013 Aditya Bhaskara, Moses Charikar, Ankur Moitra, Aravindan Vijayaraghavan

We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension).

Tensor Decomposition

Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability

no code implementations 30 Apr 2013 Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.

Latent Variable Models, Topic Models
