Search Results for author: Aravindan Vijayaraghavan

Found 28 papers, 2 papers with code

Training Subset Selection for Weak Supervision

1 code implementation • 6 Jun 2022 • Hunter Lang, Aravindan Vijayaraghavan, David Sontag

Subset selection applies to any label model and classifier and is very simple to plug into existing weak supervision pipelines, requiring just a few lines of code.
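
A hedged sketch of that plug-in step: keep only the points on which the label model's posteriors are most confident, then train the end classifier on them. The helper name and `coverage` parameter here are hypothetical, not the paper's released API.

```python
import numpy as np

def select_confident_subset(posteriors, coverage=0.5):
    """Keep the `coverage` fraction of weakly labeled points whose
    label-model posterior is most confident (hypothetical helper)."""
    confidence = posteriors.max(axis=1)     # max class probability per point
    k = int(coverage * len(confidence))
    return np.argsort(-confidence)[:k]      # indices of the most confident points

# usage sketch: train the downstream classifier only on the selected subset
# keep = select_confident_subset(posteriors, coverage=0.5)
# classifier.fit(X[keep], posteriors[keep].argmax(axis=1))
```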

Towards Learning Sparsely Used Dictionaries with Arbitrary Supports

no code implementations • 23 Apr 2018 • Pranjal Awasthi, Aravindan Vijayaraghavan

To address this question while circumventing the issue of non-identifiability, we study a natural semirandom model for dictionary learning where there are a large number of samples $y=Ax$ with arbitrary $k$-sparse supports for $x$, along with a few samples where the sparse supports are chosen uniformly at random.
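
To make the sampling model concrete, here is a minimal generator for such semirandom instances; the dimensions and the particular "arbitrary" support choice are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 128, 5                          # signal dim, dictionary size, sparsity

A = rng.standard_normal((n, m)) / np.sqrt(n)  # the unknown dictionary

def sample(support):
    x = np.zeros(m)
    x[support] = rng.standard_normal(k)       # k-sparse coefficient vector
    return A @ x                              # observed sample y = A x

# many samples with arbitrary supports (here all identical, a hard case) ...
Y_arb = np.stack([sample(np.arange(k)) for _ in range(1000)])
# ... plus a few samples whose supports are uniformly random
Y_rnd = np.stack([sample(rng.choice(m, size=k, replace=False)) for _ in range(50)])
```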

Dictionary Learning

Optimality of Approximate Inference Algorithms on Stable Instances

no code implementations • 6 Nov 2017 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan

Approximate algorithms for structured prediction problems, such as LP relaxations and the popular alpha-expansion algorithm (Boykov et al. 2001), typically far exceed their theoretical performance guarantees on real-world instances.

Structured Prediction

Clustering Stable Instances of Euclidean k-means

no code implementations • 4 Dec 2017 • Abhratanu Dutta, Aravindan Vijayaraghavan, Alex Wang

We design efficient algorithms that provably recover the optimal clustering for instances that are additive perturbation stable.

Clustering

Clustering Semi-Random Mixtures of Gaussians

no code implementations • ICML 2018 • Pranjal Awasthi, Aravindan Vijayaraghavan

Gaussian mixture models (GMM) are the most widely used statistical model for the $k$-means clustering problem and form a popular framework for clustering in machine learning and data analysis.

Clustering

On Learning Mixtures of Well-Separated Gaussians

no code implementations • 31 Oct 2017 • Oded Regev, Aravindan Vijayaraghavan

In the most basic form of this problem, we are given samples from a uniform mixture of $k$ standard spherical Gaussians, and the goal is to estimate the means up to accuracy $\delta$ using $\mathrm{poly}(k, d, 1/\delta)$ samples.
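
A small sketch of that sampling model with illustrative parameters; the last line peeks at the hidden component labels purely as a sanity check, whereas the paper's question is recovery from the samples alone.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 4, 10, 5000

means = 10.0 * rng.standard_normal((k, d))        # well-separated component means
z = rng.integers(k, size=n)                       # uniform mixing: each component w.p. 1/k
samples = means[z] + rng.standard_normal((n, d))  # standard spherical Gaussian noise

# sanity check using the hidden labels z; the algorithm must do without them
est = np.stack([samples[z == j].mean(axis=0) for j in range(k)])
```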

Learning Communities in the Presence of Errors

no code implementations • 10 Nov 2015 • Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan

Many algorithms exist for learning communities in the Stochastic Block Model, but they do not work well in the presence of errors.
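
For reference, the Stochastic Block Model draws edges with probability $p$ inside communities and $q < p$ across them; a minimal sampler, with random edge flips as one simple stand-in for the "errors", might look like this (parameters illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 100, 0.5, 0.1                        # nodes, intra-/inter-community prob.

z = rng.integers(2, size=n)                    # hidden community labels
P = np.where(z[:, None] == z[None, :], p, q)   # SBM edge probabilities
A = np.triu(rng.random((n, n)) < P, 1)         # sample the upper triangle
A = (A | A.T).astype(int)                      # simple undirected graph

# inject errors for illustration: flip a small fraction of adjacency entries
F = np.triu(rng.random((n, n)) < 0.02, 1)
A_noisy = A ^ (F | F.T).astype(int)
```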

Community Detection • Graph Partitioning • +2

Correlation Clustering with Noisy Partial Information

no code implementations • 22 Jun 2014 • Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan

In this paper, we propose and study a semi-random model for the Correlation Clustering problem on arbitrary graphs $G$. We give two approximation algorithms for Correlation Clustering instances from this model.

Clustering • General Classification

Learning Mixtures of Ranking Models

no code implementations • NeurIPS 2014 • Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan

We present the first polynomial-time algorithm that provably learns the parameters of a mixture of two Mallows models.
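
For concreteness, one standard way to draw from such a mixture uses the repeated insertion model for each Mallows component; the mixture parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mallows(center, phi):
    """One draw from a Mallows model with central ranking `center` and
    dispersion phi in (0, 1], via the repeated insertion model."""
    ranking = []
    for item in center:
        L = len(ranking)
        w = phi ** np.arange(L, -1, -1.0)      # weight phi**(L-p) for position p
        ranking.insert(rng.choice(L + 1, p=w / w.sum()), item)
    return ranking

# mixture of two Mallows models: pick a component, then sample a ranking from it
centers = [list(range(6)), list(range(5, -1, -1))]
phis, weights = [0.3, 0.5], [0.6, 0.4]
comp = rng.choice(2, p=weights)
pi = sample_mallows(centers[comp], phis[comp])
```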

Tensor Decomposition

Smoothed Analysis of Tensor Decompositions

no code implementations • 14 Nov 2013 • Aditya Bhaskara, Moses Charikar, Ankur Moitra, Aravindan Vijayaraghavan

We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension).

Tensor Decomposition

Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability

no code implementations • 30 Apr 2013 • Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.
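
For context, Kruskal's theorem states that if $T=\sum_{i=1}^{R} a_i \otimes b_i \otimes c_i$ and the factor matrices satisfy $k_A + k_B + k_C \ge 2R + 2$, where $k_A$ is the Kruskal rank of $A$ (the largest $k$ such that every $k$ columns of $A$ are linearly independent), then the decomposition is unique up to permutation and rescaling of the rank-one terms; the robust version above replaces exact uniqueness with approximate recovery under inverse-polynomial error.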

Topic Models

Block Stability for MAP Inference

no code implementations • 12 Oct 2018 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan

The simplest stability condition assumes that the MAP solution does not change at all when some of the pairwise potentials are (adversarially) perturbed.
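
On a toy model the condition can be spot-checked by brute force; the sketch below tests a single random multiplicative rescaling, whereas the actual stability definition quantifies over all perturbations of the allowed magnitude.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
k, eps = 3, 0.1                               # labels per node, perturbation size

# a tiny pairwise model on nodes {0, 1, 2} with edges (0,1) and (1,2)
unary = rng.standard_normal((3, k))
pair = {e: rng.standard_normal((k, k)) for e in [(0, 1), (1, 2)]}

def map_assignment(pair_pots):
    """Brute-force MAP over all label vectors (fine at this toy size)."""
    def score(y):
        return (sum(unary[v, y[v]] for v in range(3)) +
                sum(pair_pots[(u, v)][y[u], y[v]] for (u, v) in pair_pots))
    return max(product(range(k), repeat=3), key=score)

base = map_assignment(pair)
perturbed = {e: P * (1 + eps * (2 * rng.random(P.shape) - 1)) for e, P in pair.items()}
stable_here = map_assignment(perturbed) == base   # one spot check, not a proof
```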

Smoothed Analysis in Unsupervised Learning via Decoupling

no code implementations • 29 Nov 2018 • Aditya Bhaskara, Aidao Chen, Aidan Perreault, Aravindan Vijayaraghavan

Smoothed analysis is a powerful paradigm in overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis.

Clustering Stable Instances of Euclidean k-means

no code implementations • NeurIPS 2017 • Aravindan Vijayaraghavan, Abhratanu Dutta, Alex Wang

To address this disconnect, we study the following question: what properties of real-world instances will enable us to design efficient algorithms and prove guarantees for finding the optimal clustering?

Clustering

On Robustness to Adversarial Examples and Polynomial Optimization

1 code implementation • NeurIPS 2019 • Pranjal Awasthi, Abhratanu Dutta, Aravindan Vijayaraghavan

In particular, we leverage this connection to (a) design computationally efficient robust algorithms with provable guarantees for a large class of hypotheses, namely linear classifiers and degree-2 polynomial threshold functions (PTFs), (b) give a precise characterization of the price of achieving robustness in a computationally efficient manner for these classes, and (c) design efficient algorithms to certify robustness and generate adversarial attacks in a principled manner for 2-layer neural networks.
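
For the simplest of those classes, linear classifiers, the $\ell_\infty$ robustness certificate is closed-form, as the sketch below shows; the polynomial-optimization machinery is what handles the PTF and 2-layer cases.

```python
import numpy as np

def robust_margin(w, b, x, y, eps):
    """Exact certificate for a linear classifier sign(w @ x + b) under an
    l_inf perturbation of radius eps: positive iff no such perturbation
    can flip the prediction on (x, y), with y in {-1, +1}."""
    return y * (w @ x + b) - eps * np.abs(w).sum()

def worst_case_attack(w, x, y, eps):
    """The optimal l_inf attack on a linear classifier, in closed form."""
    return x - y * eps * np.sign(w)
```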

Adversarially Robust Low Dimensional Representations

no code implementations • 29 Nov 2019 • Pranjal Awasthi, Vaggos Chatziafratis, Xue Chen, Aravindan Vijayaraghavan

In particular, our adversarially robust PCA primitive leads to computationally efficient and robust algorithms for both unsupervised and supervised learning problems such as clustering and learning adversarially robust classifiers.

BIG-bench Machine Learning • Clustering

Estimating Principal Components under Adversarial Perturbations

no code implementations • 31 May 2020 • Pranjal Awasthi, Xue Chen, Aravindan Vijayaraghavan

We design a computationally efficient algorithm that given corrupted data, recovers an estimate of the top-$r$ principal subspace with error that depends on a robustness parameter $\kappa$ that we identify.
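
For reference, the non-robust baseline and the usual measure of subspace recovery error look as follows; this is a generic sketch, not the paper's estimator, whose error is controlled by the robustness parameter $\kappa$.

```python
import numpy as np

def top_r_subspace(X, r):
    """Top-r principal subspace of the rows of X, via plain (non-robust) SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:r].T                            # d x r orthonormal basis

def subspace_error(U, V):
    """Operator-norm gap between the two projection matrices."""
    return np.linalg.norm(U @ U.T - V @ V.T, ord=2)
```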

BIG-bench Machine Learning

Adversarial robustness via robust low rank representations

no code implementations • NeurIPS 2020 • Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan

Adversarial robustness measures the susceptibility of a classifier to imperceptible perturbations made to the inputs at test time.

Adversarial Robustness

Efficient Tensor Decomposition

no code implementations • 30 Jul 2020 • Aravindan Vijayaraghavan

This chapter studies the problem of decomposing a tensor into a sum of constituent rank-one tensors.
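
One classical method any such treatment covers is Jennrich's simultaneous-diagonalization algorithm; a compact numpy sketch for the noiseless, generic case (assumptions as commented) is:

```python
import numpy as np

def jennrich(T, r):
    """If T = sum_i a_i (x) b_i (x) c_i with linearly independent a_i's and
    b_i's and generic c_i's, recover the a_i up to permutation and scaling."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(T.shape[2])
    y = rng.standard_normal(T.shape[2])
    Mx = np.einsum('ijk,k->ij', T, x)          # equals A diag(C^T x) B^T
    My = np.einsum('ijk,k->ij', T, y)          # equals A diag(C^T y) B^T
    vals, vecs = np.linalg.eig(Mx @ np.linalg.pinv(My))
    top = np.argsort(-np.abs(vals))[:r]        # keep the r dominant eigenvalues
    return vecs[:, top]                        # columns recover the a_i
```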

Tensor Decomposition

Graph cuts always find a global optimum for Potts models (with a catch)

no code implementations • 7 Nov 2020 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan

On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance.

Learning a mixture of two subspaces over finite fields

no code implementations • 6 Oct 2020 • Aidao Chen, Anindya De, Aravindan Vijayaraghavan

We study the problem of learning a mixture of two subspaces over $\mathbb{F}_2^n$.
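
A quick sketch of the generative model, with illustrative sizes: each sample is a uniformly random element of one of two hidden subspaces, and the task is to recover both subspaces from the unlabeled mix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 20, 1000                                # ambient dimension, sample count

S1 = rng.integers(2, size=(5, n))              # spanning sets of the two hidden
S2 = rng.integers(2, size=(7, n))              # subspaces of F_2^n

def sample():
    S = S1 if rng.random() < 0.5 else S2       # pick a subspace at random, then
    c = rng.integers(2, size=S.shape[0])       # a uniform element of its row span
    return c @ S % 2

X = np.stack([sample() for _ in range(N)])     # the unlabeled mixture
```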

Data Structures and Algorithms

Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

no code implementations • 26 Feb 2021 • Hunter Lang, Aravind Reddy, David Sontag, Aravindan Vijayaraghavan

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation.

Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations

no code implementations • NeurIPS 2021 • Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan

We present polynomial time and sample efficient algorithms for learning an unknown depth-2 feedforward neural network with general ReLU activations, under mild non-degeneracy assumptions.
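
The model class in question admits a three-line forward pass, which fixes notation; this is a sketch, with `W`, `a`, `b` the unknown parameters and the bias terms inside the activations making the ReLUs "general".

```python
import numpy as np

def depth2_relu(x, W, a, b):
    """The unknown network to be learned: y = a^T ReLU(W x + b)."""
    return a @ np.maximum(W @ x + b, 0.0)
```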

Agnostic Learning of General ReLU Activation Using Gradient Descent

no code implementations • 4 Aug 2022 • Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan

We provide a convergence analysis of gradient descent for the problem of agnostically learning a single ReLU function under Gaussian distributions.
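
A minimal sketch of that setup, shown in the realizable case (the agnostic setting allows arbitrary label noise); the step size, sizes, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr = 10, 2000, 0.1

w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))                # Gaussian inputs, as in the setting
y = np.maximum(X @ w_true, 0.0)                # realizable labels for illustration

w = 0.1 * rng.standard_normal(d)
for _ in range(500):
    pred = np.maximum(X @ w, 0.0)
    # gradient of the empirical squared loss; the ReLU is flat where inactive
    grad = ((pred - y) * (X @ w > 0)) @ X / n
    w -= lr * grad
```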

Classification Protocols with Minimal Disclosure

no code implementations • 6 Sep 2022 • Jinshuo Dong, Jason Hartline, Aravindan Vijayaraghavan

We consider multi-party protocols for classification that are motivated by applications such as e-discovery in court proceedings.

Classification

Computing linear sections of varieties: quantum entanglement, tensor decompositions and beyond

no code implementations • 7 Dec 2022 • Nathaniel Johnston, Benjamin Lovitz, Aravindan Vijayaraghavan

While all of these problems are NP-hard in the worst case, our algorithm solves them in polynomial time for generic subspaces of dimension up to a constant multiple of the maximum possible.

Error-Tolerant E-Discovery Protocols

no code implementations • 31 Jan 2024 • Jinshuo Dong, Jason D. Hartline, Liren Shan, Aravindan Vijayaraghavan

Our goal is to find a protocol that verifies that the responding party sends almost all responsive documents while minimizing the disclosure of non-responsive documents.
