1 code implementation • 6 Jun 2022 • Hunter Lang, Aravindan Vijayaraghavan, David Sontag
Subset selection applies to any label model and classifier and is very simple to plug into existing weak supervision pipelines, requiring just a few lines of code.
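A minimal sketch of the kind of subset selection described above, assuming a label model that outputs per-example class posteriors; the function name and the confidence-based selection rule here are illustrative, not the authors' API.

```python
import numpy as np

def select_confident_subset(posteriors, frac=0.5):
    """Keep the fraction of examples whose label-model posterior is most confident.

    posteriors: (n, k) array of label-model class probabilities.
    Returns indices of the selected subset and their hard pseudo-labels.
    """
    confidence = posteriors.max(axis=1)          # peak posterior per example
    n_keep = max(1, int(frac * len(posteriors)))
    idx = np.argsort(-confidence)[:n_keep]       # most confident first
    return idx, posteriors[idx].argmax(axis=1)

# toy posteriors from a 3-class label model
post = np.array([[0.90, 0.05, 0.05],
                 [0.40, 0.35, 0.25],
                 [0.10, 0.80, 0.10],
                 [0.34, 0.33, 0.33]])
idx, labels = select_confident_subset(post, frac=0.5)
```

The downstream classifier would then be trained only on the selected indices with their pseudo-labels.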
no code implementations • 23 Apr 2018 • Pranjal Awasthi, Aravindan Vijayaraghavan
To address this question while circumventing the issue of non-identifiability, we study a natural semirandom model for dictionary learning where there are a large number of samples $y=Ax$ with arbitrary k-sparse supports for x, along with a few samples where the sparse supports are chosen uniformly at random.
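The semirandom model above can be sketched as a sampling procedure: many samples $y = Ax$ whose $k$-sparse supports are arbitrary (possibly adversarial), plus a few whose supports are uniformly random. The dimensions and the fixed adversarial support below are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 20, 40, 3          # signal dim, dictionary size, sparsity (illustrative)
A = rng.standard_normal((n, m))   # unknown dictionary to be learned

def sample(support):
    """One sample y = A x with x supported on the given k coordinates."""
    x = np.zeros(m)
    x[list(support)] = rng.standard_normal(k)
    return A @ x

# "semirandom": many samples with arbitrary k-sparse supports ...
arbitrary = [sample({0, 1, 2}) for _ in range(100)]   # adversary may reuse supports
# ... along with a few where the support is chosen uniformly at random
random_part = [sample(set(rng.choice(m, size=k, replace=False))) for _ in range(10)]
samples = np.array(arbitrary + random_part)
```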
no code implementations • 6 Nov 2017 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan
Approximate algorithms for structured prediction problems, such as LP relaxations and the popular alpha-expansion algorithm (Boykov et al. 2001), typically far exceed their theoretical performance guarantees on real-world instances.
no code implementations • 4 Dec 2017 • Abhratanu Dutta, Aravindan Vijayaraghavan, Alex Wang
We design efficient algorithms that provably recover the optimal clustering for instances that are additive perturbation stable.
no code implementations • ICML 2018 • Pranjal Awasthi, Aravindan Vijayaraghavan
Gaussian mixture models (GMMs) are the most widely used statistical models for the $k$-means clustering problem and form a popular framework for clustering in machine learning and data analysis.
no code implementations • 31 Oct 2017 • Oded Regev, Aravindan Vijayaraghavan
In the most basic form of this problem, we are given samples from a uniform mixture of $k$ standard spherical Gaussians, and the goal is to estimate the means up to accuracy $\delta$ using $poly(k, d, 1/\delta)$ samples.
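The basic setting above can be sketched as a generative process: pick one of $k$ components uniformly, then add standard spherical (identity-covariance) Gaussian noise to its mean. The values of $k$ and $d$ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
k, d = 4, 10                              # components and dimension (illustrative)
means = rng.standard_normal((k, d)) * 5   # unknown means the learner must estimate

def draw(n):
    """Draw n samples from a uniform mixture of k standard spherical Gaussians."""
    comps = rng.integers(k, size=n)                    # uniform component choice
    return means[comps] + rng.standard_normal((n, d))  # identity-covariance noise

X = draw(5000)
```

The estimation task is to recover `means` up to accuracy $\delta$ from samples like `X`, using $poly(k, d, 1/\delta)$ of them.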
no code implementations • 10 Nov 2015 • Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan
Many algorithms exist for learning communities in the Stochastic Block Model, but they do not work well in the presence of errors.
no code implementations • 22 Jun 2014 • Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan
In this paper, we propose and study a semi-random model for the Correlation Clustering problem on an arbitrary graph G. We give two approximation algorithms for Correlation Clustering instances from this model.
no code implementations • NeurIPS 2014 • Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan
We present the first polynomial time algorithm which provably learns the parameters of a mixture of two Mallows models.
no code implementations • 22 Jun 2014 • Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan
Let $G$ be an arbitrary graph on $V$ with no edges between $L$ and $R$.
no code implementations • 14 Nov 2013 • Aditya Bhaskara, Moses Charikar, Ankur Moitra, Aravindan Vijayaraghavan
We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension).
no code implementations • 30 Apr 2013 • Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan
We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.
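The object at the heart of the result above is a tensor that decomposes into a sum of rank-one terms; Kruskal's condition concerns when the factors in such a sum are (essentially) unique. A minimal sketch of constructing such a tensor, with illustrative dimension and rank:

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 8, 3                               # dimension and rank (illustrative)
A, B, C = (rng.standard_normal((d, r)) for _ in range(3))

# A rank-r third-order tensor: T = sum_i a_i (x) b_i (x) c_i
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Sanity check that the einsum matches an explicit sum of outer products;
# uniqueness (the content of Kruskal's theorem) is about whether (A, B, C)
# is the only decomposition of T up to permutation and scaling.
T_explicit = sum(
    np.multiply.outer(np.multiply.outer(A[:, i], B[:, i]), C[:, i])
    for i in range(r)
)
```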
no code implementations • 12 Oct 2018 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan
The simplest stability condition assumes that the MAP solution does not change at all when some of the pairwise potentials are (adversarially) perturbed.
no code implementations • 29 Nov 2018 • Aditya Bhaskara, Aidao Chen, Aidan Perreault, Aravindan Vijayaraghavan
Smoothed analysis is a powerful paradigm in overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis.
no code implementations • NeurIPS 2017 • Aravindan Vijayaraghavan, Abhratanu Dutta, Alex Wang
To address this disconnect, we study the following question: what properties of real-world instances will enable us to design efficient algorithms and prove guarantees for finding the optimal clustering?
1 code implementation • NeurIPS 2019 • Pranjal Awasthi, Abhratanu Dutta, Aravindan Vijayaraghavan
In particular, we leverage this connection to (a) design computationally efficient robust algorithms with provable guarantees for a large class of hypotheses, namely linear classifiers and degree-2 polynomial threshold functions (PTFs), (b) give a precise characterization of the price of achieving robustness in a computationally efficient manner for these classes, and (c) design efficient algorithms to certify robustness and generate adversarial attacks in a principled manner for 2-layer neural networks.
no code implementations • 29 Nov 2019 • Pranjal Awasthi, Vaggos Chatziafratis, Xue Chen, Aravindan Vijayaraghavan
In particular, our adversarially robust PCA primitive leads to computationally efficient and robust algorithms for both unsupervised and supervised learning problems such as clustering and learning adversarially robust classifiers.
no code implementations • 31 May 2020 • Pranjal Awasthi, Xue Chen, Aravindan Vijayaraghavan
We design a computationally efficient algorithm that given corrupted data, recovers an estimate of the top-$r$ principal subspace with error that depends on a robustness parameter $\kappa$ that we identify.
no code implementations • NeurIPS 2020 • Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan
Adversarial robustness measures the susceptibility of a classifier to imperceptible perturbations made to the inputs at test time.
no code implementations • 30 Jul 2020 • Aravindan Vijayaraghavan
This chapter studies the problem of decomposing a tensor into a sum of constituent rank one tensors.
no code implementations • 7 Nov 2020 • Hunter Lang, David Sontag, Aravindan Vijayaraghavan
On "real-world" instances, MAP assignments of small perturbations of the problem should be very similar to the MAP assignment(s) of the original problem instance.
no code implementations • 6 Oct 2020 • Aidao Chen, Anindya De, Aravindan Vijayaraghavan
We study the problem of learning a mixture of two subspaces over $\mathbb{F}_2^n$.
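The generative model above can be sketched directly: pick one of two subspaces of $\mathbb{F}_2^n$ at random, then output a uniform point in it (a random $\mathbb{F}_2$-linear combination of its basis vectors). The bases below are an illustrative choice, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
# Bases (rows) of two subspaces of F_2^n (illustrative choice)
B1 = np.array([[1, 0, 0, 0, 0, 0],
               [0, 1, 0, 0, 0, 0],
               [0, 0, 1, 0, 0, 0]], dtype=np.uint8)
B2 = np.array([[0, 0, 0, 1, 0, 0],
               [0, 0, 0, 0, 1, 0]], dtype=np.uint8)

def sample_mixture(m):
    """Draw m points: pick a subspace at random, then a uniform point in it."""
    out = []
    for _ in range(m):
        B = B1 if rng.random() < 0.5 else B2
        coeffs = rng.integers(2, size=B.shape[0]).astype(np.uint8)
        out.append(coeffs @ B % 2)   # random F_2-linear combination of basis rows
    return np.array(out, dtype=np.uint8)

X = sample_mixture(200)
```

The learning task is to recover the two subspaces from samples like `X`, without knowing which subspace generated each point.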
no code implementations • 26 Feb 2021 • Hunter Lang, Aravind Reddy, David Sontag, Aravindan Vijayaraghavan
Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation.
no code implementations • NeurIPS 2021 • Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan
We present polynomial-time, sample-efficient algorithms for learning an unknown depth-2 feedforward neural network with general ReLU activations, under mild non-degeneracy assumptions.
no code implementations • 4 Aug 2022 • Pranjal Awasthi, Alex Tang, Aravindan Vijayaraghavan
We provide a convergence analysis of gradient descent for the problem of agnostically learning a single ReLU function under Gaussian distributions.
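A minimal sketch of the setup above: gradient descent on the squared loss for a single ReLU with Gaussian inputs. For simplicity this sketch uses the realizable case (labels generated by a true ReLU) rather than the agnostic setting analyzed in the paper, and the learning rate and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 5, 2000
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)          # unknown target direction

X = rng.standard_normal((n, d))           # Gaussian inputs
y = np.maximum(X @ w_star, 0.0)           # realizable labels, for illustration

w = rng.standard_normal(d) * 0.1          # small random initialization
lr = 0.1
losses = []
for _ in range(300):
    pred = np.maximum(X @ w, 0.0)
    losses.append(np.mean((pred - y) ** 2))
    # gradient of (1/n) * sum_i (ReLU(w.x_i) - y_i)^2 with respect to w;
    # the indicator (X @ w > 0) marks the samples where the ReLU is active
    grad = 2.0 / n * X.T @ ((pred - y) * (X @ w > 0))
    w -= lr * grad
```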
no code implementations • 6 Sep 2022 • Jinshuo Dong, Jason Hartline, Aravindan Vijayaraghavan
We consider multi-party protocols for classification that are motivated by applications such as e-discovery in court proceedings.
no code implementations • 7 Dec 2022 • Nathaniel Johnston, Benjamin Lovitz, Aravindan Vijayaraghavan
While all of these problems are NP-hard in the worst case, our algorithm solves them in polynomial time for generic subspaces of dimension up to a constant multiple of the maximum possible.
no code implementations • 31 Jan 2024 • Jinshuo Dong, Jason D. Hartline, Liren Shan, Aravindan Vijayaraghavan
Our goal is to find a protocol that verifies that the responding party sends almost all responsive documents while minimizing the disclosure of non-responsive documents.