Search Results for author: Vatsal Shah

Found 9 papers, 4 papers with code

Recoverability Landscape of Tree Structured Markov Random Fields under Symmetric Noise

1 code implementation · 17 Feb 2021 · Ashish Katiyar, Soumya Basu, Vatsal Shah, Constantine Caramanis

We present a polynomial-time, sample-efficient algorithm that recovers the exact tree when this is possible, or recovers it up to the unidentifiability promised by our characterization when full recoverability is impossible.

On Generalization of Adaptive Methods for Over-parameterized Linear Regression

no code implementations · 28 Nov 2020 · Vatsal Shah, Soumya Basu, Anastasios Kyrillidis, Sujay Sanghavi

In this paper, we aim to characterize the performance of adaptive methods in the over-parameterized linear regression setting.

regression

Robust Estimation of Tree Structured Ising Models

no code implementations · 10 Jun 2020 · Ashish Katiyar, Vatsal Shah, Constantine Caramanis

We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities.

Choosing the Sample with Lowest Loss makes SGD Robust

1 code implementation · 10 Jan 2020 · Vatsal Shah, Xiaoxia Wu, Sujay Sanghavi

The presence of outliers can significantly skew the parameters of machine learning models trained via stochastic gradient descent (SGD).

regression
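The title describes the paper's core idea: within each mini-batch, update only on the sample with the lowest current loss, so that high-loss outliers rarely drive the update. A minimal sketch of one such step, assuming squared-loss linear regression (the function name, loss choice, and learning rate are illustrative, not the paper's exact recipe):

```python
import numpy as np

def min_loss_sgd_step(w, X, y, batch_idx, lr=0.1):
    # Compute per-sample squared losses over the mini-batch.
    residuals = X[batch_idx] @ w - y[batch_idx]
    losses = 0.5 * residuals ** 2
    # Update only on the lowest-loss sample; outliers with
    # large losses are effectively skipped.
    i = batch_idx[np.argmin(losses)]
    grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x_i^T w - y_i)^2
    return w - lr * grad
```

On clean, consistent data this still makes progress, since the lowest-loss sample carries a valid (if small) gradient signal.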

Negative sampling in semi-supervised learning

1 code implementation · ICML 2020 · John Chen, Vatsal Shah, Anastasios Kyrillidis

We introduce Negative Sampling in Semi-Supervised Learning (NS3L), a simple, fast, easy-to-tune algorithm for semi-supervised learning (SSL).
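The idea of negative sampling on unlabeled data can be sketched as follows: classes the model already deems unlikely for an example are treated as negatives and penalized further. The threshold, function name, and exact loss form below are illustrative assumptions, not necessarily the paper's precise formulation:

```python
import numpy as np

def negative_sampling_loss(probs, threshold=0.1):
    # `probs`: predicted class probabilities for a batch of
    # unlabeled examples, shape (batch, num_classes).
    # Classes with probability below `threshold` are taken as
    # negatives; -log(1 - p) pushes their probability down.
    negatives = probs < threshold
    return -np.sum(np.log(1.0 - probs[negatives]))
```

This loss can be added to a standard supervised loss on the labeled portion of the data.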

Minimum weight norm models do not always generalize well for over-parameterized problems

no code implementations · 16 Nov 2018 · Vatsal Shah, Anastasios Kyrillidis, Sujay Sanghavi

We empirically show that the minimum weight norm is not necessarily the right gauge of good generalization in simplified scenarios, and that different models found by adaptive methods can outperform those found by plain gradient methods.
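The setting can be illustrated numerically: in over-parameterized linear regression there are infinitely many interpolating solutions, of which the pseudoinverse gives the minimum-ℓ2-norm one (the solution plain gradient descent from zero converges to); adding any null-space component yields a different, larger-norm interpolator. A minimal sketch under these standard facts (the data here is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 50                      # over-parameterized: d > n
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Minimum-l2-norm interpolating solution via the pseudoinverse.
w_min = np.linalg.pinv(X) @ y

# Another interpolator: add the projection of a random vector
# onto the null space of X; predictions on X are unchanged.
z = rng.normal(size=d)
w_alt = w_min + (z - np.linalg.pinv(X) @ (X @ z))
```

Both `w_min` and `w_alt` fit the training data exactly, but they differ off the row space of `X`, which is where generalization differences between optimizers arise.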

An Iterative Approach for Shadow Removal in Document Images

1 code implementation · ICASSP 2018 · Vatsal Shah, Vineet Gandhi

Uneven illumination and shadows in document images pose a challenge for digitization applications and automated workflows.

Document Shadow Removal

Matrix Completion via Factorizing Polynomials

no code implementations · 4 May 2017 · Vatsal Shah, Nikhil Rao, Weicong Ding

While there has been recent research on incorporating explicit side information in the low-rank matrix factorization setting, implicit information can often be gleaned from the data via higher-order interactions among entities.

Matrix Completion · Recommendation Systems

Trading-off variance and complexity in stochastic gradient descent

no code implementations · 22 Mar 2016 · Vatsal Shah, Megasthenis Asteris, Anastasios Kyrillidis, Sujay Sanghavi

Stochastic gradient descent is the method of choice for large-scale machine learning problems, owing to its low per-iteration complexity.
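The variance–complexity trade-off in the title can be seen empirically: larger mini-batches cost more per iteration but give a less noisy gradient estimate. A minimal sketch, assuming squared-loss linear regression (all names and data are illustrative):

```python
import numpy as np

def minibatch_grad_variance(X, y, w, batch_size, trials=2000, seed=0):
    # Empirical expected squared deviation of the mini-batch
    # gradient from the full-data gradient.
    rng = np.random.default_rng(seed)
    n = len(y)
    full_grad = X.T @ (X @ w - y) / n
    sq_devs = []
    for _ in range(trials):
        idx = rng.choice(n, size=batch_size, replace=False)
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        sq_devs.append(np.sum((g - full_grad) ** 2))
    return float(np.mean(sq_devs))
```

Comparing `batch_size=1` against a larger batch shows the variance shrinking roughly as 1/batch_size, at proportionally higher per-iteration cost.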
