no code implementations • 5 Jul 2021 • Koulik Khamaru, Yash Deshpande, Tor Lattimore, Lester Mackey, Martin J. Wainwright
We propose a family of online debiasing estimators to correct the distributional anomalies that arise in least squares estimation when data is collected adaptively.
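As a rough illustration of the one-step debiasing idea mentioned in the abstract, here is a minimal sketch, assuming a generic linear model; the decorrelating weights `W` are a placeholder choice for illustration, not the estimators constructed in the paper.

```python
import numpy as np

def debiased_ls(X, y, W):
    """Generic one-step debiasing of least squares:
    theta_d = theta_ls + W @ (y - X @ theta_ls) / n,
    where W is a user-supplied (p x n) decorrelating matrix."""
    n = len(y)
    theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ theta_ls
    return theta_ls + W @ residuals / n

# Illustrative (hypothetical) weights: W = n (X^T X)^{-1} X^T simply recovers
# plain least squares; other weight choices trade bias against variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=200)
W = 200 * np.linalg.pinv(X.T @ X) @ X.T
print(debiased_ls(X, y, W))
```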
1 code implementation • EMNLP (MRL) 2021 • Houda Alberts, Teresa Huang, Yash Deshpande, Yibo Liu, Kyunghyun Cho, Clara Vania, Iacer Calixto
We also release a neural multi-modal retrieval model that can use images or sentences as input and retrieve entities in the KG.
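As a rough sketch of that retrieval setup (not the released model), the query embedding of an image or a sentence can be matched to knowledge-graph entity embeddings by cosine similarity; all names and dimensions below are hypothetical.

```python
import numpy as np

def retrieve_entities(query_emb, entity_embs, entity_ids, k=5):
    """Return the k KG entity ids whose embeddings are most cosine-similar to
    the query embedding (which may come from an image or a sentence encoder)."""
    q = query_emb / np.linalg.norm(query_emb)
    E = entity_embs / np.linalg.norm(entity_embs, axis=1, keepdims=True)
    scores = E @ q
    top = np.argsort(-scores)[:k]
    return [(entity_ids[i], float(scores[i])) for i in top]

# Hypothetical toy data: 1000 entities with 64-dimensional embeddings.
rng = np.random.default_rng(0)
entity_embs = rng.normal(size=(1000, 64))
entity_ids = [f"ent_{i}" for i in range(1000)]
query = rng.normal(size=64)   # stand-in for an image/sentence embedding
print(retrieve_entities(query, entity_embs, entity_ids))
```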
no code implementations • 4 Nov 2019 • Yash Deshpande, Adel Javanmard, Mohammad Mehrabi
Adaptive collection of data is commonplace in applications throughout science and engineering.
no code implementations • NeurIPS 2018 • Yash Deshpande, Andrea Montanari, Elchanan Mossel, Subhabrata Sen
We provide the first information-theoretically tight analysis for inference of latent community structure given a sparse graph along with high-dimensional node covariates correlated with the same latent communities.
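A minimal sketch of sampling from a two-community model of this kind, assuming a symmetric stochastic block model for the sparse graph and Gaussian covariates whose mean direction depends on the community label; the parameter names and scalings are illustrative, not the paper's exact model.

```python
import numpy as np

def sample_contextual_sbm(n=500, p=100, a=8.0, b=2.0, mu=1.0, seed=0):
    """Sparse two-community SBM plus community-correlated Gaussian covariates.
    Edge probability is a/n within and b/n across communities; covariates are
    a spiked-mean Gaussian model aligned with the community labels."""
    rng = np.random.default_rng(seed)
    labels = rng.choice([-1, 1], size=n)
    same = np.equal.outer(labels, labels)
    probs = np.where(same, a / n, b / n)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(int)           # sparse adjacency matrix
    u = rng.normal(size=p) / np.sqrt(p)         # latent covariate direction
    B = np.sqrt(mu / n) * np.outer(labels, u) + rng.normal(size=(n, p)) / np.sqrt(n)
    return A, B, labels

A, B, labels = sample_contextual_sbm()
print(A.sum() // 2, "edges;", (labels == 1).sum(), "nodes in community +1")
```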
1 code implementation • ICML 2018 • Yash Deshpande, Lester Mackey, Vasilis Syrgkanis, Matt Taddy
Estimators computed from adaptively collected data do not behave like their non-adaptive brethren.
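A small simulation, under a hypothetical two-armed setup, illustrating the phenomenon: when the arm to sample is chosen based on past observations (here, greedily), the plain per-arm sample means are systematically biased, unlike under non-adaptive sampling.

```python
import numpy as np

def greedy_bandit_means(n_rounds=50, true_means=(0.0, 0.0), seed=0):
    """Collect data with a greedy rule, then return the per-arm sample means."""
    rng = np.random.default_rng(seed)
    obs = {0: [], 1: []}
    for t in range(n_rounds):
        if t < 2:
            arm = t                                          # pull each arm once
        else:
            arm = int(np.mean(obs[1]) > np.mean(obs[0]))     # greedy choice
        obs[arm].append(true_means[arm] + rng.normal())
    return np.mean(obs[0]), np.mean(obs[1])

# Averaged over replications: both true means are 0, yet the adaptive rule
# makes the per-arm sample means negatively biased.
est = np.array([greedy_bandit_means(seed=s) for s in range(1000)])
print("average per-arm sample means:", est.mean(axis=0))
```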
no code implementations • NeurIPS 2017 • Murat A. Erdogdu, Yash Deshpande, Andrea Montanari
We demonstrate that the resulting algorithm can solve problems with tens of thousands of variables within minutes, and that it outperforms belief propagation (BP) and generalized belief propagation (GBP) on practical problems such as image denoising and Ising spin glasses.
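To make those problem instances concrete, here is a minimal sketch of MAP denoising of a binary image under an Ising prior, solved with a simple iterated-conditional-modes baseline rather than the SDP-hierarchy approach of the paper; the coupling and field strengths are hypothetical.

```python
import numpy as np

def ising_denoise_icm(noisy, coupling=1.0, field=2.0, sweeps=10):
    """Greedy MAP denoising: maximize sum_{edges} coupling * x_i * x_j (4-neighbour
    grid) + sum_i field * y_i * x_i over spins x in {-1,+1}, by setting each spin
    to the sign of its local field (coordinate-wise maximization)."""
    x = noisy.copy()
    h, w = x.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < h and 0 <= b < w)
                x[i, j] = 1 if coupling * nb + field * noisy[i, j] >= 0 else -1
    return x

rng = np.random.default_rng(0)
clean = np.where(np.add.outer(np.arange(32), np.arange(32)) < 32, 1, -1)
noisy = clean * np.where(rng.random(clean.shape) < 0.15, -1, 1)   # flip 15% of pixels
denoised = ising_denoise_icm(noisy)
print("noisy errors:", (noisy != clean).sum(), "-> denoised errors:", (denoised != clean).sum())
```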
no code implementations • 23 Feb 2015 • Yash Deshpande, Andrea Montanari
Here we consider the degree-$4$ SOS relaxation, and study the construction of Meka and Wigderson (2013) to prove that SOS fails unless $k\ge C\, n^{1/3}/\log n$.
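For context, a minimal sketch of the hidden (planted) clique model that these SOS lower bounds concern: an Erdős–Rényi graph $G(n, 1/2)$ with a clique planted on $k$ randomly chosen vertices.

```python
import numpy as np

def planted_clique(n=200, k=12, seed=0):
    """Adjacency matrix of G(n, 1/2) with a clique planted on k random vertices."""
    rng = np.random.default_rng(seed)
    A = np.triu((rng.random((n, n)) < 0.5).astype(int), 1)
    A = A + A.T                                   # symmetric, zero diagonal
    clique = rng.choice(n, size=k, replace=False)
    A[np.ix_(clique, clique)] = 1                 # plant the clique
    np.fill_diagonal(A, 0)
    return A, clique

# The degree-4 SOS lower bound above concerns clique sizes k on the order of
# n^{1/3} (up to logs), well below the ~ sqrt(n) reach of spectral methods.
A, clique = planted_clique()
print(A.shape, "planted clique of size", len(clique))
```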
no code implementations • NeurIPS 2014 • Yash Deshpande, Andrea Montanari, Emile Richard
We consider a simple model for noisy quadratic observation of an unknown vector $\mathbf{v}_0$.
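A minimal sketch of that observation model under a standard "spiked" formulation, $Y = \lambda\, \mathbf{v}_0 \mathbf{v}_0^{\top} + Z$ with symmetric Gaussian noise; the exact normalization is an assumption, not taken from the paper.

```python
import numpy as np

def noisy_quadratic_observation(v0, lam=3.0, seed=0):
    """Observe Y = lam * v0 v0^T + Z, with Z a symmetric Gaussian noise matrix."""
    rng = np.random.default_rng(seed)
    n = len(v0)
    G = rng.normal(size=(n, n)) / np.sqrt(n)
    Z = (G + G.T) / np.sqrt(2)
    return lam * np.outer(v0, v0) + Z

rng = np.random.default_rng(1)
v0 = rng.normal(size=300)
v0 /= np.linalg.norm(v0)
Y = noisy_quadratic_observation(v0)
# Plain PCA estimate: leading eigenvector of Y, compared against the truth.
w, V = np.linalg.eigh(Y)
print("|<v_hat, v0>| =", abs(V[:, -1] @ v0))
```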
no code implementations • NeurIPS 2014 • Yash Deshpande, Andrea Montanari
In an influential paper, Johnstone and Lu (2004) introduced a simple algorithm that estimates the support of the principal vectors $\mathbf{v}_1,\dots,\mathbf{v}_r$ from the largest entries in the diagonal of the empirical covariance.
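A minimal sketch of that diagonal-thresholding idea: screen coordinates by their sample variances, then run PCA on the selected submatrix. The spiked data model and the fixed support size used here are illustrative choices, not the exact algorithm or setting of the cited paper.

```python
import numpy as np

def diagonal_thresholding_pca(X, support_size):
    """Sparse PCA via diagonal screening: keep the coordinates with the largest
    sample variances, then take the leading eigenvector of that submatrix."""
    S = X.T @ X / X.shape[0]                          # empirical covariance
    support = np.argsort(-np.diag(S))[:support_size]  # largest diagonal entries
    w, V = np.linalg.eigh(S[np.ix_(support, support)])
    v_hat = np.zeros(S.shape[0])
    v_hat[support] = V[:, -1]
    return v_hat, np.sort(support)

# Hypothetical spiked model: v is k-sparse, rows of X are N(0, I + beta * v v^T).
rng = np.random.default_rng(0)
n, p, k, beta = 400, 200, 10, 5.0
v = np.zeros(p)
v[:k] = 1 / np.sqrt(k)
X = rng.normal(size=(n, p)) + np.sqrt(beta) * rng.normal(size=(n, 1)) * v
v_hat, support = diagonal_thresholding_pca(X, k)
print("recovered support:", support, "| |<v_hat, v>| =", abs(v_hat @ v))
```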