
no code implementations • 29 Mar 2022 • Ali Jadbabaie, Arnab Sarker, Devavrat Shah

Successful predictive modeling of epidemics requires an understanding of the implicit feedback control strategies which are implemented by populations to modulate the spread of contagion.

no code implementations • 14 Feb 2022 • Raaz Dwivedi, Susan Murphy, Devavrat Shah

Second, for a generic non-parametric latent factor model, we establish that the estimate for the missing outcome of any unit at time $\mathbf{T}$ satisfies a central limit theorem as $\mathbf{T} \to \infty$, under suitable regularity conditions.

no code implementations • 7 Jan 2022 • Arnab Sarker, Ali Jadbabaie, Devavrat Shah

The model represents time series of cases and fatalities as a mixture of Gaussian curves, providing a flexible function class to learn from data compared to traditional mechanistic models.
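The function class described above can be illustrated with a toy sketch (this is not the paper's code, and the two-wave parameters below are made up): each wave of cases is a single Gaussian bump, and the modeled time series is their sum.

```python
import math

def gaussian_curve(t, amplitude, peak_time, width):
    """One Gaussian bump: amplitude * exp(-(t - peak_time)^2 / (2 * width^2))."""
    return amplitude * math.exp(-((t - peak_time) ** 2) / (2 * width ** 2))

def mixture(t, components):
    """Daily cases modeled as a sum of Gaussian curves (one per 'wave')."""
    return sum(gaussian_curve(t, a, mu, s) for (a, mu, s) in components)

# Two hypothetical waves: peaks at day 30 and day 90.
waves = [(1000.0, 30.0, 10.0), (1500.0, 90.0, 15.0)]
cases_day_30 = mixture(30.0, waves)  # near the first peak
```

Fitting such a mixture to observed case counts then reduces to estimating a handful of interpretable parameters (amplitude, peak time, width) per wave.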

no code implementations • 6 Jan 2022 • Ali Jadbabaie, Anuran Makur, Devavrat Shah

Under some assumptions on the loss function, e.g., strong convexity in parameter, $\eta$-H\"older smoothness in data, etc., we prove that the federated oracle complexity of FedLRGD scales like $\phi m(p/\epsilon)^{\Theta(d/\eta)}$ and that of FedAve scales like $\phi m(p/\epsilon)^{3/4}$ (neglecting sub-dominant factors), where $\phi\gg 1$ is a "communication-to-computation ratio," $p$ is the parameter dimension, and $d$ is the data dimension.

no code implementations • 5 Jan 2022 • Abdullah Alomar, Pouya Hamadanian, Arash Nasr-Esfahany, Anish Agarwal, Mohammad Alizadeh, Devavrat Shah

Through an adversarial neural network training technique that exploits distributional invariances that are present in training data coming from an RCT, CausalSim enables a novel tensor completion method despite the sparsity of observations.

no code implementations • 29 Dec 2021 • Ali Jadbabaie, Horia Mania, Devavrat Shah, Suvrit Sra

We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.

no code implementations • NeurIPS 2021 • Arwa Alanqary, Abdullah Alomar, Devavrat Shah

The change point in such a setting corresponds to a change in the underlying spatio-temporal model.

no code implementations • NeurIPS 2021 • Abhin Shah, Devavrat Shah, Gregory W. Wornell

In this work, we propose a computationally efficient estimator that is consistent as well as asymptotically normal under mild conditions.

no code implementations • 30 Sep 2021 • Anish Agarwal, Munther Dahleh, Devavrat Shah, Dennis Shen

In particular, we establish entry-wise, i.e., max-norm, finite-sample consistency and asymptotic normality results for matrix completion with MNAR data.

no code implementations • 19 Feb 2021 • Romain Cosson, Devavrat Shah

Specifically, we argue that (a variant of) TRW produces an estimate that is within factor $\frac{1}{\sqrt{\kappa(G)}}$ of the true log-partition function for any discrete pairwise graphical model over graph $G$, where $\kappa(G) \in (0, 1]$ captures how far $G$ is from tree structure with $\kappa(G) = 1$ for trees and $2/N$ for the complete graph over $N$ vertices.
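As a quick worked instance of the guarantee quoted above (using only the quantities named in the abstract), plugging the two extreme values of $\kappa(G)$ into the factor $\frac{1}{\sqrt{\kappa(G)}}$ gives:

```latex
\kappa(T) = 1 \;\Rightarrow\; \frac{1}{\sqrt{\kappa(T)}} = 1
\quad \text{for any tree } T \text{ (consistent with TRW being exact on trees),}
\qquad
\kappa(K_N) = \frac{2}{N} \;\Rightarrow\; \frac{1}{\sqrt{\kappa(K_N)}} = \sqrt{\frac{N}{2}}
\quad \text{for the complete graph } K_N,
```

so the approximation factor degrades only polynomially in $N$ even in the densest case.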

no code implementations • NeurIPS 2021 • Anish Agarwal, Abdullah Alomar, Varkey Alumootil, Devavrat Shah, Dennis Shen, Zhi Xu, Cindy Yang

We consider offline reinforcement learning (RL) with heterogeneous agents under severe data scarcity, i.e., we only observe a single historical trajectory for every agent under an unknown, potentially sub-optimal policy.

no code implementations • 11 Feb 2021 • Sarah H. Cen, Devavrat Shah

In this work, we study how competition affects the long-term outcomes of individuals as they learn.

no code implementations • NeurIPS 2020 • Ali Jadbabaie, Anuran Makur, Devavrat Shah

In this paper, we study the problem of learning the skill distribution of a population of agents from observations of pairwise games in a tournament.

no code implementations • 4 Nov 2020 • Ali Jadbabaie, Anuran Makur, Devavrat Shah

In contrast, we demonstrate that when the loss function is smooth in the data, we can learn the oracle at every iteration and beat the oracle complexities of both GD and SGD in important regimes.

no code implementations • 28 Oct 2020 • Abhin Shah, Devavrat Shah, Gregory W. Wornell

We consider learning a sparse pairwise Markov Random Field (MRF) with continuous-valued variables from i.i.d. samples.

no code implementations • 27 Oct 2020 • Anish Agarwal, Devavrat Shah, Dennis Shen

We analyze the classical method of principal component regression (PCR) in a high-dimensional error-in-variables setting.
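A minimal sketch of the plain PCR pipeline being analyzed (rank-1 case, pure Python; `pcr_fit_predict` and the data are made up for illustration, and the paper's error-in-variables analysis is not reproduced here): project the features onto the top principal direction, then run one-dimensional least squares.

```python
def top_principal_direction(X, iters=100):
    """Power iteration on the (uncentered) Gram matrix X^T X to obtain the
    leading right singular vector, i.e. the top principal direction of the rows."""
    d = len(X[0])
    v = [1.0 / d] * d
    for _ in range(iters):
        Xv = [sum(x[j] * v[j] for j in range(d)) for x in X]          # X v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(d)]  # X^T X v
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

def pcr_fit_predict(X, y, x_new):
    """Rank-1 principal component regression: project each row of X onto the
    top principal direction, run 1-d least squares, predict at x_new."""
    v = top_principal_direction(X)
    z = [sum(xi * vi for xi, vi in zip(x, v)) for x in X]
    beta = sum(zi * yi for zi, yi in zip(z, y)) / sum(zi * zi for zi in z)
    z_new = sum(xi * vi for xi, vi in zip(x_new, v))
    return beta * z_new

X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]   # rank-1 (collinear) features
y = [5.0, 10.0, 15.0]                      # y = 5 * x1
pred = pcr_fit_predict(X, y, [4.0, 8.0])
```

In the error-in-variables setting the observed rows of `X` would be noisy, and the point of the analysis is that the projection step implicitly de-noises them.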

no code implementations • 24 Jun 2020 • Anish Agarwal, Abdullah Alomar, Devavrat Shah

We introduce and analyze a simpler, practically useful variant of multivariate singular spectrum analysis (mSSA), a known time series method to impute (or de-noise) and forecast a multivariate time series.
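The core data transformation behind (m)SSA-style methods can be sketched as follows. This is an illustrative construction only, not the paper's implementation; the non-overlapping segment ("Page matrix") layout below is one common variant. Each component series is folded into a matrix, the matrices are stacked side by side, and a low-rank truncation of the stacked matrix (e.g., via SVD, omitted here) then de-noises and imputes.

```python
def page_matrix(series, L):
    """Fold a series of length n = L*k into an L x k matrix by cutting it into
    consecutive non-overlapping segments of length L (one segment per column)."""
    k = len(series) // L
    return [[series[col * L + row] for col in range(k)] for row in range(L)]

def stacked_page_matrix(multivariate, L):
    """mSSA-style step: column-stack the Page matrices of each component series.
    A low-rank truncation (SVD) of this matrix then imputes / de-noises."""
    stacked = page_matrix(multivariate[0], L)
    for series in multivariate[1:]:
        for row, extra in zip(stacked, page_matrix(series, L)):
            row.extend(extra)
    return stacked

m = stacked_page_matrix([[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]], L=2)
```

The design point is that stacking several related series into one matrix lets a single low-rank structure explain all of them jointly.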

no code implementations • 15 Jun 2020 • Ali Jadbabaie, Anuran Makur, Devavrat Shah

In this paper, we study the problem of learning the skill distribution of a population of agents from observations of pairwise games in a tournament.

no code implementations • 13 Jun 2020 • Anish Agarwal, Devavrat Shah, Dennis Shen

Theoretically, under a novel tensor factor model across units, measurements, and interventions, we formally establish an identification result for each of these $N \times D$ causal parameters and establish finite-sample consistency and asymptotic normality of our estimator.

no code implementations • NeurIPS 2020 • Devavrat Shah, Dogyoon Song, Zhi Xu, Yuzhe Yang

As our key contribution, we develop a simple, iterative learning algorithm that finds $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}(\frac{1}{\epsilon^{\max(d_1, d_2)+2}})$ when the optimal $Q$-function has low rank $r$ and the discount factor $\gamma$ is below a certain threshold.

no code implementations • L4DC 2020 • Devavrat Shah, Qiaomin Xie, Zhi Xu

As a proof of concept, we propose an RL policy using Sparse-Sampling-based Monte Carlo Oracle and argue that it satisfies the stability property as long as the system dynamics under the optimal policy respects a Lyapunov function.

no code implementations • 30 Apr 2020 • Anish Agarwal, Abdullah Alomar, Arnab Sarker, Devavrat Shah, Dennis Shen, Cindy Yang

In essence, the method leverages information from different interventions that have already been enacted across the world and fits it to a policy maker's setting of interest, e.g., to estimate the effect of mobility-restricting interventions on the U.S., we use daily death data from countries that enforced severe mobility restrictions to create a "synthetic low mobility U.S." and predict the counterfactual trajectory of the U.S. if it had indeed applied a similar intervention.

no code implementations • 25 Feb 2020 • Devavrat Shah, Varun Somani, Qiaomin Xie, Zhi Xu

For a concrete instance of EIS where random policy is used for "exploration", Monte-Carlo Tree Search is used for "policy improvement" and Nearest Neighbors is used for "supervised learning", we establish that this method finds an $\varepsilon$-approximate value function of Nash equilibrium in $\widetilde{O}(\varepsilon^{-(d+4)})$ steps when the underlying state-space of the game is continuous and $d$-dimensional.

no code implementations • 1 Nov 2019 • Lavanya Marla, Lav R. Varshney, Devavrat Shah, Nirmal A. Prakash, Michael E. Gale

We show this notion of pipelined network flow is optimized using network paths that are both short and wide, and develop efficient algorithms to compute such paths for given pairs of nodes and for all-pairs.

no code implementations • 3 Aug 2019 • Devavrat Shah, Christina Lee Yu

We prove that the algorithm recovers a low rank tensor with maximum entry-wise error (MEE) and mean-squared-error (MSE) decaying to $0$ as long as each entry is observed independently with probability $p = \Omega(n^{-3/2 + \kappa})$ for any arbitrarily small $\kappa > 0$.
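The sampling regime in this guarantee can be made concrete with a small simulation (illustrative only; the recovery algorithm itself is not shown): each entry of an $n \times n \times n$ tensor is observed independently with probability $p = n^{-3/2 + \kappa}$.

```python
import random

def observe_entries(n, kappa=0.1, seed=0):
    """Bernoulli-sample which entries of an n x n x n tensor are observed,
    each independently with probability p = n ** (-1.5 + kappa)."""
    rng = random.Random(seed)
    p = n ** (-1.5 + kappa)
    observed = [(i, j, k) for i in range(n) for j in range(n) for k in range(n)
                if rng.random() < p]
    return p, observed

p, obs = observe_entries(20)
# expected number of observed entries is p * n^3, a vanishing fraction of all 8000
```

Even in this tiny example the observed set is very sparse, which is exactly the regime the entry-wise error bounds address.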

no code implementations • ICLR 2019 • Ravichandra Addanki, Mohammad Alizadeh, Shaileshh Bojja Venkatakrishnan, Devavrat Shah, Qiaomin Xie, Zhi Xu

AlphaGo Zero (AGZ) introduced a new {\em tabula rasa} reinforcement learning algorithm that has achieved superhuman performance in the games of Go, Chess, and Shogi with no prior knowledge other than the rules of the game.

no code implementations • 17 Mar 2019 • Anish Agarwal, Abdullah Alomar, Devavrat Shah

Computationally, tspDB is 59-62x and 94-95x faster compared to LSTM and DeepAR in terms of median ML model training time and prediction query latency, respectively.

no code implementations • NeurIPS 2019 • Anish Agarwal, Devavrat Shah, Dennis Shen, Dogyoon Song

As an important contribution to the Synthetic Control literature, we establish that an (approximate) linear synthetic control exists in the setting of a generalized factor model; traditionally, the existence of a synthetic control has to be assumed as an axiom.

no code implementations • 14 Feb 2019 • Devavrat Shah, Qiaomin Xie, Zhi Xu

In effect, we establish that to learn an $\varepsilon$ approximation of the value function with respect to $\ell_\infty$ norm, MCTS combined with nearest neighbor requires a sample size scaling as $\widetilde{O}\big(\varepsilon^{-(d+4)}\big)$, where $d$ is the dimension of the state space.

no code implementations • 31 Dec 2018 • Devavrat Shah, Dogyoon Song

Despite the success of RUMs in various domains and the versatility of mixture RUMs to capture the heterogeneity in preferences, there has been only limited progress in learning a mixture of RUMs from partial data such as pairwise comparisons.

no code implementations • 15 Oct 2018 • Linqi Song, Christina Fragouli, Devavrat Shah

We consider recommendation systems that need to operate under wireless bandwidth constraints, measured as number of broadcast transmissions, and demonstrate a (tight for some instances) tradeoff between regret and bandwidth for two scenarios: the case of multi-armed bandit with context, and the case where there is a latent structure in the message space that we can exploit to reduce the learning phase.

1 code implementation • 25 Feb 2018 • Anish Agarwal, Muhammad Jehangir Amjad, Devavrat Shah, Dennis Shen

In effect, this generalizes the widely used Singular Spectrum Analysis (SSA) in time series literature, and allows us to establish a rigorous link between time series analysis and matrix estimation.

no code implementations • NeurIPS 2018 • Devavrat Shah, Qiaomin Xie

In particular, for MDPs with a $d$-dimensional state space and the discount factor $\gamma \in (0, 1)$, given an arbitrary sample path with "covering time" $ L $, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples.

no code implementations • NeurIPS 2017 • Christian Borgs, Jennifer Chayes, Christina E. Lee, Devavrat Shah

We show that the mean squared error (MSE) of our estimator converges to $0$ at the rate of $O(d^2 (pn)^{-2/5})$ as long as $\omega(d^5 n)$ random entries from a total of $n^2$ entries of $Y$ are observed (uniformly sampled), $\mathbb{E}[Y]$ has rank $d$, and the entries of $Y$ have bounded support.

1 code implementation • 18 Nov 2017 • Muhammad Jehangir Amjad, Devavrat Shah, Dennis Shen

Our experiments, using both real-world and synthetic datasets, demonstrate that our robust generalization yields an improvement over the classical synthetic control method.

no code implementations • 23 Mar 2017 • Devavrat Shah, Christina Lee Yu

Inferring the correct answers to binary tasks based on multiple noisy answers in an unsupervised manner has emerged as the canonical question for micro-task crowdsourcing or more generally aggregating opinions.

no code implementations • NeurIPS 2016 • Dogyoon Song, Christina E. Lee, Yihua Li, Devavrat Shah

In contrast with classical regression, the features $x = (x_1(u), x_2(i))$ are not observed, making it challenging to apply standard regression methods to predict the unobserved ratings.

no code implementations • 6 Oct 2015 • George Chen, Devavrat Shah, Polina Golland

Despite the popularity and empirical success of patch-based nearest-neighbor and weighted majority voting approaches to medical image segmentation, there has been no theoretical development on when, why, and how well these nonparametric methods work.

no code implementations • 20 Jul 2015 • Guy Bresler, Devavrat Shah, Luis F. Voloch

There is much empirical evidence that item-item collaborative filtering works well in practice.

no code implementations • NeurIPS 2014 • Guy Bresler, David Gamarnik, Devavrat Shah

In this paper we investigate the computational complexity of learning the graph structure underlying a discrete undirected graphical model from i.i.d. samples.

no code implementations • NeurIPS 2014 • Sewoong Oh, Devavrat Shah

In case of single MNL models (no mixture), computationally and statistically tractable learning from pair-wise comparisons is feasible.

no code implementations • NeurIPS 2014 • Guy Bresler, George H. Chen, Devavrat Shah

Despite the prevalence of collaborative filtering in recommendation systems, there has been little theoretical development on why and how well it works, especially in the "online" setting, where items are recommended to users over time.

no code implementations • 28 Oct 2014 • Guy Bresler, David Gamarnik, Devavrat Shah

In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics.

15 code implementations • 6 Oct 2014 • Devavrat Shah, Kang Zhang

In this paper, we discuss the method of Bayesian regression and its efficacy for predicting price variation of Bitcoin, a recently popularized virtual, cryptographic currency.
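The flavor of similarity-weighted prediction used in this line of work can be sketched as follows. This is a toy kernel-weighted average over made-up data; `predict_price_change` is a hypothetical name and this is not the paper's estimator. Recent price patterns are compared with historical ones, and the changes that followed similar patterns are averaged with weights that decay with distance.

```python
import math

def predict_price_change(recent, history, bandwidth=1.0):
    """Similarity-weighted average of the price changes that followed
    historical patterns resembling the recent one (Gaussian kernel weights)."""
    num = den = 0.0
    for pattern, next_change in history:
        dist2 = sum((a - b) ** 2 for a, b in zip(recent, pattern))
        w = math.exp(-dist2 / (2 * bandwidth ** 2))
        num += w * next_change
        den += w
    return num / den

# Two similar 'up' patterns and one very different 'down' pattern (made up).
history = [([1.0, 2.0], 0.5), ([1.1, 2.1], 0.6), ([-1.0, -2.0], -0.4)]
pred = predict_price_change([1.0, 2.0], history)
```

The dissimilar down-pattern receives a negligible weight, so the prediction is dominated by the two nearby up-patterns.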

no code implementations • 17 Sep 2014 • Angélique Drémeau, Christophe Schülke, Yingying Xu, Devavrat Shah

These are notes from the lecture of Devavrat Shah given at the autumn school "Statistical Physics, Optimization, Inference, and Message-Passing Algorithms", that took place in Les Houches, France from Monday September 30th, 2013, till Friday October 11th, 2013.

no code implementations • NeurIPS 2014 • Guy Bresler, David Gamarnik, Devavrat Shah

Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters.

no code implementations • NeurIPS 2013 • Christina E. Lee, Asuman Ozdaglar, Devavrat Shah

In this paper, we provide a novel algorithm that answers whether a chosen state in a MC has stationary probability larger than some $\Delta \in (0, 1)$.
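One way to make the quantity in question concrete is a toy Monte Carlo sketch built on the classical return-time identity $\pi(i) = 1/\mathbb{E}_i[T_i]$; this is an illustration of the problem, not the paper's algorithm. It estimates the stationary probability of a state from truncated random walks started at that state.

```python
import random

def estimate_stationary_prob(transition, state, num_walks=5000, max_steps=200, seed=1):
    """Estimate pi(state) as 1 / (mean return time to state), using truncated
    random walks started at the state; walks that fail to return are discarded."""
    rng = random.Random(seed)
    total_steps, completed = 0, 0
    for _ in range(num_walks):
        current, steps = state, 0
        while steps < max_steps:
            # Sample the next state from the current transition row.
            r, acc = rng.random(), 0.0
            for nxt, prob in enumerate(transition[current]):
                acc += prob
                if r < acc:
                    current = nxt
                    break
            steps += 1
            if current == state:
                break
        if current == state:
            total_steps += steps
            completed += 1
    return completed / total_steps  # approx 1 / (mean return time)

# Two-state chain whose stationary distribution is (2/3, 1/3).
P = [[0.9, 0.1], [0.2, 0.8]]
pi0 = estimate_stationary_prob(P, 0)
```

For this chain the mean return time to state 0 is 1.5, so the estimate should be close to 2/3; answering whether $\pi(i) > \Delta$ only requires walks of length $O(1/\Delta)$, which is the locality the paper exploits.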

no code implementations • 24 Sep 2013 • Vincent Blondel, Kyomin Jung, Pushmeet Kohli, Devavrat Shah

This paper presents a novel meta algorithm, Partition-Merge (PM), which takes existing centralized algorithms for graph computation and makes them distributed and faster.

no code implementations • NeurIPS 2013 • George H. Chen, Stanislav Nikolov, Devavrat Shah

Our guiding hypothesis is that in many applications, such as forecasting which topics will become trends on Twitter, there aren't actually that many prototypical time series to begin with, relative to the number of time series we have access to, e.g., topics become trends on Twitter only in a few distinct manners whereas we can collect massive amounts of Twitter data.

no code implementations • NeurIPS 2012 • Sahand Negahban, Sewoong Oh, Devavrat Shah

In most settings, in addition to obtaining a ranking, finding ‘scores’ for each object (e.g., a player’s rating) is of interest for understanding the intensity of the preferences.

no code implementations • 8 Sep 2012 • Sahand Negahban, Sewoong Oh, Devavrat Shah

To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equivalent to the Multinomial Logit (MNL) for pair-wise comparisons) in which each object has an associated score which determines the probabilistic outcomes of pair-wise comparisons between objects.
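The BTL model referenced above is easy to state in code (a simulation sketch with made-up scores, not the paper's rank aggregation algorithm): item $i$ beats item $j$ with probability $w_i/(w_i + w_j)$, where $w_i, w_j > 0$ are the items' latent scores.

```python
import random

def btl_win_prob(w_i, w_j):
    """Bradley-Terry-Luce: P(i beats j) = w_i / (w_i + w_j)."""
    return w_i / (w_i + w_j)

def simulate_comparisons(w_i, w_j, n, seed=0):
    """Sample n independent pairwise comparisons; return how many times i wins."""
    rng = random.Random(seed)
    p = btl_win_prob(w_i, w_j)
    return sum(rng.random() < p for _ in range(n))

# An item with score 3.0 against an item with score 1.0 should win ~75% of games.
wins = simulate_comparisons(3.0, 1.0, 10000)
```

Rank aggregation then inverts this generative model: given only the observed win counts, recover the latent scores (and hence the ranking).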

no code implementations • NeurIPS 2011 • David R. Karger, Sewoong Oh, Devavrat Shah

Crowdsourcing systems, in which tasks are electronically distributed to numerous "information piece-workers", have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading.

no code implementations • 17 Oct 2011 • David R. Karger, Sewoong Oh, Devavrat Shah

Further, we compare our approach with a more general class of algorithms which can dynamically assign tasks.

no code implementations • 2 Nov 2010 • Dhruv Parthasarathy, Devavrat Shah, Tauhid Zaman

For a large number of popular social networks, it recovers communities with a much higher F1 score than other popular algorithms.

no code implementations • NeurIPS 2009 • Vivek Farias, Srikanth Jagabathula, Devavrat Shah

We visit the following fundamental problem: For a generic model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal preference information), how may one predict revenues from offering a particular assortment of choices?

no code implementations • NeurIPS 2009 • Kyomin Jung, Pushmeet Kohli, Devavrat Shah

We consider the question of computing Maximum A Posteriori (MAP) assignment in an arbitrary pair-wise Markov Random Field (MRF).

no code implementations • NeurIPS 2007 • Kyomin Jung, Devavrat Shah

We present a new local approximation algorithm for computing MAP and log-partition function for arbitrary exponential family distribution represented by a finite-valued pair-wise Markov random field (MRF), say $G$. Our algorithm is based on decomposing $G$ into appropriately chosen small components; computing estimates locally in each of these components and then producing a good global solution.

Papers With Code is a free resource with all data licensed under CC-BY-SA.