no code implementations • 25 Mar 2024 • Ankit Pensia, Varun Jog, Po-Ling Loh
In this paper, we derive a formula that characterizes the sample complexity (up to multiplicative constants that are independent of $p$, $q$, and all error parameters) for: (i) all $0 \le \alpha, \beta \le 1/8$ in the prior-free setting; and (ii) all $\delta \le \alpha/4$ in the Bayesian setting.
no code implementations • 30 Jan 2023 • Eirini Ioannou, Muni Sreenivas Pydi, Po-Ling Loh
We study a new variant of Newton's method for empirical risk minimization, in which the gradient and Hessian of the objective function are replaced at each iteration by robust estimators drawn from the existing literature on robust mean estimation for multivariate data.
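The sketch below illustrates the general shape of such an iteration for a least-squares objective, using the coordinate-wise (entrywise) median purely as an illustrative stand-in for the more refined robust mean estimators the paper relies on; the objective, aggregator, and ridge term are assumptions for illustration, not the paper's construction.

```python
# A minimal sketch, assuming a least-squares objective and a coordinate-wise
# median as the robust aggregator of per-example gradients and Hessians.
import numpy as np

def robust_newton_least_squares(X, y, n_iter=20, ridge=1e-6):
    """Newton-type iterations in which the usual sample means of per-example
    gradients and Hessians are replaced by entrywise medians."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iter):
        residuals = X @ theta - y                         # shape (n,)
        per_example_grads = X * residuals[:, None]        # shape (n, d)
        per_example_hess = X[:, :, None] * X[:, None, :]  # shape (n, d, d)

        # Robust aggregation: entrywise median instead of the mean.
        g = np.median(per_example_grads, axis=0)
        H = np.median(per_example_hess, axis=0) + ridge * np.eye(d)

        theta = theta - np.linalg.solve(H, g)
    return theta

# Usage: a small regression problem with a few gross outliers in the responses.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=500)
y[:25] += 50.0
print(robust_newton_least_squares(X, y))
```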
no code implementations • 9 Jan 2023 • Ankit Pensia, Amir R. Asadi, Varun Jog, Po-Ling Loh
For the sample complexity of simple hypothesis testing under pure LDP constraints, we establish instance-optimal bounds for distributions with binary support; minimax-optimal bounds for general distributions; and (approximately) instance-optimal, computationally efficient algorithms for general distributions.
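For reference, the pure local differential privacy constraint referred to here has the following standard form (notation ours, not the paper's): a local randomizer $Q$ mapping a sample $x$ to a message satisfies pure $\varepsilon$-LDP if
\[
  \Pr[Q(x) \in S] \;\le\; e^{\varepsilon}\, \Pr[Q(x') \in S]
  \qquad \text{for all inputs } x, x' \text{ and all measurable sets } S.
\]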
no code implementations • 6 Jun 2022 • Ankit Pensia, Varun Jog, Po-Ling Loh
We show that the sample complexity of simple binary hypothesis testing under communication constraints is at most a logarithmic factor larger than in the unconstrained setting, and that this bound is tight.
no code implementations • 31 Jan 2022 • Xiaomin Zhang, Xucheng Zhang, Po-Ling Loh, Yingyu Liang
Mixtures of ranking models are standard tools for ranking problems.
1 code implementation • 19 Mar 2021 • Marco Avella-Medina, Casey Bradshaw, Po-Ling Loh
We propose a general optimization-based framework for computing differentially private M-estimators and a new method for constructing differentially private confidence regions.
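As background, one standard route to a differentially private M-estimator is noisy gradient descent with bounded per-example scores and Gaussian noise at each step. The sketch below is illustrative only: the Huber location objective, clipping bound, step size, and per-step noise calibration are assumptions, not the optimization framework or calibration proposed in the paper.

```python
# A minimal sketch, assuming a Huber location M-estimator and a naive per-step
# Gaussian-mechanism noise scale (illustrative; not the paper's construction).
import numpy as np

def dp_huber_location(x, eps_per_step=0.1, delta_dp=1e-6, delta=1.0,
                      n_steps=50, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    C = delta                                    # the Huber score is bounded by delta
    sigma = (2 * C / n) * np.sqrt(2 * np.log(1.25 / delta_dp)) / eps_per_step
    theta = 0.0
    for _ in range(n_steps):
        r = x - theta
        psi = np.clip(r, -delta, delta)          # Huber score, already bounded by C
        grad = -np.mean(psi)                     # gradient of the averaged Huber loss
        noisy_grad = grad + rng.normal(scale=sigma)
        theta = theta - step_size * noisy_grad
    return theta

# Usage: privately estimate a location from data containing a few outliers.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(3.0, 1.0, size=2000), np.full(20, 100.0)])
print(dp_huber_location(x))
```

Note that the per-step privacy budget composes over the iterations; a full accounting would use composition theorems rather than the naive per-step calibration above.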
no code implementations • 20 Jan 2021 • Zheng Liu, Po-Ling Loh
Robust estimation is an important problem in statistics that aims to provide a reasonable estimator when the data-generating distribution lies within an appropriately defined ball around an uncontaminated distribution.
no code implementations • 27 Sep 2020 • Ankit Pensia, Varun Jog, Po-Ling Loh
We study the problem of linear regression where both covariates and responses are potentially (i) heavy-tailed and (ii) adversarially contaminated.
no code implementations • 16 Jun 2020 • Xiaomin Zhang, Xiaojin Zhu, Po-Ling Loh
We first formulate a general statistical algorithm for identifying buggy points and provide rigorous theoretical guarantees under the assumption that the data follow a linear model.
no code implementations • 31 Jan 2020 • Duzhe Wang, Haoda Fu, Po-Ling Loh
We present nonparametric algorithms for estimating optimal individualized treatment rules.
no code implementations • 15 Oct 2019 • Ankit Pensia, Varun Jog, Po-Ling Loh
We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier which is more robust to small perturbations in the input space.
no code implementations • 1 Aug 2019 • Zheng Liu, Jinnian Zhang, Varun Jog, Po-Ling Loh, Alan B. McMillan
Materials and Methods: In this retrospective study, the accuracy of brain tumor segmentation was studied in subjects with low- and high-grade gliomas.
no code implementations • 6 Jul 2019 • Ankit Pensia, Varun Jog, Po-Ling Loh
In the multivariate setting, we generalize our theory to mean estimation for mixtures of radially symmetric distributions, and derive minimax lower bounds on the expected error of any estimator that is agnostic to the scales of individual data points.
no code implementations • 8 May 2019 • Shashank Rajput, Zhili Feng, Zachary Charles, Po-Ling Loh, Dimitris Papailiopoulos
Data augmentation (DA) is commonly used during model training, as it significantly improves test error and model robustness.
no code implementations • 6 Nov 2018 • Po-Ling Loh
However, the variance of the error term in the linear model is intricately connected to the optimal parameter used to define the shape of the Huber loss.
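For reference, the Huber loss with shape parameter $\delta$ has the standard form (notation ours): quadratic for small residuals and linear for large ones,
\[
  \ell_\delta(r) =
  \begin{cases}
    \tfrac{1}{2} r^2, & |r| \le \delta, \\[2pt]
    \delta |r| - \tfrac{1}{2}\delta^2, & |r| > \delta,
  \end{cases}
\]
so the choice of $\delta$ governs the trade-off between efficiency and robustness.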
no code implementations • 22 Oct 2018 • Justin Khim, Po-Ling Loh
We derive bounds for a notion of adversarial risk, designed to characterize the robustness of linear and neural network classifiers to adversarial perturbations.
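A commonly used formalization of adversarial risk, which the paper's precise notion may refine, is the worst-case 0-1 loss over perturbations of norm at most $\epsilon$ (notation ours):
\[
  R_{\mathrm{adv}}(f; \epsilon)
  = \mathbb{E}_{(X, Y)}\Big[\sup_{\|\delta\| \le \epsilon}
    \mathbf{1}\{f(X + \delta) \neq Y\}\Big].
\]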
no code implementations • 1 Apr 2018 • Zhili Feng, Po-Ling Loh
When the adversary is allowed a bounded memory of size 1, we show that a matching lower bound of $\widetilde\Omega(T^{2/3})$ is achieved in the case of full-information feedback.
no code implementations • 13 Feb 2018 • Muni Sreenivas Pydi, Varun Jog, Po-Ling Loh
We also provide simulations showing the relative convergence rates of our algorithms in comparison to an unbiased random walk, as a function of the smoothness of the graph function.
no code implementations • 12 Jan 2018 • Ankit Pensia, Varun Jog, Po-Ling Loh
In statistical learning theory, generalization error is used to quantify the degree to which a supervised machine learning algorithm may overfit to training data.
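Concretely, in the standard formulation (notation ours), for an algorithm that outputs a hypothesis $W$ from a training sample $S = (Z_1, \dots, Z_n)$ drawn i.i.d. from $\mu$, the expected generalization error is the gap between population and empirical risk:
\[
  \mathrm{gen}(\mu, P_{W \mid S})
  = \mathbb{E}\big[ L_\mu(W) - L_S(W) \big],
  \qquad
  L_\mu(w) = \mathbb{E}_{Z \sim \mu}[\ell(w, Z)], \quad
  L_S(w) = \tfrac{1}{n}\sum_{i=1}^n \ell(w, Z_i).
\]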
no code implementations • NeurIPS 2016 • Justin T. Khim, Varun Jog, Po-Ling Loh
We quantify the gap between our upper and lower bounds in the case of the linear threshold model and illustrate the gains of our upper bounds for independent cascade models in relation to existing results.
no code implementations • 1 Nov 2016 • Justin Khim, Varun Jog, Po-Ling Loh
We consider the problem of influence maximization in fixed networks for contagion models in an adversarial setting.
no code implementations • 19 Oct 2015 • Justin Khim, Po-Ling Loh
At the core of our proofs is a probabilistic analysis of Pólya urns corresponding to the number of uninfected neighbors in specific subtrees of the infection tree.
no code implementations • 10 Jan 2015 • Varun Jog, Po-Ling Loh
We establish bounds on the KL divergence between two multivariate Gaussian distributions in terms of the Hamming distance between the edge sets of the corresponding graphical models.
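For reference, the KL divergence between two $d$-dimensional Gaussians $P_i = \mathcal{N}(\mu_i, \Sigma_i)$ has the standard closed form (notation ours):
\[
  D_{\mathrm{KL}}(P_1 \,\|\, P_2)
  = \tfrac{1}{2}\Big[
      \operatorname{tr}(\Sigma_2^{-1}\Sigma_1) - d
      + (\mu_2 - \mu_1)^\top \Sigma_2^{-1} (\mu_2 - \mu_1)
      + \log\frac{\det \Sigma_2}{\det \Sigma_1}
    \Big],
\]
where, in the graphical-model setting, the edge sets correspond to the sparsity patterns of the precision matrices $\Theta_i = \Sigma_i^{-1}$.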
no code implementations • 1 Jan 2015 • Po-Ling Loh
We first establish a form of local statistical consistency for the penalized regression estimators under fairly mild conditions on the error distribution: When the derivative of the loss function is bounded and satisfies a local restricted curvature condition, all stationary points within a constant radius of the true regression vector converge at the minimax rate enjoyed by the Lasso with sub-Gaussian errors.
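The rate referenced here is the usual $\ell_2$ estimation rate for $k$-sparse regression in $p$ dimensions from $n$ observations with noise level $\sigma$, stated up to constants (standard result; notation ours):
\[
  \|\hat{\beta} - \beta^*\|_2 \;\lesssim\; \sigma \sqrt{\frac{k \log p}{n}}.
\]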
no code implementations • 17 Dec 2014 • Po-Ling Loh, Martin J. Wainwright
We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and $\ell_\infty$-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex.
no code implementations • NeurIPS 2014 • Po-Ling Loh, Andre Wibisono
We establish sufficient conditions for the concavity of our reweighted objective function in terms of the weight assignments in the Kikuchi expansion, and show that a reweighted version of the sum-product algorithm applied to the Kikuchi region graph produces global optima of the Kikuchi approximation whenever the algorithm converges.
no code implementations • 14 Nov 2013 • Po-Ling Loh, Peter Bühlmann
We establish a new framework for statistical estimation of directed acyclic graphs (DAGs) when data are generated from a linear, possibly non-Gaussian structural equation model.
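In the standard formulation of a linear structural equation model over a DAG $G$ with vertices $\{1, \dots, p\}$ (notation ours), each variable is a linear function of its parents plus independent, not necessarily Gaussian, noise:
\[
  X_j = \sum_{k \in \mathrm{pa}(j)} b_{jk} X_k + \epsilon_j,
  \qquad j = 1, \dots, p,
\]
where $\mathrm{pa}(j)$ denotes the parent set of node $j$ in $G$.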
no code implementations • NeurIPS 2013 • Po-Ling Loh, Martin J. Wainwright
We provide novel theoretical results regarding local optima of regularized $M$-estimators, allowing for nonconvexity in both loss and penalty functions.
no code implementations • NeurIPS 2012 • Po-Ling Loh, Martin J. Wainwright
We show that for certain graph structures, the support of the inverse covariance matrix of indicator variables on the vertices of a graph reflects the conditional independence structure of the graph.
no code implementations • NeurIPS 2011 • Po-Ling Loh, Martin J. Wainwright
On the statistical side, we provide non-asymptotic bounds that hold with high probability for the cases of noisy, missing, and/or dependent data.