Search Results for author: Po-Ling Loh

Found 30 papers, 1 paper with code

The Sample Complexity of Simple Binary Hypothesis Testing

no code implementations 25 Mar 2024 Ankit Pensia, Varun Jog, Po-Ling Loh

In this paper, we derive a formula that characterizes the sample complexity (up to multiplicative constants that are independent of $p$, $q$, and all error parameters) for: (i) all $0 \le \alpha, \beta \le 1/8$ in the prior-free setting; and (ii) all $\delta \le \alpha/4$ in the Bayesian setting.
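As background (the classical constant-error regime, not the paper's new formula): when $\alpha$ and $\beta$ are fixed constants, the sample complexity of testing $p$ against $q$ is known to scale as $\Theta\big(1/d_H^2(p, q)\big)$, where $d_H^2(p, q) = \frac{1}{2}\sum_x \big(\sqrt{p(x)} - \sqrt{q(x)}\big)^2$ is the squared Hellinger distance; the formula derived in the paper covers the full range of error parameters stated above.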

Robust empirical risk minimization via Newton's method

no code implementations 30 Jan 2023 Eirini Ioannou, Muni Sreenivas Pydi, Po-Ling Loh

A new variant of Newton's method for empirical risk minimization is studied, where at each iteration of the optimization algorithm, the gradient and Hessian of the objective function are replaced by robust estimators taken from existing literature on robust mean estimation for multivariate data.
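A minimal sketch of this kind of update, assuming a logistic loss and using a coordinate-wise winsorized mean as a simple stand-in for the multivariate robust mean estimators the paper takes from the robust-statistics literature (all names below are illustrative, not the paper's construction):

```python
import numpy as np

def winsorized_mean(samples, trim=0.1):
    # Coordinate-wise winsorized mean: a crude stand-in for the multivariate
    # robust mean estimators referenced in the abstract.
    lo, hi = np.quantile(samples, [trim, 1 - trim], axis=0)
    return np.clip(samples, lo, hi).mean(axis=0)

def robust_newton_step(theta, X, y, trim=0.1, damping=1e-3):
    # One Newton-type step for logistic loss in which the sample gradient and
    # Hessian are replaced by robust estimates of the per-sample quantities.
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    grads = (p - y)[:, None] * X                              # per-sample gradients
    g = winsorized_mean(grads, trim)
    w = p * (1.0 - p)
    hess = w[:, None, None] * np.einsum('ij,ik->ijk', X, X)   # per-sample Hessians
    d = X.shape[1]
    H = winsorized_mean(hess.reshape(len(X), -1), trim).reshape(d, d)
    H = H + damping * np.eye(d)                               # keep the step well-posed
    return theta - np.linalg.solve(H, g)
```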

Simple Binary Hypothesis Testing under Local Differential Privacy and Communication Constraints

no code implementations 9 Jan 2023 Ankit Pensia, Amir R. Asadi, Varun Jog, Po-Ling Loh

For the sample complexity of simple hypothesis testing under pure LDP constraints, we establish instance-optimal bounds for distributions with binary support; minimax-optimal bounds for general distributions; and (approximately) instance-optimal, computationally efficient algorithms for general distributions.
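For context, the canonical pure-LDP mechanism for binary data is randomized response; a minimal sketch follows (this is the textbook mechanism, not necessarily the instance-optimal one constructed in the paper):

```python
import numpy as np

def randomized_response(bit, epsilon, rng=np.random.default_rng(0)):
    # Report the true bit with probability e^eps / (1 + e^eps) and flip it
    # otherwise; this satisfies epsilon-local differential privacy, and the
    # tester then runs the hypothesis test on the privatized bits.
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return int(bit) if rng.random() < p_keep else 1 - int(bit)
```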

Communication-constrained hypothesis testing: Optimality, robustness, and reverse data processing inequalities

no code implementations 6 Jun 2022 Ankit Pensia, Varun Jog, Po-Ling Loh

We show that the sample complexity of simple binary hypothesis testing under communication constraints is at most a logarithmic factor larger than in the unconstrained setting, and that this bound is tight.

On the identifiability of mixtures of ranking models

no code implementations 31 Jan 2022 Xiaomin Zhang, Xucheng Zhang, Po-Ling Loh, Yingyu Liang

Mixtures of ranking models are standard tools for ranking problems.

Differentially private inference via noisy optimization

1 code implementation 19 Mar 2021 Marco Avella-Medina, Casey Bradshaw, Po-Ling Loh

We propose a general optimization-based framework for computing differentially private M-estimators and a new method for constructing differentially private confidence regions.
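A hedged sketch of the noisy-optimization idea: gradients are clipped to bound sensitivity and perturbed with Gaussian noise before each update. The step size, clipping level, and the mapping from the privacy budget to `sigma` are placeholders rather than the paper's calibration:

```python
import numpy as np

def noisy_gradient_descent(avg_grad, theta0, n, steps, lr, clip, sigma,
                           rng=np.random.default_rng(0)):
    # avg_grad(theta) should return the average gradient of the M-estimation
    # objective over the n data points.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        g = avg_grad(theta)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
        noise = (sigma * clip / n) * rng.standard_normal(g.shape)
        theta = theta - lr * (g + noise)                      # privatized update
    return theta
```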

Robust W-GAN-Based Estimation Under Wasserstein Contamination

no code implementations 20 Jan 2021 Zheng Liu, Po-Ling Loh

Robust estimation is an important problem in statistics that aims to provide a reasonable estimator when the data-generating distribution lies within an appropriately defined ball around an uncontaminated distribution.

regression

Robust regression with covariate filtering: Heavy tails and adversarial contamination

no code implementations 27 Sep 2020 Ankit Pensia, Varun Jog, Po-Ling Loh

We study the problem of linear regression where both covariates and responses are potentially (i) heavy-tailed and (ii) adversarially contaminated.

regression
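A simplified two-stage stand-in suggested by the title: filter points whose covariates look outlying, then run a Huber-type regression on the remainder. The filtering rule and the use of scikit-learn's HuberRegressor are illustrative assumptions, not the paper's algorithm or its guarantees:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def filter_then_regress(X, y, quantile=0.95):
    # Stage 1: drop points whose covariates are far from the coordinate-wise
    # median (a crude filtering rule used purely for illustration).
    center = np.median(X, axis=0)
    dists = np.linalg.norm(X - center, axis=1)
    keep = dists <= np.quantile(dists, quantile)
    # Stage 2: Huber regression on the filtered sample to handle heavy-tailed
    # or contaminated responses.
    model = HuberRegressor().fit(X[keep], y[keep])
    return model.coef_, model.intercept_
```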

Provable Training Set Debugging for Linear Regression

no code implementations 16 Jun 2020 Xiaomin Zhang, Xiaojin Zhu, Po-Ling Loh

We first formulate a general statistical algorithm for identifying buggy points and provide rigorous theoretical guarantees under the assumption that the data follow a linear model.

regression
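A toy version of the flag-the-buggy-points idea under a linear model: repeatedly fit least squares and flag the observations that fit worst. This greedy residual rule is a hypothetical simplification, not the paper's statistical algorithm or its guarantees:

```python
import numpy as np

def flag_buggy_points(X, y, n_flags, rounds=5):
    # Greedily flag the points with the largest residuals under a linear fit
    # to the not-yet-flagged data.
    flagged = np.zeros(len(y), dtype=bool)
    per_round = max(1, n_flags // rounds)
    for _ in range(rounds):
        keep = ~flagged
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(y - X @ beta)
        resid[flagged] = -np.inf                 # never re-flag a point
        flagged[np.argsort(resid)[-per_round:]] = True
        if flagged.sum() >= n_flags:
            break
    return np.where(flagged)[0]
```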

Boosting Algorithms for Estimating Optimal Individualized Treatment Rules

no code implementations 31 Jan 2020 Duzhe Wang, Haoda Fu, Po-Ling Loh

We present nonparametric algorithms for estimating optimal individualized treatment rules.

Extracting robust and accurate features via a robust information bottleneck

no code implementations 15 Oct 2019 Ankit Pensia, Varun Jog, Po-Ling Loh

We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier which is more robust to small perturbations in the input space.

Robustifying deep networks for image segmentation

no code implementations 1 Aug 2019 Zheng Liu, Jinnian Zhang, Varun Jog, Po-Ling Loh, Alan B McMillan

Materials and Methods: In this retrospective study, the accuracy of brain tumor segmentation was studied in subjects with low- and high-grade gliomas.

Brain Tumor Segmentation Data Augmentation +3

Estimating location parameters in entangled single-sample distributions

no code implementations 6 Jul 2019 Ankit Pensia, Varun Jog, Po-Ling Loh

In the multivariate setting, we generalize our theory to mean estimation for mixtures of radially symmetric distributions, and derive minimax lower bounds on the expected error of any estimator that is agnostic to the scales of individual data points.

regression

Does Data Augmentation Lead to Positive Margin?

no code implementations 8 May 2019 Shashank Rajput, Zhili Feng, Zachary Charles, Po-Ling Loh, Dimitris Papailiopoulos

Data augmentation (DA) is commonly used during model training, as it significantly improves test error and model robustness.

Data Augmentation

Scale calibration for high-dimensional robust regression

no code implementations 6 Nov 2018 Po-Ling Loh

However, the variance of the error term in the linear model is intricately connected to the optimal parameter used to define the shape of the Huber loss.

regression
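For reference, the Huber loss with shape parameter $\tau$ alluded to above is the standard one: $\ell_\tau(r) = \frac{1}{2}r^2$ for $|r| \le \tau$ and $\ell_\tau(r) = \tau|r| - \frac{1}{2}\tau^2$ for $|r| > \tau$, so $\tau$ sets the scale at which the loss switches from quadratic to linear and must therefore be calibrated jointly with the error scale.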

Adversarial Risk Bounds via Function Transformation

no code implementations 22 Oct 2018 Justin Khim, Po-Ling Loh

We derive bounds for a notion of adversarial risk, designed to characterize the robustness of linear and neural network classifiers to adversarial perturbations.

General Classification

Online learning with graph-structured feedback against adaptive adversaries

no code implementations 1 Apr 2018 Zhili Feng, Po-Ling Loh

When the adversary is allowed a bounded memory of size 1, we show that a matching lower bound of $\widetilde\Omega(T^{2/3})$ is achieved in the case of full-information feedback.

Graph-Based Ascent Algorithms for Function Maximization

no code implementations 13 Feb 2018 Muni Sreenivas Pydi, Varun Jog, Po-Ling Loh

We also provide simulations showing the relative convergence rates of our algorithms in comparison to an unbiased random walk, as a function of the smoothness of the graph function.
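A minimal greedy-ascent baseline of the kind such algorithms can be compared against (the dict-of-neighbors graph representation and function names are assumptions for illustration; the paper's algorithms and their convergence analysis are more refined):

```python
import random

def greedy_ascent(graph, f, start, steps):
    # Move to the best-valued neighbor at each step; stop at a local maximum
    # of f on the graph.  graph maps each vertex to a list of its neighbors.
    v = start
    for _ in range(steps):
        best = max(graph[v], key=f, default=v)
        if f(best) <= f(v):
            break
        v = best
    return v

def unbiased_random_walk(graph, start, steps, rng=random.Random(0)):
    # The comparison baseline from the abstract: a walk that ignores f entirely.
    v = start
    for _ in range(steps):
        if graph[v]:
            v = rng.choice(graph[v])
    return v
```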

Generalization Error Bounds for Noisy, Iterative Algorithms

no code implementations 12 Jan 2018 Ankit Pensia, Varun Jog, Po-Ling Loh

In statistical learning theory, generalization error is used to quantify the degree to which a supervised machine learning algorithm may overfit to training data.

Learning Theory

Computing and maximizing influence in linear threshold and triggering models

no code implementations NeurIPS 2016 Justin T. Khim, Varun Jog, Po-Ling Loh

We quantify the gap between our upper and lower bounds in the case of the linear threshold model and illustrate the gains of our upper bounds for independent cascade models in relation to existing results.

Adversarial Influence Maximization

no code implementations 1 Nov 2016 Justin Khim, Varun Jog, Po-Ling Loh

We consider the problem of influence maximization in fixed networks for contagion models in an adversarial setting.

Confidence Sets for the Source of a Diffusion in Regular Trees

no code implementations 19 Oct 2015 Justin Khim, Po-Ling Loh

At the core of our proofs is a probabilistic analysis of Pólya urns corresponding to the number of uninfected neighbors in specific subtrees of the infection tree.

On model misspecification and KL separation for Gaussian graphical models

no code implementations 10 Jan 2015 Varun Jog, Po-Ling Loh

We establish bounds on the KL divergence between two multivariate Gaussian distributions in terms of the Hamming distance between the edge sets of the corresponding graphical models.

Model Selection
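For reference, the quantity being bounded has the standard closed form for $\mathcal{N}(\mu_1, \Sigma_1)$ and $\mathcal{N}(\mu_2, \Sigma_2)$ in $\mathbb{R}^d$: $\mathrm{KL} = \frac{1}{2}\big[\operatorname{tr}(\Sigma_2^{-1}\Sigma_1) + (\mu_2-\mu_1)^\top \Sigma_2^{-1}(\mu_2-\mu_1) - d + \log(\det\Sigma_2/\det\Sigma_1)\big]$. In the zero-mean case relevant to comparing graphical models, the quadratic term vanishes and the divergence depends only on the two precision matrices, whose sparsity patterns encode the edge sets.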

Statistical consistency and asymptotic normality for high-dimensional robust M-estimators

no code implementations 1 Jan 2015 Po-Ling Loh

We first establish a form of local statistical consistency for the penalized regression estimators under fairly mild conditions on the error distribution: When the derivative of the loss function is bounded and satisfies a local restricted curvature condition, all stationary points within a constant radius of the true regression vector converge at the minimax rate enjoyed by the Lasso with sub-Gaussian errors.

regression
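The generic form of the estimators in question is a penalized M-estimator, $\hat{\beta} \in \arg\min_{\beta} \frac{1}{n}\sum_{i=1}^{n} \ell(y_i - x_i^\top \beta) + \rho_\lambda(\beta)$, with a possibly nonconvex loss $\ell$ and regularizer $\rho_\lambda$; the consistency statement above applies to all stationary points of this objective within a constant radius of the true regression vector.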

Support recovery without incoherence: A case for nonconvex regularization

no code implementations 17 Dec 2014 Po-Ling Loh, Martin J. Wainwright

We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and $\ell_\infty$-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex.

regression Variable Selection

Concavity of reweighted Kikuchi approximation

no code implementations NeurIPS 2014 Po-Ling Loh, Andre Wibisono

We establish sufficient conditions for the concavity of our reweighted objective function in terms of weight assignments in the Kikuchi expansion, and show that a reweighted version of the sum product algorithm applied to the Kikuchi region graph will produce global optima of the Kikuchi approximation whenever the algorithm converges.

High-dimensional learning of linear causal networks via inverse covariance estimation

no code implementations 14 Nov 2013 Po-Ling Loh, Peter Bühlmann

We establish a new framework for statistical estimation of directed acyclic graphs (DAGs) when data are generated from a linear, possibly non-Gaussian structural equation model.

Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima

no code implementations NeurIPS 2013 Po-Ling Loh, Martin J. Wainwright

We provide novel theoretical results regarding local optima of regularized $M$-estimators, allowing for nonconvexity in both loss and penalty functions.

Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses

no code implementations NeurIPS 2012 Po-Ling Loh, Martin J. Wainwright

We show that for certain graph structures, the support of the inverse covariance matrix of indicator variables on the vertices of a graph reflects the conditional independence structure of the graph.

High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity

no code implementations NeurIPS 2011 Po-Ling Loh, Martin J. Wainwright

On the statistical side, we provide non-asymptotic bounds that hold with high probability for the cases of noisy, missing, and/or dependent data.

regression
