Search Results for author: Samuel B. Hopkins

Found 18 papers, 2 papers with code

Adversarially-Robust Inference on Trees via Belief Propagation

no code implementations 31 Mar 2024 Samuel B. Hopkins, Anqi Li

We first confirm a folklore belief that a malicious adversary who can corrupt an inverse-polynomial fraction of the leaves of their choosing makes this inference impossible.

A quasi-polynomial time algorithm for Multi-Dimensional Scaling via LP hierarchies

no code implementations 29 Nov 2023 Ainesh Bakshi, Vincent Cohen-Addad, Samuel B. Hopkins, Rajesh Jayaram, Silvio Lattanzi

Multi-dimensional Scaling (MDS) is a family of methods for embedding an $n$-point metric into low-dimensional Euclidean space.

Data Visualization
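The MDS problem described above can be illustrated with classical (Torgerson) MDS, the textbook eigendecomposition baseline — a minimal NumPy sketch, not the LP-hierarchy algorithm from the paper:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed an n-point metric (distance matrix D) into R^k
    via the top-k eigenpairs of the double-centered squared-distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centered points
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the k largest eigenpairs
    scale = np.sqrt(np.maximum(w[idx], 0))
    return V[:, idx] * scale              # n x k embedding

# Usage: distances between points on a line are recovered exactly in 1-D.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
```

When the metric is exactly Euclidean, the embedding reproduces the input distances up to rotation and reflection; the paper concerns the much harder regime of approximate, low-dimensional embeddings.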

Towards Practical Robustness Auditing for Linear Regression

no code implementations 30 Jul 2023 Daniel Freund, Samuel B. Hopkins

We investigate practical algorithms to find or disprove the existence of small subsets of a dataset which, when removed, reverse the sign of a coefficient in an ordinary least squares regression involving that dataset.

regression
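The auditing question above can be made concrete with a brute-force sketch for subsets of size one — an illustration of the problem statement only, not the paper's algorithms, which target larger subsets and datasets where enumeration is infeasible:

```python
import numpy as np

def ols_slope(x, y):
    """Slope of y ~ a + b*x by ordinary least squares."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1]

def fragile_points(x, y):
    """Indices whose removal reverses the sign of the OLS slope."""
    base = np.sign(ols_slope(x, y))
    flips = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        if np.sign(ols_slope(x[mask], y[mask])) != base:
            flips.append(i)
    return flips

# A single high-leverage point can dominate the fit: the slope is positive
# with the last point included and negative once it is removed.
x = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
y = np.array([3.0, 2.5, 2.0, 1.5, 9.0])
```

Here `fragile_points(x, y)` flags only the outlier at index 4; certifying that *no* small subset flips the sign is the computationally hard direction the paper studies.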

Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions

no code implementations 28 Jan 2023 Gavin Brown, Samuel B. Hopkins, Adam Smith

Our algorithm runs in time $\tilde{O}(nd^{\omega - 1} + nd/\varepsilon)$, where $\omega < 2.38$ is the matrix multiplication exponent.

Open-Ended Question Answering

Robustness Implies Privacy in Statistical Estimation

no code implementations 9 Dec 2022 Samuel B. Hopkins, Gautam Kamath, Mahbod Majid, Shyam Narayanan

We study the relationship between adversarial robustness and differential privacy in high-dimensional algorithmic statistics.

Adversarial Robustness

Privacy Induces Robustness: Information-Computation Gaps and Sparse Mean Estimation

1 code implementation 1 Nov 2022 Kristian Georgiev, Samuel B. Hopkins

We establish a simple connection between robust and differentially-private algorithms: private mechanisms which perform well with very high probability are automatically robust in the sense that they retain accuracy even if a constant fraction of the samples they receive are adversarially corrupted.

Computational Efficiency PAC learning

The Franz-Parisi Criterion and Computational Trade-offs in High Dimensional Statistics

no code implementations 19 May 2022 Afonso S. Bandeira, Ahmed El Alaoui, Samuel B. Hopkins, Tselil Schramm, Alexander S. Wein, Ilias Zadik

We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal.

Additive models

A Robust Spectral Algorithm for Overcomplete Tensor Decomposition

no code implementations 5 Mar 2022 Samuel B. Hopkins, Tselil Schramm, Jonathan Shi

We give a spectral algorithm for decomposing overcomplete order-4 tensors, so long as their components satisfy an algebraic non-degeneracy condition that holds for nearly all (all but an algebraic set of measure $0$) tensors over $(\mathbb{R}^d)^{\otimes 4}$ with rank $n \le d^2$.

Tensor Decomposition

Efficient Mean Estimation with Pure Differential Privacy via a Sum-of-Squares Exponential Mechanism

no code implementations 25 Nov 2021 Samuel B. Hopkins, Gautam Kamath, Mahbod Majid

"SoS proofs to algorithms" is a key theme in numerous recent works in high-dimensional algorithmic statistics: estimators which apparently require exponential running time, but whose analysis can be captured by low-degree Sum of Squares proofs, can be automatically turned into polynomial-time algorithms with the same provable guarantees.

Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent

no code implementations 13 Sep 2020 Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, Tselil Schramm

Researchers currently use a number of approaches to predict and substantiate information-computation gaps in high-dimensional statistical estimation problems.

Two-sample testing

Estimating Rank-One Spikes from Heavy-Tailed Noise via Self-Avoiding Walks

no code implementations NeurIPS 2020 Jingqiu Ding, Samuel B. Hopkins, David Steurer

For the case of Gaussian noise, the top eigenvector of the given matrix is a widely-studied estimator known to achieve optimal statistical guarantees, e.g., in the sense of the celebrated BBP phase transition.
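The top-eigenvector estimator for the Gaussian-noise case can be demonstrated directly — a sketch of the classical spiked-matrix baseline the snippet refers to, not the paper's self-avoiding-walk estimator for heavy-tailed noise. Above the BBP threshold (spike strength $\lambda > 1$ in this normalization), the leading eigenvector correlates with the planted direction:

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 500, 3.0

v = rng.normal(size=n)
v /= np.linalg.norm(v)                 # planted unit-norm spike
W = rng.normal(size=(n, n))
W = (W + W.T) / np.sqrt(2 * n)         # GOE noise, spectrum roughly [-2, 2]
M = lam * np.outer(v, v) + W           # rank-one spike plus noise

w, V = np.linalg.eigh(M)               # eigenvalues ascending
corr = abs(V[:, -1] @ v)               # correlation of top eigenvector with spike
```

For large $n$ the top eigenvalue concentrates near $\lambda + 1/\lambda$ and the squared correlation near $1 - 1/\lambda^2$, which is the BBP prediction; with heavy-tailed noise this spectral estimator can fail, motivating the paper's approach.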

Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization

no code implementations NeurIPS 2020 Samuel B. Hopkins, Jerry Li, Fred Zhang

In this paper, we provide a meta-problem and a duality theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions.

Robustly Learning any Clusterable Mixture of Gaussians

no code implementations 13 May 2020 Ilias Diakonikolas, Samuel B. Hopkins, Daniel Kane, Sushrut Karmalkar

The key ingredients of this proof are a novel use of SoS-certifiable anti-concentration and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.

Clustering

Quantum Entropy Scoring for Fast Robust Mean Estimation and Improved Outlier Detection

1 code implementation NeurIPS 2019 Yihe Dong, Samuel B. Hopkins, Jerry Li

In robust mean estimation the goal is to estimate the mean $\mu$ of a distribution on $\mathbb{R}^d$ given $n$ independent samples, an $\varepsilon$-fraction of which have been corrupted by a malicious adversary.

Outlier Detection
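The corruption model described in the snippet is easy to simulate. The sketch below uses the coordinate-wise median as a simple robust baseline — not the paper's quantum entropy scoring method — to show how an $\varepsilon$-fraction of adversarial samples drags the empirical mean away from $\mu$ while a robust estimate barely moves:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 2000, 5, 0.1
mu = np.zeros(d)

X = rng.normal(mu, 1.0, size=(n, d))   # clean samples from N(mu, I)
X[: int(eps * n)] = 50.0               # adversary corrupts an eps-fraction

naive = X.mean(axis=0)                 # shifted by about eps * 50 per coordinate
robust = np.median(X, axis=0)          # stays close to mu
```

The coordinate-wise median incurs error growing with $\sqrt{d}$ in the worst case; achieving dimension-independent error $O(\varepsilon)$-type guarantees efficiently is what algorithms like the one in this paper provide.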

Mean Estimation with Sub-Gaussian Rates in Polynomial Time

no code implementations 19 Sep 2018 Samuel B. Hopkins

We study polynomial time algorithms for estimating the mean of a heavy-tailed multivariate random vector.

Statistics Theory Data Structures and Algorithms
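The classical starting point for heavy-tailed mean estimation is the median-of-means estimator; the paper gives a polynomial-time multivariate strengthening with sub-Gaussian rates via sum-of-squares. This one-dimensional sketch shows only the baseline idea:

```python
import numpy as np

def median_of_means(x, k):
    """Split x into k buckets, average each bucket, return the median of the
    bucket means. This achieves sub-Gaussian-style deviation bounds assuming
    only finite variance, where the plain empirical mean does not."""
    buckets = np.array_split(x, k)
    return np.median([b.mean() for b in buckets])

rng = np.random.default_rng(1)
# Heavy-tailed Pareto (Lomax) samples; mean is 1/(alpha - 1) for alpha > 1.
x = rng.pareto(2.5, size=10_000)
est = median_of_means(x, k=20)
```

The hard part, addressed in the paper, is getting the analogous guarantee for the mean of a multivariate heavy-tailed vector in polynomial time, where a naive coordinate-wise median-of-means loses dimension factors.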

Bayesian estimation from few samples: community detection and related problems

no code implementations 30 Sep 2017 Samuel B. Hopkins, David Steurer

in constant average degree graphs---up to what we conjecture to be the computational threshold for this model.

Community Detection Stochastic Block Model +1

Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors

no code implementations 8 Dec 2015 Samuel B. Hopkins, Tselil Schramm, Jonathan Shi, David Steurer

For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent $\approx 1.086$) that approximately recovers a component of a random 3-tensor over $\mathbb R^n$ of rank up to $\tilde \Omega(n^{4/3})$.

Tensor Decomposition

Tensor principal component analysis via sum-of-squares proofs

no code implementations 12 Jul 2015 Samuel B. Hopkins, Jonathan Shi, David Steurer

We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given an order-$3$ tensor $T$ of the form $T = \tau \cdot v_0^{\otimes 3} + A$, where $\tau \geq 0$ is a signal-to-noise ratio, $v_0$ is a unit vector, and $A$ is a random noise tensor, the goal is to recover the planted vector $v_0$.
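The spiked tensor model above is simple to generate, and a warm-started tensor power iteration recovers the planted vector at high signal-to-noise ratio — a baseline sketch only, not the sum-of-squares algorithm analyzed in the paper (random initialization would require a substantially larger $\tau$):

```python
import numpy as np

rng = np.random.default_rng(2)
d, tau = 30, 25.0

v0 = rng.normal(size=d)
v0 /= np.linalg.norm(v0)                         # planted unit vector
A = rng.normal(size=(d, d, d))                   # Gaussian noise tensor
T = tau * np.einsum("i,j,k->ijk", v0, v0, v0) + A

# Tensor power iteration v <- T(., v, v), renormalized each step,
# warm-started near v0 to keep the demo simple.
v = v0 + 0.2 * rng.normal(size=d)
v /= np.linalg.norm(v)
for _ in range(30):
    v = np.einsum("ijk,j,k->i", T, v, v)
    v /= np.linalg.norm(v)

corr = abs(v @ v0)                               # close to 1 at this SNR
```

The interesting regime in the paper is the much smaller $\tau$ at which such simple iterative methods fail but sum-of-squares-based algorithms still succeed.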
