Search Results for author: Frederic Koehler

Found 28 papers, 2 papers with code

Distributional Hardness Against Preconditioned Lasso via Erasure-Robust Designs

no code implementations 5 Mar 2022 Jonathan A. Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi

Surprisingly, at the heart of our lower bound is a new positive result in compressed sensing.

Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods

no code implementations 17 Feb 2022 Frederic Koehler, Holden Lee, Andrej Risteski

We consider Ising models on the hypercube with a general interaction matrix $J$, and give a polynomial time sampling algorithm when all but $O(1)$ eigenvalues of $J$ lie in an interval of length one, a situation which occurs in many models of interest.
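
As a rough illustration of the spectral condition in this abstract (an illustrative sketch, not code from the paper), the snippet below checks whether all but a small number `k` of eigenvalues of a symmetric interaction matrix $J$ fit into an interval of length one; the choice `k = 3` stands in for the "$O(1)$" outliers and is arbitrary.

```python
import numpy as np

def satisfies_spectral_condition(J, k=3, interval_length=1.0):
    """Check whether all but at most k eigenvalues of the symmetric
    matrix J lie in some interval of the given length."""
    eigs = np.sort(np.linalg.eigvalsh(J))
    n = len(eigs)
    # The best subset to keep is contiguous in sorted order, so it suffices
    # to try dropping `lo` eigenvalues from the bottom and `hi` from the top.
    for lo in range(k + 1):
        for hi in range(k + 1 - lo):
            kept = eigs[lo:n - hi]
            if len(kept) == 0 or kept[-1] - kept[0] <= interval_length:
                return True
    return False

# Example: a rank-one spike plus small noise has a single outlying eigenvalue.
rng = np.random.default_rng(0)
n = 50
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
J = 5.0 * np.outer(u, u) + 0.01 * rng.standard_normal((n, n))
J = (J + J.T) / 2
print(satisfies_spectral_condition(J))  # expected: True
```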

Variational Inference

Variational autoencoders in the presence of low-dimensional data: landscape and implicit bias

1 code implementation ICLR 2022 Frederic Koehler, Viraj Mehta, Chenghui Zhou, Andrej Risteski

Recent work by Dai and Wipf (2020) proposes a two-stage training algorithm for VAEs, based on a conjecture that in standard VAE training the generator will converge to a solution with zero variance that is correctly supported on the ground-truth manifold.

Optimistic Rates: A Unifying Theory for Interpolation Learning and Regularization in Linear Regression

no code implementations 8 Dec 2021 Lijia Zhou, Frederic Koehler, Danica J. Sutherland, Nathan Srebro

We study a localized notion of uniform convergence known as an "optimistic rate" (Panchenko 2002; Srebro et al. 2010) for linear regression with Gaussian data.

Kalman Filtering with Adversarial Corruptions

no code implementations 11 Nov 2021 Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

In a pioneering work, Schick and Mitter gave provable guarantees when the measurement noise is a known infinitesimal perturbation of a Gaussian and raised the important question of whether one can get similar guarantees for large and unknown perturbations.

Multidimensional Scaling: Approximation and Complexity

no code implementations 23 Sep 2021 Erik Demaine, Adam Hesterberg, Frederic Koehler, Jayson Lynch, John Urschel

In particular, the Kamada-Kawai force-directed graph drawing method is equivalent to MDS and is one of the most popular ways in practice to embed graphs into low dimensions.
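
For context (not the paper's own code), the Kamada-Kawai method is available in standard graph libraries; the short sketch below, using networkx, embeds a small graph into the plane by minimizing a stress objective over pairwise shortest-path distances.

```python
import networkx as nx

# Embed the Petersen graph into the plane with the Kamada-Kawai
# force-directed method (an MDS-style stress objective over
# shortest-path distances between nodes).
G = nx.petersen_graph()
pos = nx.kamada_kawai_layout(G, dim=2)

for node, (x, y) in sorted(pos.items()):
    print(f"node {node}: ({x:+.3f}, {y:+.3f})")
```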

Reconstruction on Trees and Low-Degree Polynomials

no code implementations 14 Sep 2021 Frederic Koehler, Elchanan Mossel

Notably, the celebrated Belief Propagation (BP) algorithm achieves optimal performance for the reconstruction problem of predicting the value of the Markov process at the root of the tree from its values at the leaves.
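
As a toy illustration of this reconstruction problem (an assumed setup for a binary symmetric broadcast process, not code from the paper): if each edge of the tree flips the parent's $\pm 1$ value with probability $p$, the exact posterior at the root given the leaves can be computed by the standard upward BP recursion.

```python
# Toy broadcast model on a tree: the root is +/-1 uniformly at random, and
# each child copies its parent's value except with flip probability p.
# The upward recursion below computes the exact posterior at the root.

def leaf_likelihoods(tree, leaves, node, p):
    """Return (P(leaves below node | node = +1), P(leaves below node | node = -1))."""
    if node in leaves:                       # observed leaf
        return (1.0, 0.0) if leaves[node] == +1 else (0.0, 1.0)
    like_plus, like_minus = 1.0, 1.0
    for child in tree.get(node, []):
        c_plus, c_minus = leaf_likelihoods(tree, leaves, child, p)
        like_plus *= (1 - p) * c_plus + p * c_minus
        like_minus *= p * c_plus + (1 - p) * c_minus
    return like_plus, like_minus

# Root 0 with two internal children, each with two observed leaf children.
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
leaves = {3: +1, 4: +1, 5: +1, 6: -1}
lp, lm = leaf_likelihoods(tree, leaves, 0, p=0.1)
print(f"P(root = +1 | leaves) = {lp / (lp + lm):.3f}")  # uniform prior on the root
```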

Community Detection

Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting

no code implementations NeurIPS 2021 Frederic Koehler, Lijia Zhou, Danica J. Sutherland, Nathan Srebro

We consider interpolation learning in high-dimensional linear regression with Gaussian data, and prove a generic uniform convergence guarantee on the generalization error of interpolators in an arbitrary hypothesis class in terms of the class's Gaussian width.
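
For reference, the Gaussian width mentioned here is the standard quantity (a textbook definition, not a formula quoted from the paper):

```latex
% Gaussian width of a hypothesis class K in R^d,
% where g is a standard Gaussian vector:
w(\mathcal{K}) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0, I_d)}
  \Bigl[\, \sup_{v \in \mathcal{K}} \langle g, v \rangle \,\Bigr].
```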

Generalization Bounds

On the Power of Preconditioning in Sparse Linear Regression

no code implementations 17 Jun 2021 Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi

First, we show that the preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense of the Markov property, has low treewidth -- even if $\Sigma$ is highly ill-conditioned.
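
To make "preconditioned Lasso" concrete (a generic illustrative sketch, not the construction from the paper), one runs the Lasso on a transformed design $XS$ for some preconditioner matrix $S$ and maps the recovered coefficients back; the preconditioner and regularization level below are placeholder choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def preconditioned_lasso(X, y, S, alpha=0.1):
    """Generic preconditioned Lasso sketch: fit the Lasso on the
    transformed design X @ S, then map coefficients back via S."""
    model = Lasso(alpha=alpha, fit_intercept=False)
    model.fit(X @ S, y)
    return S @ model.coef_

# Tiny synthetic example with the identity as a (trivial) preconditioner.
rng = np.random.default_rng(1)
n, d = 200, 50
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)
w_hat = preconditioned_lasso(X, y, S=np.eye(d), alpha=0.05)
print(np.nonzero(np.abs(w_hat) > 0.1)[0])  # roughly [0, 1, 2] expected
```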

Chow-Liu++: Optimal Prediction-Centric Learning of Tree Ising Models

no code implementations 7 Jun 2021 Enric Boix-Adsera, Guy Bresler, Frederic Koehler

In this paper, we introduce a new algorithm that carefully combines elements of the Chow-Liu algorithm with tree metric reconstruction methods to efficiently and optimally learn tree Ising models under a prediction-centric loss.
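
For background, the classical Chow-Liu step that the algorithm builds on (shown below as a sketch; this is the textbook procedure, not the new Chow-Liu++ method) fits a tree by computing empirical pairwise mutual information and taking a maximum-weight spanning tree.

```python
import numpy as np
import networkx as nx

def empirical_mi(x, y):
    """Empirical mutual information (in nats) between two arrays of +/-1 samples."""
    mi = 0.0
    for a in (-1, +1):
        for b in (-1, +1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(samples):
    """Classical Chow-Liu: maximum-weight spanning tree on pairwise MI.
    `samples` has shape (num_samples, num_variables) with +/-1 entries."""
    d = samples.shape[1]
    G = nx.Graph()
    for i in range(d):
        for j in range(i + 1, d):
            G.add_edge(i, j, weight=empirical_mi(samples[:, i], samples[:, j]))
    return sorted(nx.maximum_spanning_tree(G).edges())

# Example: a Markov chain 0 - 1 - 2 with 10% flip noise on each edge.
rng = np.random.default_rng(2)
x0 = rng.choice([-1, 1], size=5000)
x1 = x0 * rng.choice([1, -1], p=[0.9, 0.1], size=5000)
x2 = x1 * rng.choice([1, -1], p=[0.9, 0.1], size=5000)
print(chow_liu_tree(np.column_stack([x0, x1, x2])))  # expected: [(0, 1), (1, 2)]
```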

Online and Distribution-Free Robustness: Regression and Contextual Bandits with Huber Contamination

no code implementations 8 Oct 2020 Sitan Chen, Frederic Koehler, Ankur Moitra, Morris Yau

Our approach is based on a novel alternating minimization scheme that interleaves ordinary least-squares with a simple convex program that finds the optimal reweighting of the distribution under a spectral constraint.

Adversarial Robustness · Multi-Armed Bandits

Representational aspects of depth and conditioning in normalizing flows

no code implementations 2 Oct 2020 Frederic Koehler, Viraj Mehta, Andrej Risteski

Normalizing flows are among the most popular paradigms in generative modeling, especially for images, primarily because we can efficiently evaluate the likelihood of a data point.
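
The efficient likelihood evaluation referred to here is the standard change-of-variables formula (textbook notation, not taken from the paper): for an invertible flow $f$ mapping a data point $x$ to a base variable $z = f(x)$ with density $p_Z$,

```latex
% Change-of-variables formula used by normalizing flows:
\log p_X(x) \;=\; \log p_Z\bigl(f(x)\bigr) \;+\; \log \bigl|\det J_f(x)\bigr|,
\qquad J_f(x) \;=\; \frac{\partial f(x)}{\partial x}.
```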

From Boltzmann Machines to Neural Networks and Back Again

no code implementations NeurIPS 2020 Surbhi Goel, Adam Klivans, Frederic Koehler

Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well-known to be difficult.

Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

no code implementations NeurIPS 2019 Frederic Koehler

We show that under a natural initialization, BP converges quickly to the global optimum of the Bethe free energy for Ising models on arbitrary graphs, as long as the Ising model is ferromagnetic (i.e., neighbors prefer to be aligned).
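
For reference, one common form of the sum-product (BP) message update for an Ising model with couplings $J_{ij}$ and external fields $h_i$ is the following (a standard textbook parametrization, not notation taken from the paper):

```latex
% Message from spin i to a neighboring spin j, for x_j in {-1, +1}:
m_{i \to j}^{t+1}(x_j) \;\propto\;
  \sum_{x_i \in \{\pm 1\}} \exp\bigl(J_{ij}\, x_i x_j + h_i x_i\bigr)
  \prod_{k \in \partial i \setminus \{j\}} m_{k \to i}^{t}(x_i).
```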

Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation

no code implementations 24 May 2019 Vishesh Jain, Frederic Koehler, Jingbo Liu, Elchanan Mossel

The analysis of Belief Propagation and other algorithms for the reconstruction problem plays a key role in the analysis of community detection in inference on graphs, phylogenetic reconstruction in bioinformatics, and the cavity method in statistical physics.

Community Detection

Learning Some Popular Gaussian Graphical Models without Condition Number Bounds

no code implementations NeurIPS 2020 Jonathan Kelner, Frederic Koehler, Raghu Meka, Ankur Moitra

While there are a variety of algorithms (e.g. Graphical Lasso, CLIME) that provably recover the graph structure with a logarithmic number of samples, they assume various conditions that require the precision matrix to be in some sense well-conditioned.
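
The Graphical Lasso mentioned here has a standard implementation in scikit-learn; the quick sketch below (illustrative only, not code from the paper) estimates a sparse precision matrix from samples of a chain-structured Gaussian.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Sample from a Gaussian whose precision matrix is a tridiagonal chain,
# then estimate a sparse precision matrix with the Graphical Lasso.
rng = np.random.default_rng(3)
d = 5
precision = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
cov = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(d), cov, size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))  # off-chain entries should be near zero
```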

The Comparative Power of ReLU Networks and Polynomial Kernels in the Presence of Sparse Latent Structure

no code implementations ICLR 2019 Frederic Koehler, Andrej Risteski

We give an almost-tight theoretical analysis of the performance of both neural networks and polynomials for this problem, as well as verify our theory with simulations.

Mean-field approximation, convex hierarchies, and the optimality of correlation rounding: a unified perspective

no code implementations 22 Aug 2018 Vishesh Jain, Frederic Koehler, Andrej Risteski

More precisely, we show that the mean-field approximation is within $O((n\|J\|_{F})^{2/3})$ of the free energy, where $\|J\|_F$ denotes the Frobenius norm of the interaction matrix of the Ising model.
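
For context, the free energy being approximated is $\log Z$, and the mean-field approximation arises from the Gibbs variational principle by restricting to product measures (one common convention, written in our notation rather than the paper's; $H$ denotes Shannon entropy):

```latex
\log Z \;=\; \sup_{\mu}\;\Bigl\{ \mathbb{E}_{\mu}\Bigl[\textstyle\sum_{i<j} J_{ij}\,\sigma_i \sigma_j\Bigr] + H(\mu) \Bigr\},
\qquad
F_{\mathrm{MF}} \;=\; \sup_{\mu \,\text{product}}\;\Bigl\{ \mathbb{E}_{\mu}\Bigl[\textstyle\sum_{i<j} J_{ij}\,\sigma_i \sigma_j\Bigr] + H(\mu) \Bigr\}.
```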

Representational Power of ReLU Networks and Polynomial Kernels: Beyond Worst-Case Analysis

no code implementations 29 May 2018 Frederic Koehler, Andrej Risteski

We give almost-tight bounds on the performance of both neural networks and low degree polynomials for this problem.

Learning Restricted Boltzmann Machines via Influence Maximization

no code implementations 25 May 2018 Guy Bresler, Frederic Koehler, Ankur Moitra, Elchanan Mossel

This hardness result is based on a sharp and surprising characterization of the representational power of bounded degree RBMs: the distribution on their observed variables can simulate any bounded order MRF.

Collaborative Filtering · Dimensionality Reduction

The Mean-Field Approximation: Information Inequalities, Algorithms, and Complexity

no code implementations 16 Feb 2018 Vishesh Jain, Frederic Koehler, Elchanan Mossel

The mean field approximation to the Ising model is a canonical variational tool that is used for analysis and inference in Ising models.

The Vertex Sample Complexity of Free Energy is Polynomial

no code implementations 16 Feb 2018 Vishesh Jain, Frederic Koehler, Elchanan Mossel

Results in the graph limit literature by Borgs, Chayes, Lovász, Sós, and Vesztergombi show that for Ising models on $n$ nodes and interactions of strength $\Theta(1/n)$, an $\epsilon$ approximation to $\log Z_n / n$ can be achieved by sampling a randomly induced model on $2^{O(1/\epsilon^2)}$ nodes.

Approximating Partition Functions in Constant Time

no code implementations 5 Nov 2017 Vishesh Jain, Frederic Koehler, Elchanan Mossel

One exception is a recent result of Risteski (2016), who considered dense graphical models and showed that, using variational methods, it is possible to find an $O(\epsilon n)$ additive approximation to the log partition function in time $n^{O(1/\epsilon^2)}$, even in a regime where correlation decay does not hold.

Information Theoretic Properties of Markov Random Fields, and their Algorithmic Applications

no code implementations NeurIPS 2017 Linus Hamilton, Frederic Koehler, Ankur Moitra

As an application, we obtain algorithms for learning Markov random fields on bounded degree graphs on $n$ nodes with $r$-order interactions in $n^r$ time and $\log n$ sample complexity.

Provable Algorithms for Inference in Topic Models

no code implementations 27 May 2016 Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra

But designing provable algorithms for inference has proven to be more challenging.

Topic Models
