Search Results for author: Boaz Nadler

Found 30 papers, 14 papers with code

A Majorization-Minimization Gauss-Newton Method for 1-Bit Matrix Completion

no code implementations • 27 Apr 2023 • Xiaoqian Liu, Xu Han, Eric C. Chi, Boaz Nadler

In 1-bit matrix completion, the aim is to estimate an underlying low-rank matrix from a partial set of binary observations.

Low-Rank Matrix Completion
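The abstract above defines the 1-bit observation model: binary entries generated from a low-rank matrix. A toy sketch of that setup, using a logistic link (one common choice; the link and all names here are illustrative, not the paper's code) — the MM Gauss-Newton solver would then estimate the matrix from `(Y, mask)`:

```python
import numpy as np

def sample_one_bit(n=30, m=20, r=2, frac=0.5, seed=0):
    """Toy 1-bit matrix completion instance: a rank-r matrix M,
    binary entries Y ~ Bernoulli(sigmoid(M)), observed on a random
    subset of positions given by `mask`."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank r
    P = 1.0 / (1.0 + np.exp(-M))              # entrywise logistic link
    Y = (rng.random((n, m)) < P).astype(int)  # binary observations
    mask = rng.random((n, m)) < frac          # which entries are seen
    return M, Y, mask
```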

Distributed Sparse Linear Regression under Communication Constraints

no code implementations • 9 Jan 2023 • Rodney Fonseca, Boaz Nadler

In multiple domains, statistical tasks are performed in distributed settings, with data split among several end machines that are connected to a fusion center.


Recovery Guarantees for Distributed-OMP

no code implementations • 15 Sep 2022 • Chen Amiraz, Robert Krauthgamer, Boaz Nadler

We study distributed schemes for high-dimensional sparse linear regression, based on orthogonal matching pursuit (OMP).
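The building block named in the abstract, orthogonal matching pursuit, greedily grows a support one column at a time and re-fits by least squares. A minimal single-machine numpy sketch (the distributed schemes analyzed in the paper are built on top of this; the function name and interface are our own):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select up to k columns of A
    to explain y, re-fitting y on the chosen support at every step."""
    n, d = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(d)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A[:, support] @ coef
    return x, support
```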


Inductive Matrix Completion: No Bad Local Minima and a Fast Algorithm

1 code implementation • 31 Jan 2022 • Pini Zilber, Boaz Nadler

The inductive matrix completion (IMC) problem is to recover a low rank matrix from few observed entries while incorporating prior knowledge about its row and column subspaces.

Matrix Completion

GNMR: A provable one-line algorithm for low rank matrix recovery

1 code implementation • 24 Jun 2021 • Pini Zilber, Boaz Nadler

Low rank matrix recovery problems, including matrix completion and matrix sensing, appear in a broad range of applications.

Low-Rank Matrix Completion

Spectral Top-Down Recovery of Latent Tree Models

1 code implementation • 26 Feb 2021 • Yariv Aizenbud, Ariel Jaffe, Meng Wang, Amber Hu, Noah Amsel, Boaz Nadler, Joseph T. Chang, Yuval Kluger

For large trees, a common approach, termed divide-and-conquer, is to recover the tree structure in two steps.

Distributed Sparse Normal Means Estimation with Sublinear Communication

no code implementations • 5 Feb 2021 • Chen Amiraz, Robert Krauthgamer, Boaz Nadler

We assume there are $M$ machines, each holding $d$-dimensional observations of a $K$-sparse vector $\mu$ corrupted by additive Gaussian noise.
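The setup in the abstract is easy to simulate. The sketch below generates the $M$-machine, $K$-sparse normal-means data and runs the naive fully-communicating baseline (average all observations at the fusion center, keep the $K$ largest entries) — the paper's question is how close one can get to this with sublinear communication. All parameter values and names here are illustrative, not from the paper:

```python
import numpy as np

def simulate_and_average(M=20, d=1000, K=5, signal=4.0, seed=0):
    """M machines each observe the same K-sparse vector mu plus
    i.i.d. N(0,1) noise.  Baseline: average the M observations
    (full communication) and keep the K largest-magnitude entries."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    support = rng.choice(d, size=K, replace=False)
    mu[support] = signal
    X = mu + rng.standard_normal((M, d))        # one row per machine
    xbar = X.mean(axis=0)                       # fusion-center average
    est_support = np.argsort(np.abs(xbar))[-K:] # top-K entries
    return set(support), set(est_support)
```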

Improved Convergence Guarantees for Learning Gaussian Mixture Models by EM and Gradient EM

no code implementations • 3 Jan 2021 • Nimrod Segol, Boaz Nadler

In previous works, the required number of samples had a quadratic dependence on the maximal separation between the K components, and the resulting error estimate increased linearly with this maximal separation.

"Self-Wiener" Filtering: Data-Driven Deconvolution of Deterministic Signals

no code implementations • 20 Jul 2020 • Amir Weiss, Boaz Nadler

Specifically, our algorithm works in the frequency domain, where it mimics the optimal, unrealizable nonlinear Wiener-like filter as if the unknown deterministic signal were known.

The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty

2 code implementations • 18 May 2020 • Tal Amir, Ronen Basri, Boaz Nadler

We present a new approach to solve the sparse approximation or best subset selection problem, namely find a $k$-sparse vector ${\bf x}\in\mathbb{R}^d$ that minimizes the $\ell_2$ residual $\lVert A{\bf x}-{\bf y} \rVert_2$.
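The best subset selection problem stated above is combinatorial; for small $d$ it can be solved exactly by enumerating all supports of size $k$. The sketch below does exactly that, as a reference point — the paper's generalized soft-min penalty is a scalable surrogate for this search, not this brute-force loop (function name and interface are ours):

```python
import numpy as np
from itertools import combinations

def best_subset(A, y, k):
    """Exact best-subset selection by enumeration (feasible only for
    small d): over all supports S of size k, least-squares fit y on
    A[:, S] and keep the support with the smallest l2 residual."""
    n, d = A.shape
    best_res, best_S, best_coef = np.inf, None, None
    for S in combinations(range(d), k):
        S = list(S)
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        res = np.linalg.norm(y - A[:, S] @ coef)
        if res < best_res:
            best_res, best_S, best_coef = res, S, coef
    x = np.zeros(d)
    x[best_S] = best_coef
    return x, best_res
```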

Spectral neighbor joining for reconstruction of latent tree models

3 code implementations • 28 Feb 2020 • Ariel Jaffe, Noah Amsel, Yariv Aizenbud, Boaz Nadler, Joseph T. Chang, Yuval Kluger

A common assumption in multiple scientific applications is that the distribution of observed data can be modeled by a latent tree graphical model.

Rank $2r$ iterative least squares: efficient recovery of ill-conditioned low rank matrices from few entries

3 code implementations • 5 Feb 2020 • Jonathan Bauch, Boaz Nadler

We present a new, simple and computationally efficient iterative method for low rank matrix completion.

Optimization and Control

Beyond Trees: Classification with Sparse Pairwise Dependencies

no code implementations • 6 Jun 2018 • Yaniv Tenzer, Amit Moscovich, Mary Frances Dorn, Boaz Nadler, Clifford Spiegelman

The resulting classifier is linear in the log-transformed univariate and bivariate densities that correspond to the tree edges.

Classification, General Classification

Cost efficient gradient boosting

1 code implementation • NeurIPS 2017 • Sven Peter, Ferran Diego, Fred A. Hamprecht, Boaz Nadler

In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that on average are nonetheless cheap to compute.

On Detection of Faint Edges in Noisy Images

2 code implementations • 22 Jun 2017 • Nati Ofir, Meirav Galun, Sharon Alpert, Achi Brandt, Boaz Nadler, Ronen Basri

A fundamental question for edge detection in noisy images is how faint can an edge be and still be detected.

Edge Detection

Unsupervised Ensemble Regression

no code implementations • 8 Mar 2017 • Omer Dror, Boaz Nadler, Erhan Bilal, Yuval Kluger

Consider a regression problem where there is no labeled data and the only observations are the predictions $f_i(x_j)$ of $m$ experts $f_{i}$ over many samples $x_j$.


Minimax-optimal semi-supervised regression on unknown manifolds

no code implementations • 7 Nov 2016 • Amit Moscovich, Ariel Jaffe, Boaz Nadler

We consider semi-supervised regression when the predictor variables are drawn from an unknown manifold.

Indoor Localization, Pose Estimation +1

A Deep Learning Approach to Unsupervised Ensemble Learning

1 code implementation • 6 Feb 2016 • Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph Chang, Yuval Kluger

We show how deep learning methods can be applied in the context of crowdsourcing and unsupervised ensemble learning.

Ensemble Learning

Unsupervised Ensemble Learning with Dependent Classifiers

no code implementations • 20 Oct 2015 • Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger

In unsupervised ensemble learning, one obtains predictions from multiple sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to assess it.

Ensemble Learning

Fast Detection of Curved Edges at Low SNR

3 code implementations • CVPR 2016 • Nati Ofir, Meirav Galun, Boaz Nadler, Ronen Basri

Detecting edges is a fundamental problem in computer vision with many applications, some involving very noisy images.

Edge Detection

Detecting the large entries of a sparse covariance matrix in sub-quadratic time

no code implementations • 12 May 2015 • Ofer Shwartz, Boaz Nadler

Given $n$ i.i.d. observations, it is typically estimated by the sample covariance matrix, at a computational cost of $O(np^{2})$ operations.
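The $O(np^{2})$ cost mentioned above comes from the matrix product in the standard sample covariance estimator, sketched below (the paper's contribution is locating the large entries of a sparse covariance matrix *without* forming all $p^{2}$ of them; this sketch only shows the baseline being avoided):

```python
import numpy as np

def sample_covariance(X):
    """Unbiased sample covariance of n observations in p dimensions.
    The p x p matrix product below is the O(n p^2) step: every one of
    the p^2 output entries touches all n observations."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)      # center each variable
    return (Xc.T @ Xc) / (n - 1)
```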

Learning Parametric-Output HMMs with Two Aliased States

no code implementations • 7 Feb 2015 • Roi Weiss, Boaz Nadler

In various applications involving hidden Markov models (HMMs), some of the hidden states are aliased, having identical output distributions.


Estimating the Accuracies of Multiple Classifiers Without Labeled Data

no code implementations • 29 Jul 2014 • Ariel Jaffe, Boaz Nadler, Yuval Kluger

In various situations one is given only the predictions of multiple classifiers over a large unlabeled test set.

On the Optimality of Averaging in Distributed Statistical Learning

1 code implementation • 10 Jul 2014 • Jonathan Rosenblatt, Boaz Nadler

For both regimes and under suitable assumptions, we present asymptotically exact expressions for this estimation error.

Do semidefinite relaxations solve sparse PCA up to the information limit?

no code implementations • 16 Jun 2013 • Robert Krauthgamer, Boaz Nadler, Dan Vilenchik

In fact, we conjecture that in the single-spike model, no computationally-efficient algorithm can recover a spike of $\ell_0$-sparsity $k\geq\Omega(\sqrt{n})$.

Ranking and combining multiple predictors without labeled data

no code implementations • 13 Mar 2013 • Fabio Parisi, Francesco Strino, Boaz Nadler, Yuval Kluger

This scenario is different from the standard supervised setting, where each classifier accuracy can be assessed using available labeled data, and raises two questions: given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to a) reliably rank them; and b) construct a meta-classifier more accurate than most classifiers in the ensemble?

Decision Making

Statistical Analysis of Semi-Supervised Learning: The Limit of Infinite Unlabelled Data

no code implementations • NeurIPS 2009 • Boaz Nadler, Nathan Srebro, Xueyuan Zhou

We study the behavior of the popular Laplacian Regularization method for Semi-Supervised Learning at the regime of a fixed number of labeled points but a large number of unlabeled points.

Treelets--An adaptive multi-scale basis for sparse unordered data

3 code implementations • 3 Jul 2007 • Ann B. Lee, Boaz Nadler, Larry Wasserman

In many modern applications, including analysis of gene expression and text documents, the data are noisy, high-dimensional, and unordered--with no particular meaning to the given order of the variables.

