no code implementations • ICML 2020 • Yunpeng Shi, Gilad Lerman
We propose an efficient algorithm for solving robust group synchronization given adversarially corrupted group ratios.
no code implementations • 2 Sep 2022 • Casey Garner, Gilad Lerman, Shuzhong Zhang
Matrix functions are utilized to rewrite smooth spectrally constrained matrix optimization problems as smooth unconstrained problems over the set of symmetric matrices, which are then solved via the cubic-regularized Newton method.
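The basic device behind such reformulations is that a scalar function applied to a symmetric matrix acts on its eigenvalues. A minimal numpy sketch (not the paper's algorithm, just the matrix-function calculus it builds on), checked against `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

def matrix_function(A, f):
    """Apply a scalar function f to a symmetric matrix A via its
    eigendecomposition: f(A) = U f(diag(w)) U^T."""
    w, U = np.linalg.eigh(A)        # symmetric A => real eigenvalues
    return (U * f(w)) @ U.T

# Example: the matrix exponential of a random symmetric matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                   # symmetrize

E1 = matrix_function(A, np.exp)
E2 = expm(A)                        # reference implementation
assert np.allclose(E1, E2)
```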
1 code implementation • 17 Jun 2022 • Yunpeng Shi, Cole Wyeth, Gilad Lerman
We propose a novel quadratic programming formulation for estimating the corruption levels in group synchronization, and use these estimates to solve this problem.
1 code implementation • 4 Jun 2022 • Yinglong Guo, Dongmian Zou, Gilad Lerman
Since this unpooling layer is trainable, it can be applied to graph generation either in the decoder of a variational autoencoder or in the generator of a generative adversarial network (GAN).
no code implementations • CVPR 2022 • Shaohan Li, Yunpeng Shi, Gilad Lerman
Previous partial permutation synchronization (PPS) algorithms, which are commonly used for multi-object matching, often involve computation-intensive and memory-demanding matrix operations.
no code implementations • 17 Mar 2022 • Tyler Maunu, Chenyu Yu, Gilad Lerman
Our results emphasize the advantages of the nonconvex methods over another convex approach to solving this problem in the differentially private setting.
no code implementations • 4 Feb 2022 • Chieh-Hsin Lai, Dongmian Zou, Gilad Lerman
We experimentally demonstrate that RVQ-VAE is able to generate examples from inliers even if a large portion of the training data points are corrupted.
1 code implementation • 13 Jan 2022 • Yunpeng Shi, Shaohan Li, Tyler Maunu, Gilad Lerman
We develop new statistics for robustly filtering corrupted keypoint matches in the structure from motion pipeline.
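The paper's statistics are cycle-based and more involved, but the generic shape of robust match filtering can be sketched with a median-absolute-deviation rule on match residuals (an illustrative stand-in, not the proposed statistics):

```python
import numpy as np

def mad_filter(residuals, c=3.0):
    """Keep matches whose residual lies within c robust standard
    deviations of the median residual."""
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med)) * 1.4826  # consistent with Gaussian sigma
    return np.abs(r - med) <= c * mad

# Mostly small residuals plus a few grossly corrupted matches.
res = np.array([0.1, 0.12, 0.09, 0.11, 5.0, 0.1, 4.2, 0.13])
keep = mad_filter(res)   # the two gross outliers are rejected
```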
no code implementations • 7 Sep 2020 • Sagar K. Tamang, Ardeshir Ebtehaj, Peter J. Van Leeuwen, Dongmian Zou, Gilad Lerman
Unlike the Eulerian penalization of error in Euclidean space, the Wasserstein metric can capture translations and differences between the shapes of square-integrable probability distributions of the background state and observations, enabling formal penalization of geophysical biases in state space with non-Gaussian distributions.
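The translation-capturing property is easy to see in one dimension with `scipy.stats.wasserstein_distance`: for two samples that differ only by a shift, the 1-Wasserstein distance recovers the shift itself, whereas a pointwise (Eulerian) penalty or KL divergence reacts to the lack of overlap rather than to the size of the shift.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 10000)
y = x + 3.0                      # same shape, translated by 3

# W1 between the two empirical distributions equals the translation.
w = wasserstein_distance(x, y)
```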
1 code implementation • 27 Jul 2020 • Yunpeng Shi, Gilad Lerman
We propose an efficient algorithm for solving group synchronization under high levels of corruption and noise, while we focus on rotation synchronization.
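The rotation synchronization setting can be sketched as follows: given noisy or corrupted relative rotations $R_{ij} \approx R_i R_j^T$, recover the absolute rotations. Clean measurements satisfy cycle consistency, and a corrupted edge breaks it — this is the signal such algorithms exploit (an illustrative sketch of the problem, not the proposed algorithm):

```python
import numpy as np

def random_rotation(rng):
    # QR of a Gaussian matrix gives a random orthogonal matrix;
    # flip a column if needed to land in SO(3).
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(0)
R = [random_rotation(rng) for _ in range(3)]      # ground-truth rotations

# Clean relative measurements R_ij = R_i R_j^T satisfy the
# cycle consistency R_01 R_12 R_20 = I.
R01, R12, R20 = R[0] @ R[1].T, R[1] @ R[2].T, R[2] @ R[0].T
clean_err = np.linalg.norm(R01 @ R12 @ R20 - np.eye(3))

# An adversarially replaced edge breaks the cycle consistency.
R01_bad = random_rotation(rng)
bad_err = np.linalg.norm(R01_bad @ R12 @ R20 - np.eye(3))
```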
no code implementations • NeurIPS 2020 • Yunpeng Shi, Shaohan Li, Gilad Lerman
We propose an efficient and robust iterative solution to the multi-object matching problem.
1 code implementation • 9 Jun 2020 • Chieh-Hsin Lai, Dongmian Zou, Gilad Lerman
We establish both robustness to outliers and suitability to low-rank modeling of the Wasserstein metric as opposed to the KL divergence.
no code implementations • 13 Feb 2020 • Tyler Maunu, Gilad Lerman
We give robust recovery results for synchronization on the rotation group, $\mathrm{SO}(D)$.
2 code implementations • 24 Dec 2019 • Gilad Lerman, Yunpeng Shi
We then establish exact recovery and linear convergence guarantees for the proposed message passing procedure under a deterministic setting with adversarial corruption.
no code implementations • 5 Apr 2019 • Tyler Maunu, Gilad Lerman
The two estimators are fast to compute and achieve state-of-the-art theoretical performance in a noiseless RSR setting with adversarial outliers.
2 code implementations • ICLR 2020 • Chieh-Hsin Lai, Dongmian Zou, Gilad Lerman
The encoder maps the data into a latent space, from which the RSR layer extracts the subspace.
Benchmarks: Unsupervised Anomaly Detection with Specified Settings -- 0.1% anomaly; 10% anomaly; and 3 more.
no code implementations • 15 Jan 2019 • Mauricio Flores, Jeff Calder, Gilad Lerman
In the first part of the paper we prove new discrete to continuum convergence results for $p$-Laplace problems on $k$-nearest neighbor ($k$-NN) graphs, which are more commonly used in practice than random geometric graphs.
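The objects involved can be sketched with pure numpy: build a symmetrized $k$-NN adjacency from point data and evaluate the graph $p$-Dirichlet energy $\sum_{ij} w_{ij}\,|u_i - u_j|^p$, whose minimizers (subject to label constraints) solve the graph $p$-Laplace equation. This is a sketch of the setup only, not the paper's convergence analysis.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetrized k-nearest-neighbor adjacency with 0/1 weights."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    idx = np.argsort(D, axis=1)[:, :k]
    n = X.shape[0]
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), k), idx.ravel()] = 1.0
    return np.maximum(W, W.T)          # symmetrize

def p_dirichlet_energy(W, u, p):
    """Graph p-Dirichlet energy sum_ij w_ij |u_i - u_j|^p (each edge
    counted once via the 1/2 factor)."""
    diff = np.abs(u[:, None] - u[None, :])
    return 0.5 * np.sum(W * diff ** p)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
W = knn_graph(X, k=5)
u = X[:, 0]                            # any function on the vertices
E2 = p_dirichlet_energy(W, u, p=2)     # classical (p=2) Laplacian energy
```

Constant functions have zero energy for every `p`, matching the continuum intuition that the $p$-Laplacian annihilates constants.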
3 code implementations • 7 Nov 2018 • Vahan Huroyan, Gilad Lerman, Hau-Tieng Wu
The main contribution of this work is a method for recovering the rotations of the pieces when both shuffles and rotations are unknown.
1 code implementation • 28 Sep 2018 • Dongmian Zou, Gilad Lerman
The decoder is a simple fully connected network that is adapted to specific tasks, such as link prediction, signal generation on graphs and full graph and signal generation.
Ranked #2 on Link Prediction on Cora (biased evaluation).
no code implementations • 27 Sep 2018 • Dongmian Zou, Gilad Lerman
These results are in contrast to experience with Euclidean data, where it is difficult to form a generative scattering network that performs as well as state-of-the-art methods.
1 code implementation • CVPR 2018 • Yunpeng Shi, Gilad Lerman
We propose a strategy for improving camera location estimation in structure from motion.
1 code implementation • 31 Mar 2018 • Dongmian Zou, Gilad Lerman
We generalize the scattering transform to graphs and consequently construct a convolutional neural network on graphs.
Ranked #55 on Node Classification on Cora.
no code implementations • 2 Mar 2018 • Gilad Lerman, Tyler Maunu
This paper will serve as an introduction to the body of work on robust subspace recovery.
no code implementations • 27 Sep 2017 • Gilad Lerman, Yunpeng Shi, Teng Zhang
We establish exact recovery for the Least Unsquared Deviations (LUD) algorithm of Ozyesil and Singer.
no code implementations • 13 Jun 2017 • Tyler Maunu, Teng Zhang, Gilad Lerman
The practicality of the deterministic condition is demonstrated on some statistical models of data, and the method achieves almost state-of-the-art recovery guarantees on the Haystack Model for different regimes of sample size and ambient dimension.
no code implementations • 25 May 2017 • Vahan Huroyan, Gilad Lerman
We propose distributed solutions to the problem of Robust Subspace Recovery (RSR).
no code implementations • 4 Nov 2015 • Bryan Poling, Gilad Lerman
We present a deeply integrated method of exploiting low-cost gyroscopes to improve general purpose feature tracking.
no code implementations • 28 Oct 2015 • Xu Wang, Gilad Lerman
Kernel methods achieve superb accuracy on various machine learning tasks since they can effectively extract nonlinear relations.
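The nonlinear-extraction point is easy to illustrate: XOR labels are not linearly separable in the input space, yet kernel ridge regression with a Gaussian (RBF) kernel fits them exactly (a generic textbook illustration, not this paper's method):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

# The XOR pattern: no linear classifier can separate these labels.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(4), y)   # kernel ridge, tiny ridge
pred = K @ alpha                                    # fitted values match y in sign
```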
no code implementations • 1 Oct 2014 • Xu Wang, Konstantinos Slavakis, Gilad Lerman
This paper advocates a novel framework for segmenting a dataset in a Riemannian manifold $M$ into clusters lying around low-dimensional submanifolds of $M$.
no code implementations • 24 Jun 2014 • Gilad Lerman, Tyler Maunu
Further, under a special model of data, FMS converges to a point near the global minimum with overwhelming probability.
no code implementations • CVPR 2014 • Bryan Poling, Gilad Lerman, Arthur Szlam
Our approach does not require direct modeling of the structure or the motion of the scene, and runs in real time on a single CPU core.
no code implementations • 10 Apr 2013 • Bryan Poling, Gilad Lerman
We present a new approach to rigid-body motion segmentation from two views.
no code implementations • NeurIPS 2012 • Matthew Coudron, Gilad Lerman
This estimator is used in a convex algorithm for robust subspace recovery (i.e., robust PCA).
no code implementations • 18 Feb 2012 • Gilad Lerman, Michael McCoy, Joel A. Tropp, Teng Zhang
Consider a dataset of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers.
no code implementations • 20 Dec 2011 • Teng Zhang, Gilad Lerman
That is, we assume a data set in which some points are sampled around a fixed subspace while the rest are spread throughout the ambient space, and we aim to recover that underlying subspace.
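This inliers-plus-outliers data model is simple to generate, and on the inliers alone ordinary PCA (the top singular vector) recovers the subspace; the difficulty addressed by robust subspace recovery is doing so from the full contaminated sample. A sketch of the model only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, D = 200, 50, 3

# Inliers near a fixed one-dimensional subspace (a line through the
# origin); outliers spread in the whole ambient space.
v = np.array([1.0, 0.0, 0.0])                      # true subspace direction
inliers = rng.standard_normal((n_in, 1)) * v \
    + 0.01 * rng.standard_normal((n_in, D))
outliers = 5.0 * rng.standard_normal((n_out, D))
X = np.vstack([inliers, outliers])                 # the contaminated sample

# On the inliers alone, the top right-singular vector recovers the line.
_, _, Vt = np.linalg.svd(inliers, full_matrices=False)
alignment = abs(Vt[0] @ v)                         # close to 1
```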
no code implementations • 18 Dec 2010 • Gilad Lerman, Teng Zhang
We say that one of the underlying subspaces of the model is most significant if its mixture weight is higher than the sum of the mixture weights of all other subspaces.