1 code implementation • 4 Aug 2024 • Daniel Shalam, Simon Korman
Many leading self-supervised methods for unsupervised representation learning, in particular those for embedding image features, are built on variants of the instance discrimination task, whose optimization is known to be prone to instabilities that can lead to feature collapse.
Ranked #36 on Self-Supervised Image Classification on ImageNet
Representation Learning • Self-Supervised Image Classification • +2
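To make the setting concrete, below is a minimal, generic sketch of an instance-discrimination (InfoNCE-style) objective of the kind such methods build on; it is not this paper's method, and the function name and temperature value are illustrative. The degenerate solution in which all embeddings become identical is the feature collapse referred to above.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Generic instance-discrimination loss (sketch, not the paper's method).

    z1, z2: (n, d) embeddings of two augmented views of the same n images.
    Each row of z1 should match the corresponding row of z2 and repel the rest.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Cross-entropy against the "diagonal" targets: instance i matches instance i.
    return float(-np.log(np.diag(probs)).mean())
```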
1 code implementation • 25 Jun 2024 • Daniel Shalam, Simon Korman
The transformed set encodes a rich representation of high order relations between the input features.
1 code implementation • CVPR 2023 • Deborah Levy, Amit Peleg, Naama Pearl, Dan Rosenbaum, Derya Akkaynak, Simon Korman, Tali Treibitz
Even more excitingly, we can render clear views of these scenes, removing the medium between the camera and the scene and reconstructing the appearance and depth of far objects, which are severely occluded by the medium.
no code implementations • CVPR 2022 • Naama Pearl, Tali Treibitz, Simon Korman
Such assumptions are not realistic in the presence of large motion and high levels of noise.
1 code implementation • 6 Apr 2022 • Daniel Shalam, Simon Korman
The Self-Optimal-Transport (SOT) feature transform is designed to upgrade the set of features of a data instance to facilitate downstream matching- or grouping-related tasks.
Few-Shot Image Classification • Large-Scale Person Re-Identification
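As a rough illustration only (not the paper's exact formulation), the sketch below upgrades a set of features by building their pairwise-similarity matrix and running Sinkhorn normalization to obtain a doubly-stochastic matrix of pairwise affinities; the function name, temperature, and iteration count are assumptions.

```python
import numpy as np

def self_transport_transform(features, n_iters=20, temperature=0.1):
    """Optimal-transport-style feature upgrade (illustrative sketch).

    features: (n, d) array of feature vectors from one data instance.
    Returns an (n, n) approximately doubly-stochastic affinity matrix whose
    rows encode each feature's relations to all others in the set.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    kernel = np.exp(f @ f.T / temperature)        # entropic kernel of cosine similarities
    # Sinkhorn iterations: alternately normalize rows and columns.
    for _ in range(n_iters):
        kernel /= kernel.sum(axis=1, keepdims=True)
        kernel /= kernel.sum(axis=0, keepdims=True)
    return kernel

# Example: upgrade a random set of 5 eight-dimensional features.
plan = self_transport_transform(np.random.randn(5, 8))
```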
no code implementations • 1 May 2021 • Kensuke Nakamura, Simon Korman, Byung-Woo Hong
Based on these observations, we propose a data representation for GAN training, called noisy scale-space (NSS), that recursively applies smoothing with balanced noise to the data, replacing the high-frequency information with random content and leading to a coarse-to-fine training of GANs.
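A minimal sketch of the stated idea under assumed details: blur the data and re-inject noise whose standard deviation matches the detail removed by the blur, so high frequencies are replaced by random content; the blur schedule and the variance-matching rule here are assumptions, not the paper's exact recipe.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noisy_scale_space_level(image, sigma, rng=None):
    """One level of a noisy scale-space (sketch): smooth, then add noise
    whose standard deviation matches the detail the smoothing removed."""
    rng = rng or np.random.default_rng()
    smoothed = gaussian_filter(image, sigma=sigma)
    removed_std = (image - smoothed).std()        # magnitude of discarded detail
    return smoothed + rng.normal(0.0, removed_std, size=image.shape)

# Coarse-to-fine schedule: heavy smoothing (and noise) early, none at the end.
image = np.random.rand(64, 64)
levels = [noisy_scale_space_level(image, s) for s in (8.0, 4.0, 2.0, 1.0)]
```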
no code implementations • CVPR 2018 • Simon Korman, Mark Milam, Stefano Soatto
We present a novel approach to template matching that is efficient, can handle partial occlusions, and comes with provable performance guarantees.
1 code implementation • CVPR 2018 • Simon Korman, Roee Litman
We present a method that can evaluate a RANSAC hypothesis in constant time, i.e., independent of the size of the data.
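For contrast, here is a sketch of the conventional way a RANSAC hypothesis (here a homography) is scored: a pass over all correspondences whose cost grows linearly with the data size. The paper's contribution is a scheme that avoids this per-hypothesis pass; that scheme is not reproduced here, and the threshold value below is arbitrary.

```python
import numpy as np

def score_homography(H, src, dst, thresh=3.0):
    """Standard linear-time RANSAC scoring: count correspondences whose
    reprojection error under the 3x3 homography H is below a pixel threshold.

    src, dst: (n, 2) arrays of matched points.
    """
    ones = np.ones((src.shape[0], 1))
    proj = np.hstack([src, ones]) @ H.T           # project source points
    proj = proj[:, :2] / proj[:, 2:3]             # back to inhomogeneous coordinates
    errors = np.linalg.norm(proj - dst, axis=1)   # reprojection errors
    return int((errors < thresh).sum())           # inlier count -- O(n) per hypothesis
```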
no code implementations • 13 Jul 2016 • Omri Ben-Eliezer, Simon Korman, Daniel Reichman
For any $\epsilon \in [0, 1]$ and any large enough pattern $P$ over any alphabet, other than a very small set of exceptional patterns, we design a tolerant tester that distinguishes between the case that the distance is at least $\epsilon$ and the case that it is at most $a_d \epsilon$, with query complexity and running time $c_d \epsilon^{-1}$, where $a_d < 1$ and $c_d$ depend only on $d$.
no code implementations • ICCV 2015 • Simon Korman, Eyal Ofek, Shai Avidan
We demonstrate on real-world data that our algorithm is capable of completing a full 3D scene from a single depth image and can synthesize a full depth map from a novel viewpoint of the scene.
no code implementations • CVPR 2015 • Roee Litman, Simon Korman, Alexander Bronstein, Shai Avidan
This work presents a novel approach for detecting inliers in a given set of correspondences (matches).
no code implementations • CVPR 2013 • Simon Korman, Daniel Reichman, Gilad Tsur, Shai Avidan
Fast-Match is a fast algorithm for approximate template matching under 2D affine transformations that minimizes the Sum-of-Absolute-Differences (SAD) error measure.
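To make the objective concrete, the sketch below evaluates the (normalized) SAD error of a template mapped into an image by a given 2D affine transform; Fast-Match's actual contribution, the search over a net of affine transformations with approximation guarantees, is not reproduced here, and the nearest-neighbour sampling is an assumed simplification.

```python
import numpy as np

def sad_under_affine(image, template, A, t):
    """Normalized SAD between a template and the image region it maps to
    under the affine transform x -> A @ x + t (illustrative sketch).

    image, template: 2D float arrays; A: 2x2 matrix; t: length-2 offset.
    Uses nearest-neighbour sampling; out-of-bounds pixels are skipped.
    """
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()])                 # template coordinates
    mapped = A @ pts + np.asarray(t, dtype=float)[:, None]   # corresponding image coordinates
    mx, my = np.round(mapped[0]).astype(int), np.round(mapped[1]).astype(int)
    valid = (mx >= 0) & (mx < image.shape[1]) & (my >= 0) & (my < image.shape[0])
    if not valid.any():
        return np.inf
    diffs = np.abs(image[my[valid], mx[valid]] - template.ravel()[valid])
    return diffs.mean()                                      # average absolute difference
```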