1 code implementation • 27 Jun 2024 • Mohammad Salama, Jonathan Kahana, Eliahu Horwitz, Yedid Hoshen
In this paper, we introduce a new task, dataset size recovery, which aims to determine the number of samples used to train a model directly from its weights.
no code implementations • 13 Jun 2024 • Bar Cavia, Eliahu Horwitz, Tal Reiss, Yedid Hoshen
An image's deepfake score is the pooled score of its patches.
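A minimal sketch of the patch-pooling rule described above, assuming non-overlapping square patches and mean pooling; `patch_scorer`, the patch size, and the pooling operator are illustrative placeholders, not the authors' exact model.

```python
import torch

def image_deepfake_score(image: torch.Tensor, patch_scorer, patch: int = 32) -> torch.Tensor:
    """Pool per-patch deepfake scores into a single image-level score.

    image: (C, H, W) tensor. patch_scorer: any callable mapping a batch of
    (N, C, patch, patch) crops to one fakeness score per crop.
    """
    c, _, _ = image.shape
    # Cut the image into a grid of non-overlapping patches.
    patches = (image
               .unfold(1, patch, patch)    # (C, H//p, W, p)
               .unfold(2, patch, patch)    # (C, H//p, W//p, p, p)
               .permute(1, 2, 0, 3, 4)     # (H//p, W//p, C, p, p)
               .reshape(-1, c, patch, patch))
    scores = patch_scorer(patches)         # (num_patches,)
    # The pooling operator is an assumption here; mean pooling is one option.
    return scores.mean()
```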
1 code implementation • 30 May 2024 • Tal Reiss, George Kour, Naama Zwerdling, Ateret Anaby-Tavor, Yedid Hoshen
This paper studies the realistic but underexplored cold-start setting, where an anomaly detection model is initialized using zero-shot guidance but subsequently receives a small number of contaminated observations (i.e., observations that may include anomalies).
Ranked #1 on Cold-Start Anomaly Detection on BANKING77-OOS
1 code implementation • 28 May 2024 • Eliahu Horwitz, Asaf Shul, Yedid Hoshen
However, this information is underutilized as the weights are uninterpretable, and publicly available models are disorganized.
no code implementations • 27 Mar 2024 • Daniel Winter, Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, Yedid Hoshen
To tackle this challenge, we propose bootstrap supervision: leveraging our object removal model trained on a small counterfactual dataset, we synthetically expand this dataset considerably.
1 code implementation • 18 Mar 2024 • Asaf Shul, Eliahu Horwitz, Yedid Hoshen
Current methods frame this as maximizing the distilled classification accuracy for a budget of K distilled images-per-class, where K is a positive integer.
Ranked #1 on Dataset Distillation - 1IPC on CUB-200-2011
1 code implementation • 15 Feb 2024 • Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen
The dominant paradigm in generative modeling consists of two steps: i) pre-training on a large-scale but unsafe dataset, ii) aligning the pre-trained model with human values via fine-tuning.
1 code implementation • 8 Feb 2024 • Daniel Winter, Niv Cohen, Yedid Hoshen
Recently, distillation methods succeeded in eliminating the use of GNNs at test time but they still require them during training.
1 code implementation • 24 Nov 2023 • Niv Cohen, Issar Tzachor, Yedid Hoshen
This paper proposes to use set features for detecting anomalies in samples that consist of unusual combinations of normal elements.
Ranked #4 on Anomaly Detection on MVTec LOCO AD
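One way to realize the set-feature idea above is to summarize each sample's elements with an order-invariant descriptor and score test samples by their distance to normal training descriptors. The random-projection histograms, bin range, and nearest-neighbor scoring below are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def set_descriptor(elems: np.ndarray, proj: np.ndarray, bins: int = 8) -> np.ndarray:
    """Summarize a set of element features by histograms of random projections.

    elems: (n_elements, d) features of one sample; proj: (d, k) random matrix.
    """
    z = elems @ proj                                  # (n_elements, k)
    hists = [np.histogram(z[:, j], bins=bins, range=(-3.0, 3.0), density=True)[0]
             for j in range(z.shape[1])]
    return np.concatenate(hists)                      # order-invariant set feature

def anomaly_score(test_desc: np.ndarray, train_descs: np.ndarray) -> float:
    """Distance to the nearest descriptor of a normal training sample."""
    return float(np.min(np.linalg.norm(train_descs - test_desc, axis=1)))
```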
1 code implementation • 2 Nov 2023 • Tal Reiss, Bar Cavia, Yedid Hoshen
We therefore introduce the concept of "fact checking", adapted from fake news detection, for detecting zero-day deepfake attacks.
Ranked #1 on DeepFake Detection on FakeAVCeleb
no code implementations • 20 Jul 2023 • Yedid Hoshen
These limitations can be overcome when there are strong task priors, as is the case for many industrial tasks.
no code implementations • 12 Jun 2023 • Tal Reiss, Niv Cohen, Yedid Hoshen
It is tempting to hypothesize that anomaly detection can improve indefinitely by increasing the scale of our networks, making their representations more expressive.
1 code implementation • 30 Mar 2023 • Tzeviya Sylvia Fuchs, Yedid Hoshen
We use a thresholding function on the temporal gradient magnitude to define a pseudo-label for wordness.
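A minimal sketch of the pseudo-labeling rule above, assuming per-frame speech features; the feature choice, the padding, and the threshold are placeholders for illustration.

```python
import numpy as np

def wordness_pseudo_labels(feats: np.ndarray, thresh: float) -> np.ndarray:
    """Pseudo-label each frame as word (1) or non-word (0).

    feats: (T, d) per-frame speech features. Frames whose temporal gradient
    magnitude exceeds `thresh` are labeled as word activity.
    """
    grad = np.diff(feats, axis=0)               # (T-1, d) frame-to-frame change
    mag = np.linalg.norm(grad, axis=1)          # (T-1,) temporal gradient magnitude
    mag = np.concatenate([mag[:1], mag])        # pad back to length T
    return (mag > thresh).astype(np.int64)
```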
1 code implementation • 23 Feb 2023 • Niv Cohen, Issar Tzachor, Yedid Hoshen
Fine-grained anomaly detection has recently been dominated by segmentation-based approaches.
Ranked #1 on Anomaly Detection on UEA time-series datasets
no code implementations • 2 Feb 2023 • Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav-Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, Yedid Hoshen
Our approach uses a video diffusion model to combine, at inference time, the low-resolution spatio-temporal information from the original video with new, high-resolution information that it synthesizes to align with the guiding text prompt.
1 code implementation • 1 Dec 2022 • Jonathan Kahana, Niv Cohen, Yedid Hoshen
We propose a new approach, CLIPPR (CLIP with Priors), which adapts zero-shot models for regression and classification on unlabelled datasets.
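A hedged sketch of one way such prior adaptation could look: penalize the KL divergence between an operator-supplied label prior and the batch-averaged zero-shot predictions on unlabelled data. The exact CLIPPR objective may differ; `logits` and `prior` here are assumed inputs.

```python
import torch
import torch.nn.functional as F

def prior_matching_loss(logits: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    """KL divergence between an assumed label prior and the batch-averaged
    predicted label distribution.

    logits: (B, num_classes) zero-shot similarity scores.
    prior: (num_classes,) probabilities that sum to 1.
    """
    mean_pred = F.softmax(logits, dim=-1).mean(dim=0)    # aggregate over the batch
    # F.kl_div expects log-probabilities as input; this computes KL(prior || mean_pred).
    return F.kl_div(mean_pred.log(), prior, reduction="sum")
```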
4 code implementations • 1 Dec 2022 • Tal Reiss, Yedid Hoshen
Surprisingly, we find that this simple representation is sufficient to achieve state-of-the-art performance in ShanghaiTech, the largest and most complex VAD dataset.
Ranked #1 on Abnormal Event Detection In Video on UCSD Ped2
1 code implementation • 17 Nov 2022 • Eliahu Horwitz, Yedid Hoshen
Diffusion models have become the go-to method for many generative tasks, particularly for image-to-image generation tasks such as super-resolution and inpainting.
1 code implementation • 19 Oct 2022 • Tal Reiss, Niv Cohen, Eliahu Horwitz, Ron Abutbul, Yedid Hoshen
Anomaly detection, a central task in science and industry, seeks to identify unusual phenomena.
Ranked #1 on Anomaly Detection on ODDS
no code implementations • 7 Jul 2022 • Niv Cohen, Jonathan Kahana, Yedid Hoshen
Breaking from previous research, we present a new anomaly detection method that allows operators to exclude an attribute from being considered as relevant for anomaly detection.
1 code implementation • 21 Mar 2022 • Jonathan Kahana, Yedid Hoshen
Here, our objective is to learn representations that are invariant to the domain (sensitive attribute) for which labels are provided, while being informative over all other image attributes, which are unlabeled.
1 code implementation • 10 Mar 2022 • Eliahu Horwitz, Yedid Hoshen
We utilize a recently introduced 3D anomaly detection dataset to evaluate whether not using 3D information is a lost opportunity.
Ranked #3 on 3D Anomaly Detection and Segmentation on MVTEC 3D-AD
1 code implementation • 8 Feb 2022 • Yedid Hoshen
We show that by parameterizing each time series using cumulative Radon features, we are able to efficiently and effectively model the distribution of normal time series.
no code implementations • 14 Dec 2021 • Niv Cohen, Ron Abutbul, Yedid Hoshen
Out-of-distribution detection seeks to identify novelties, samples that deviate from the norm.
Ranked #1 on Anomaly Detection on Unlabeled ImageNet-30 vs Flowers-102 (using extra training data)
no code implementations • 6 Dec 2021 • Nir Zabari, Yedid Hoshen
The output of this stage provides pixel-level pseudo-labels, instead of the manual pixel-level labels required by supervised methods.
no code implementations • ICLR 2022 • Yoav Levine, Noam Wies, Daniel Jannai, Dan Navon, Yedid Hoshen, Amnon Shashua
We highlight a bias introduced by this common practice: we prove that the pretrained NLM can model much stronger dependencies between text segments that appeared in the same training example than it can between text segments that appeared in different training examples.
no code implementations • 29 Sep 2021 • Jonathan Kahana, Yedid Hoshen
Current discriminative approaches are typically based on adversarial training and do not reach comparable accuracy.
no code implementations • 29 Sep 2021 • Chen Almagor, Yedid Hoshen
Tabular data is one of the most common data types in machine learning; however, deep neural networks have not yet convincingly outperformed classical baselines on such datasets.
no code implementations • 29 Sep 2021 • Niv Cohen, Yedid Hoshen
In this setting, the model is provided with an exhaustive list of phrases describing all the possible values of a specific attribute, together with a shared image-language embedding (e.g., CLIP).
1 code implementation • ICCV 2021 • Yael Vinker, Eliahu Horwitz, Nir Zabari, Yedid Hoshen
In this paper, we present DeepSIM, a generative model for conditional image manipulation based on a single image.
Ranked #1 on Image Manipulation on LRS2
1 code implementation • NeurIPS 2021 • Aviv Gabbay, Niv Cohen, Yedid Hoshen
Unsupervised disentanglement has been shown to be theoretically impossible without inductive biases on the models and the data.
2 code implementations • 7 Jun 2021 • Tal Reiss, Yedid Hoshen
We take the approach of transferring representations pre-trained on external datasets for anomaly detection.
Ranked #4 on Anomaly Detection on One-class CIFAR-100 (using extra training data)
no code implementations • 8 Apr 2021 • Niv Cohen, Yedid Hoshen
The output of our method is a set of K principal concepts that summarize the dataset.
Ranked #4 on Image Clustering on ImageNet-100 (using extra training data)
1 code implementation • ICCV 2021 • Aviv Gabbay, Yedid Hoshen
In this work, we propose OverLORD, a single framework for disentangling labeled and unlabeled attributes as well as synthesizing high-fidelity images, which is composed of two stages: (i) Disentanglement: learning disentangled representations with latent optimization.
1 code implementation • ICCV 2021 • Avital Shafran, Shmuel Peleg, Yedid Hoshen
Membership inference attacks (MIA) try to detect if data samples were used to train a neural network model, e.g., to detect copyright abuses.
no code implementations • 1 Jan 2021 • Aviv Gabbay, Yedid Hoshen
Recent approaches for unsupervised image translation are strongly reliant on generative adversarial training and architectural locality constraints.
no code implementations • 1 Jan 2021 • Avital Shafran, Shmuel Peleg, Yedid Hoshen
A simple but effective approach for membership attacks can therefore use the reconstruction error.
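A minimal sketch of such a reconstruction-error membership test; `reconstruct` stands in for the target model's forward pass (e.g., an autoencoder) and the threshold would be calibrated on held-out data. Both are assumptions for illustration.

```python
import numpy as np

def membership_attack(reconstruct, samples: np.ndarray, threshold: float) -> np.ndarray:
    """Predict training-set membership from reconstruction error.

    reconstruct: placeholder for the target model's forward pass.
    samples: (N, ...) candidate inputs. Low reconstruction error is taken
    as evidence that a sample was seen during training.
    """
    recon = reconstruct(samples)
    # Per-sample mean squared error over all non-batch dimensions.
    errors = ((samples - recon) ** 2).reshape(len(samples), -1).mean(axis=1)
    return errors < threshold    # True = predicted member
```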
1 code implementation • CVPR 2021 • Tal Reiss, Niv Cohen, Liron Bergman, Yedid Hoshen
In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning.
Ranked #1 on Anomaly Detection on Cats-and-Dogs
no code implementations • 9 Jul 2020 • Aviv Gabbay, Yedid Hoshen
Unsupervised image-to-image translation methods have achieved tremendous success in recent years.
5 code implementations • 5 May 2020 • Niv Cohen, Yedid Hoshen
Nearest neighbor (kNN) methods utilizing deep pre-trained features exhibit very strong anomaly detection performance when applied to entire images.
Ranked #23 on Anomaly Detection on VisA
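A minimal sketch of kNN anomaly scoring over deep features, assuming features are extracted offline from a frozen ImageNet-pretrained backbone; the value of k and the scoring rule are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_knn_detector(train_feats: np.ndarray, k: int = 2) -> NearestNeighbors:
    """Index deep features of normal training images (e.g. penultimate-layer
    activations of a frozen pre-trained backbone)."""
    return NearestNeighbors(n_neighbors=k).fit(train_feats)

def knn_anomaly_scores(index: NearestNeighbors, test_feats: np.ndarray) -> np.ndarray:
    """Anomaly score = mean distance to the k nearest normal features."""
    dists, _ = index.kneighbors(test_feats)
    return dists.mean(axis=1)
```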
2 code implementations • ICLR 2020 • Liron Bergman, Yedid Hoshen
Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence.
Ranked #2 on Anomaly Detection on UEA time-series datasets
no code implementations • 7 Apr 2020 • Yael Vinker, Nir Zabari, Yedid Hoshen
We present AugurOne, a novel approach for training single image generative models.
no code implementations • 24 Feb 2020 • Liron Bergman, Niv Cohen, Yedid Hoshen
Nearest neighbors is a successful and long-standing technique for anomaly detection.
Ranked #2 on Anomaly Detection on Unlabeled ImageNet-30 vs CUB-200 (using extra training data)
1 code implementation • 27 Nov 2019 • Avital Shafran, Gil Segev, Shmuel Peleg, Yedid Hoshen
As neural networks revolutionize many applications, significant privacy conflicts between model users and providers emerge.
2 code implementations • ICLR 2020 • Aviv Gabbay, Yedid Hoshen
Learning to disentangle the hidden factors of variation within a set of observations is a key task for artificial intelligence.
1 code implementation • 5 Jun 2019 • Aviv Gabbay, Yedid Hoshen
We show that style generators outperform other GANs as well as Deep Image Prior as priors for image enhancement tasks.
1 code implementation • CVPR 2019 • Yedid Hoshen, Jitendra Malik
GLANN combines the strengths of IMLE and GLO in a way that overcomes the main drawbacks of each method.
no code implementations • 14 Dec 2018 • Yedid Hoshen
Blind single-channel source separation is a long standing signal processing challenge.
no code implementations • NeurIPS 2018 • Yedid Hoshen
Our method is much faster at inference time, is able to leverage large datasets and has a simple interpretation.
1 code implementation • ICLR 2019 • Tavi Halperin, Ariel Ephrat, Yedid Hoshen
In this work, we introduce a new method, Neural Egg Separation, to tackle the scenario of extracting a signal from an unobserved distribution additively mixed with a signal from an observed distribution.
1 code implementation • ECCV 2018 • Yedid Hoshen, Lior Wolf
NAM relies on a pre-trained generative model of the target domain, and aligns each source image with an image synthesized from the target domain, while jointly optimizing the domain mapping function.
no code implementations • CVPR 2018 • Yedid Hoshen, Lior Wolf
Linking between two data sources is a basic building block in numerous computer vision problems.
4 code implementations • EMNLP 2018 • Yedid Hoshen, Lior Wolf
We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment.
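A rough sketch of second-moment alignment via whitening: after each language's embeddings are transformed to have identity covariance, their second moments match, providing an initial alignment. The embedding matrices below are random placeholders, and the paper's full procedure (including the iterative refinement) goes beyond this.

```python
import numpy as np

def whiten(X: np.ndarray) -> np.ndarray:
    """Transform embeddings so their second moment (covariance) is identity."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    return Xc @ vecs / np.sqrt(vals + 1e-8)

# Placeholder embedding matrices (vocab x dim); real inputs would be
# monolingual word vectors such as fastText.
en_embeddings = np.random.randn(5000, 300)
fr_embeddings = np.random.randn(5000, 300)

# After whitening, both languages share the same second moment, giving a
# rough initial alignment that iterative refinement can then improve.
en_aligned = whiten(en_embeddings)
fr_aligned = whiten(fr_embeddings)
```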
no code implementations • ICLR 2018 • Yedid Hoshen, Lior Wolf
We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function.
no code implementations • NeurIPS 2017 • Yedid Hoshen
In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents.
no code implementations • 7 Jun 2015 • Yedid Hoshen, Shmuel Peleg
This indicates that while some tasks may be easily learnable end-to-end, others may need to be broken into sub-tasks.
no code implementations • 20 May 2015 • Yedid Hoshen, Shmuel Peleg
Video surveillance cameras generate most of the recorded video, and there is far more recorded video than operators can watch.
no code implementations • CVPR 2016 • Yedid Hoshen, Shmuel Peleg
As head-worn cameras do not capture the photographer, it may seem that the anonymity of the photographer is preserved even when the video is publicly distributed.