Search Results for author: Yedid Hoshen

Found 56 papers, 31 papers with code

An Egocentric Look at Video Photographer Identity

no code implementations CVPR 2016 Yedid Hoshen, Shmuel Peleg

As head-worn cameras do not capture the photographer, it may seem that the anonymity of the photographer is preserved even when the video is publicly distributed.

Live Video Synopsis for Multiple Cameras

no code implementations 20 May 2015 Yedid Hoshen, Shmuel Peleg

Video surveillance cameras generate most recorded video, and there is far more recorded video than operators can watch.

Decision Making

Visual Learning of Arithmetic Operations

no code implementations 7 Jun 2015 Yedid Hoshen, Shmuel Peleg

This indicates that while some tasks may be easily learnable end-to-end, others may need to be broken into sub-tasks.

VAIN: Attentional Multi-agent Predictive Modeling

no code implementations NeurIPS 2017 Yedid Hoshen

In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents.

Identifying Analogies Across Domains

no code implementations ICLR 2018 Yedid Hoshen, Lior Wolf

We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function.

Translation

Non-Adversarial Unsupervised Word Translation

4 code implementations EMNLP 2018 Yedid Hoshen, Lior Wolf

We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment.

Translation Word Translation
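
A minimal sketch of the second-moment alignment step described above (whitening the source embeddings and re-colouring them with the target covariance); the function name and the assumption of centered embedding matrices are illustrative, and the iterative refinement stage is not shown:

```python
import numpy as np

def second_moment_align(X, Y, eps=1e-8):
    """Map source embeddings X so their covariance matches that of Y.

    X, Y: (n_words, dim) centered word-embedding matrices (an assumption
    of this sketch). Only the initial second-moment alignment is shown,
    not the iterative refinement of the alignment.
    """
    def cov_power(M, p):
        c = M.T @ M / len(M)                      # empirical second moment
        vals, vecs = np.linalg.eigh(c)
        vals = np.clip(vals, eps, None)
        return vecs @ np.diag(vals ** p) @ vecs.T

    # Whiten X, then colour it with Y's covariance.
    return X @ cov_power(X, -0.5) @ cov_power(Y, 0.5)
```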

Unsupervised Correlation Analysis

no code implementations CVPR 2018 Yedid Hoshen, Lior Wolf

Linking between two data sources is a basic building block in numerous computer vision problems.

NAM: Non-Adversarial Unsupervised Domain Mapping

1 code implementation ECCV 2018 Yedid Hoshen, Lior Wolf

NAM relies on a pre-trained generative model of the target domain, and aligns each source image with an image synthesized from the target domain, while jointly optimizing the domain mapping function.
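
As a loose sketch of the non-adversarial alignment described above: a frozen pre-trained target-domain generator, a per-source-image latent code, and a trainable mapping network are optimized jointly so that the mapped synthetic image reconstructs the source image. The mapping direction, loss, and training schedule here are assumptions, not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def nam_style_alignment(src_images, G, f, z, steps=1000, lr=0.05):
    """Loose sketch of non-adversarial domain alignment (assumptions, not NAM's exact recipe).

    src_images: (N, C, H, W) source-domain images.
    G: frozen pre-trained generator of the target domain.
    f: trainable mapping network from target-domain images to the source domain.
    z: (N, latent_dim) per-image latent codes with requires_grad=True.
    """
    opt = torch.optim.Adam([z] + list(f.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fake_tgt = G(z)                                   # synthesized target-domain images
        loss = F.l1_loss(f(fake_tgt), src_images)         # align each source image with G(z)
        loss.backward()
        opt.step()
    return G(z).detach()                                  # target-domain analogies of the sources
```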

Neural separation of observed and unobserved distributions

1 code implementation ICLR 2019 Tavi Halperin, Ariel Ephrat, Yedid Hoshen

In this work, we introduce a new method---Neural Egg Separation---to tackle the scenario of extracting a signal from an unobserved distribution additively mixed with a signal from an observed distribution.

Speaker Separation

Non-Adversarial Mapping with VAEs

no code implementations NeurIPS 2018 Yedid Hoshen

Our method is much faster at inference time, is able to leverage large datasets and has a simple interpretation.

Style Generator Inversion for Image Enhancement and Animation

1 code implementation 5 Jun 2019 Aviv Gabbay, Yedid Hoshen

We show that style generators outperform other GANs as well as Deep Image Prior as priors for image enhancement tasks.

Image Enhancement Image Manipulation +1

Demystifying Inter-Class Disentanglement

2 code implementations ICLR 2020 Aviv Gabbay, Yedid Hoshen

Learning to disentangle the hidden factors of variation within a set of observations is a key task for artificial intelligence.

Clustering Disentanglement +2

Crypto-Oriented Neural Architecture Design

1 code implementation 27 Nov 2019 Avital Shafran, Gil Segev, Shmuel Peleg, Yedid Hoshen

As neural networks revolutionize many applications, significant privacy conflicts between model users and providers emerge.

Deep Nearest Neighbor Anomaly Detection

no code implementations 24 Feb 2020 Liron Bergman, Niv Cohen, Yedid Hoshen

Nearest neighbors is a successful and long-standing technique for anomaly detection.

Anomaly Detection
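
A minimal sketch of the nearest-neighbour scoring idea: each test sample is scored by its mean distance to its k nearest normal training samples in a deep feature space. The feature extractor and the value of k are placeholders rather than the paper's exact configuration:

```python
import torch

def knn_anomaly_scores(train_feats, test_feats, k=2):
    """Score test samples by their mean distance to the k nearest normal
    training samples; higher scores indicate more anomalous samples.

    train_feats, test_feats: (N, D) deep features, e.g. from a frozen
    ImageNet-pretrained backbone (an assumption of this sketch).
    """
    dists = torch.cdist(test_feats, train_feats)             # (N_test, N_train) L2 distances
    knn_dists = dists.topk(k, dim=1, largest=False).values   # k smallest distances per test sample
    return knn_dists.mean(dim=1)
```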

Training End-to-end Single Image Generators without GANs

no code implementations 7 Apr 2020 Yael Vinker, Nir Zabari, Yedid Hoshen

We present AugurOne, a novel approach for training single image generative models.

Image Generation

Classification-Based Anomaly Detection for General Data

2 code implementations ICLR 2020 Liron Bergman, Yedid Hoshen

Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence.

Anomaly Detection Classification +1

Sub-Image Anomaly Detection with Deep Pyramid Correspondences

5 code implementations 5 May 2020 Niv Cohen, Yedid Hoshen

Nearest neighbor (kNN) methods utilizing deep pre-trained features exhibit very strong anomaly detection performance when applied to entire images.

Segmentation Unsupervised Anomaly Detection

Image Shape Manipulation from a Single Augmented Training Sample

1 code implementation 2 Jul 2020 Yael Vinker, Eliahu Horwitz, Nir Zabari, Yedid Hoshen

In this paper, we present DeepSIM, a generative model for conditional image manipulation based on a single image.

Image Manipulation Image-to-Image Translation +1

Improving Style-Content Disentanglement in Image-to-Image Translation

no code implementations 9 Jul 2020 Aviv Gabbay, Yedid Hoshen

Unsupervised image-to-image translation methods have achieved tremendous success in recent years.

Disentanglement Translation +1

PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation

1 code implementation CVPR 2021 Tal Reiss, Niv Cohen, Liron Bergman, Yedid Hoshen

In recent years, the anomaly detection community has attempted to obtain better features using advances in deep self-supervised feature learning.

Continual Learning Multi-class Classification +2

Learning Disentangled Representations for Image Translation

no code implementations 1 Jan 2021 Aviv Gabbay, Yedid Hoshen

Recent approaches for unsupervised image translation are strongly reliant on generative adversarial training and architectural locality constraints.

Disentanglement Translation

Membership Inference Attacks are Easier on Difficult Problems

1 code implementation ICCV 2021 Avital Shafran, Shmuel Peleg, Yedid Hoshen

Membership inference attacks (MIA) try to detect if data samples were used to train a neural network model, e.g. to detect copyright abuses.

Image Segmentation Medical Image Segmentation +4

Scaling-up Disentanglement for Image Translation

1 code implementation ICCV 2021 Aviv Gabbay, Yedid Hoshen

In this work, we propose OverLORD, a single framework for disentangling labeled and unlabeled attributes as well as synthesizing high-fidelity images, which is composed of two stages: (i) Disentanglement: learning disentangled representations with latent optimization.

Disentanglement Translation

Dataset Summarization by K Principal Concepts

no code implementations 8 Apr 2021 Niv Cohen, Yedid Hoshen

The output of our method is a set of K principal concepts that summarize the dataset.

Ranked #4 on Image Clustering on ImageNet-100 (using extra training data)

Clustering Image Clustering

Mean-Shifted Contrastive Loss for Anomaly Detection

2 code implementations 7 Jun 2021 Tal Reiss, Yedid Hoshen

We take the approach of transferring representations pre-trained on external datasets for anomaly detection.

Ranked #3 on Anomaly Detection on One-class CIFAR-100 (using extra training data)

Anomaly Detection Contrastive Learning +2
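
A loose sketch of what a mean-shifted contrastive objective can look like: features of two augmented views are normalized, shifted by the centre of the normal training features, re-normalized, and fed to a standard contrastive loss. The exact formulation below is an assumption for illustration; see the released code for the actual objective:

```python
import torch
import torch.nn.functional as F

def mean_shifted_contrastive_loss(z1, z2, center, tau=0.25):
    """Contrastive loss computed in a coordinate frame shifted by the centre
    of the normal-data features (illustrative sketch, not the official code).

    z1, z2: (B, D) features of two augmented views of the same images.
    center: (D,) unit-norm mean of the normal training features (assumed given).
    """
    z1 = F.normalize(F.normalize(z1, dim=1) - center, dim=1)
    z2 = F.normalize(F.normalize(z2, dim=1) - center, dim=1)
    z = torch.cat([z1, z2], dim=0)                                  # (2B, D)
    sim = z @ z.t() / tau                                           # cosine similarities / temperature
    sim.fill_diagonal_(float('-inf'))                               # exclude self-similarity
    b = z1.size(0)
    pos = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)]).to(z.device)
    return F.cross_entropy(sim, pos)                                # positive = the other augmented view
```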

An Image is Worth More Than a Thousand Words: Towards Disentanglement in the Wild

1 code implementation NeurIPS 2021 Aviv Gabbay, Niv Cohen, Yedid Hoshen

Unsupervised disentanglement has been shown to be theoretically impossible without inductive biases on the models and the data.

Disentanglement Image Manipulation

Language-Guided Image Clustering

no code implementations 29 Sep 2021 Niv Cohen, Yedid Hoshen

In this setting, the model is provided with an exhaustive list of phrases describing all the possible values of a specific attribute, together with a shared image-language embedding (e.g.

Attribute Clustering +2

Fieldwise Factorized Networks for Tabular Data Classification

no code implementations 29 Sep 2021 Chen Almagor, Yedid Hoshen

Tabular data is one of the most common data types in machine learning; however, deep neural networks have not yet convincingly outperformed classical baselines on such datasets.

Classification tabular-classification

Inductive-Biases for Contrastive Learning of Disentangled Representations

no code implementations 29 Sep 2021 Jonathan Kahana, Yedid Hoshen

Current discriminative approaches are typically based on adversarial-training and do not reach comparable accuracy.

Contrastive Learning Disentanglement +2

The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design

no code implementations ICLR 2022 Yoav Levine, Noam Wies, Daniel Jannai, Dan Navon, Yedid Hoshen, Amnon Shashua

We highlight a bias introduced by this common practice: we prove that the pretrained NLM can model much stronger dependencies between text segments that appeared in the same training example than it can between text segments that appeared in different training examples.

Chunking In-Context Learning +4

Semantic Segmentation In-the-Wild Without Seeing Any Segmentation Examples

no code implementations 6 Dec 2021 Nir Zabari, Yedid Hoshen

The output of this stage provides pixel-level pseudo-labels, instead of the manual pixel-level labels required by supervised methods.

Image Segmentation Segmentation +1

Time Series Anomaly Detection by Cumulative Radon Features

1 code implementation 8 Feb 2022 Yedid Hoshen

We show that by parameterizing each time series using cumulative Radon features, we are able to efficiently and effectively model the distribution of normal time series.

Anomaly Detection Time Series +1

A Contrastive Objective for Learning Disentangled Representations

1 code implementation 21 Mar 2022 Jonathan Kahana, Yedid Hoshen

Here, our objective is to learn representations that are invariant to the domain (sensitive attribute) for which labels are provided, while being informative over all other image attributes, which are unlabeled.

Attribute Informativeness +1

Red PANDA: Disambiguating Anomaly Detection by Removing Nuisance Factors

no code implementations 7 Jul 2022 Niv Cohen, Jonathan Kahana, Yedid Hoshen

Breaking from previous research, we present a new anomaly detection method that allows operators to exclude an attribute from being considered as relevant for anomaly detection.

Anomaly Detection Attribute

Conffusion: Confidence Intervals for Diffusion Models

1 code implementation 17 Nov 2022 Eliahu Horwitz, Yedid Hoshen

Diffusion models have become the go-to method for many generative tasks, particularly for image-to-image generation tasks such as super-resolution and inpainting.

Conformal Prediction Facial Inpainting +2

Attribute-based Representations for Accurate and Interpretable Video Anomaly Detection

4 code implementations 1 Dec 2022 Tal Reiss, Yedid Hoshen

Surprisingly, we find that this simple representation is sufficient to achieve state-of-the-art performance on ShanghaiTech, the largest and most complex VAD dataset.

Abnormal Event Detection In Video Attribute +1

Improving Zero-Shot Models with Label Distribution Priors

1 code implementation 1 Dec 2022 Jonathan Kahana, Niv Cohen, Yedid Hoshen

We propose a new approach, CLIPPR (CLIP with Priors), which adapts zero-shot models for regression and classification on unlabelled datasets.

Attribute regression
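
The snippet names the approach but not its mechanics; as a rough illustration of exploiting a label-distribution prior on an unlabelled set, the sketch below trains a small adapter over frozen CLIP features so that the batch-averaged prediction matches a given prior via a KL term. The adapter, the KL formulation, and the hyperparameters are assumptions, not necessarily CLIPPR's actual losses:

```python
import torch
import torch.nn.functional as F

def prior_matching_loss(image_feats, text_feats, prior, adapter, tau=0.01):
    """Illustrative only: adapt a zero-shot classifier on unlabelled data so
    that its average prediction matches a known class-frequency prior.

    image_feats: (B, D) frozen CLIP image features of unlabelled images.
    text_feats:  (C, D) frozen CLIP text features, one per class name.
    prior:       (C,)   assumed class-frequency prior over the dataset.
    adapter:     small trainable module refining the image features.
    """
    z = F.normalize(adapter(image_feats), dim=1)
    logits = z @ F.normalize(text_feats, dim=1).t() / tau      # zero-shot style logits
    avg_pred = logits.softmax(dim=1).mean(dim=0)               # batch-level label distribution
    return F.kl_div(avg_pred.log(), prior, reduction='sum')    # pull it towards the prior
```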

Dreamix: Video Diffusion Models are General Video Editors

no code implementations 2 Feb 2023 Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, Yedid Hoshen

Our approach uses a video diffusion model to combine, at inference time, the low-resolution spatio-temporal information from the original video with new, high-resolution information that it synthesizes to align with the guiding text prompt.

Image Animation Image to Video Generation +3

Set Features for Fine-grained Anomaly Detection

1 code implementation 23 Feb 2023 Niv Cohen, Issar Tzachor, Yedid Hoshen

Fine-grained anomaly detection has recently been dominated by segmentation-based approaches.

Anomaly Detection Time Series

Unsupervised Word Segmentation Using Temporal Gradient Pseudo-Labels

1 code implementation 30 Mar 2023 Tzeviya Sylvia Fuchs, Yedid Hoshen

We use a thresholding function on the temporal gradient magnitude to define a pseudo-label for wordness.

Pseudo Label Segmentation
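
A tiny sketch of the pseudo-labelling rule described above: compute the temporal gradient magnitude of a frame-level feature sequence and threshold it. The choice of features, the threshold value, and which side of the threshold counts as "word" are assumptions of this sketch:

```python
import numpy as np

def wordness_pseudo_labels(feats, threshold):
    """feats: (T, D) frame-level speech features (assumed, e.g. self-supervised
    speech embeddings). Returns a binary per-frame pseudo-label derived from
    the temporal gradient magnitude."""
    grad = np.diff(feats, axis=0, prepend=feats[:1])      # temporal gradient, shape (T, D)
    grad_mag = np.linalg.norm(grad, axis=1)               # per-frame gradient magnitude
    return (grad_mag > threshold).astype(int)             # 1 = pseudo-labelled as 'word' (assumed polarity)
```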

No Free Lunch: The Hazards of Over-Expressive Representations in Anomaly Detection

no code implementations 12 Jun 2023 Tal Reiss, Niv Cohen, Yedid Hoshen

It is tempting to hypothesize that anomaly detection can improve indefinitely by increasing the scale of our networks, making their representations more expressive.

Anomaly Detection

Representation Learning in Anomaly Detection: Successes, Limits and a Grand Challenge

no code implementations 20 Jul 2023 Yedid Hoshen

These limitations can be overcome when there are strong task priors, as is the case for many industrial tasks.

Anomaly Detection Representation Learning

Detecting Deepfakes Without Seeing Any

1 code implementation 2 Nov 2023 Tal Reiss, Bar Cavia, Yedid Hoshen

We therefore introduce the concept of "fact checking", adapted from fake news detection, for detecting zero-day deepfake attacks.

DeepFake Detection Face Swapping +2

Set Features for Anomaly Detection

1 code implementation 24 Nov 2023 Niv Cohen, Issar Tzachor, Yedid Hoshen

This paper proposes set features for detecting anomalies in samples that consist of unusual combinations of normal elements.

Anomaly Detection Density Estimation +2

Classifying Nodes in Graphs without GNNs

1 code implementation 8 Feb 2024 Daniel Winter, Niv Cohen, Yedid Hoshen

Recently, distillation methods succeeded in eliminating the use of GNNs at test time, but they still require them during training.

Node Classification

Recovering the Pre-Fine-Tuning Weights of Generative Models

1 code implementation 15 Feb 2024 Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen

The dominant paradigm in generative modeling consists of two steps: i) pre-training on a large-scale but unsafe dataset, ii) aligning the pre-trained model with human values via fine-tuning.

Pre-Fine-Tuning Weight Recovery

Distilling Datasets Into Less Than One Image

1 code implementation 18 Mar 2024 Asaf Shul, Eliahu Horwitz, Yedid Hoshen

Current methods frame this as maximizing the distilled classification accuracy for a budget of K distilled images per class, where K is a positive integer.

ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion

no code implementations 27 Mar 2024 Daniel Winter, Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, Yedid Hoshen

To tackle this challenge, we propose bootstrap supervision; leveraging our object removal model trained on a small counterfactual dataset, we synthetically expand this dataset considerably.

counterfactual Object
