Search Results for author: Hila Chefer

Found 10 papers, 8 papers with code

Still-Moving: Customized Video Generation without Customized Video Data

no code implementations · 11 Jul 2024 · Hila Chefer, Shiran Zada, Roni Paiss, Ariel Ephrat, Omer Tov, Michael Rubinstein, Lior Wolf, Tali Dekel, Tomer Michaeli, Inbar Mosseri

We assume access to a customized version of the T2I model, trained only on still image data (e.g., using DreamBooth or StyleDrop).

Video Generation

The Hidden Language of Diffusion Models

1 code implementation · 1 Jun 2023 · Hila Chefer, Oran Lang, Mor Geva, Volodymyr Polosukhin, Assaf Shocher, Michal Irani, Inbar Mosseri, Lior Wolf

In this work, we present Conceptor, a novel method to interpret the internal representation of a textual concept by a diffusion model.

Bias Detection · Image Manipulation
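The snippet above describes Conceptor only at a high level. As a rough illustration of the general idea — decomposing a concept's embedding into a small set of interpretable vocabulary directions — here is a hedged numpy sketch; the function name, the optimizer, and all parameters are hypothetical and not the paper's implementation:

```python
import numpy as np

def decompose_concept(concept_vec, vocab_embeds, steps=500, lr=0.1, top_k=5):
    """Hypothetical sketch: approximate a concept embedding as a non-negative
    weighted combination of word embeddings, so the largest weights point to
    the tokens most associated with the concept. Not the paper's code."""
    n = vocab_embeds.shape[0]
    w = np.full(n, 1.0 / n)                      # start from uniform weights
    for _ in range(steps):
        resid = vocab_embeds.T @ w - concept_vec  # reconstruction error in embedding space
        w -= lr * (vocab_embeds @ resid)          # gradient step on ||V^T w - c||^2
        w = np.clip(w, 0.0, None)                 # projected step: keep weights non-negative
    top = np.argsort(w)[::-1][:top_k]             # indices of the dominant tokens
    return top, w
```

With orthonormal word embeddings this recovers the mixture weights exactly; in practice the paper operates inside the diffusion model's text-conditioning space.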

Discriminative Class Tokens for Text-to-Image Diffusion Models

1 code implementation · ICCV 2023 · Idan Schwartz, Vésteinn Snæbjarnarson, Hila Chefer, Ryan Cotterell, Serge Belongie, Lior Wolf, Sagie Benaim

This approach has two disadvantages: (i) supervised datasets are generally small compared to the large-scale scraped text-image datasets on which text-to-image models are trained, affecting the quality and diversity of the generated images, and (ii) the input is a hard-coded label, as opposed to free-form text, limiting the control over the generated images.

Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

2 code implementations · 31 Jan 2023 · Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or

Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt.

Generative Semantic Nursing

Optimizing Relevance Maps of Vision Transformers Improves Robustness

1 code implementation · 2 Jun 2022 · Hila Chefer, Idan Schwartz, Lior Wolf

It has been observed that visual classification models often rely mostly on the image background, neglecting the foreground, which hurts their robustness to distribution changes.

Image Classification · Out-of-Distribution Generalization
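The snippet above states the problem (classifiers attending to the background) but not the objective. As an illustration of the general idea in the title — optimizing a ViT's relevance map so it concentrates on the foreground — here is a hedged sketch; the function, its arguments, and the weighting `lam` are hypothetical, not the authors' exact objective:

```python
import numpy as np

def relevance_loss(relevance, fg_mask, lam=1.0):
    """Hypothetical sketch: reward high per-patch relevance on a foreground
    mask and penalize relevance on the background. Not the paper's code."""
    r = np.asarray(relevance, dtype=float)
    m = np.asarray(fg_mask, dtype=float)
    fg = r[m == 1].mean() if (m == 1).any() else 0.0   # mean relevance on foreground
    bg = r[m == 0].mean() if (m == 0).any() else 0.0   # mean relevance on background
    # Loss is zero when all relevance sits on the foreground.
    return (1.0 - fg) + lam * bg
```

Minimizing a term of this shape alongside the usual classification loss is one way to steer the model's relevance toward the object rather than the background.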

No Token Left Behind: Explainability-Aided Image Classification and Generation

1 code implementation · 11 Apr 2022 · Roni Paiss, Hila Chefer, Lior Wolf

To mitigate it, we present a novel explainability-based approach, which adds a loss term to ensure that CLIP focuses on all relevant semantic parts of the input, in addition to employing the CLIP similarity loss used in previous works.

Image Classification · Image Generation +4
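The snippet above describes the objective's structure: a CLIP similarity loss plus an explainability term that pushes relevance onto all relevant input tokens. A hedged sketch of that composition — the function names, the relevance normalization, and the weighting `lam` are assumptions, not the paper's implementation:

```python
import numpy as np

def explainability_loss(relevance, relevant_mask):
    """Hypothetical sketch: penalize low relevance on tokens marked relevant.
    `relevance` holds per-token relevance scores in [0, 1]; `relevant_mask`
    is 1 for semantic tokens CLIP should attend to. Not the paper's code."""
    r = np.asarray(relevance, dtype=float)
    m = np.asarray(relevant_mask, dtype=float)
    # Encourage every relevant token to receive high relevance.
    return float(np.sum(m * (1.0 - r)) / max(m.sum(), 1.0))

def total_loss(clip_similarity_loss, relevance, relevant_mask, lam=1.0):
    # Combined objective: the usual CLIP similarity loss plus the
    # explainability term, as the snippet describes.
    return clip_similarity_loss + lam * explainability_loss(relevance, relevant_mask)
```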

Image-Based CLIP-Guided Essence Transfer

1 code implementation · 24 Oct 2021 · Hila Chefer, Sagie Benaim, Roni Paiss, Lior Wolf

We make the distinction between (i) style transfer, in which a source image is manipulated to match the textures and colors of a target image, and (ii) essence transfer, in which one edits the source image to include high-level semantic attributes from the target.

Domain Adaptation · Style Transfer

Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers

1 code implementation · ICCV 2021 · Hila Chefer, Shir Gur, Lior Wolf

Transformers are increasingly dominating multi-modal reasoning tasks, such as visual question answering, achieving state-of-the-art results thanks to their ability to contextualize information using the self-attention and co-attention mechanisms.

Decoder · Image Segmentation +5

Transformer Interpretability Beyond Attention Visualization

3 code implementations · CVPR 2021 · Hila Chefer, Shir Gur, Lior Wolf

Self-attention techniques, and specifically Transformers, are dominating the field of text processing and are becoming increasingly popular in computer vision classification tasks.

General Classification · Text Classification +1
