Search Results for author: Hila Chefer

Found 6 papers, 6 papers with code

Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

1 code implementation • 31 Jan 2023 • Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or

Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt.

Generative Semantic Nursing
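The paper's "semantic nursing" strengthens the cross-attention of neglected subject tokens during denoising. A toy sketch of that kind of guidance signal (not the authors' implementation; function and variable names are illustrative):

```python
import numpy as np

def attend_and_excite_loss(attn_maps, subject_token_ids):
    """Toy attention-based guidance loss: for each subject token, take the
    maximum of its cross-attention map; the loss is driven by the most
    neglected subject token, so its attention can be 'excited'."""
    max_attn = [attn_maps[t].max() for t in subject_token_ids]
    return max(1.0 - a for a in max_attn)

# Example: 8x8 cross-attention maps for a 4-token prompt.
rng = np.random.default_rng(0)
attn = rng.uniform(0.0, 0.2, size=(4, 8, 8))
attn[1, 3, 3] = 0.9  # token 1 is well attended; token 2 stays neglected
loss = attend_and_excite_loss(attn, subject_token_ids=[1, 2])
print(loss)
```

In the actual method this loss would be backpropagated to update the diffusion latent at each denoising step; here it only illustrates how a neglected token dominates the objective.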

Optimizing Relevance Maps of Vision Transformers Improves Robustness

1 code implementation • 2 Jun 2022 • Hila Chefer, Idan Schwartz, Lior Wolf

It has been observed that visual classification models often rely mostly on the image background, neglecting the foreground, which hurts their robustness to distribution changes.

Image Classification • Out-of-Distribution Generalization
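The abstract suggests fine-tuning so that the model's relevance maps concentrate on the foreground object rather than the background. A minimal sketch of that kind of foreground-overlap objective, assuming a precomputed relevance map and foreground mask (all names hypothetical, not the paper's code):

```python
import numpy as np

def relevance_foreground_loss(relevance, fg_mask, eps=1e-8):
    """Hypothetical objective: penalize the share of relevance mass that
    falls on the background, encouraging decisions based on the object."""
    relevance = relevance / (relevance.sum() + eps)  # normalize to a distribution
    return relevance[fg_mask == 0].sum()

# Relevance concentrated on a 2x2 foreground patch -> low loss.
rel = np.full((4, 4), 0.01)
rel[1:3, 1:3] = 1.0
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1
loss = relevance_foreground_loss(rel, mask)
print(loss)
```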

No Token Left Behind: Explainability-Aided Image Classification and Generation

1 code implementation • 11 Apr 2022 • Roni Paiss, Hila Chefer, Lior Wolf

To mitigate it, we present a novel explainability-based approach, which adds a loss term to ensure that CLIP focuses on all relevant semantic parts of the input, in addition to employing the CLIP similarity loss used in previous works.

Image Classification • Image Generation • +3
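The described combination of a CLIP similarity loss with an explainability term can be illustrated with a toy objective (a sketch under assumed inputs, not the paper's formulation; the 0.2 relevance floor and all names are illustrative):

```python
import numpy as np

def clip_guidance_loss(similarity, token_relevance, relevant_ids, w=1.0):
    """Toy combined loss: the usual CLIP-style similarity term plus an
    explainability term asking every relevant prompt token to receive a
    non-negligible share of the model's relevance scores."""
    sim_loss = 1.0 - similarity                    # standard similarity term
    rel = token_relevance / token_relevance.sum()  # normalize relevance
    expl_loss = sum(max(0.0, 0.2 - rel[i]) for i in relevant_ids)
    return sim_loss + w * expl_loss

rel = np.array([0.5, 0.05, 0.4, 0.05])  # token 1 is nearly ignored
total = clip_guidance_loss(0.8, rel, relevant_ids=[0, 1, 2])
print(round(total, 3))  # -> 0.35: 0.2 similarity + 0.15 penalty for token 1
```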

Image-Based CLIP-Guided Essence Transfer

1 code implementation • 24 Oct 2021 • Hila Chefer, Sagie Benaim, Roni Paiss, Lior Wolf

We make the distinction between (i) style transfer, in which a source image is manipulated to match the textures and colors of a target image, and (ii) essence transfer, in which one edits the source image to include high-level semantic attributes from the target.

Domain Adaptation • Style Transfer

Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers

1 code implementation • ICCV 2021 • Hila Chefer, Shir Gur, Lior Wolf

Transformers are increasingly dominating multi-modal reasoning tasks, such as visual question answering, achieving state-of-the-art results thanks to their ability to contextualize information using the self-attention and co-attention mechanisms.

Image Segmentation • object-detection • +4

Transformer Interpretability Beyond Attention Visualization

2 code implementations • CVPR 2021 • Hila Chefer, Shir Gur, Lior Wolf

Self-attention techniques, and specifically Transformers, are dominating the field of text processing and are becoming increasingly popular in computer vision classification tasks.

General Classification • text-classification • +1
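The paper's interpretability method propagates relevance through the Transformer layers rather than reading off a single attention map. The skeleton below is a much-simplified cousin (attention-rollout-style aggregation with a residual identity); the actual paper uses gradient-weighted, LRP-based relevance rules:

```python
import numpy as np

def rollout_relevance(attn_layers):
    """Simplified relevance aggregation: start from the identity (each
    token relevant to itself) and accumulate each layer's head-averaged
    attention, with the additive term modeling the residual connection."""
    n = attn_layers[0].shape[-1]
    R = np.eye(n)
    for A in attn_layers:
        A_bar = A.mean(axis=0)                        # average over heads
        A_bar = A_bar / A_bar.sum(-1, keepdims=True)  # re-normalize rows
        R = R + A_bar @ R                             # residual + mixing
    return R

rng = np.random.default_rng(1)
layers = [rng.uniform(size=(2, 4, 4)) for _ in range(3)]  # 3 layers, 2 heads
R = rollout_relevance(layers)
print(R.shape)  # per-token relevance scores, one row per query token
```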
