1 code implementation • 31 Jan 2023 • Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or
Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt.
1 code implementation • 2 Jun 2022 • Hila Chefer, Idan Schwartz, Lior Wolf
It has been observed that visual classification models often rely mostly on the image background and neglect the foreground, a bias that hurts their robustness to distribution changes.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
1 code implementation • 11 Apr 2022 • Roni Paiss, Hila Chefer, Lior Wolf
To mitigate this, we present a novel explainability-based approach that adds a loss term ensuring CLIP focuses on all relevant semantic parts of the input, in addition to the CLIP similarity loss employed in previous works.
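The combination described above — a CLIP similarity objective plus an explainability term that rewards attention to all relevant semantic parts — can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the function name, the per-pixel list representation, the coverage-based form of the explainability term, and the weighting factor `lam` are all assumptions.

```python
def combined_loss(clip_similarity, relevance_map, fg_mask, lam=0.5):
    """Hypothetical sketch of an explainability-regularized CLIP loss.

    clip_similarity: scalar CLIP image-text similarity (higher is better).
    relevance_map:   flat list of per-pixel relevance scores in [0, 1].
    fg_mask:         flat binary list marking the semantic parts the
                     edit should cover.
    """
    # Standard CLIP guidance: maximize similarity, i.e. minimize its negative.
    sim_loss = -clip_similarity
    # Explainability term (assumed form): penalize low relevance coverage
    # over the masked semantic regions.
    masked = sum(r * m for r, m in zip(relevance_map, fg_mask))
    coverage = masked / max(sum(fg_mask), 1)
    expl_loss = 1.0 - coverage
    return sim_loss + lam * expl_loss
```

With full relevance coverage the explainability term vanishes and only the similarity loss remains; as coverage drops, the penalty grows, pushing the optimization to attend to every marked semantic part rather than a single dominant one.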
1 code implementation • 24 Oct 2021 • Hila Chefer, Sagie Benaim, Roni Paiss, Lior Wolf
We make the distinction between (i) style transfer, in which a source image is manipulated to match the textures and colors of a target image, and (ii) essence transfer, in which one edits the source image to include high-level semantic attributes from the target.
1 code implementation • ICCV 2021 • Hila Chefer, Shir Gur, Lior Wolf
Transformers are increasingly dominating multi-modal reasoning tasks, such as visual question answering, achieving state-of-the-art results thanks to their ability to contextualize information using the self-attention and co-attention mechanisms.
2 code implementations • CVPR 2021 • Hila Chefer, Shir Gur, Lior Wolf
Self-attention techniques, and specifically Transformers, are dominating the field of text processing and are becoming increasingly popular in computer vision classification tasks.