no code implementations • 22 Apr 2024 • Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba
Interpretability experiments proposed by MAIA compose these tools to describe and explain system behavior.
no code implementations • 22 Feb 2024 • Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, David Bau
We identify the mechanism that enables entity tracking and show that, in both the original model and its fine-tuned versions, primarily the same circuit implements entity tracking.
no code implementations • 3 Jan 2024 • Pratyusha Sharma, Tamar Rott Shaham, Manel Baradad, Stephanie Fu, Adrian Rodriguez-Munoz, Shivam Duggal, Phillip Isola, Antonio Torralba
Although LLM-generated images do not look like natural images, results on image generation, together with the models' ability to correct these generated images, indicate that precise modeling of strings can teach language models about numerous aspects of the visual world.
1 code implementation • NeurIPS 2023 • Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, Antonio Torralba
FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate.
no code implementations • 7 Jul 2023 • Xander Davies, Max Nadeau, Nikhil Prakash, Tamar Rott Shaham, David Bau
Recent work has shown that computation in language models may be human-understandable, with successful efforts to localize and intervene on both single-unit features and input-output circuits.
1 code implementation • 18 Dec 2022 • Noa Alkobi, Tamar Rott Shaham, Tomer Michaeli
Image completion is widely used in photo restoration and editing applications, e.g., for object removal.
no code implementations • 3 Dec 2022 • Idan Kligvasser, Tamar Rott Shaham, Noa Alkobi, Tomer Michaeli
Training a generative model on a single image has drawn significant attention in recent years.
1 code implementation • NeurIPS 2021 • Gal Greshler, Tamar Rott Shaham, Tomer Michaeli
Models for audio generation are typically trained on hours of recordings.
1 code implementation • CVPR 2021 • Tamar Rott Shaham, Michael Gharbi, Richard Zhang, Eli Shechtman, Tomer Michaeli
We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation.
44 code implementations • ICCV 2019 • Tamar Rott Shaham, Tali Dekel, Tomer Michaeli
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image.
no code implementations • CVPR 2018 • Tamar Rott Shaham, Tomer Michaeli
Lossy compression algorithms aim to compactly encode images in a way that allows them to be restored with minimal error.
1 code implementation • CVPR 2018 • Idan Kligvasser, Tamar Rott Shaham, Tomer Michaeli
However, state-of-the-art results are typically achieved by very deep networks, which can reach tens of layers with tens of millions of parameters.