FIND: A Function Description Benchmark for Evaluating Interpretability Methods
1 code implementation • NeurIPS 2023 • Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, Antonio Torralba
FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate.
Multimodal Neurons in Pretrained Text-Only Transformers
no code implementations • 3 Aug 2023 • Sarah Schwettmann, Neil Chowdhury, Samuel Klein, David Bau, Antonio Torralba
Language models demonstrate remarkable capacity to generalize representations learned in one modality to downstream tasks in other modalities.
Natural Language Descriptions of Deep Visual Features
2 code implementations • ICLR 2022 • Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas
Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active.
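A minimal sketch of the selection rule described above: score each candidate description by its pointwise mutual information with the neuron's active image regions, and keep the highest-scoring string. The candidate captions and probabilities below are hypothetical; in practice the conditional probability would come from a learned captioning model over the neuron's exemplar regions and the marginal from a language model.

```python
import math

def pmi_score(p_caption_given_regions, p_caption):
    """PMI(d; E) = log p(d | E) - log p(d) for a candidate description d
    and the neuron's exemplar image regions E."""
    return math.log(p_caption_given_regions) - math.log(p_caption)

def best_description(candidates, p_cond, p_marg):
    """Return the candidate string that maximizes PMI with the regions.
    p_cond[d]: probability of d given the exemplar regions (hypothetical values).
    p_marg[d]: marginal probability of d under a language model (hypothetical)."""
    return max(candidates, key=lambda d: pmi_score(p_cond[d], p_marg[d]))

# Toy example: "dog faces" is far more likely given the regions than in general,
# so it has the highest PMI even though other captions are more common overall.
cands = ["dog faces", "blue sky", "round objects"]
p_cond = {"dog faces": 0.6, "blue sky": 0.1, "round objects": 0.3}
p_marg = {"dog faces": 0.2, "blue sky": 0.4, "round objects": 0.4}
print(best_description(cands, p_cond, p_marg))  # -> dog faces
```

PMI rewards descriptions that are specific to the neuron's regions rather than merely probable a priori, which is why the marginal term is subtracted.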
Toward a Visual Concept Vocabulary for GAN Latent Space
1 code implementation • ICCV 2021 • Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob Andreas, Antonio Torralba
A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images.
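The transformations referred to above are typically realized as directions in the generator's latent space: moving a latent code along a learned direction changes one visual concept in the output image. A minimal sketch, assuming a 512-dimensional latent space and a random stand-in for a learned direction (both assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512  # typical GAN latent size; an assumption for illustration

def apply_direction(z, direction, alpha):
    """Shift latent code z along a unit-normalized direction with strength alpha."""
    d = direction / np.linalg.norm(direction)
    return z + alpha * d

z = rng.standard_normal(latent_dim)          # a sampled latent code
direction = rng.standard_normal(latent_dim)  # stand-in for a learned, interpretable direction
z_edit = apply_direction(z, direction, alpha=3.0)

# With a trained generator G, G(z) and G(z_edit) would differ along the
# visual concept the direction encodes (e.g. zoom, lighting, texture).
```

Because the direction is unit-normalized, `alpha` directly controls the edit strength, and negating it reverses the transformation.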
Latent Compass: Creation by Navigation
no code implementations • 20 Dec 2020 • Sarah Schwettmann, Hendrik Strobelt, Mauro Martino
Our approach puts creators in the discovery loop during real-time tool use, identifying directions that are perceptually meaningful to them and generating interpretable image translations along those directions.