Search Results for author: David Bau

Found 22 papers, 14 papers with code

Locating and Editing Factual Knowledge in GPT

1 code implementation • 10 Feb 2022 • Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov

We investigate the mechanisms underlying factual knowledge recall in autoregressive transformer language models.

Natural Language Descriptions of Deep Visual Features

no code implementations • 26 Jan 2022 • Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas

Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active.
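
A minimal sketch of the selection criterion described above. The candidate descriptions and the two log-probability tables are hypothetical stand-ins (MILAN obtains them from learned captioning and language models); only the PMI-maximizing search over strings follows the sentence above.

```python
def pointwise_mutual_information(log_p_desc_given_regions, log_p_desc):
    """PMI between a candidate description and a neuron's exemplar image regions."""
    return log_p_desc_given_regions - log_p_desc

def describe_neuron(candidates, log_p_given_regions, log_p_prior):
    """Return the candidate string with the highest PMI.

    candidates          : list of candidate description strings (hypothetical)
    log_p_given_regions : dict, description -> log p(description | exemplar regions)
    log_p_prior         : dict, description -> log p(description)
    """
    return max(
        candidates,
        key=lambda d: pointwise_mutual_information(log_p_given_regions[d], log_p_prior[d]),
    )

# Toy usage: "dog faces" fits the exemplar regions specifically, "an animal" is just generically likely.
print(describe_neuron(
    ["dog faces", "an animal"],
    log_p_given_regions={"dog faces": -2.0, "an animal": -1.5},
    log_p_prior={"dog faces": -9.0, "an animal": -3.0},
))  # -> dog faces
```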

Editing a classifier by rewriting its prediction rules

1 code implementation • NeurIPS 2021 • Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry

We present a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules.

Toward a Visual Concept Vocabulary for GAN Latent Space

1 code implementation • ICCV 2021 • Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob Andreas, Antonio Torralba

A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images.

Disentanglement

Sketch Your Own GAN

1 code implementation • ICCV 2021 • Sheng-Yu Wang, David Bau, Jun-Yan Zhu

In particular, we change the weights of an original GAN model according to user sketches.

Image Generation

Understanding the Role of Individual Units in a Deep Neural Network

2 code implementations • 10 Sep 2020 • David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, Antonio Torralba

We analyze a convolutional neural network (CNN) trained on scene classification and then use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.

Image Classification • Image Generation +1

What makes fake images detectable? Understanding properties that generalize

1 code implementation • ECCV 2020 • Lucy Chai, David Bau, Ser-Nam Lim, Phillip Isola

The quality of image generation and manipulation is reaching impressive levels, making it increasingly difficult for a human to distinguish between what is real and what is fake.

Image Generation

Rewriting a Deep Generative Model

3 code implementations • ECCV 2020 • David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba

To address the problem, we propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
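
As a rough illustration of the linear-associative-memory view, here is a NumPy sketch of a rank-one constrained least-squares update that forces a layer weight to map a chosen key to a chosen value while minimally disturbing previously stored keys. The weight, key covariance, and key/value vectors are random stand-ins; the paper edits an actual layer of a pretrained generator rather than a toy matrix.

```python
import numpy as np

def insert_rule(W, C, k_star, v_star):
    """Rank-one edit of a linear associative memory so that the new weight maps
    k_star to v_star, while staying close to W with respect to the key
    second-moment matrix C (assumed invertible).

    W: (out_dim, in_dim) original layer weight
    C: (in_dim, in_dim)  covariance / second moment of previously stored keys
    """
    u = np.linalg.solve(C, k_star)          # update direction that least disturbs old keys
    residual = v_star - W @ k_star          # how far the current layer is from the new rule
    return W + np.outer(residual, u) / (k_star @ u)

# Toy usage with random stand-in data.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
C = np.eye(6)
k_star, v_star = rng.normal(size=6), rng.normal(size=4)
W_new = insert_rule(W, C, k_star, v_star)
assert np.allclose(W_new @ k_star, v_star)  # the new rule is stored exactly
```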

Diverse Image Generation via Self-Conditioned GANs

2 code implementations • CVPR 2020 • Steven Liu, Tongzhou Wang, David Bau, Jun-Yan Zhu, Antonio Torralba

We introduce a simple but effective unsupervised method for generating realistic and diverse images.

Image Generation

Dissecting Pruned Neural Networks

no code implementations • 29 Jun 2019 • Jonathan Frankle, David Bau

We consider the effect of removing unnecessary structure on the number of hidden units that learn disentangled representations of human-recognizable concepts, as identified by network dissection.

On the Units of GANs (Extended Abstract)

no code implementations • 29 Jan 2019 • David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba

We quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output.
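
A hedged sketch of that kind of intervention, under simplifying assumptions: zero a set of feature-map channels at one layer of a generator and measure how much a target object shrinks in the output according to a semantic segmenter. `generator`, `segmenter`, `layer`, `unit_ids`, and `object_id` are placeholders supplied by the caller; this illustrates the general idea rather than the paper's exact measurement protocol.

```python
import torch

def causal_effect_of_units(generator, segmenter, z_batch, layer, unit_ids, object_id):
    """Average drop in the target object's area when the given units are ablated.

    generator : callable, latent batch -> images
    segmenter : callable, images -> (N, H, W) integer class map
    layer     : the torch.nn.Module whose output channels are ablated
    """
    def object_area(images):
        seg = segmenter(images)
        return (seg == object_id).float().mean(dim=(1, 2))   # fraction of pixels per image

    with torch.no_grad():
        baseline = object_area(generator(z_batch))

        def ablate(module, inputs, output):
            output[:, unit_ids] = 0                           # zero the chosen feature maps
            return output

        handle = layer.register_forward_hook(ablate)
        ablated = object_area(generator(z_batch))
        handle.remove()

    return (baseline - ablated).mean().item()
```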

Interpretable Basis Decomposition for Visual Explanation

1 code implementation • ECCV 2018 • Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba

Explanations of the decisions made by a deep neural network are important for human end-users to be able to understand and diagnose the trustworthiness of the system.

Revisiting the Importance of Individual Units in CNNs via Ablation

no code implementations • 7 Jun 2018 • Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba

We confirm that unit attributes such as class selectivity are a poor predictor of impact on overall accuracy, as previously found by Morcos et al. (2018).

General Classification
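
For reference, a sketch of the class-selectivity index discussed above (following Morcos et al., 2018): the unit's mean activation on its preferred class versus the mean over all other classes. The activation vector below is made up; the paper's point is that this score is a poor predictor of the accuracy drop observed when the unit is actually ablated.

```python
import numpy as np

def class_selectivity(mean_activation_per_class):
    """Selectivity index (mu_max - mu_rest) / (mu_max + mu_rest): close to 1.0 for a
    unit that responds to a single class, 0.0 for one that responds equally to all."""
    mu = np.asarray(mean_activation_per_class, dtype=float)
    mu_max = mu.max()
    mu_rest = np.delete(mu, mu.argmax()).mean()
    return (mu_max - mu_rest) / (mu_max + mu_rest + 1e-12)

print(class_selectivity([0.1, 0.1, 0.9, 0.1]))  # strongly selective unit -> ~0.8
```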

Interpreting Deep Visual Representations via Network Dissection

1 code implementation • 15 Nov 2017 • Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba

In this work, we describe Network Dissection, a method that interprets networks by providing labels for the units of their deep visual representations.

Network Dissection: Quantifying Interpretability of Deep Visual Representations

no code implementations • CVPR 2017 • David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba

Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer.
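
A minimal sketch of the scoring idea behind Network Dissection, under simplifying assumptions: threshold a unit's upsampled activation maps at a high quantile and label the unit with the annotated concept whose segmentation masks best overlap those regions, measured by intersection-over-union. The activation array and concept masks are hypothetical inputs; the actual method draws its concepts and masks from the Broden dataset.

```python
import numpy as np

def dissect_unit(activation_maps, concept_masks, quantile=0.995):
    """Label one unit with its best-matching visual concept by IoU.

    activation_maps : (N, H, W) float array of the unit's upsampled activations
    concept_masks   : dict, concept name -> (N, H, W) boolean segmentation masks
    """
    threshold = np.quantile(activation_maps, quantile)   # per-unit activation threshold
    unit_mask = activation_maps > threshold

    def iou(mask):
        inter = np.logical_and(unit_mask, mask).sum()
        union = np.logical_or(unit_mask, mask).sum()
        return inter / union if union else 0.0

    scores = {name: iou(mask) for name, mask in concept_masks.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```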
