Search Results for author: Samaneh Azadi

Found 9 papers, 5 papers with code

Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck

no code implementations 1 Jan 2021 Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes.

Image Generation

Semantic Bottleneck Scene Generation

2 code implementations 26 Nov 2019 Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts.

Conditional Image Generation, Image-to-Image Translation +1
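To make the two-stage idea in the two entries above concrete, here is a minimal sketch of the semantic-bottleneck data flow, assuming hypothetical PyTorch modules LayoutGenerator and ConditionalImageGenerator: a latent code is mapped to a per-pixel semantic layout, which then conditions an image generator. The actual SB-GAN uses a progressive segmentation generator and a much larger conditional synthesis network, so this only illustrates the interface between the two stages, not the published architecture.

```python
# Sketch of the semantic-bottleneck data flow: z -> layout generator ->
# per-pixel class map -> conditional image generator -> RGB image.
# Module internals are placeholders, not the SB-GAN architecture.
import torch
import torch.nn as nn

NUM_CLASSES, H, W, Z_DIM = 19, 64, 64, 128  # illustrative sizes

class LayoutGenerator(nn.Module):
    """Stand-in for the unconditional segmentation generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(Z_DIM, NUM_CLASSES * H * W)
    def forward(self, z):
        logits = self.net(z).view(-1, NUM_CLASSES, H, W)
        return torch.softmax(logits, dim=1)        # soft semantic layout

class ConditionalImageGenerator(nn.Module):
    """Stand-in for the layout-conditioned image synthesis network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(NUM_CLASSES, 3, kernel_size=3, padding=1)
    def forward(self, layout):
        return torch.tanh(self.net(layout))        # RGB image in [-1, 1]

g_layout, g_image = LayoutGenerator(), ConditionalImageGenerator()
z = torch.randn(4, Z_DIM)
layout = g_layout(z)              # (4, NUM_CLASSES, 64, 64) semantic bottleneck
images = g_image(layout)          # (4, 3, 64, 64) synthesized scenes
print(layout.shape, images.shape)
```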

Compositional GAN (Extended Abstract): Learning Image-Conditional Binary Composition

no code implementations ICLR Workshop DeepGenStruct 2019 Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell

Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene.

Discriminator Rejection Sampling

1 code implementation ICLR 2019 Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian Goodfellow, Augustus Odena

We propose a rejection sampling scheme using the discriminator of a GAN to approximately correct errors in the GAN generator distribution.

Image Generation
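A simplified NumPy sketch of the rejection idea described above: treat the discriminator's logit as an estimate of log(p_data / p_g) and accept each generated sample with probability exp(logit − logit_max). The generator and discriminator below are toy placeholders, and the published algorithm adds an epsilon term and a gamma shift to keep acceptance rates practical, so this is a sketch of the mechanism rather than the paper's exact procedure.

```python
# Simplified rejection sampling in the spirit of Discriminator Rejection
# Sampling: accept generator samples with probability exp(logit - logit_max).
import numpy as np

rng = np.random.default_rng(0)

def discriminator_logit(x):
    # Placeholder for the trained discriminator's pre-sigmoid output.
    return -0.5 * np.sum(x ** 2, axis=-1)

def sample_generator(n, dim=2):
    # Placeholder for drawing n samples from the GAN generator.
    return rng.normal(loc=1.0, scale=1.0, size=(n, dim))

def drs_sample(n_wanted, burn_in=10_000, batch=1_000):
    # Estimate the maximum logit on a burn-in set, then rejection-sample.
    logit_max = discriminator_logit(sample_generator(burn_in)).max()
    accepted = []
    while sum(len(a) for a in accepted) < n_wanted:
        x = sample_generator(batch)
        p_accept = np.exp(discriminator_logit(x) - logit_max)  # in (0, 1]
        keep = rng.random(batch) < p_accept
        accepted.append(x[keep])
    return np.concatenate(accepted)[:n_wanted]

samples = drs_sample(500)
print(samples.shape)
```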

Compositional GAN: Learning Image-Conditional Binary Composition

1 code implementation 19 Jul 2018 Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell

Generative Adversarial Networks (GANs) can produce images of remarkable complexity and realism but are generally structured to sample from a single latent source, ignoring the explicit spatial interaction between multiple entities that could be present in a scene.
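For illustration, a minimal sketch of image-conditional binary composition, assuming a hypothetical CompositionGenerator that simply concatenates the two conditioning objects channel-wise and decodes a composite image. The actual Compositional GAN also learns relative spatial transformations and enforces self-consistency cycles; this only shows the conditional input/output structure.

```python
# Minimal sketch of image-conditional binary composition: a network that
# takes two object images and predicts their composite. Placeholder layers,
# not the published Compositional GAN.
import torch
import torch.nn as nn

class CompositionGenerator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        # Concatenate the two conditioning objects along the channel axis.
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, obj_a, obj_b):
        return self.net(torch.cat([obj_a, obj_b], dim=1))

gen = CompositionGenerator()
obj_a = torch.randn(1, 3, 128, 128)   # e.g. a chair crop
obj_b = torch.randn(1, 3, 128, 128)   # e.g. a table crop
composite = gen(obj_a, obj_b)         # (1, 3, 128, 128) composed scene
print(composite.shape)
```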

Multi-Content GAN for Few-Shot Font Style Transfer

6 code implementations CVPR 2018 Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, Trevor Darrell

In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface.

Font Style Transfer
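A sketch of the few-shot glyph-completion setup described above, under the assumption that the 26 glyph slots are stacked as channels with unobserved letters zeroed out, and a network predicts the full alphabet in the observed style. The hypothetical GlyphCompleter stands in for MC-GAN's stacked glyph and ornamentation networks and their adversarial training, which are not reproduced here; only the tensor layout of the task is illustrated.

```python
# Sketch of few-shot glyph completion: stack all 26 glyph slots as channels,
# zero out the unobserved ones, and predict the full alphabet.
import torch
import torch.nn as nn

NUM_GLYPHS, SIZE = 26, 64

class GlyphCompleter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_GLYPHS, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, NUM_GLYPHS, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, partial_glyphs):
        return self.net(partial_glyphs)

observed = torch.zeros(1, NUM_GLYPHS, SIZE, SIZE)
observed[:, [0, 4, 19]] = torch.rand(1, 3, SIZE, SIZE)  # only 'A', 'E', 'T' observed
full_alphabet = GlyphCompleter()(observed)              # (1, 26, 64, 64) predicted glyphs
print(full_alphabet.shape)
```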

Learning Detection with Diverse Proposals

1 code implementation CVPR 2017 Samaneh Azadi, Jiashi Feng, Trevor Darrell

To predict a set of diverse and informative proposals with enriched representations, this paper introduces a differentiable Determinantal Point Process (DPP) layer that can augment object detection architectures.

Object Detection
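To illustrate why a DPP favors diverse, informative proposals, here is a small NumPy sketch: build a kernel L = diag(q) · S · diag(q) from per-proposal scores q and IoU-based similarities S, then greedily pick the subset whose principal minor has the largest determinant, trading score against overlap. The paper's contribution is a differentiable DPP layer trained inside the detector end to end; the code below, with hypothetical helper names, only sketches the selection criterion.

```python
# Greedy DPP-style proposal selection over a quality-weighted similarity kernel.
import numpy as np

def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def greedy_dpp_select(boxes, scores, k):
    n = len(boxes)
    S = np.array([[iou(boxes[i], boxes[j]) for j in range(n)] for i in range(n)])
    L = np.outer(scores, scores) * S          # L = diag(q) * S * diag(q)
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)] + 1e-6 * np.eye(len(idx)))
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = j, logdet
        selected.append(best)
    return selected

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = np.array([0.9, 0.85, 0.7])
print(greedy_dpp_select(boxes, scores, k=2))  # -> [0, 2]: skips the near-duplicate of box 0
```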

Auxiliary Image Regularization for Deep CNNs with Noisy Labels

no code implementations 22 Nov 2015 Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, Trevor Darrell

Precisely labeled datasets with a sufficient number of samples are very important for training deep convolutional neural networks (CNNs).

Image Classification
