Search Results for author: Gökhan Yildirim

Found 6 papers, 3 papers with code

Grid Partitioned Attention: Efficient Transformer Approximation with Inductive Bias for High Resolution Detail Generation

1 code implementation 8 Jul 2021 Nikolay Jetchev, Gökhan Yildirim, Christian Bracher, Roland Vollgraf

Attention is a general reasoning mechanism that can flexibly deal with image information, but its memory requirements have so far made it impractical for high-resolution image generation (see the sketch after this entry).

Tasks: Conditional Image Generation, Deep Attention, +1
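For orientation, here is a minimal sketch of why full attention is memory-hungry at high resolution and how restricting attention to non-overlapping blocks of a partitioned sequence reduces the cost. The function names and the block size are illustrative assumptions; this is a generic block-local scheme, not the paper's actual Grid Partitioned Attention.

```python
# Illustrative only: full attention builds an (N x N) score matrix, while
# block-local attention builds (N / block) score matrices of size (block x block).
import torch
import torch.nn.functional as F

def full_attention(q, k, v):
    # q, k, v: (batch, N, dim); scores are (batch, N, N), quadratic in N.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def block_local_attention(q, k, v, block=64):
    # Partition the sequence into blocks and attend only within each block,
    # so scores are (batch, N // block, block, block).
    b, n, d = q.shape
    q, k, v = (t.reshape(b, n // block, block, d) for t in (q, k, v))
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    out = F.softmax(scores, dim=-1) @ v
    return out.reshape(b, n, d)

q = k = v = torch.randn(1, 4096, 32)  # e.g. a 64x64 feature map, flattened
print(full_attention(q, k, v).shape, block_local_attention(q, k, v).shape)
```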

Evaluating Salient Object Detection in Natural Images with Multiple Objects having Multi-level Saliency

1 code implementation 19 Mar 2020 Gökhan Yildirim, Debashis Sen, Mohan Kankanhalli, Sabine Süsstrunk

In this paper, based on three subjective experiments on a novel image dataset, we corroborate that objects in natural images are inherently perceived to have varying levels of importance.

Tasks: Object, object-detection, +3

Transform the Set: Memory Attentive Generation of Guided and Unguided Image Collages

no code implementations 16 Oct 2019 Nikolay Jetchev, Urs Bergmann, Gökhan Yildirim

Cutting and pasting image segments feels intuitive: the choice of source templates gives artists flexibility in recombining existing source material.

Tasks: BIG-bench Machine Learning, Image Generation

Unlabeled Disentangling of GANs with Guided Siamese Networks

no code implementations ICLR 2019 Gökhan Yildirim, Nikolay Jetchev, Urs Bergmann

In addition, we illustrate that the simple guidance functions we use in UD-GAN-G allow us to directly capture the desired variations in the data.
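As a rough illustration of the idea of a "guidance function" paired with a Siamese objective, the sketch below embeds two generated images with a hand-crafted statistic (mean color, an assumed example) and applies a standard contrastive loss. The specific guidance function, loss form, and names are assumptions for illustration and are not taken from UD-GAN-G.

```python
# Hypothetical sketch: a cheap guidance statistic plus a contrastive loss that
# ties it to whether two samples shared the guided latent chunk.
import torch

def mean_color_guidance(images):
    # images: (batch, 3, H, W) in [0, 1]; per-channel mean color as a simple statistic.
    return images.mean(dim=(2, 3))

def siamese_guidance_loss(img_a, img_b, share_chunk, margin=1.0):
    # Pull guided embeddings together when the two images were generated from
    # the same latent chunk (share_chunk=1), push them apart otherwise.
    emb_a, emb_b = mean_color_guidance(img_a), mean_color_guidance(img_b)
    dist = (emb_a - emb_b).pow(2).sum(dim=1)
    pos = share_chunk * dist
    neg = (1.0 - share_chunk) * torch.clamp(margin - dist.sqrt(), min=0).pow(2)
    return (pos + neg).mean()

a, b = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
print(siamese_guidance_loss(a, b, share_chunk=torch.tensor([1.0, 0.0, 1.0, 0.0])))
```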

Disentangling Multiple Conditional Inputs in GANs

2 code implementations 20 Jun 2018 Gökhan Yildirim, Calvin Seward, Urs Bergmann

In this paper, we propose a method that disentangles the effects of multiple input conditions in Generative Adversarial Networks (GANs).
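For context, here is a minimal sketch of a generator that consumes several condition inputs alongside the noise vector, with one embedding branch per condition. This is a generic multi-conditional GAN setup under assumed names and dimensions (e.g. "color" and "shape" classes); it does not reproduce the disentangling method proposed in the paper.

```python
# Generic multi-conditional generator sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class MultiConditionGenerator(nn.Module):
    def __init__(self, noise_dim=64, color_classes=10, shape_classes=8, out_dim=784):
        super().__init__()
        # Separate embeddings per condition keep each input's effect in its own
        # subspace before the branches are combined with the noise.
        self.color_emb = nn.Embedding(color_classes, 16)
        self.shape_emb = nn.Embedding(shape_classes, 16)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 16 + 16, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, color_id, shape_id):
        h = torch.cat([z, self.color_emb(color_id), self.shape_emb(shape_id)], dim=1)
        return self.net(h)

g = MultiConditionGenerator()
x = g(torch.randn(4, 64), torch.randint(0, 10, (4,)), torch.randint(0, 8, (4,)))
print(x.shape)  # torch.Size([4, 784])
```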
