Search Results for author: Utkarsh Ojha

Found 10 papers, 7 papers with code

Edit One for All: Interactive Batch Image Editing

no code implementations · 18 Jan 2024 · Thao Nguyen, Utkarsh Ojha, Yuheng Li, Haotian Liu, Yong Jae Lee

With increased human control, it is now possible to edit an image in a plethora of ways: from specifying in text what we want to change, to directly dragging the contents of the image in an interactive point-based manner.

Visual Instruction Inversion: Image Editing via Visual Prompting

1 code implementation · 26 Jul 2023 · Thao Nguyen, Yuheng Li, Utkarsh Ojha, Yong Jae Lee

Given pairs of examples that represent the "before" and "after" images of an edit, our goal is to learn a text-based editing direction that can be used to perform the same edit on new images.

Visual Prompting

Towards Universal Fake Image Detectors that Generalize Across Generative Models

1 code implementation · CVPR 2023 · Utkarsh Ojha, Yuheng Li, Yong Jae Lee

In this work, we first show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models when trained only on GAN-generated fake images.

Classification · Language Modelling

Generating Furry Cars: Disentangling Object Shape & Appearance across Multiple Domains

no code implementations · 5 Apr 2021 · Utkarsh Ojha, Krishna Kumar Singh, Yong Jae Lee

We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars).

Disentanglement · Object

Generating Furry Cars: Disentangling Object Shape and Appearance across Multiple Domains

no code implementations · ICLR 2021 · Utkarsh Ojha, Krishna Kumar Singh, Yong Jae Lee

We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars).

Disentanglement · Object

MixNMatch: Multifactor Disentanglement and Encoding for Conditional Image Generation

3 code implementations · CVPR 2020 · Yuheng Li, Krishna Kumar Singh, Utkarsh Ojha, Yong Jae Lee

We present MixNMatch, a conditional generative model that learns to disentangle and encode background, object pose, shape, and texture from real images with minimal supervision, for mix-and-match image generation.

Conditional Image Generation · Disentanglement

Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Class-Imbalanced Data

1 code implementation · NeurIPS 2020 · Utkarsh Ojha, Krishna Kumar Singh, Cho-Jui Hsieh, Yong Jae Lee

We propose a novel unsupervised generative model that learns to disentangle object identity from other low-level aspects in class-imbalanced data.

Object · Representation Learning

FineGAN: Unsupervised Hierarchical Disentanglement for Fine-Grained Object Generation and Discovery

1 code implementation · CVPR 2019 · Krishna Kumar Singh, Utkarsh Ojha, Yong Jae Lee

We propose FineGAN, a novel unsupervised GAN framework, which disentangles the background, object shape, and object appearance to hierarchically generate images of fine-grained object categories.

Conditional Image Generation · Disentanglement · +3

NAG: Network for Adversary Generation

1 code implementation · CVPR 2018 · Konda Reddy Mopuri, Utkarsh Ojha, Utsav Garg, R. Venkatesh Babu

Our trained generator network attempts to capture the distribution of adversarial perturbations for a given classifier and readily generates a wide variety of such perturbations.
