Search Results for author: Duygu Ceylan

Found 57 papers, 17 papers with code

On Unifying Video Generation and Camera Pose Estimation

no code implementations 2 Jan 2025 Chun-Hao Paul Huang, Jae Shin Yoon, Hyeonho Jeong, Niloy Mitra, Duygu Ceylan

Inspired by the emergent 3D capabilities in image generators, we explore whether video generators similarly exhibit 3D awareness.

Camera Pose Estimation · Pose Estimation +1

Free-viewpoint Human Animation with Pose-correlated Reference Selection

no code implementations 23 Dec 2024 Fa-Ting Hong, Zhan Xu, Haiyang Liu, Qinjie Lin, Luchuan Song, Zhixin Shu, Yang Zhou, Duygu Ceylan, Dan Xu

Diffusion-based human animation aims to animate a human character based on a source human image as well as driving signals such as a sequence of poses.

Human Animation

GANFusion: Feed-Forward Text-to-3D with Diffusion in GAN Space

no code implementations 21 Dec 2024 Souhaib Attaiki, Paul Guerrero, Duygu Ceylan, Niloy J. Mitra, Maks Ovsjanikov

We observe that GAN- and diffusion-based generators have complementary qualities: GANs can be trained efficiently with 2D supervision to produce high-quality 3D objects but are hard to condition on text.

Denoising · Text to 3D

Track4Gen: Teaching Video Diffusion Models to Track Points Improves Video Generation

no code implementations 8 Dec 2024 Hyeonho Jeong, Chun-Hao Paul Huang, Jong Chul Ye, Niloy Mitra, Duygu Ceylan

While recent foundational video generators produce visually rich output, they still struggle with appearance drift, where objects gradually degrade or change inconsistently across frames, breaking visual coherence.

Point Tracking · Video Generation

FloAt: Flow Warping of Self-Attention for Clothing Animation Generation

no code implementations 22 Nov 2024 Swasti Shreya Mishra, Kuldeep Kulkarni, Duygu Ceylan, Balaji Vasan Srinivasan

The input to our model is a text prompt describing the type of clothing and its texture (e.g., leopard, striped, or plain), together with a sequence of normal maps that capture the underlying animation we desire in the output.

SSIM

HyperGAN-CLIP: A Unified Framework for Domain Adaptation, Image Synthesis and Manipulation

1 code implementation 19 Nov 2024 Abdul Basit Anees, Ahmet Canberk Baykal, Muhammed Burak Kizil, Duygu Ceylan, Erkut Erdem, Aykut Erdem

Towards this end, in this study, we present a novel framework that significantly extends the capabilities of a pre-trained StyleGAN by integrating CLIP space via hypernetworks.

Domain Adaptation · Image Manipulation +1
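
As a rough illustration of the hypernetwork mechanism this snippet describes, here is a minimal PyTorch sketch (module and parameter names are my own, not the authors' released code): a small network maps a CLIP embedding to a low-rank update of a frozen generator layer's weights, so a single pre-trained generator can be steered per condition.

```python
import torch
import torch.nn as nn

class HyperModulatedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, clip_dim=512, rank=4):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)  # stands in for a pretrained generator layer
        for p in self.base.parameters():
            p.requires_grad_(False)              # the generator stays frozen
        # hypernetwork: CLIP embedding -> low-rank weight update U @ V
        self.to_u = nn.Linear(clip_dim, out_dim * rank)
        self.to_v = nn.Linear(clip_dim, rank * in_dim)
        self.rank = rank

    def forward(self, x, clip_emb):
        b = x.shape[0]
        u = self.to_u(clip_emb).view(b, -1, self.rank)
        v = self.to_v(clip_emb).view(b, self.rank, -1)
        w = self.base.weight.unsqueeze(0) + torch.bmm(u, v)    # per-sample weights
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1) + self.base.bias

layer = HyperModulatedLinear(64, 128)
x, clip_emb = torch.randn(2, 64), torch.randn(2, 512)
print(layer(x, clip_emb).shape)  # torch.Size([2, 128])
```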

Boosting Camera Motion Control for Video Diffusion Transformers

no code implementations 14 Oct 2024 Soon Yau Cheong, Duygu Ceylan, Armin Mustafa, Andrew Gilbert, Chun-Hao Paul Huang

While U-Net-based models have shown promising results for camera control, transformer-based diffusion models (DiT), the preferred architecture for large-scale video generation, suffer from severe degradation in camera motion accuracy.

Video Generation

Learned Single-Pass Multitasking Perceptual Graphics for Immersive Displays

no code implementations 31 Jul 2024 Doğa Yılmaz, Towaki Takikawa, Duygu Ceylan, Kaan Akşit

Uniquely, a single inference step of our model supports different permutations of these perceptual tasks at different prompted rates (i.e., mildly, lightly), eliminating the need for daisy-chaining multiple models to get the desired perceptual effect.

Image Denoising
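
A toy sketch of how a single network can be prompted for different perceptual tasks at different rates (task and rate names here are placeholders, not the paper's): the (task, rate) pair is embedded and injected via feature-wise modulation, so one forward pass replaces a daisy-chain of models.

```python
import torch
import torch.nn as nn

TASKS = ['foveation', 'denoising']           # placeholder task names
RATES = ['mildly', 'lightly', 'strongly']    # prompted strengths

class PromptedEffects(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.embed = nn.Embedding(len(TASKS) * len(RATES), 2 * ch)
        self.body = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, img, task, rate):
        idx = torch.tensor([TASKS.index(task) * len(RATES) + RATES.index(rate)])
        scale, shift = self.embed(idx)[0].chunk(2)       # FiLM-style modulation
        h = self.body(img) * scale.view(1, -1, 1, 1) + shift.view(1, -1, 1, 1)
        return self.head(h)

model = PromptedEffects()
out = model(torch.rand(1, 3, 64, 64), 'denoising', 'mildly')
print(out.shape)  # torch.Size([1, 3, 64, 64])
```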

SuperGaussian: Repurposing Video Models for 3D Super Resolution

no code implementations 2 Jun 2024 Yuan Shen, Duygu Ceylan, Paul Guerrero, Zexiang Xu, Niloy J. Mitra, Shenlong Wang, Anna Frühstück

We demonstrate that it is possible to directly repurpose existing (pretrained) video models for 3D super-resolution, thus sidestepping the shortage of large repositories of high-quality 3D training models.

Super-Resolution

Neural Garment Dynamics via Manifold-Aware Transformers

1 code implementation 13 May 2024 Peizhuo Li, Tuanfeng Y. Wang, Timur Levent Kesdogan, Duygu Ceylan, Olga Sorkine-Hornung

Data-driven and learning-based solutions for modeling dynamic garments have advanced significantly, especially in the context of digital humans.

SonicDiffusion: Audio-Driven Image Generation and Editing with Pretrained Diffusion Models

no code implementations 1 May 2024 Burak Can Biner, Farrin Marouf Sofian, Umur Berkay Karakaş, Duygu Ceylan, Erkut Erdem, Aykut Erdem

In addition to audio-conditioned image generation, our method can also be used in conjunction with diffusion-based editing methods to enable audio-conditioned image editing.

Text-to-Image Generation

Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models

no code implementations CVPR 2024 Shengqu Cai, Duygu Ceylan, Matheus Gadelha, Chun-Hao Paul Huang, Tuanfeng Yang Wang, Gordon Wetzstein

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path.

Text-to-Image Generation · Video Generation

BLiSS: Bootstrapped Linear Shape Space

no code implementations 4 Sep 2023 Sanjeev Muralikrishnan, Chun-Hao Paul Huang, Duygu Ceylan, Niloy J. Mitra

Morphable models are fundamental to numerous human-centered processes as they offer a simple yet expressive shape space.

GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency

no code implementations 22 Aug 2023 Omid Taheri, Yi Zhou, Dimitrios Tzionas, Yang Zhou, Duygu Ceylan, Soren Pirk, Michael J. Black

In contrast, we introduce GRIP, a learning-based method that takes, as input, the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.

Mixed Reality · Object

CLIP-Guided StyleGAN Inversion for Text-Driven Real Image Editing

no code implementations 17 Jul 2023 Ahmet Canberk Baykal, Abdul Basit Anees, Duygu Ceylan, Erkut Erdem, Aykut Erdem, Deniz Yuret

Existing approaches for editing images using language either resort to instance-level latent code optimization or map predefined text prompts to some editing directions in the latent space.

Attribute

VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs

no code implementations ICCV 2023 Moayed Haji Ali, Andrew Bond, Tolga Birdal, Duygu Ceylan, Levent Karacan, Erkut Erdem, Aykut Erdem

However, the applicability of such advancements to the video domain has been hindered by the difficulty of representing and controlling videos in the latent space of GANs.

Image Animation · Video Editing +1

Neural Image-based Avatars: Generalizable Radiance Fields for Human Avatar Modeling

no code implementations 10 Apr 2023 Youngjoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs

We present a method that enables synthesizing novel views and novel poses of arbitrary human performers from sparse multi-view images.

NeRF

Pix2Video: Video Editing using Image Diffusion

1 code implementation ICCV 2023 Duygu Ceylan, Chun-Hao Paul Huang, Niloy J. Mitra

Our method works in two simple steps: first, we use a pre-trained structure-guided (e.g., depth) image diffusion model to perform text-guided edits on an anchor frame; then, in the key step, we progressively propagate the changes to the future frames via self-attention feature injection to adapt the core denoising step of the diffusion model.

Denoising · Text Generation +1
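
A minimal sketch of the self-attention feature-injection step described in the snippet (illustrative code, not the released Pix2Video implementation): while denoising a frame, keys and values are taken from the edited anchor frame so the current frame attends to the anchor's features, keeping the edit temporally coherent.

```python
import torch
import torch.nn as nn

def injected_self_attention(q_proj, k_proj, v_proj, feat_t, feat_anchor):
    """feat_t / feat_anchor: (tokens, dim) latents of the current and anchor frames."""
    q = q_proj(feat_t)
    k = k_proj(feat_anchor)   # keys/values come from the anchor frame
    v = v_proj(feat_anchor)   # instead of the frame being denoised
    attn = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

dim = 64
q_proj, k_proj, v_proj = (nn.Linear(dim, dim) for _ in range(3))
out = injected_self_attention(q_proj, k_proj, v_proj,
                              torch.randn(16, dim), torch.randn(16, dim))
print(out.shape)  # torch.Size([16, 64])
```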

Normal-guided Garment UV Prediction for Human Re-texturing

no code implementations CVPR 2023 Yasamin Jafarian, Tuanfeng Y. Wang, Duygu Ceylan, Jimei Yang, Nathan Carr, Yi Zhou, Hyun Soo Park

To edit human videos in a physically plausible way, a texture map must take into account not only the garment transformation induced by the body movements and clothes fitting, but also its 3D fine-grained surface geometry.

3D Reconstruction · Prediction

Motion Guided Deep Dynamic 3D Garments

1 code implementation 23 Sep 2022 Meng Zhang, Duygu Ceylan, Niloy J. Mitra

Technically, we model garment dynamics, driven by the input character motion, by predicting per-frame local displacements in a canonical state of the garment that is enriched with frame-dependent skinning weights to bring the garment into the global space.
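
A minimal sketch of that formulation, assuming standard linear blend skinning (function and variable names are illustrative, not the paper's code):

```python
import torch

def pose_garment(canon_verts, displacements, skin_weights, bone_transforms):
    """
    canon_verts:     (V, 3)    canonical garment vertices
    displacements:   (V, 3)    predicted local dynamics for this frame
    skin_weights:    (V, J)    frame-dependent skinning weights (rows sum to 1)
    bone_transforms: (J, 4, 4) per-bone rigid transforms for this frame
    """
    v = canon_verts + displacements                               # dynamics in canonical space
    v_h = torch.cat([v, torch.ones(v.shape[0], 1)], dim=1)        # homogeneous coordinates
    per_bone = torch.einsum('jab,vb->vja', bone_transforms, v_h)  # (V, J, 4)
    blended = (skin_weights.unsqueeze(-1) * per_bone).sum(dim=1)  # linear blend skinning
    return blended[:, :3]

V, J = 1000, 24
posed = pose_garment(torch.randn(V, 3), 0.01 * torch.randn(V, 3),
                     torch.softmax(torch.randn(V, J), dim=1),
                     torch.eye(4).expand(J, 4, 4))
print(posed.shape)  # torch.Size([1000, 3])
```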

Learning Visibility for Robust Dense Human Body Estimation

1 code implementation 23 Aug 2022 Chun-Han Yao, Jimei Yang, Duygu Ceylan, Yi Zhou, Yang Zhou, Ming-Hsuan Yang

An alternative approach is to estimate dense vertices of a predefined template body in the image space.

A Repulsive Force Unit for Garment Collision Handling in Neural Networks

no code implementations 28 Jul 2022 Qingyang Tan, Yi Zhou, Tuanfeng Wang, Duygu Ceylan, Xin Sun, Dinesh Manocha

Despite recent success, deep learning-based methods for predicting 3D garment deformation under body motion suffer from interpenetration problems between the garment and the body.

RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects

no code implementations 14 May 2022 Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee

We test the effectiveness of our representation on the human image harmonization task by predicting shading that is coherent with a given background image.

Image Harmonization

Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera

no code implementations CVPR 2022 Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park

The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., there exist a number of cloth geometric configurations for a given pose, depending on the way the body has moved.

Decoder

Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering

1 code implementation NeurIPS 2021 Youngjoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs

To tackle this, we propose Neural Human Performer, a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.

Generalizable Novel View Synthesis · NeRF

Contact-Aware Retargeting of Skinned Motion

no code implementations ICCV 2021 Ruben Villegas, Duygu Ceylan, Aaron Hertzmann, Jimei Yang, Jun Saito

Self-contacts, such as when the hands touch each other, the torso, or the head, are important attributes of human body language and dynamics, yet existing methods do not model or preserve these contacts.

Motion Estimation · motion retargeting

CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point Clouds

1 code implementation ICCV 2021 Eric-Tuan Lê, Minhyuk Sung, Duygu Ceylan, Radomir Mech, Tamy Boubekeur, Niloy J. Mitra

We present Cascaded Primitive Fitting Networks (CPFN), which rely on an adaptive patch sampling network to assemble detection results from global and local primitive detection networks.

Vocal Bursts Intensity Prediction

Task-Generic Hierarchical Human Motion Prior using VAEs

no code implementations 7 Jun 2021 Jiaman Li, Ruben Villegas, Duygu Ceylan, Jimei Yang, Zhengfei Kuang, Hao Li, Yajie Zhao

We demonstrate the effectiveness of our hierarchical motion variational autoencoder in a variety of tasks including video-based human pose estimation, motion completion from partial observations, and motion synthesis from sparse key-frames.

Motion Synthesis · Pose Estimation
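
A toy two-level motion VAE in the spirit of the snippet (illustrative architecture, not the paper's): coarse and fine latents are sampled per time step and decoded back to poses, and the resulting prior could then regularize pose estimation, completion, or synthesis.

```python
import torch
import torch.nn as nn

class TwoLevelMotionVAE(nn.Module):
    def __init__(self, pose_dim=63, z_coarse=32, z_fine=64):
        super().__init__()
        self.enc_c = nn.GRU(pose_dim, 2 * z_coarse, batch_first=True)
        self.enc_f = nn.GRU(pose_dim + z_coarse, 2 * z_fine, batch_first=True)
        self.dec = nn.GRU(z_coarse + z_fine, pose_dim, batch_first=True)

    @staticmethod
    def sample(stats):                           # reparameterization trick
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, motion):                   # motion: (B, T, pose_dim)
        zc = self.sample(self.enc_c(motion)[0])  # coarse latent per step
        zf = self.sample(self.enc_f(torch.cat([motion, zc], -1))[0])
        recon, _ = self.dec(torch.cat([zc, zf], -1))
        return recon

vae = TwoLevelMotionVAE()
print(vae(torch.randn(2, 30, 63)).shape)  # torch.Size([2, 30, 63])
```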

A Deep Emulator for Secondary Motion of 3D Characters

no code implementations CVPR 2021 Mianlun Zheng, Yi Zhou, Duygu Ceylan, Jernej Barbič

Being a local method, our network is independent of the mesh topology and generalizes to arbitrarily shaped 3D character meshes at test time.
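
A sketch of why such a local formulation is topology-independent (illustrative, not the paper's network): each vertex is updated from its own state plus an aggregate over its one-ring neighbors, so the same weights apply to any mesh connectivity.

```python
import torch
import torch.nn as nn

class LocalDynamicsStep(nn.Module):
    def __init__(self, state_dim=6):             # e.g., position + velocity
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 3))  # per-vertex correction

    def forward(self, state, edges):
        # state: (V, 6); edges: (E, 2) one-ring connectivity of any mesh
        src, dst = edges[:, 0], edges[:, 1]
        agg = torch.zeros_like(state)
        agg.index_add_(0, dst, state[src])        # sum of neighbor states
        deg = torch.zeros(state.shape[0], 1).index_add_(
            0, dst, torch.ones(edges.shape[0], 1))
        agg = agg / deg.clamp(min=1)              # mean over neighbors
        return self.mlp(torch.cat([state, agg], dim=1))

step = LocalDynamicsStep()
delta = step(torch.randn(100, 6), torch.randint(0, 100, (300, 2)))
print(delta.shape)  # torch.Size([100, 3])
```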

Dynamic Neural Garments

no code implementations 23 Feb 2021 Meng Zhang, Duygu Ceylan, Tuanfeng Wang, Niloy J. Mitra

A vital task of the wider digital human effort is the creation of realistic garments on digital avatars, both in the form of characteristic fold patterns and wrinkles in static frames and in the richness of garment dynamics under the avatars' motion.

Neural Rendering

Intuitive, Interactive Beard and Hair Synthesis with Generative Models

1 code implementation CVPR 2020 Kyle Olszewski, Duygu Ceylan, Jun Xing, Jose Echevarria, Zhili Chen, Weikai Chen, Hao Li

We present an interactive approach to synthesizing realistic variations in facial hair in images, ranging from subtle edits to existing hair to the addition of complex and challenging hair in images of clean-shaven subjects.

3D geometry

Learning Generative Models of Shape Handles

no code implementations CVPR 2020 Matheus Gadelha, Giorgio Gori, Duygu Ceylan, Radomir Mech, Nathan Carr, Tamy Boubekeur, Rui Wang, Subhransu Maji

We present a generative model to synthesize 3D shapes as sets of handles -- lightweight proxies that approximate the original 3D shape -- for applications in interactive editing, shape parsing, and building compact 3D representations.
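
A toy decoder in the spirit of shape handles (illustrative, not the authors' code): a latent code decodes to a fixed-size set of lightweight proxies, here axis-aligned cuboids with per-handle existence scores.

```python
import torch
import torch.nn as nn

class HandleDecoder(nn.Module):
    def __init__(self, z_dim=64, num_handles=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_handles * 7))
        self.num_handles = num_handles

    def forward(self, z):
        out = self.net(z).view(-1, self.num_handles, 7)
        center = out[..., :3]                    # cuboid center
        size = out[..., 3:6].exp()               # positive extents
        exist = out[..., 6:].sigmoid()           # probability the handle is used
        return center, size, exist

dec = HandleDecoder()
c, s, e = dec(torch.randn(4, 64))
print(c.shape, s.shape, e.shape)  # (4, 16, 3) (4, 16, 3) (4, 16, 1)
```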

Unsupervised Learning of Intrinsic Structural Representation Points

1 code implementation CVPR 2020 Nenglun Chen, Lingjie Liu, Zhiming Cui, Runnan Chen, Duygu Ceylan, Changhe Tu, Wenping Wang

The 3D structure points produced by our method encode the shape structure intrinsically and exhibit semantic consistency across all the shape instances with similar structures.

3DN: 3D Deformation Network

1 code implementation CVPR 2019 Weiyue Wang, Duygu Ceylan, Radomir Mech, Ulrich Neumann

Given such a source 3D model and a target which can be a 2D image, 3D model, or a point cloud acquired as a depth scan, we introduce 3DN, an end-to-end network that deforms the source model to resemble the target.

3D Shape Generation
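
A toy version of the deformation idea (illustrative, not the released 3DN code): global codes for the source and target condition an MLP that predicts a per-vertex offset, so the source mesh keeps its connectivity while moving toward the target.

```python
import torch
import torch.nn as nn

class OffsetDecoder(nn.Module):
    def __init__(self, code_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 2 * code_dim, 256), nn.ReLU(),
            nn.Linear(256, 3))                   # per-vertex offset

    def forward(self, verts, src_code, tgt_code):  # verts: (V, 3)
        codes = torch.cat([src_code, tgt_code]).expand(verts.shape[0], -1)
        return verts + self.mlp(torch.cat([verts, codes], dim=1))

dec = OffsetDecoder()
deformed = dec(torch.randn(500, 3), torch.randn(128), torch.randn(128))
print(deformed.shape)  # torch.Size([500, 3])
```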

SwapNet: Garment Transfer in Single View Images

1 code implementation ECCV 2018 Amit Raj, Patsorn Sangkloy, Huiwen Chang, Jingwan Lu, Duygu Ceylan, James Hays

Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body.

Ranked #1 on Virtual Try-on on FashionIQ (using extra training data)

Virtual Try-on

Learning a Shared Shape Space for Multimodal Garment Design

no code implementations 29 Jun 2018 Tuanfeng Y. Wang, Duygu Ceylan, Jovan Popovic, Niloy J. Mitra

Designing real and virtual garments is becoming extremely demanding with rapidly changing fashion trends and increasing need for synthesizing realistic dressed digital humans for various applications.

Graphics

iMapper: Interaction-guided Joint Scene and Human Motion Mapping from Monocular Videos

no code implementations 20 Jun 2018 Aron Monszpart, Paul Guerrero, Duygu Ceylan, Ersin Yumer, Niloy J. Mitra

A long-standing challenge in scene analysis is the recovery of scene arrangements under moderate to heavy occlusion, directly from monocular video.

Human-Object Interaction Detection · Object

PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image

1 code implementation CVPR 2018 Chen Liu, Jimei Yang, Duygu Ceylan, Ersin Yumer, Yasutaka Furukawa

The proposed end-to-end DNN learns to directly infer a set of plane parameters and corresponding plane segmentation masks from a single RGB image.

Depth Estimation · Depth Prediction +1
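
A rough sketch of the output heads such a DNN might use (hypothetical module, not the released PlaneNet code): from a shared feature map, one head regresses K plane parameters and another predicts K+1 per-pixel masks (K planes plus non-planar).

```python
import torch
import torch.nn as nn

class PlaneHeads(nn.Module):
    def __init__(self, feat_ch=256, num_planes=10):
        super().__init__()
        self.param_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, num_planes * 3))          # plane normal * offset
        self.mask_head = nn.Conv2d(feat_ch, num_planes + 1, 1)
        self.num_planes = num_planes

    def forward(self, feats):                            # feats: (B, C, H, W)
        params = self.param_head(feats).view(-1, self.num_planes, 3)
        masks = self.mask_head(feats).softmax(dim=1)     # per-pixel plane assignment
        return params, masks

heads = PlaneHeads()
p, m = heads(torch.randn(2, 256, 32, 32))
print(p.shape, m.shape)  # torch.Size([2, 10, 3]) torch.Size([2, 11, 32, 32])
```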

Neural Kinematic Networks for Unsupervised Motion Retargetting

1 code implementation CVPR 2018 Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee

We propose a recurrent neural network architecture with a Forward Kinematics layer and cycle consistency based adversarial training objective for unsupervised motion retargetting.
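
A toy forward-kinematics layer in the spirit of the snippet (illustrative, not the released code): local joint rotations are composed along the kinematic chain, so global joint positions stay differentiable in the predicted rotations and gradients flow through the skeleton.

```python
import torch

def fk_chain(rotations, offsets, parents):
    """
    rotations: (J, 3, 3) local joint rotations
    offsets:   (J, 3)    bone offsets relative to the parent joint
    parents:   list[int] parent index per joint (-1 for the root)
    returns    (J, 3)    global joint positions
    """
    glob_rot = [None] * len(parents)
    glob_pos = [None] * len(parents)
    for j, p in enumerate(parents):
        if p == -1:
            glob_rot[j], glob_pos[j] = rotations[j], torch.zeros(3)
        else:
            glob_rot[j] = glob_rot[p] @ rotations[j]        # compose down the chain
            glob_pos[j] = glob_pos[p] + glob_rot[p] @ offsets[j]
    return torch.stack(glob_pos)

parents = [-1, 0, 1, 2]                         # a 4-joint chain
rots = torch.eye(3).expand(4, 3, 3).clone()
offs = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 1., 0.], [0., 1., 0.]])
print(fk_chain(rots, offs, parents))            # joints stacked along +y
```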

Learning Dense Facial Correspondences in Unconstrained Images

no code implementations ICCV 2017 Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li

To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions.

Face Alignment · Face Model

3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks

2 code implementations ICCV 2017 Chuhang Zou, Ersin Yumer, Jimei Yang, Duygu Ceylan, Derek Hoiem

The success of various applications, including robotics, digital content creation, and visualization, demands a structured and abstract representation of the 3D world from limited sensor data.

Retrieval

Material Editing Using a Physically Based Rendering Network

no code implementations ICCV 2017 Guilin Liu, Duygu Ceylan, Ersin Yumer, Jimei Yang, Jyh-Ming Lien

We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task.

Image Generation

Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks

no code implementations 14 Jun 2017 Haibin Huang, Evangelos Kalogerakis, Siddhartha Chaudhuri, Duygu Ceylan, Vladimir G. Kim, Ersin Yumer

We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching.

Semantic Segmentation

Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

2 code implementations CVPR 2017 Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, Alexander C. Berg

Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible both in the input and novel views and then re-cast the remaining synthesis problem as image completion.

Image Generation · Novel View Synthesis
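
A minimal sketch of the two-stage idea (illustrative, not the authors' code): pixels visible in both views are warped from the input with a sampling grid, and a completion network fills in the disoccluded remainder; here the completion net is a dummy passthrough.

```python
import torch
import torch.nn.functional as F

def transformation_grounded_synthesis(image, grid, visibility, completion_net):
    """
    image:      (B, 3, H, W) input view
    grid:       (B, H, W, 2) sampling grid into the input view, in [-1, 1]
    visibility: (B, 1, H, W) mask of pixels visible in both views
    """
    warped = F.grid_sample(image, grid, align_corners=False)
    partial = warped * visibility                # keep only co-visible pixels
    return completion_net(partial, visibility)   # hallucinate the rest

B, H, W = 1, 64, 64
ident = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing='ij'), -1)
grid = ident.flip(-1).unsqueeze(0)               # (x, y) order for grid_sample
out = transformation_grounded_synthesis(
    torch.rand(B, 3, H, W), grid, torch.ones(B, 1, H, W),
    lambda p, v: p)                              # dummy completion network
print(out.shape)  # torch.Size([1, 3, 64, 64])
```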

Symmetry-aware Depth Estimation using Deep Neural Networks

no code implementations 20 Apr 2016 Guilin Liu, Chao Yang, Zimo Li, Duygu Ceylan, Qi-Xing Huang

Due to the abundance of 2D product images from the Internet, developing efficient and scalable algorithms to recover the missing depth information is central to many applications.

Depth Estimation

Capturing Dynamic Textured Surfaces of Moving Targets

no code implementations 11 Apr 2016 Ruizhe Wang, Lingyu Wei, Etienne Vouga, Qi-Xing Huang, Duygu Ceylan, Gerard Medioni, Hao Li

We present an end-to-end system for reconstructing complete watertight and textured models of moving subjects such as clothed humans and animals, using only three or four handheld sensors.
