Search Results for author: Duygu Ceylan

Found 44 papers, 15 papers with code

Dense Human Body Correspondences Using Convolutional Networks

no code implementations CVPR 2016 Lingyu Wei, Qi-Xing Huang, Duygu Ceylan, Etienne Vouga, Hao Li

We propose a deep learning approach for finding dense correspondences between 3D scans of people.
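Once a network produces a per-point descriptor for each scan, dense correspondences reduce to nearest-neighbor matching in the embedding space. A minimal numpy sketch of that matching step (the descriptors here are toy inputs; the paper's network would produce them):

```python
import numpy as np

def match_by_descriptor(desc_a, desc_b):
    """Nearest-neighbor matching in a learned descriptor space: for each
    point of scan A, pick the point of scan B with the closest embedding.
    desc_a: (Na, d) descriptors, desc_b: (Nb, d) descriptors.
    Returns an index into B for every point of A."""
    # Pairwise squared distances via |a - b|^2 = |a|^2 - 2 a.b + |b|^2
    d2 = (np.sum(desc_a**2, axis=1, keepdims=True)
          - 2.0 * desc_a @ desc_b.T
          + np.sum(desc_b**2, axis=1))
    return np.argmin(d2, axis=1)

# Two toy descriptors per scan; each A point matches its nearest B point.
idx = match_by_descriptor(np.array([[0.0, 1.0], [1.0, 0.0]]),
                          np.array([[1.0, 0.1], [0.1, 1.0]]))
```

The vectorized distance expansion avoids materializing all (Na, Nb, d) differences, which matters when both scans have tens of thousands of points.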

Capturing Dynamic Textured Surfaces of Moving Targets

no code implementations 11 Apr 2016 Ruizhe Wang, Lingyu Wei, Etienne Vouga, Qi-Xing Huang, Duygu Ceylan, Gerard Medioni, Hao Li

We present an end-to-end system for reconstructing complete watertight and textured models of moving subjects such as clothed humans and animals, using only three or four handheld sensors.

Symmetry-aware Depth Estimation using Deep Neural Networks

no code implementations 20 Apr 2016 Guilin Liu, Chao Yang, Zimo Li, Duygu Ceylan, Qi-Xing Huang

Due to the abundance of 2D product images from the Internet, developing efficient and scalable algorithms to recover the missing depth information is central to many applications.

Depth Estimation

Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

2 code implementations CVPR 2017 Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, Alexander C. Berg

Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible both in the input and novel views and then re-cast the remaining synthesis problem as image completion.

Image Generation Novel View Synthesis

Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks

no code implementations 14 Jun 2017 Haibin Huang, Evangelos Kalogerakis, Siddhartha Chaudhuri, Duygu Ceylan, Vladimir G. Kim, Ersin Yumer

We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching.

Semantic Segmentation

Material Editing Using a Physically Based Rendering Network

no code implementations ICCV 2017 Guilin Liu, Duygu Ceylan, Ersin Yumer, Jimei Yang, Jyh-Ming Lien

We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task.

Image Generation

3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks

2 code implementations ICCV 2017 Chuhang Zou, Ersin Yumer, Jimei Yang, Duygu Ceylan, Derek Hoiem

The success of various applications including robotics, digital content creation, and visualization demands a structured and abstract representation of the 3D world from limited sensor data.

Retrieval

Learning Dense Facial Correspondences in Unconstrained Images

no code implementations ICCV 2017 Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li

To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expressions, lighting, and occlusions.

Face Alignment Face Model

Neural Kinematic Networks for Unsupervised Motion Retargetting

1 code implementation CVPR 2018 Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee

We propose a recurrent neural network architecture with a Forward Kinematics layer and cycle consistency based adversarial training objective for unsupervised motion retargetting.
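The Forward Kinematics layer mentioned above composes local joint rotations along the skeleton so joint positions are differentiable in the rotations. A minimal numpy sketch for a serial chain (a generic FK derivation, not the paper's code; joint names and shapes are illustrative):

```python
import numpy as np

def forward_kinematics(rotations, offsets):
    """Compose local joint rotations along a serial kinematic chain.

    rotations: list of (3, 3) local rotation matrices, one per joint.
    offsets:   list of (3,) bone offsets from each joint to its child,
               expressed in the parent's local frame.
    Returns global joint positions, with the root at the origin."""
    positions = [np.zeros(3)]
    global_rot = np.eye(3)
    for R, off in zip(rotations, offsets):
        global_rot = global_rot @ R          # accumulate rotation down the chain
        positions.append(positions[-1] + global_rot @ off)
    return np.stack(positions)

# Two-joint chain, unit bones along x; rotate the first joint 90 degrees about z.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
pts = forward_kinematics([Rz, np.eye(3)], [np.array([1.0, 0.0, 0.0])] * 2)
```

Because every operation is a matrix product or addition, gradients flow from joint positions back to the predicted rotations, which is what makes such a layer trainable end to end.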

PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image

1 code implementation CVPR 2018 Chen Liu, Jimei Yang, Duygu Ceylan, Ersin Yumer, Yasutaka Furukawa

The proposed end-to-end DNN learns to directly infer a set of plane parameters and corresponding plane segmentation masks from a single RGB image.

Depth Estimation Depth Prediction +1
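Given an inferred plane and its segmentation mask, per-pixel depth follows from standard pinhole-camera geometry: intersect each back-projected pixel ray with the plane. A hedged sketch of that reconstruction step (generic derivation; the intrinsics and plane values are made up for illustration):

```python
import numpy as np

def plane_depth(n, d, K, u, v):
    """Depth at pixel (u, v) for the plane n . X = d in camera coordinates.
    K is the 3x3 pinhole intrinsics matrix. A pixel back-projects to the ray
    X = z * K^{-1} [u, v, 1]^T, so z = d / (n . K^{-1} [u, v, 1]^T)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / (n @ ray)

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Fronto-parallel plane z = 2 (normal along the optical axis, d = 2):
z = plane_depth(np.array([0.0, 0.0, 1.0]), 2.0, K, 100.0, 50.0)
```

For a fronto-parallel plane the recovered depth is constant across pixels; tilted planes yield the expected linear depth ramp.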

iMapper: Interaction-guided Joint Scene and Human Motion Mapping from Monocular Videos

no code implementations 20 Jun 2018 Aron Monszpart, Paul Guerrero, Duygu Ceylan, Ersin Yumer, Niloy J. Mitra

A long-standing challenge in scene analysis is the recovery of scene arrangements under moderate to heavy occlusion, directly from monocular video.

Human-Object Interaction Detection Object

Learning a Shared Shape Space for Multimodal Garment Design

no code implementations 29 Jun 2018 Tuanfeng Y. Wang, Duygu Ceylan, Jovan Popovic, Niloy J. Mitra

Designing real and virtual garments is becoming extremely demanding with rapidly changing fashion trends and the increasing need to synthesize realistically dressed digital humans for various applications.

Graphics

SwapNet: Garment Transfer in Single View Images

1 code implementation ECCV 2018 Amit Raj, Patsorn Sangkloy, Huiwen Chang, Jingwan Lu, Duygu Ceylan, James Hays

Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body.

Ranked #1 on Virtual Try-on on FashionIQ (using extra training data)

Virtual Try-on

3DN: 3D Deformation Network

1 code implementation CVPR 2019 Weiyue Wang, Duygu Ceylan, Radomir Mech, Ulrich Neumann

Given such a source 3D model and a target which can be a 2D image, 3D model, or a point cloud acquired as a depth scan, we introduce 3DN, an end-to-end network that deforms the source model to resemble the target.

3D Shape Generation

Unsupervised Learning of Intrinsic Structural Representation Points

1 code implementation CVPR 2020 Nenglun Chen, Lingjie Liu, Zhiming Cui, Runnan Chen, Duygu Ceylan, Changhe Tu, Wenping Wang

The 3D structure points produced by our method encode the shape structure intrinsically and exhibit semantic consistency across all the shape instances with similar structures.

Learning Generative Models of Shape Handles

no code implementations CVPR 2020 Matheus Gadelha, Giorgio Gori, Duygu Ceylan, Radomir Mech, Nathan Carr, Tamy Boubekeur, Rui Wang, Subhransu Maji

We present a generative model to synthesize 3D shapes as sets of handles -- lightweight proxies that approximate the original 3D shape -- for applications in interactive editing, shape parsing, and building compact 3D representations.

Intuitive, Interactive Beard and Hair Synthesis with Generative Models

1 code implementation CVPR 2020 Kyle Olszewski, Duygu Ceylan, Jun Xing, Jose Echevarria, Zhili Chen, Weikai Chen, Hao Li

We present an interactive approach to synthesizing realistic variations in facial hair in images, ranging from subtle edits to existing hair to the addition of complex and challenging hair in images of clean-shaven subjects.

Dynamic Neural Garments

no code implementations 23 Feb 2021 Meng Zhang, Duygu Ceylan, Tuanfeng Wang, Niloy J. Mitra

A vital task of the wider digital human effort is the creation of realistic garments on digital avatars, both as characteristic fold patterns and wrinkles in static frames and as rich garment dynamics under the avatars' motion.

Neural Rendering

A Deep Emulator for Secondary Motion of 3D Characters

no code implementations CVPR 2021 Mianlun Zheng, Yi Zhou, Duygu Ceylan, Jernej Barbič

Being a local method, our network is independent of the mesh topology and generalizes to arbitrarily shaped 3D character meshes at test time.

Task-Generic Hierarchical Human Motion Prior using VAEs

no code implementations 7 Jun 2021 Jiaman Li, Ruben Villegas, Duygu Ceylan, Jimei Yang, Zhengfei Kuang, Hao Li, Yajie Zhao

We demonstrate the effectiveness of our hierarchical motion variational autoencoder in a variety of tasks including video-based human pose estimation, motion completion from partial observations, and motion synthesis from sparse key-frames.

Motion Synthesis Pose Estimation

CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point Clouds

1 code implementation ICCV 2021 Eric-Tuan Lê, Minhyuk Sung, Duygu Ceylan, Radomir Mech, Tamy Boubekeur, Niloy J. Mitra

We present Cascaded Primitive Fitting Networks (CPFN), which rely on an adaptive patch sampling network to assemble the detection results of global and local primitive detection networks.


Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering

1 code implementation NeurIPS 2021 Youngjoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs

To tackle this, we propose Neural Human Performer, a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.

Generalizable Novel View Synthesis

Contact-Aware Retargeting of Skinned Motion

no code implementations ICCV 2021 Ruben Villegas, Duygu Ceylan, Aaron Hertzmann, Jimei Yang, Jun Saito

Self-contacts, such as when hands touch each other or the torso or the head, are important attributes of human body language and dynamics, yet existing methods do not model or preserve these contacts.

Motion Estimation motion retargeting

Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis

no code implementations 10 Nov 2021 Tuanfeng Y. Wang, Duygu Ceylan, Krishna Kumar Singh, Niloy J. Mitra

Synthesizing dynamic appearances of humans in motion plays a central role in applications such as AR/VR and video editing.

motion retargeting Video Editing

Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera

no code implementations CVPR 2022 Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park

The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., a number of cloth geometric configurations exist for a given pose depending on how the body has moved.

RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects

no code implementations 14 May 2022 Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee

We test the effectiveness of our representation on the human image harmonization task by predicting shading that is coherent with a given background image.

Image Harmonization

A Repulsive Force Unit for Garment Collision Handling in Neural Networks

no code implementations 28 Jul 2022 Qingyang Tan, Yi Zhou, Tuanfeng Wang, Duygu Ceylan, Xin Sun, Dinesh Manocha

Despite recent success, deep learning-based methods for predicting 3D garment deformation under body motion suffer from interpenetration problems between the garment and the body.

Learning Visibility for Robust Dense Human Body Estimation

1 code implementation 23 Aug 2022 Chun-Han Yao, Jimei Yang, Duygu Ceylan, Yi Zhou, Yang Zhou, Ming-Hsuan Yang

An alternative approach is to estimate dense vertices of a predefined template body in the image space.

Motion Guided Deep Dynamic 3D Garments

1 code implementation 23 Sep 2022 Meng Zhang, Duygu Ceylan, Niloy J. Mitra

Technically, we model garment dynamics, driven by the input character motion, by predicting per-frame local displacements in a canonical state of the garment; this canonical state is enriched with frame-dependent skinning weights that bring the garment into global space.
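The canonical-displacement-plus-skinning pipeline described above can be sketched with generic linear blend skinning (this is a standard LBS formulation, not the paper's implementation; all shapes and values are illustrative):

```python
import numpy as np

def skin_vertices(canonical, displacements, weights, transforms):
    """Add per-frame local displacements in the canonical garment state,
    then blend-skin the vertices into global space.

    canonical:     (V, 3) rest-state garment vertices
    displacements: (V, 3) predicted per-frame local offsets
    weights:       (V, J) skinning weights, each row summing to 1
    transforms:    (J, 4, 4) per-joint global transforms for this frame"""
    v = canonical + displacements
    v_h = np.concatenate([v, np.ones((len(v), 1))], axis=1)    # homogeneous (V, 4)
    per_joint = np.einsum('jab,vb->vja', transforms, v_h)       # each joint's transform
    blended = np.einsum('vj,vja->va', weights, per_joint)       # weight-blend per vertex
    return blended[:, :3]

# One vertex, two joints: identity and a +1 translation in x, equal weights.
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
out = skin_vertices(np.zeros((1, 3)), np.array([[0.0, 0.0, 0.1]]),
                    np.array([[0.5, 0.5]]), T)
```

Predicting displacements in the canonical frame keeps the network's output invariant to the global pose; the skinning transforms then carry those details into world space.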

Normal-guided Garment UV Prediction for Human Re-texturing

no code implementations CVPR 2023 Yasamin Jafarian, Tuanfeng Y. Wang, Duygu Ceylan, Jimei Yang, Nathan Carr, Yi Zhou, Hyun Soo Park

To edit human videos in a physically plausible way, a texture map must take into account not only the garment transformation induced by the body movements and clothes fitting, but also its 3D fine-grained surface geometry.

3D Reconstruction

Pix2Video: Video Editing using Image Diffusion

1 code implementation ICCV 2023 Duygu Ceylan, Chun-Hao Paul Huang, Niloy J. Mitra

Our method works in two simple steps: first, we use a pre-trained structure-guided (e.g., depth) image diffusion model to perform text-guided edits on an anchor frame; then, in the key step, we progressively propagate the changes to the future frames via self-attention feature injection to adapt the core denoising step of the diffusion model.

Denoising Text Generation +1
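The self-attention feature injection step above amounts to having each frame's queries attend to keys and values cached from the edited anchor frame. A toy numpy illustration of that cross-frame attention (not the diffusion model's actual layers; shapes and values are made up):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(q_cur, k_anchor, v_anchor):
    """Attention where the current frame's queries attend to the anchor
    frame's keys/values, so edited anchor features are pulled into the
    frame being denoised.

    q_cur:              (N, d) queries from the frame being edited
    k_anchor, v_anchor: (M, d) keys/values cached from the anchor frame"""
    d = q_cur.shape[-1]
    attn = softmax(q_cur @ k_anchor.T / np.sqrt(d))  # scaled dot-product weights
    return attn @ v_anchor

# A query aligned with the first anchor key retrieves the first anchor value.
out = cross_frame_attention(np.array([[1.0, 0.0]]),
                            np.array([[10.0, 0.0], [0.0, 10.0]]),
                            np.array([[1.0, 0.0], [0.0, 1.0]]))
```

Because the keys and values come from the anchor rather than the current frame, regions that correspond across frames receive consistent features, which is what keeps the propagated edit temporally coherent.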

Neural Image-based Avatars: Generalizable Radiance Fields for Human Avatar Modeling

no code implementations 10 Apr 2023 Youngjoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs

We present a method that enables synthesizing novel views and novel poses of arbitrary human performers from sparse multi-view images.

VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs

no code implementations ICCV 2023 Moayed Haji Ali, Andrew Bond, Tolga Birdal, Duygu Ceylan, Levent Karacan, Erkut Erdem, Aykut Erdem

However, the applicability of such advancements to the video domain has been hindered by the difficulty of representing and controlling videos in the latent space of GANs.

Image Animation Video Editing +1

CLIP-Guided StyleGAN Inversion for Text-Driven Real Image Editing

no code implementations 17 Jul 2023 Ahmet Canberk Baykal, Abdul Basit Anees, Duygu Ceylan, Erkut Erdem, Aykut Erdem, Deniz Yuret

Existing approaches for editing images using language either resort to instance-level latent code optimization or map predefined text prompts to some editing directions in the latent space.

Attribute

GRIP: Generating Interaction Poses Using Latent Consistency and Spatial Cues

no code implementations 22 Aug 2023 Omid Taheri, Yi Zhou, Dimitrios Tzionas, Yang Zhou, Duygu Ceylan, Soren Pirk, Michael J. Black

In contrast, we introduce GRIP, a learning-based method that takes, as input, the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.

Mixed Reality Object

BLiSS: Bootstrapped Linear Shape Space

no code implementations 4 Sep 2023 Sanjeev Muralikrishnan, Chun-Hao Paul Huang, Duygu Ceylan, Niloy J. Mitra

Morphable models are fundamental to numerous human-centered processes as they offer a simple yet expressive shape space.

Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models

no code implementations 3 Dec 2023 Shengqu Cai, Duygu Ceylan, Matheus Gadelha, Chun-Hao Paul Huang, Tuanfeng Yang Wang, Gordon Wetzstein

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path.

Text-to-Image Generation Video Generation
