Search Results for author: Jae Shin Yoon

Found 16 papers, 3 papers with code

Text2HOI: Text-guided 3D Motion Generation for Hand-Object Interaction

1 code implementation31 Mar 2024 Junuk Cha, Jihyeon Kim, Jae Shin Yoon, Seungryul Baek

For contact generation, a VAE-based network takes as input a text and an object mesh, and generates the probability of contacts between the surfaces of hands and the object during the interaction.

Object
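The contact-generation step described above can be illustrated with a toy sketch: a conditional network maps a text embedding plus per-vertex object features to a per-vertex contact probability. This is not the paper's VAE; all dimensions, weights, and function names below are hypothetical stand-ins, shown only to make the input/output contract concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: 64-d text embedding, 32-d per-vertex mesh feature.
TEXT_DIM, VERT_DIM, HIDDEN = 64, 32, 128

# Randomly initialized weights stand in for a trained decoder.
W1 = rng.standard_normal((TEXT_DIM + VERT_DIM, HIDDEN)) * 0.05
W2 = rng.standard_normal((HIDDEN, 1)) * 0.05

def contact_probabilities(text_emb, vertex_feats):
    """Map a text embedding plus per-vertex object features to a
    contact probability in [0, 1] for every mesh vertex."""
    n = vertex_feats.shape[0]
    # Broadcast the text condition to every vertex, then concatenate.
    x = np.concatenate([np.tile(text_emb, (n, 1)), vertex_feats], axis=1)
    h = np.tanh(x @ W1)
    return sigmoid(h @ W2)[:, 0]

text_emb = rng.standard_normal(TEXT_DIM)
vertex_feats = rng.standard_normal((1000, VERT_DIM))  # 1000 mesh vertices
p = contact_probabilities(text_emb, vertex_feats)
print(p.shape)  # one probability per vertex
```

The key point is only the shape of the problem: the text condition is shared across all vertices, while each vertex contributes its own geometric feature, and the output is a dense probability field over the object surface.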

3D Reconstruction of Interacting Multi-Person in Clothing from a Single Image

no code implementations • 12 Jan 2024 • Junuk Cha, Hansol Lee, Jaewon Kim, Nhat Nguyen Bao Truong, Jae Shin Yoon, Seungryul Baek

This paper introduces a novel pipeline to reconstruct the geometry of multiple interacting people in clothing in a globally coherent scene space from a single image.

3D Reconstruction

Dynamic Appearance Modeling of Clothed 3D Human Avatars using a Single Camera

no code implementations • 28 Dec 2023 • Hansol Lee, Junuk Cha, Yunhoe Ku, Jae Shin Yoon, Seungryul Baek

For implicit modeling, an implicit network combines the appearance and 3D motion features to decode high-fidelity clothed 3D human avatars with motion-dependent geometry and texture.
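The implicit-modeling idea above can be sketched as a function that, for each 3D query point, combines an appearance code with a motion feature and decodes occupancy and color. This is a minimal toy stand-in, not the paper's network; every dimension and name is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: 3-d query point, 16-d appearance code, 8-d motion feature;
# the decoder outputs 1 occupancy value + 3 RGB values per point.
IN_DIM = 3 + 16 + 8
W1 = rng.standard_normal((IN_DIM, 64)) * 0.1
W2 = rng.standard_normal((64, 4)) * 0.1

def implicit_avatar(points, appearance, motion):
    """Decode per-point occupancy and color from 3D query points,
    conditioned on appearance and motion features."""
    n = points.shape[0]
    cond = np.concatenate([appearance, motion])
    x = np.concatenate([points, np.tile(cond, (n, 1))], axis=1)
    h = np.maximum(x @ W1, 0.0)                    # ReLU hidden layer
    out = h @ W2
    occupancy = 1.0 / (1.0 + np.exp(-out[:, 0]))   # in (0, 1)
    rgb = 1.0 / (1.0 + np.exp(-out[:, 1:]))        # in (0, 1)^3
    return occupancy, rgb

pts = rng.standard_normal((500, 3))
occ, rgb = implicit_avatar(pts, rng.standard_normal(16), rng.standard_normal(8))
print(occ.shape, rgb.shape)
```

Because the motion feature enters the conditioning vector alongside the appearance code, the decoded geometry and texture can vary with how the body is moving, which is the motion-dependent behavior the abstract describes.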

Relightful Harmonization: Lighting-aware Portrait Background Replacement

no code implementations • 11 Dec 2023 • Mengwei Ren, Wei Xiong, Jae Shin Yoon, Zhixin Shu, Jianming Zhang, HyunJoon Jung, Guido Gerig, He Zhang

Portrait harmonization aims to composite a subject into a new background, adjusting its lighting and color to ensure harmony with the background scene.

Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation

no code implementations • 2 Jul 2023 • Tserendorj Adiya, Jae Shin Yoon, Jungeun Lee, Sanghun Kim, Hwasup Lim

To prove our claim, we design a novel human animation framework using a denoising diffusion model: a neural network learns to generate the image of a person by denoising temporal Gaussian noises whose intermediate results are cross-conditioned bidirectionally between consecutive frames.

Denoising
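The bidirectional cross-conditioning idea can be illustrated with a drastically simplified toy: each frame's intermediate estimate is repeatedly pulled toward both its previous and its next frame. This is only a temporal-smoothing proxy under assumed toy data, not the paper's denoising diffusion model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "video": each frame is a flat vector; the clean sequence varies
# slowly over time, and we observe it corrupted by Gaussian noise.
T, D = 8, 16
clean = np.cumsum(rng.standard_normal((T, D)) * 0.1, axis=0)
noisy = clean + rng.standard_normal((T, D))

def bidirectional_denoise(x, steps=50, lam=0.25):
    """Iteratively refine a frame sequence, conditioning each
    intermediate result on BOTH its previous and next frame."""
    x = x.copy()
    for _ in range(steps):
        prev = np.roll(x, 1, axis=0);  prev[0] = x[0]    # forward neighbor
        nxt = np.roll(x, -1, axis=0);  nxt[-1] = x[-1]   # backward neighbor
        # Pull each frame toward the mean of its two temporal neighbors.
        x = (1 - lam) * x + lam * 0.5 * (prev + nxt)
    return x

def temporal_roughness(x):
    """Mean absolute frame-to-frame change, a crude consistency measure."""
    return float(np.abs(np.diff(x, axis=0)).mean())

smoothed = bidirectional_denoise(noisy)
print(temporal_roughness(smoothed) < temporal_roughness(noisy))
```

The takeaway mirrors the abstract's claim: because information flows in both temporal directions at every step, neighboring frames converge toward mutually consistent estimates rather than being generated independently.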

Complete 3D Human Reconstruction From a Single Incomplete Image

no code implementations • CVPR 2023 • Junying Wang, Jae Shin Yoon, Tuanfeng Y. Wang, Krishna Kumar Singh, Ulrich Neumann

This paper presents a method to reconstruct complete human geometry and texture from an image of a person with only a partial body observed, e.g., a torso.

3D Human Reconstruction

Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera

no code implementations • CVPR 2022 • Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park

The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., many cloth configurations exist for a given pose depending on the way the body has moved.

HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark Challenge

no code implementations • 30 Sep 2021 • Jae Shin Yoon, Zhixuan Yu, Jaesik Park, Hyun Soo Park

We demonstrate that HUMBI is highly effective in learning and reconstructing a complete human model and is complementary to existing human body expression datasets with limited views and subjects, such as MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio.

Neural 3D Clothes Retargeting from a Single Image

no code implementations • 29 Jan 2021 • Jae Shin Yoon, Kihwan Kim, Jan Kautz, Hyun Soo Park

In this paper, we present a method for clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.

Pose-Guided Human Animation from a Single Image in the Wild

no code implementations • CVPR 2021 • Jae Shin Yoon, Lingjie Liu, Vladislav Golyanik, Kripasindhu Sarkar, Hyun Soo Park, Christian Theobalt

We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.

Pose Transfer

Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera

no code implementations • CVPR 2020 • Jae Shin Yoon, Kihwan Kim, Orazio Gallo, Hyun Soo Park, Jan Kautz

Our insight is that although its scale and quality are inconsistent with other views, the depth estimation from a single view can be used to reason about the globally coherent geometry of dynamic contents.

Depth Estimation, Novel View Synthesis
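The insight above, that single-view depth is useful despite its inconsistent scale, is commonly handled by aligning each per-frame depth map to a shared reference with a least-squares scale and shift. The sketch below shows that alignment step in isolation on synthetic data; it is a generic illustration of scale-and-shift depth alignment, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def align_depth(pred, ref):
    """Solve least squares for scale s and shift t so that
    s * pred + t best matches the reference depths."""
    A = np.stack([pred, np.ones_like(pred)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, ref, rcond=None)
    return s * pred + t, (s, t)

# Synthetic ground-truth depths at 200 sample points.
true_depth = rng.uniform(1.0, 5.0, 200)
# A monocular prediction is correct only up to an unknown scale and
# shift (here 0.4 and 2.0), plus a little noise.
pred = 0.4 * true_depth + 2.0 + rng.normal(0.0, 0.01, 200)

aligned, (s, t) = align_depth(pred, true_depth)
print(np.allclose(aligned, true_depth, atol=0.1))  # scale ambiguity resolved
```

After this per-view alignment, depths from different frames live in one consistent scale, which is what makes it possible to reason about globally coherent geometry of the dynamic content.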

Self-Supervised Adaptation of High-Fidelity Face Models for Monocular Performance Tracking

no code implementations • CVPR 2019 • Jae Shin Yoon, Takaaki Shiratori, Shoou-I Yu, Hyun Soo Park

In this paper, we propose a self-supervised domain adaptation approach to enable the animation of high-fidelity face models from a commodity camera.

Domain Adaptation, Face Model

HUMBI: A Large Multiview Dataset of Human Body Expressions

1 code implementation • CVPR 2020 • Zhixuan Yu, Jae Shin Yoon, In Kyu Lee, Prashanth Venkatesh, Jaesik Park, Jihun Yu, Hyun Soo Park

This paper presents a new large multiview dataset called HUMBI for human body expressions with natural clothing.

3D Semantic Trajectory Reconstruction from 3D Pixel Continuum

no code implementations • CVPR 2018 • Jae Shin Yoon, Ziwei Li, Hyun Soo Park

This paper presents a method to reconstruct a dense semantic trajectory stream of human interactions in 3D from synchronized multiple videos.

VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition

3 code implementations • ICCV 2017 • Seokju Lee, Junsik Kim, Jae Shin Yoon, Seunghak Shin, Oleksandr Bailo, Namil Kim, Tae-Hee Lee, Hyun Seok Hong, Seung-Hoon Han, In So Kweon

In this paper, we propose a unified end-to-end trainable multi-task network that jointly handles lane and road marking detection and recognition, guided by a vanishing point, under adverse weather conditions.

Lane Detection
