Search Results for author: Xiaogang Jin

Found 21 papers, 9 papers with code

Quadruplet Network with One-Shot Learning for Fast Visual Object Tracking

no code implementations • 19 May 2017 • Xingping Dong, Jianbing Shen, Dongming Wu, Kan Guo, Xiaogang Jin, Fatih Porikli

In this paper, we propose a new quadruplet deep network to examine the potential connections among the training instances, aiming to achieve a more powerful representation.

One-Shot Learning Visual Object Tracking
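
The snippet above does not spell out the training objective; as a rough, hedged illustration of the general quadruplet idea from metric learning (margins, names, and shapes are assumptions, not the authors' formulation), a margin-based quadruplet loss over an anchor, a positive, and two negatives could look like:

import torch.nn.functional as F

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    """Generic quadruplet margin loss (illustrative, not the paper's exact loss).

    Pulls the anchor-positive pair together while pushing two distinct
    negatives away, using two margins as in common quadruplet formulations.
    """
    d_ap = F.pairwise_distance(anchor, positive)      # anchor-positive distance
    d_an = F.pairwise_distance(anchor, negative1)     # anchor-negative distance
    d_nn = F.pairwise_distance(negative1, negative2)  # distance between the two negatives
    loss = F.relu(d_ap - d_an + margin1) + F.relu(d_ap - d_nn + margin2)
    return loss.mean()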

Example-based Real-time Clothing Synthesis for Virtual Agents

no code implementations • 8 Jan 2021 • Nannan Wu, Qianwen Chao, Yanzhen Chen, Weiwei Xu, Chen Liu, Dinesh Manocha, Wenxin Sun, Yi Han, Xinran Yao, Xiaogang Jin

Given a query shape and pose of the virtual agent, we synthesize the resulting clothing deformation by blending the Taylor expansion results of nearby anchoring points.

Graphics
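
The blending described above can be sketched roughly as follows; the anchor values, Jacobians, and weights are hypothetical inputs for illustration, not the paper's data structures or code:

import numpy as np

def blend_taylor_expansions(query, anchors, values, jacobians, weights):
    """Blend first-order Taylor expansions of nearby anchoring points.

    query     : (d,) query shape/pose parameters
    anchors   : (k, d) anchor parameters near the query
    values    : (k, m) clothing deformation stored at each anchor
    jacobians : (k, m, d) first-order derivatives at each anchor
    weights   : (k,) blending weights (e.g. inverse-distance), summing to 1
    """
    blended = np.zeros(values.shape[1])
    for a, v, J, w in zip(anchors, values, jacobians, weights):
        # First-order Taylor expansion of the deformation about anchor a,
        # evaluated at the query, then accumulated with its blend weight.
        blended += w * (v + J @ (query - a))
    return blended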

HairMapper: Removing Hair From Portraits Using GANs

2 code implementations • CVPR 2022 • Yiqian Wu, Yong-Liang Yang, Xiaogang Jin

Removing hair from portrait images is challenging due to the complex occlusions between hair and face, as well as the lack of paired portrait data with/without hair.

3D Face Reconstruction

Do Inpainting Yourself: Generative Facial Inpainting Guided by Exemplars

1 code implementation • 13 Feb 2022 • Wanglong Lu, Hanli Zhao, Xianta Jiang, Xiaogang Jin, Yong-Liang Yang, Min Wang, Jiankai Lyu, Kaijie Shi

We introduce a novel attribute similarity metric to encourage networks to learn the style of facial attributes from the exemplar in a self-supervised way.

Attribute Facial Inpainting
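
The attribute similarity metric itself is not given in this snippet; one plausible, hedged stand-in is a cosine-similarity loss between attribute features of the inpainted result and the exemplar, using a hypothetical attribute encoder (not the paper's actual metric):

import torch.nn.functional as F

def attribute_similarity_loss(attr_encoder, inpainted, exemplar):
    """Encourage the inpainted face to match the exemplar's attribute style.

    attr_encoder is a hypothetical network mapping a face image to an
    attribute feature vector; the loss is 1 - cosine similarity of those
    features (an illustrative stand-in for the paper's metric).
    """
    f_out = attr_encoder(inpainted)   # (B, C) attribute features of the result
    f_ref = attr_encoder(exemplar)    # (B, C) attribute features of the exemplar
    return (1.0 - F.cosine_similarity(f_out, f_ref, dim=-1)).mean()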

Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks

1 code implementation • 3 May 2022 • Xiaoyu Pan, Jiaming Mai, Xinwei Jiang, Dongxue Tang, Jingxiang Li, Tianjia Shao, Kun Zhou, Xiaogang Jin, Dinesh Manocha

We present a learning algorithm that uses bone-driven motion networks to predict the deformation of loose-fitting garment meshes at interactive rates.
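
As a hedged sketch of the bone-driven idea only (layer sizes, inputs, and outputs are assumptions, not the paper's architecture), a small network mapping a window of bone transforms to per-vertex garment offsets might look like:

import torch
import torch.nn as nn

class BoneDrivenGarmentNet(nn.Module):
    """Toy bone-driven motion network: bone motion window -> vertex offsets.

    Illustrative only; the paper's actual architecture and inputs differ.
    """
    def __init__(self, num_bones, window, num_vertices, hidden=512):
        super().__init__()
        in_dim = num_bones * 12 * window   # per-bone 3x4 transform over a motion window
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_vertices * 3),  # per-vertex 3D displacement
        )

    def forward(self, bone_motion):
        # bone_motion: (B, window, num_bones, 12) flattened bone transforms
        offsets = self.mlp(bone_motion.flatten(1))
        return offsets.view(bone_motion.shape[0], -1, 3)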

Real-time Controllable Motion Transition for Characters

no code implementations • 5 May 2022 • Xiangjun Tang, He Wang, Bo Hu, Xu Gong, Ruifan Yi, Qilong Kou, Xiaogang Jin

Then, during generation, we design a transition model that is essentially a sampling strategy drawing from the learned manifold, conditioned on the target frame and the desired transition duration.
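
A minimal, hedged sketch of that kind of conditioned sampling step is given below; the decoder interface, conditioning signals, and shapes are assumptions for illustration, not the paper's model:

import torch

@torch.no_grad()
def sample_transition(decoder, current_frame, target_frame, duration, latent_dim=32):
    """Sample an in-between motion from a learned manifold (illustrative).

    decoder is a hypothetical generative model mapping a latent code plus
    conditioning (current pose, target pose, remaining duration) to the
    next pose; frames are generated autoregressively until the target.
    """
    frames = []
    pose = current_frame
    for t in range(duration):
        z = torch.randn(1, latent_dim)                    # draw from the latent manifold
        remaining = torch.tensor([[duration - t]], dtype=torch.float32)
        pose = decoder(z, pose, target_frame, remaining)  # conditioned next-pose prediction
        frames.append(pose)
    return torch.stack(frames, dim=1)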

Parametric Reshaping of Portraits in Videos

no code implementations • 5 May 2022 • Xiangjun Tang, Wenxin Sun, Yong-Liang Yang, Xiaogang Jin

In the second stage, we first reshape the reconstructed 3D face using a parametric reshaping model reflecting the weight change of the face, and then utilize the reshaped 3D face to guide the warping of video frames.

Face Reconstruction Video Editing

LPFF: A Portrait Dataset for Face Generators Across Large Poses

no code implementations • ICCV 2023 • Yiqian Wu, Jing Zhang, Hongbo Fu, Xiaogang Jin

To better validate our pose-conditional 3D-aware generators, we develop a new FID measure to evaluate the 3D-level performance.

3D Reconstruction

GRIG: Few-Shot Generative Residual Image Inpainting

no code implementations • 24 Apr 2023 • Wanglong Lu, Xianta Jiang, Xiaogang Jin, Yong-Liang Yang, Minglun Gong, Tao Wang, Kaijie Shi, Hanli Zhao

Image inpainting is the task of filling in missing or masked regions of an image with semantically meaningful content.

Image Inpainting

RSMT: Real-time Stylized Motion Transition for Characters

1 code implementation • 21 Jun 2023 • Xiangjun Tang, Linjun Wu, He Wang, Bo Hu, Xu Gong, Yuchen Liao, Songnan Li, Qilong Kou, Xiaogang Jin

Styled online in-between motion generation has important applications in computer animation and games.

3DPortraitGAN: Learning One-Quarter Headshot 3D GANs from a Single-View Portrait Dataset with Diverse Body Poses

no code implementations • 27 Jul 2023 • Yiqian Wu, Hao Xu, Xiangjun Tang, Hongbo Fu, Xiaogang Jin

We then propose 3DPortraitGAN, the first 3D-aware one-quarter headshot portrait generator that learns a canonical 3D avatar distribution from the 360°PHQ dataset with body pose self-learning.

Self-Learning

A General Implicit Framework for Fast NeRF Composition and Rendering

no code implementations • 9 Aug 2023 • Xinyu Gao, ZiYi Yang, Yunlu Zhao, Yuxiang Sun, Xiaogang Jin, Changqing Zou

Our work mainly introduces a new surface representation, Neural Depth Fields (NeDF), which quickly determines the spatial relationship between objects by allowing direct intersection computation between rays and implicit surfaces.
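
As a hedged sketch of how a depth-field-style representation could yield direct ray-surface intersections (the network interface here is assumed, not the paper's NeDF definition):

import torch

def intersect_ray(nedf, origin, direction):
    """Query a neural depth field for a direct ray-surface intersection.

    nedf is a hypothetical network mapping a ray (origin, unit direction)
    to the depth of the first surface hit; the intersection point is then
    recovered in closed form instead of marching along the ray.
    """
    direction = direction / direction.norm(dim=-1, keepdim=True)  # ensure unit direction
    depth = nedf(origin, direction)                               # (B, 1) predicted hit depth
    hit_point = origin + depth * direction                        # closed-form intersection
    return hit_point, depth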

A Locality-based Neural Solver for Optical Motion Capture

1 code implementation • 1 Sep 2023 • Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Guanglong Xu, Xianli Gu, Jingxiang Li, Qilong Kou, He Wang, Tianjia Shao, Kun Zhou, Xiaogang Jin

Finally, we propose a training regime based on representation learning and data augmentation, training the model on masked data.

Data Augmentation Representation Learning
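
The masking-based augmentation mentioned above might be sketched as follows; the mask ratio and tensor layout are placeholders, not the paper's settings:

import torch

def random_marker_masking(markers, mask_ratio=0.1):
    """Randomly occlude optical markers to simulate missing observations.

    markers: (B, T, M, 3) marker positions over time. A fraction of markers
    is zeroed out and a binary mask is returned so the solver can learn to
    recover occluded markers (illustrative augmentation, not the paper's).
    """
    B, T, M, _ = markers.shape
    keep = (torch.rand(B, T, M, 1, device=markers.device) > mask_ratio).float()
    return markers * keep, keep  # masked markers and the visibility mask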

Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction

1 code implementation • 22 Sep 2023 • ZiYi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, Xiaogang Jin

Implicit neural representations have paved the way for new approaches to dynamic scene reconstruction and rendering.

Neural Rendering Novel View Synthesis

Enhancing the Authenticity of Rendered Portraits with Identity-Consistent Transfer Learning

no code implementations • 6 Oct 2023 • Luyuan Wang, Yiqian Wu, Yong-Liang Yang, Chen Liu, Xiaogang Jin

In this paper, we present a novel photo-realistic portrait generation framework that can effectively mitigate the "uncanny valley" effect and improve the overall authenticity of rendered portraits.

Transfer Learning

On Optimal Sampling for Learning SDF Using MLPs Equipped with Positional Encoding

no code implementations • 2 Jan 2024 • Guying Lin, Lei Yang, YuAn Liu, Congyi Zhang, Junhui Hou, Xiaogang Jin, Taku Komura, John Keyser, Wenping Wang

Sampling against this intrinsic frequency, following the Nyquist-Shannon sampling theorem, allows us to determine an appropriate training sampling rate.
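
For context, the Nyquist-Shannon criterion requires sampling at no less than twice the highest frequency present; a tiny helper showing how a training sampling rate could be derived from an estimated intrinsic frequency (the estimate itself is assumed to come from the paper's analysis):

def training_sampling_rate(intrinsic_frequency, safety_factor=1.0):
    """Derive a training sampling rate from an estimated intrinsic frequency.

    By the Nyquist-Shannon theorem, sampling at >= 2x the highest frequency
    avoids aliasing; safety_factor > 1 adds headroom. The intrinsic frequency
    of the positional-encoded MLP is assumed to be estimated elsewhere.
    """
    return 2.0 * intrinsic_frequency * safety_factor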

Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting

no code implementations • 24 Feb 2024 • ZiYi Yang, Xinyu Gao, Yangtian Sun, Yihua Huang, Xiaoyang Lyu, Wen Zhou, Shaohui Jiao, Xiaojuan Qi, Xiaogang Jin

The recent advancements in 3D Gaussian splatting (3D-GS) have not only facilitated real-time rendering through modern GPU rasterization pipelines but have also attained state-of-the-art rendering quality.

SocialCVAE: Predicting Pedestrian Trajectory via Interaction Conditioned Latents

1 code implementation • 27 Feb 2024 • Wei Xiang, Haoteng Yin, He Wang, Xiaogang Jin

Pedestrian trajectory prediction is a key technology in many applications, providing insights into human behavior and anticipating future human motions.

Pedestrian Trajectory Prediction Trajectory Prediction

Portrait3D: Text-Guided High-Quality 3D Portrait Generation Using Pyramid Representation and GANs Prior

no code implementations • 16 Apr 2024 • Yiqian Wu, Hao Xu, Xiangjun Tang, Xien Chen, Siyu Tang, Zhebin Zhang, Chen Li, Xiaogang Jin

Existing neural rendering-based text-to-3D-portrait generation methods typically make use of human geometry priors and diffusion models to obtain guidance.

Neural Rendering Text to 3D
