Search Results for author: Jimei Yang

Found 53 papers, 25 papers with code

RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects

no code implementations 14 May 2022 Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee

We test the effectiveness of our representation on the human image harmonization task by predicting shading that is coherent with a given background image.

The Best of Both Worlds: Combining Model-based and Nonparametric Approaches for 3D Human Body Estimation

no code implementations 1 May 2022 Zhe Wang, Jimei Yang, Charless Fowlkes

Our framework leverages the best of non-parametric and model-based methods and is also robust to partial occlusion.

Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera

no code implementations 24 Mar 2022 Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park

The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., there exist a number of cloth geometric configurations for a given pose, depending on how the body has moved.

Contact-Aware Retargeting of Skinned Motion

no code implementations ICCV 2021 Ruben Villegas, Duygu Ceylan, Aaron Hertzmann, Jimei Yang, Jun Saito

Self-contacts, such as when hands touch each other or the torso or the head, are important attributes of human body language and dynamics, yet existing methods do not model or preserve these contacts.

Motion Estimation, Motion Retargeting

Single-image Full-body Human Relighting

no code implementations 15 Jul 2021 Manuel Lagunas, Xin Sun, Jimei Yang, Ruben Villegas, Jianming Zhang, Zhixin Shu, Belen Masia, Diego Gutierrez

We present a single-image data-driven method to automatically relight images with full-body humans in them.

Image Reconstruction

Task-Generic Hierarchical Human Motion Prior using VAEs

no code implementations 7 Jun 2021 Jiaman Li, Ruben Villegas, Duygu Ceylan, Jimei Yang, Zhengfei Kuang, Hao Li, Yajie Zhao

We demonstrate the effectiveness of our hierarchical motion variational autoencoder in a variety of tasks including video-based human pose estimation, motion completion from partial observations, and motion synthesis from sparse key-frames.

Frame, Motion Synthesis +1

Attribute-conditioned Layout GAN for Automatic Graphic Design

no code implementations 11 Sep 2020 Jianan Li, Jimei Yang, Jianming Zhang, Chang Liu, Christina Wang, Tingfa Xu

In this paper, we introduce Attribute-conditioned Layout GAN to incorporate the attributes of design elements for graphic layout generation by forcing both the generator and the discriminator to meet attribute conditions.

Contact and Human Dynamics from Monocular Video

1 code implementation ECCV 2020 Davis Rempe, Leonidas J. Guibas, Aaron Hertzmann, Bryan Russell, Ruben Villegas, Jimei Yang

Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors that violate physical constraints, such as feet penetrating the ground and bodies leaning at extreme angles.

Human Dynamics, Pose Estimation

Generative Tweening: Long-term Inbetweening of 3D Human Motions

no code implementations 18 May 2020 Yi Zhou, Jingwan Lu, Connelly Barnes, Jimei Yang, Sitao Xiang, Hao Li

We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions, conditioned on keyframe constraints.

3D Ken Burns Effect from a Single Image

4 code implementations 12 Sep 2019 Simon Niklaus, Long Mai, Jimei Yang, Feng Liu

According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions.
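The depth-to-point-cloud mapping described above is a standard pinhole back-projection; a minimal NumPy sketch, with hypothetical intrinsics (fx, fy, cx, cy) rather than the paper's actual camera model:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth map into an (H*W) x 3 point cloud
    using a pinhole camera model with focal lengths (fx, fy) and
    principal point (cx, cy)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel row (v) and column (u) grids
    x = (u - cx) * depth / fx          # unproject along the camera x-axis
    y = (v - cy) * depth / fy          # unproject along the camera y-axis
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

Rendering these points from perturbed camera positions then yields the synthesized parallax frames.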

Depth Estimation

Multimodal Style Transfer via Graph Cuts

2 code implementations ICCV 2019 Yulun Zhang, Chen Fang, Yilin Wang, Zhaowen Wang, Zhe Lin, Yun Fu, Jimei Yang

An assumption widely used in recent neural style transfer methods is that image styles can be described by the global statistics of deep features, such as Gram or covariance matrices.
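The Gram matrix mentioned above is a standard construction in neural style transfer (not code from this paper): channel-wise inner products of a flattened feature map.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a C x H x W feature map: channel-wise inner
    products, normalized by the number of spatial positions."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)          # flatten spatial dims: C x (H*W)
    return f @ f.T / f.shape[1]      # C x C, symmetric
```

The paper's point is that such global statistics discard the multimodal structure of styles, which its graph-cut matching recovers.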

Style Transfer

Foreground-aware Image Inpainting

no code implementations CVPR 2019 Wei Xiong, Jiahui Yu, Zhe Lin, Jimei Yang, Xin Lu, Connelly Barnes, Jiebo Luo

We show that by such disentanglement, the contour completion model predicts reasonable contours of objects, and further substantially improves the performance of image inpainting.

Disentanglement, Image Inpainting

On the Continuity of Rotation Representations in Neural Networks

5 code implementations CVPR 2019 Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, Hao Li

Thus, widely used representations such as quaternions and Euler angles are discontinuous and difficult for neural networks to learn.
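The continuous alternative proposed in this paper is a 6D representation: two 3D vectors that are decoded back into a rotation matrix by Gram-Schmidt orthonormalization. A NumPy sketch:

```python
import numpy as np

def rotation_from_6d(a1, a2):
    """Decode a 6D representation (two 3D vectors) into a rotation
    matrix via Gram-Schmidt orthonormalization."""
    b1 = a1 / np.linalg.norm(a1)
    a2p = a2 - np.dot(b1, a2) * b1      # remove the component along b1
    b2 = a2p / np.linalg.norm(a2p)
    b3 = np.cross(b1, b2)               # third column completes a right-handed frame
    return np.stack([b1, b2, b3], axis=1)
```

Because the decoding is continuous everywhere (unlike quaternion or Euler-angle parameterizations), networks regressing the 6D vector avoid the discontinuities discussed above.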

Flow-Grounded Spatial-Temporal Video Prediction from Still Images

1 code implementation ECCV 2018 Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang

Existing video prediction methods mainly rely on observing multiple historical frames or focus on predicting only the next frame.

Frame, Video Prediction

Free-Form Image Inpainting with Gated Convolution

31 code implementations ICCV 2019 Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, Thomas Huang

We present a generative image inpainting system to complete images with free-form mask and guidance.
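The key building block of this system is the gated convolution, where a learned sigmoid gate modulates the feature response at every spatial location. A toy single-channel NumPy sketch (random kernels standing in for the paper's trained network):

```python
import numpy as np

def gated_conv2d(x, w_feat, w_gate):
    """Gated convolution: a sigmoid gate, computed from the same input,
    modulates the feature branch at every output location, letting the
    network downweight invalid (hole) pixels under free-form masks.
    x: (H, W) single-channel input; w_feat, w_gate: (k, k) kernels."""
    def conv_valid(img, kern):
        k = kern.shape[0]
        oh, ow = img.shape[0] - k + 1, img.shape[1] - k + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(img[i:i + k, j:j + k] * kern)
        return out

    feat = np.tanh(conv_valid(x, w_feat))                # feature branch
    gate = 1.0 / (1.0 + np.exp(-conv_valid(x, w_gate)))  # gating branch, in (0, 1)
    return feat * gate
```

In the full model these gates replace the hard 0/1 validity masks of partial convolutions, which is what makes arbitrary free-form masks and sketch guidance workable.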

Image Inpainting

PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image

1 code implementation CVPR 2018 Chen Liu, Jimei Yang, Duygu Ceylan, Ersin Yumer, Yasutaka Furukawa

The proposed end-to-end DNN learns to directly infer a set of plane parameters and corresponding plane segmentation masks from a single RGB image.

Depth Estimation

Neural Kinematic Networks for Unsupervised Motion Retargetting

1 code implementation CVPR 2018 Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee

We propose a recurrent neural network architecture with a Forward Kinematics layer and cycle consistency based adversarial training objective for unsupervised motion retargetting.
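The Forward Kinematics layer converts joint rotations into joint positions differentiably; a sketch for a simple serial chain (the paper handles a full skeleton hierarchy, so this is an illustrative simplification):

```python
import numpy as np

def forward_kinematics(offsets, rotations):
    """Joint positions along a serial chain: each joint adds its bone
    offset, rotated by the accumulated rotations of its ancestors.
    offsets, rotations: lists of (3,) vectors and (3, 3) matrices."""
    positions = [np.zeros(3)]
    R = np.eye(3)
    for offset, rot in zip(offsets, rotations):
        R = R @ rot                                # accumulate rotation down the chain
        positions.append(positions[-1] + R @ offset)
    return np.array(positions)
```

Because every step is a matrix product, gradients flow from position errors back to the predicted rotations, which is what lets the retargeting network train without paired motion data.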

Generative Image Inpainting with Contextual Attention

28 code implementations CVPR 2018 Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, Thomas S. Huang

Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.

Image Inpainting

Predicting Scene Parsing and Motion Dynamics in the Future

no code implementations NeurIPS 2017 Xiaojie Jin, Huaxin Xiao, Xiaohui Shen, Jimei Yang, Zhe Lin, Yunpeng Chen, Zequn Jie, Jiashi Feng, Shuicheng Yan

The ability to predict the future is important for intelligent systems, e.g., autonomous vehicles and robots, to plan early and make decisions accordingly.

Autonomous Vehicles, Motion Prediction +2

FoveaNet: Perspective-aware Urban Scene Parsing

no code implementations ICCV 2017 Xin Li, Zequn Jie, Wei Wang, Changsong Liu, Jimei Yang, Xiaohui Shen, Zhe Lin, Qiang Chen, Shuicheng Yan, Jiashi Feng

Thus, they suffer from heterogeneous object scales caused by the perspective projection of cameras onto actual scenes and inevitably encounter parsing failures on distant objects, as well as other boundary and recognition errors.

Scene Parsing

3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks

2 code implementations ICCV 2017 Chuhang Zou, Ersin Yumer, Jimei Yang, Duygu Ceylan, Derek Hoiem

The success of various applications, including robotics, digital content creation, and visualization, demands a structured and abstract representation of the 3D world from limited sensor data.

Material Editing Using a Physically Based Rendering Network

no code implementations ICCV 2017 Guilin Liu, Duygu Ceylan, Ersin Yumer, Jimei Yang, Jyh-Ming Lien

We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task.

Image Generation

Deep GrabCut for Object Selection

no code implementations 2 Jul 2017 Ning Xu, Brian Price, Scott Cohen, Jimei Yang, Thomas Huang

In this paper, we propose a novel segmentation approach that uses a rectangle as a soft constraint by transforming it into a Euclidean distance map.
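One plausible reading of this encoding is the distance from every pixel to the rectangle's boundary; a hypothetical NumPy sketch (brute-force for clarity — scipy.ndimage.distance_transform_edt does the same job more efficiently):

```python
import numpy as np

def rect_distance_map(h, w, rect):
    """Euclidean distance from every pixel of an h x w image to the
    boundary of a rectangle given as (x0, y0, x1, y1), one way to turn
    a user-drawn box into a soft spatial constraint."""
    x0, y0, x1, y1 = rect
    edge = np.zeros((h, w), dtype=bool)
    edge[y0, x0:x1 + 1] = True          # top edge
    edge[y1, x0:x1 + 1] = True          # bottom edge
    edge[y0:y1 + 1, x0] = True          # left edge
    edge[y0:y1 + 1, x1] = True          # right edge
    ey, ex = np.nonzero(edge)
    yy, xx = np.mgrid[0:h, 0:w]
    # distance from each pixel to its nearest boundary pixel
    d2 = (yy[..., None] - ey) ** 2 + (xx[..., None] - ex) ** 2
    return np.sqrt(d2.min(axis=-1))
```

Concatenated with the RGB image, such a map gives the network a smooth prior on where the object lies instead of a hard binary crop.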

Instance Segmentation, Interactive Segmentation +1

Decomposing Motion and Content for Natural Video Sequence Prediction

1 code implementation 25 Jun 2017 Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.

Ranked #1 on Video Prediction on KTH (Cond metric)

Frame, Future Prediction +1

Universal Style Transfer via Feature Transforms

15 code implementations NeurIPS 2017 Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang

The whitening and coloring transforms reflect a direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer.
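The whitening and coloring transforms above can be written directly in terms of eigendecompositions of the feature covariances. A NumPy sketch on C x N feature matrices (the small regularizer eps is my addition for numerical stability):

```python
import numpy as np

def wct(content, style, eps=1e-5):
    """Whitening-coloring transform: strip the content covariance,
    then impose the style covariance and mean. Inputs are C x N
    feature matrices (channels x spatial positions)."""
    def cov(f):
        return f @ f.T / (f.shape[1] - 1)

    fc = content - content.mean(axis=1, keepdims=True)
    mu_s = style.mean(axis=1, keepdims=True)
    fs = style - mu_s
    c = fc.shape[0]

    # Whitening: f_hat = cov_c^{-1/2} @ fc, so cov(f_hat) ~= I.
    ec, vc = np.linalg.eigh(cov(fc) + eps * np.eye(c))
    f_hat = vc @ np.diag(ec ** -0.5) @ vc.T @ fc

    # Coloring: impose cov_s, then restore the style mean.
    es, vs = np.linalg.eigh(cov(fs) + eps * np.eye(c))
    return vs @ np.diag(es ** 0.5) @ vs.T @ f_hat + mu_s
```

Applying this at several VGG layers and decoding the result is what makes the method style-agnostic: no per-style training is needed.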

Image Reconstruction, Style Transfer

Generative Face Completion

1 code implementation CVPR 2017 Yijun Li, Sifei Liu, Jimei Yang, Ming-Hsuan Yang

In this paper, we propose an effective face completion algorithm using a deep generative model.

Facial Inpainting, Semantic Parsing

Learning to Generate Long-term Future via Hierarchical Prediction

2 code implementations ICML 2017 Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee

To avoid the compounding errors inherent in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally construct the future frames from a single past frame and the predicted high-level structure, without having to observe any of the pixel-level predictions.

Frame, Video Prediction

Recurrent Multimodal Interaction for Referring Image Segmentation

1 code implementation ICCV 2017 Chenxi Liu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Alan Yuille

In this paper we are interested in the problem of image segmentation given natural language descriptions, i.e., referring expressions.

Semantic Segmentation

Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

2 code implementations CVPR 2017 Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, Alexander C. Berg

Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible both in the input and novel views and then re-cast the remaining synthesis problem as image completion.

Image Generation, Novel View Synthesis

Diversified Texture Synthesis with Feed-forward Networks

no code implementations CVPR 2017 Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang

Recent progress on deep discriminative and generative modeling has shown promising results in texture synthesis.

Texture Synthesis

Video Scene Parsing with Predictive Feature Learning

no code implementations ICCV 2017 Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang, Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, Shuicheng Yan

In this way, the network can effectively learn to capture video dynamics and temporal context, which are critical clues for video scene parsing, without requiring extra manual annotations.

Frame, Representation Learning +1

Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision

2 code implementations NeurIPS 2016 Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, Honglak Lee

We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes.

3D Object Reconstruction

Object Tracking via Dual Linear Structured SVM and Explicit Feature Map

no code implementations CVPR 2016 Jifeng Ning, Jimei Yang, Shaojie Jiang, Lei Zhang, Ming-Hsuan Yang

Structured support vector machine (SSVM) based methods have demonstrated encouraging performance on recent object tracking benchmarks.

Object Tracking, Online Learning

Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis

no code implementations NeurIPS 2015 Jimei Yang, Scott Reed, Ming-Hsuan Yang, Honglak Lee

An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image.

Multi-Objective Convolutional Learning for Face Labeling

no code implementations CVPR 2015 Sifei Liu, Jimei Yang, Chang Huang, Ming-Hsuan Yang

This paper formulates face labeling as a conditional random field with unary and pairwise classifiers.

PatchCut: Data-Driven Object Segmentation via Local Shape Transfer

no code implementations CVPR 2015 Jimei Yang, Brian Price, Scott Cohen, Zhe Lin, Ming-Hsuan Yang

The transferred local shape masks constitute a patch-level segmentation solution space and we thus develop a novel cascade algorithm, PatchCut, for coarse-to-fine object segmentation.

Object Discovery, Semantic Segmentation
