Search Results for author: Yu-Kun Lai

Found 79 papers, 27 papers with code

Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing

no code implementations15 Mar 2024 Tian-Xing Xu, WenBo Hu, Yu-Kun Lai, Ying Shan, Song-Hai Zhang

3D Gaussian splatting, emerging as a groundbreaking approach, has drawn increasing attention for its capabilities of high-fidelity reconstruction and real-time rendering.

Disentanglement

Mesh-based Gaussian Splatting for Real-time Large-scale Deformation

no code implementations7 Feb 2024 Lin Gao, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai

Based on this representation, we further introduce a large-scale Gaussian deformation technique to enable deformable GS, which alters the parameters of 3D Gaussians according to the manipulation of the associated mesh.
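
The deformation step described above can be pictured as keeping every Gaussian attached to a mesh element, so that editing the mesh drives the splats. Below is a minimal NumPy sketch of one such binding (a barycentric anchor plus a per-face rotation); this binding scheme is an illustrative assumption, not the paper's exact formulation.

    import numpy as np

    def face_frame(v0, v1, v2):
        """Build an orthonormal frame from a triangle (one simple convention)."""
        e1 = v1 - v0
        n = np.cross(e1, v2 - v0)
        e1 = e1 / np.linalg.norm(e1)
        n = n / np.linalg.norm(n)
        e2 = np.cross(n, e1)
        return np.stack([e1, e2, n], axis=1)

    def deform_gaussian(mu, R_cov, bary, tri_rest, tri_deformed):
        """Update one Gaussian bound to a triangle when that triangle deforms.

        mu: 3D mean; R_cov: 3x3 rotation factor of the covariance;
        bary: barycentric coordinates of the binding point; tri_*: 3x3 vertex arrays.
        """
        F_rest = face_frame(*tri_rest)
        F_def = face_frame(*tri_deformed)
        delta_R = F_def @ F_rest.T                 # local rotation of the face
        anchor_rest = bary @ tri_rest              # binding point before editing
        anchor_def = bary @ tri_deformed           # binding point after editing
        mu_new = anchor_def + delta_R @ (mu - anchor_rest)
        return mu_new, delta_R @ R_cov

Applying such a per-Gaussian update after each mesh edit is what lets the splats follow large-scale deformations of the associated mesh.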

VRMM: A Volumetric Relightable Morphable Head Model

no code implementations6 Feb 2024 Haotian Yang, Mingwu Zheng, Chongyang Ma, Yu-Kun Lai, Pengfei Wan, Haibin Huang

In this paper, we introduce the Volumetric Relightable Morphable Model (VRMM), a novel volumetric and parametric facial prior for 3D face modeling.

3D Face Reconstruction Self-Supervised Learning

Layered 3D Human Generation via Semantic-Aware Diffusion Model

no code implementations10 Dec 2023 Yi Wang, Jian Ma, Ruizhi Shao, Qiao Feng, Yu-Kun Lai, Yebin Liu, Kun Li

To keep the generated clothing consistent with the target text, we propose a semantic-confidence strategy for clothing that can eliminate the non-clothing content generated by the model.

R2Human: Real-Time 3D Human Appearance Rendering from a Single Image

no code implementations10 Dec 2023 Yuanwang Yang, Qiao Feng, Yu-Kun Lai, Kun Li

In this paper, we propose R2Human, the first approach for real-time inference and rendering of photorealistic 3D human appearance from a single image.

Neural Rendering

High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos

no code implementations2 Nov 2023 Xiongzheng Li, Jinsong Zhang, Yu-Kun Lai, Jingyu Yang, Kun Li

To alleviate the ambiguity of estimating 3D garments from monocular videos, we design a multi-hypothesis deformation module that learns spatial representations of multiple plausible deformations.

Garment Reconstruction

Towards Grouping in Large Scenes with Occlusion-aware Spatio-temporal Transformers

no code implementations30 Oct 2023 Jinsong Zhang, Lingfeng Gu, Yu-Kun Lai, Xueyang Wang, Kun Li

To explore the potential spatio-temporal relationship, we propose spatio-temporal transformers to simultaneously extract trajectory information and fuse inter-person features in a hierarchical manner.

Feature Proliferation -- the "Cancer" in StyleGAN and its Treatments

1 code implementation ICCV 2023 Shuang Song, Yuanbang Liang, Jing Wu, Yu-Kun Lai, Yipeng Qin

Thanks to our discovery of Feature Proliferation, the proposed feature rescaling method is less destructive and retains more useful image features than the truncation trick, as it is more fine-grained and works in a lower-level feature space rather than a high-level latent space.

Image Generation
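
As a rough illustration of rescaling in feature space rather than truncating in latent space, the snippet below damps channels whose magnitude is a statistical outlier within a StyleGAN feature map. The detection criterion and rescaling rule here are assumptions for illustration, not the paper's exact method.

    import torch

    def rescale_dominant_features(feat, tau=3.0):
        """Damp feature channels whose magnitude is an outlier (illustrative rule).

        feat: (N, C, H, W) intermediate feature map of a StyleGAN synthesis layer.
        """
        norms = feat.flatten(2).norm(dim=2)                  # per-channel magnitude (N, C)
        mean = norms.mean(dim=1, keepdim=True)
        std = norms.std(dim=1, keepdim=True)
        scale = torch.clamp((mean + tau * std) / (norms + 1e-8), max=1.0)
        return feat * scale[:, :, None, None]

Because the correction acts on individual feature channels, it can be applied selectively instead of globally pulling latent codes toward the mean as the truncation trick does.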

Towards Practical Capture of High-Fidelity Relightable Avatars

no code implementations8 Sep 2023 Haotian Yang, Mingwu Zheng, Wanquan Feng, Haibin Huang, Yu-Kun Lai, Pengfei Wan, Zhongyuan Wang, Chongyang Ma

Specifically, TRAvatar is trained with dynamic image sequences captured in a Light Stage under varying lighting conditions, enabling realistic relighting and real-time animation for avatars in diverse scenes.

Generating Animatable 3D Cartoon Faces from Single Portraits

no code implementations4 Jul 2023 Chuanyu Pan, Guowei Yang, TaiJiang Mu, Yu-Kun Lai

With the booming of virtual reality (VR) technology, there is a growing need for customized 3D avatars.

Towards Artistic Image Aesthetics Assessment: a Large-scale Dataset and a New Method

1 code implementation CVPR 2023 Ran Yi, Haoyuan Tian, Zhihao Gu, Yu-Kun Lai, Paul L. Rosin

To fill the gap in the field of artistic image aesthetics assessment (AIAA), we first introduce a large-scale AIAA dataset: Boldbrush Artistic Image Dataset (BAID), which consists of 60,337 artistic images covering various art forms, with more than 360,000 votes from online users.

Recent Advances of Deep Robotic Affordance Learning: A Reinforcement Learning Perspective

no code implementations9 Mar 2023 Xintong Yang, Ze Ji, Jing Wu, Yu-Kun Lai

As a popular concept proposed in the field of psychology, affordance has been regarded as one of the important abilities that enable humans to understand and interact with the environment.

reinforcement-learning Reinforcement Learning (RL)

MBPTrack: Improving 3D Point Cloud Tracking with Memory Networks and Box Priors

no code implementations ICCV 2023 Tian-Xing Xu, Yuan-Chen Guo, Yu-Kun Lai, Song-Hai Zhang

To address these issues, we present MBPTrack, which adopts a Memory mechanism to utilize past information and formulates localization in a coarse-to-fine scheme using Box Priors given in the first frame.

3D Single Object Tracking Autonomous Driving +1

Pose-Controllable 3D Facial Animation Synthesis using Hierarchical Audio-Vertex Attention

no code implementations24 Feb 2023 Bin Liu, Xiaolin Wei, Bo Li, Junjie Cao, Yu-Kun Lai

In this paper, a novel pose-controllable 3D facial animation synthesis method is proposed by utilizing hierarchical audio-vertex attention.

Attribute Face Model

SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation with Fine-Grained Geometry

no code implementations16 Feb 2023 Lin Gao, Jia-Mu Sun, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Jie Yang

We propose SCENEHGN, a hierarchical graph network for 3D indoor scenes that takes into account the full hierarchy from the room level to the object level, then finally to the object part level.

Scene Generation

E3Sym: Leveraging E(3) Invariance for Unsupervised 3D Planar Reflective Symmetry Detection

2 code implementations ICCV 2023 Ren-Wu Li, Ling-Xiao Zhang, Chunpeng Li, Yu-Kun Lai, Lin Gao

E3Sym establishes robust point correspondences through the use of E(3) invariant features extracted from a lightweight neural network, from which the dense symmetry prediction is produced.

Symmetry Detection

Abstract Demonstrations and Adaptive Exploration for Efficient and Stable Multi-step Sparse Reward Reinforcement Learning

1 code implementation19 Jul 2022 Xintong Yang, Ze Ji, Jing Wu, Yu-Kun Lai

Although Deep Reinforcement Learning (DRL) has been popular in many disciplines including robotics, state-of-the-art DRL algorithms still struggle to learn long-horizon, multi-step and sparse reward tasks, such as stacking several blocks given only a task-completion reward signal.

Exploring and Exploiting Hubness Priors for High-Quality GAN Latent Sampling

1 code implementation13 Jun 2022 Yuanbang Liang, Jing Wu, Yu-Kun Lai, Yipeng Qin

Despite the extensive studies on Generative Adversarial Networks (GANs), how to reliably sample high-quality images from their latent spaces remains an under-explored topic.

Vocal Bursts Intensity Prediction

FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction

no code implementations5 Jun 2022 Qiao Feng, Yebin Liu, Yu-Kun Lai, Jingyu Yang, Kun Li

Based on FOF, we design the first 30+FPS high-fidelity real-time monocular human reconstruction framework.
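
The Fourier Occupancy Field stores, for each pixel, the occupancy function along the viewing direction as a short vector of Fourier coefficients. A minimal sketch of decoding one pixel is shown below; the coefficient layout is assumed for illustration.

    import numpy as np

    def occupancy_from_fourier(coeffs, num_samples=128):
        """Evaluate a truncated Fourier series as a 1D occupancy profile along z.

        coeffs: assumed layout [a0, a1, b1, ..., aK, bK] for one pixel.
        Returns occupancy values at num_samples depths in [0, 1).
        """
        z = np.linspace(0.0, 1.0, num_samples, endpoint=False)
        K = (len(coeffs) - 1) // 2
        occ = np.full_like(z, coeffs[0])
        for k in range(1, K + 1):
            occ += coeffs[2 * k - 1] * np.cos(2 * np.pi * k * z)
            occ += coeffs[2 * k] * np.sin(2 * np.pi * k * z)
        return occ  # threshold to recover the inside/outside intervals

Because only a handful of coefficients per pixel need to be predicted, decoding of this kind stays cheap enough for the real-time rates quoted in the entry.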

StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning

no code implementations CVPR 2022 Yi-Hua Huang, Yue He, Yu-Jie Yuan, Yu-Kun Lai, Lin Gao

We first pre-train a standard NeRF of the 3D scene to be stylized and replace its color prediction module with a style network to obtain a stylized NeRF.

Image Stylization

BBDM: Image-to-image Translation with Brownian Bridge Diffusion Models

1 code implementation CVPR 2023 Bo Li, Kaitao Xue, Bin Liu, Yu-Kun Lai

In this paper, a novel image-to-image translation method based on the Brownian Bridge Diffusion Model (BBDM) is proposed, which models image-to-image translation as a stochastic Brownian bridge process, and learns the translation between two domains directly through the bidirectional diffusion process rather than a conditional generation process.

Image-to-Image Translation Translation
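
A Brownian bridge ties the diffusion trajectory to both domains: the process equals one image at the first step, is pinned to its paired image at the last step, and carries noise that vanishes at both endpoints. The sketch below shows one common discrete parameterization; the exact schedule and bridge direction used in the paper may differ.

    import math
    import torch

    def brownian_bridge_sample(x0, y, t, T, s=1.0):
        """Sample x_t from a discrete Brownian bridge with x_0 at t=0 and y at t=T."""
        m_t = t / T                               # interpolation weight in [0, 1]
        var = 2.0 * s * (m_t - m_t ** 2)          # variance vanishes at both endpoints
        eps = torch.randn_like(x0)
        return (1.0 - m_t) * x0 + m_t * y + math.sqrt(var) * eps

Training such a bridge typically amounts to predicting the injected noise at random t, so sampling can walk directly from one domain to the other without a separate conditioning branch.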

NeRF-Editing: Geometry Editing of Neural Radiance Fields

no code implementations CVPR 2022 Yu-Jie Yuan, Yang-tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, Lin Gao

In this paper, we propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene, and synthesizes the novel view images of the edited scene without re-training the network.

Neural Rendering Novel View Synthesis

Playing Lottery Tickets in Style Transfer Models

no code implementations25 Mar 2022 Meihao Kong, Jing Huo, Wenbin Li, Jing Wu, Yu-Kun Lai, Yang Gao

(2) Using iterative magnitude pruning, we find the matching subnetworks at 89.2% sparsity in AdaIN and 73.7% sparsity in SANet, which demonstrates that style transfer models can play lottery tickets too.

Style Transfer
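
Iterative magnitude pruning, the procedure used above to find matching subnetworks in AdaIN and SANet, repeatedly trains the model, prunes the smallest-magnitude weights, and rewinds the survivors to their initial values. A generic PyTorch sketch follows; it is not the paper's exact training setup.

    import copy
    import torch
    import torch.nn.utils.prune as prune

    def iterative_magnitude_pruning(model, train_fn, rounds=5, frac=0.2):
        """Generic IMP loop: train, prune globally by magnitude, rewind, repeat."""
        init_state = copy.deepcopy(model.state_dict())     # theta_0 for rewinding
        targets = [(name, m) for name, m in model.named_modules()
                   if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
        for r in range(rounds):
            train_fn(model)                                # train to convergence
            sparsity = 1.0 - (1.0 - frac) ** (r + 1)       # cumulative sparsity target
            prune.global_unstructured([(m, "weight") for _, m in targets],
                                      pruning_method=prune.L1Unstructured,
                                      amount=sparsity)
            for name, m in targets:                        # rewind weights, keep masks
                m.weight_orig.data.copy_(init_state[name + ".weight"])
        return model

Running enough rounds of this loop yields high-sparsity subnetworks (such as the 89.2% and 73.7% figures above) that can then be retrained in isolation.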

Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data

1 code implementation8 Feb 2022 Ran Yi, Yong-Jin Liu, Yu-Kun Lai, Paul L. Rosin

In this paper, we propose a novel method to automatically transform face photos to portrait drawings using unpaired training data with two new features, i.e., our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.

MuSCLe: A Multi-Strategy Contrastive Learning Framework for Weakly Supervised Semantic Segmentation

no code implementations18 Jan 2022 Kunhao Yuan, Gerald Schaefer, Yu-Kun Lai, Yifan Wang, Xiyao Liu, Lin Guan, Hui Fang

Weakly supervised semantic segmentation (WSSS) has gained significant popularity since it relies only on weak labels such as image level annotations rather than pixel level annotations required by supervised semantic segmentation (SSS) methods.

Contrastive Learning Segmentation +2

High-Fidelity Human Avatars From a Single RGB Camera

no code implementations CVPR 2022 Hao Zhao, Jinsong Zhang, Yu-Kun Lai, Zerong Zheng, Yingdi Xie, Yebin Liu, Kun Li

To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures.

Neural Rendering Vocal Bursts Intensity Prediction

E-ffective: A Visual Analytic System for Exploring the Emotion and Effectiveness of Inspirational Speeches

no code implementations28 Oct 2021 Kevin Maher, Zeyuan Huang, Jiancheng Song, Xiaoming Deng, Yu-Kun Lai, Cuixia Ma, Hao Wang, Yong-Jin Liu, Hongan Wang

We further studied the usability of the system with speaking novices and experts, focusing on how it assists analysis of inspirational speech effectiveness.

Robust Pose Transfer with Dynamic Details using Neural Video Rendering

no code implementations27 Jun 2021 Yang-tian Sun, Hao-Zhi Huang, Xuan Wang, Yu-Kun Lai, Wei Liu, Lin Gao

Moreover, we introduce a concise temporal loss in the training stage to suppress the detail flickering that is made more visible due to high-quality dynamic details generated by our method.

Neural Rendering Pose Transfer +1
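
A temporal loss of this kind usually warps the previous generated frame to the current one with optical flow and penalizes any remaining difference where the flow is reliable. The formulation below is a common one, given as an assumption rather than the paper's exact loss.

    import torch
    import torch.nn.functional as F

    def temporal_loss(curr, prev, flow, valid_mask):
        """curr, prev: (N, C, H, W) generated frames.
        flow: (N, 2, H, W) backward optical flow from curr to prev, in pixels
              (flow[:, 0] = dx, flow[:, 1] = dy, assumed channel order).
        valid_mask: (N, 1, H, W), 1 where the flow is reliable (non-occluded)."""
        n, _, h, w = curr.shape
        ys, xs = torch.meshgrid(torch.arange(h, device=curr.device),
                                torch.arange(w, device=curr.device), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float()        # (2, H, W), x first
        coords = base.unsqueeze(0) + flow                  # follow the flow
        coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0      # normalize to [-1, 1]
        coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid = torch.stack((coords_x, coords_y), dim=-1)   # (N, H, W, 2)
        warped_prev = F.grid_sample(prev, grid, align_corners=True)
        return (valid_mask * (curr - warped_prev).abs()).mean()

Penalizing only the non-occluded regions keeps the loss from fighting genuine content changes between frames.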

Hierarchical Layout-Aware Graph Convolutional Network for Unified Aesthetics Assessment

1 code implementation CVPR 2021 Dongyu She, Yu-Kun Lai, Gaoxiong Yi, Kun Xu

The first LA-GCN module constructs an aesthetics-related graph in the coordinate space and performs reasoning over spatial nodes.

An Open-Source Multi-Goal Reinforcement Learning Environment for Robotic Manipulation with Pybullet

2 code implementations12 May 2021 Xintong Yang, Ze Ji, Jing Wu, Yu-Kun Lai

This work re-implements the OpenAI Gym multi-goal robotic manipulation environment, originally based on the commercial Mujoco engine, onto the open-source Pybullet engine.

Multi-Goal Reinforcement Learning OpenAI Gym +1

PISE: Person Image Synthesis and Editing with Decoupled GAN

1 code implementation CVPR 2021 Jinsong Zhang, Kun Li, Yu-Kun Lai, Jingyu Yang

The results of qualitative and quantitative experiments demonstrate the superiority of our model on human pose transfer.

Human Parsing Pose Transfer

Deep Deformation Detail Synthesis for Thin Shell Models

no code implementations23 Feb 2021 Lan Chen, Lin Gao, Jie Yang, Shibiao Xu, Juntao Ye, Xiaopeng Zhang, Yu-Kun Lai

Moreover, as such methods only add details, they require coarse meshes to be close to fine meshes, which can either be impossible or require unrealistic constraints when generating fine meshes.

Single Image 3D Shape Retrieval via Cross-Modal Instance and Category Contrastive Learning

no code implementations ICCV 2021 Ming-Xian Lin, Jie Yang, He Wang, Yu-Kun Lai, Rongfei Jia, Binqiang Zhao, Lin Gao

Inspired by the great success in recent contrastive learning works on self-supervised representation learning, we propose a novel IBSR pipeline leveraging contrastive learning.

3D Shape Retrieval Contrastive Learning +4
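
The contrastive objective in such a pipeline pulls an image embedding toward the embedding of its paired 3D shape and pushes it away from the other shapes in the batch. A standard symmetric InfoNCE sketch is given below; the instance- and category-level terms of the paper are not reproduced.

    import torch
    import torch.nn.functional as F

    def cross_modal_info_nce(img_emb, shape_emb, temperature=0.07):
        """img_emb, shape_emb: (N, D) embeddings of paired images and 3D shapes.
        Row i of each tensor is a positive pair; all other rows are negatives."""
        img_emb = F.normalize(img_emb, dim=1)
        shape_emb = F.normalize(shape_emb, dim=1)
        logits = img_emb @ shape_emb.t() / temperature     # (N, N) similarity matrix
        targets = torch.arange(img_emb.size(0), device=img_emb.device)
        # symmetric loss: image-to-shape and shape-to-image retrieval
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

At test time, retrieval then reduces to a nearest-neighbour search between the two embedding spaces.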

MLVSNet: Multi-Level Voting Siamese Network for 3D Visual Tracking

1 code implementation ICCV 2021 Zhoutao Wang, Qian Xie, Yu-Kun Lai, Jing Wu, Kun Long, Jun Wang

To deal with sparsity in outdoor 3D point clouds, we propose to perform Hough voting on multi-level features to get more vote centers and retain more useful information, instead of voting only on the final level feature as in previous methods.

3D Object Detection object-detection +1
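
Hough voting on point features lets each seed point predict an offset toward the object centre, so that votes from different parts of the object agree even when the point cloud is sparse; voting on multi-level features repeats the same step per level. The per-level voting module below is an illustrative sketch, not the paper's architecture.

    import torch
    import torch.nn as nn

    class VotingHead(nn.Module):
        """Predict per-seed offsets toward the object centre and form vote points."""
        def __init__(self, feat_dim):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                nn.Linear(feat_dim, 3 + feat_dim))         # xyz offset + feature residual

        def forward(self, xyz, feats):
            # xyz: (N, P, 3) seed coordinates, feats: (N, P, C) seed features
            out = self.mlp(feats)
            votes = xyz + out[..., :3]                     # each seed votes for a centre
            vote_feats = feats + out[..., 3:]              # refine features alongside
            return votes, vote_feats

Vote centres collected from every level can then be clustered and aggregated before the final localization stage.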

PoNA: Pose-guided Non-local Attention for Human Pose Transfer

1 code implementation13 Dec 2020 Kun Li, Jinsong Zhang, Yebin Liu, Yu-Kun Lai, Qionghai Dai

In each block, we propose a pose-guided non-local attention (PoNA) mechanism with a long-range dependency scheme to select more important regions of image features to transfer.

Generative Adversarial Network Person Re-Identification +1
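
The long-range dependency in such a block comes from non-local attention, where every spatial position attends to every other position of the feature map. A standard embedded-Gaussian non-local block is sketched below; the pose-guidance part of PoNA is omitted.

    import torch
    import torch.nn as nn

    class NonLocalBlock(nn.Module):
        """Standard embedded-Gaussian non-local block (without pose guidance)."""
        def __init__(self, channels, reduced=None):
            super().__init__()
            reduced = reduced or channels // 2
            self.theta = nn.Conv2d(channels, reduced, 1)
            self.phi = nn.Conv2d(channels, reduced, 1)
            self.g = nn.Conv2d(channels, reduced, 1)
            self.out = nn.Conv2d(reduced, channels, 1)

        def forward(self, x):
            n, _, h, w = x.shape
            q = self.theta(x).flatten(2).transpose(1, 2)   # (N, HW, C')
            k = self.phi(x).flatten(2)                     # (N, C', HW)
            v = self.g(x).flatten(2).transpose(1, 2)       # (N, HW, C')
            attn = torch.softmax(q @ k, dim=-1)            # (N, HW, HW) affinity
            y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
            return x + self.out(y)                         # residual connection

In a pose-transfer setting, the attention would additionally be steered by pose features so that the most relevant source-image regions are selected for each target position, as described in the entry above.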

Multiscale Mesh Deformation Component Analysis with Attention-based Autoencoders

no code implementations4 Dec 2020 Jie Yang, Lin Gao, Qingyang Tan, Yihua Huang, Shihong Xia, Yu-Kun Lai

The attention mechanism is designed to learn to softly weight multi-scale deformation components in active deformation regions, and the stacked attention-based autoencoder is learned to represent the deformation components at different scales.

TM-NET: Deep Generative Networks for Textured Meshes

no code implementations13 Oct 2020 Lin Gao, Tong Wu, Yu-Jie Yuan, Ming-Xian Lin, Yu-Kun Lai, Hao Zhang

We introduce a conditional autoregressive model for texture generation, which can be conditioned on both part geometry and textures already generated for other parts to achieve texture compatibility.

Graphics

RISA-Net: Rotation-Invariant Structure-Aware Network for Fine-Grained 3D Shape Retrieval

1 code implementation2 Oct 2020 Rao Fu, Jie Yang, Jiawei Sun, Fang-Lue Zhang, Yu-Kun Lai, Lin Gao

Fine-grained 3D shape retrieval aims to retrieve 3D shapes similar to a query shape in a repository with models belonging to the same class, which requires shape descriptors to be capable of representing detailed geometric information to discriminate shapes with globally similar structures.

3D Object Retrieval 3D Shape Retrieval +1

NPRportrait 1.0: A Three-Level Benchmark for Non-Photorealistic Rendering of Portraits

no code implementations1 Sep 2020 Paul L. Rosin, Yu-Kun Lai, David Mould, Ran Yi, Itamar Berger, Lars Doyle, Seungyong Lee, Chuan Li, Yong-Jin Liu, Amir Semmo, Ariel Shamir, Minjung Son, Holger Winnemoller

Despite the recent upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer, the state of performance evaluation in this field is limited, especially compared to the norms in the computer vision and machine learning communities.

Style Transfer

DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation

1 code implementation12 Aug 2020 Jie Yang, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Lin Gao

While significant progress has been made, especially with recent deep generative models, it remains a challenge to synthesize high-quality shapes with rich geometric details and complex structure, in a controllable manner.

3D Shape Generation

Image-based Portrait Engraving

1 code implementation12 Aug 2020 Paul L. Rosin, Yu-Kun Lai

This paper describes a simple image-based method that applies engraving stylisation to portraits using ordered dithering.

Face Detection
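
Ordered dithering compares each pixel against a tiled threshold (index) matrix such as a Bayer matrix, which is what yields the regular, engraving-like dot pattern. A minimal grayscale version is sketched below; the portrait-specific ordering and line patterns of the paper are not reproduced.

    import numpy as np

    BAYER_4 = (1.0 / 16.0) * np.array([[ 0,  8,  2, 10],
                                       [12,  4, 14,  6],
                                       [ 3, 11,  1,  9],
                                       [15,  7, 13,  5]])

    def ordered_dither(gray):
        """gray: 2D float array in [0, 1]; returns a binary (0/1) dithered image."""
        h, w = gray.shape
        reps = (int(np.ceil(h / 4)), int(np.ceil(w / 4)))
        thresholds = np.tile(BAYER_4, reps)[:h, :w]        # tile thresholds over the image
        return (gray > thresholds).astype(np.uint8)

Replacing the Bayer matrix with an ordering that follows strokes or contours is one way such a scheme can be adapted toward an engraved look.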

Adaptive 3D Face Reconstruction from a Single Image

no code implementations8 Jul 2020 Kun Li, Jing Yang, Nianhong Jiao, Jinsong Zhang, Yu-Kun Lai

3D face reconstruction from a single image is a challenging problem, especially under partial occlusions and extreme poses.

3D Face Reconstruction Pose Estimation

3D Reconstruction of Clothes using a Human Body Model and its Application to Image-based Virtual Try-On

no code implementations CVPRW 2020 Matiur Rahman Minar, Thai Thanh Tuan, Heejune Ahn, Paul Rosin, Yu-Kun Lai

Due to the correspondence, resulting 3D clothing models can be easily transferred to the target human models with different poses and shapes estimated from 2D images.

3D Reconstruction Virtual Try-on

CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On

2 code implementations CVPRW 2020 Matiur Rahman Minar, Thai Thanh Tuan, Heejune Ahn, Paul Rosin, Yu-Kun Lai

Recently proposed Image-based virtual try-on (VTON) approaches have several challenges regarding diverse human poses and cloth styles.

 Ranked #1 on Virtual Try-on on VITON (IS metric)

Virtual Try-on

Manifold Alignment for Semantically Aligned Style Transfer

1 code implementation ICCV 2021 Jing Huo, Shiyin Jin, Wenbin Li, Jing Wu, Yu-Kun Lai, Yinghuan Shi, Yang Gao

In this paper, we make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.

Semantic Segmentation Style Transfer

Deep Line Art Video Colorization with a Few References

no code implementations24 Mar 2020 Min Shi, Jia-Qi Zhang, Shu-Yu Chen, Lin Gao, Yu-Kun Lai, Fang-Lue Zhang

The color transform network takes the target line art images as well as the line art and color images of one or more reference images as input, and generates corresponding target color images.

Colorization

3D-CariGAN: An End-to-End Solution to 3D Caricature Generation from Face Photos

1 code implementation15 Mar 2020 Zipeng Ye, Mengfei Xia, Yanan sun, Ran Yi, MinJing Yu, Juyong Zhang, Yu-Kun Lai, Yong-Jin Liu

The most challenging issue for our system is that the source domain of face photos (characterized by normal 2D faces) is significantly different from the target domain of 3D caricatures (characterized by 3D exaggerated face shapes and textures).

Caricature

A Survey on Deep Geometry Learning: From a Representation Perspective

no code implementations19 Feb 2020 Yun-Peng Xiao, Yu-Kun Lai, Fang-Lue Zhang, Chunpeng Li, Lin Gao

However, the performance for different applications largely depends on the representation used, and there is no unique representation that works well for all applications.

Graphics

MW-GAN: Multi-Warping GAN for Caricature Generation with Multi-Style Geometric Exaggeration

no code implementations7 Jan 2020 Haodi Hou, Jing Huo, Jing Wu, Yu-Kun Lai, Yang Gao

Given an input face photo, the goal of caricature generation is to produce stylized, exaggerated caricatures that share the same identity as the photo.

Caricature Style Transfer

Learning-based Real-time Detection of Intrinsic Reflectional Symmetry

no code implementations1 Nov 2019 Yi-Ling Qiao, Lin Gao, Shu-Zhi Liu, Ligang Liu, Yu-Kun Lai, Xilin Chen

In this paper, we propose a learning-based approach to intrinsic reflectional symmetry detection.

Symmetry Detection

PRS-Net: Planar Reflective Symmetry Detection Net for 3D Models

1 code implementation15 Oct 2019 Lin Gao, Ling-Xiao Zhang, Hsien-Yu Meng, Yi-Hui Ren, Yu-Kun Lai, Leif Kobbelt

In this paper, we present a novel learning framework to automatically discover global planar reflective symmetry of a 3D shape.

Symmetry Detection
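
A predicted symmetry plane can be scored by reflecting the point set across it and measuring how far the reflection drifts from the original shape. The symmetry-distance check below is a generic sketch of that idea, not PRS-Net's exact loss.

    import numpy as np

    def reflect_points(points, normal, offset):
        """Reflect points across the plane n.x + d = 0 (normal assumed unit length)."""
        dist = points @ normal + offset                    # signed distance to the plane
        return points - 2.0 * dist[:, None] * normal[None, :]

    def symmetry_error(points, normal, offset):
        """Mean nearest-neighbour distance between the shape and its reflection."""
        reflected = reflect_points(points, normal, offset)
        d2 = ((reflected[:, None, :] - points[None, :, :]) ** 2).sum(-1)   # (N, N)
        return np.sqrt(d2.min(axis=1)).mean()

A low error indicates a plausible global reflective symmetry, and a network that regresses the plane parameters can be supervised with a distance of this kind.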

SDM-NET: Deep Generative Network for Structured Deformable Mesh

no code implementations13 Aug 2019 Lin Gao, Jie Yang, Tong Wu, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai, Hao Zhang

At the structural level, we train a Structured Parts VAE (SP-VAE), which jointly learns the part structure of a shape collection and the part geometries, ensuring a coherence between global shape structure and surface details.

Mesh Variational Autoencoders with Edge Contraction Pooling

1 code implementation7 Aug 2019 Yu-Jie Yuan, Yu-Kun Lai, Jie Yang, Hongbo Fu, Lin Gao

3D shape analysis is an important research topic in computer vision and graphics.

Learning from Web Data: the Benefit of Unsupervised Object Localization

no code implementations21 Dec 2018 Xiaoxiao Sun, Liang Zheng, Yu-Kun Lai, Jufeng Yang

In this work, we first systematically study the built-in gap between the web and standard datasets, i.e., different data distributions between the two kinds of data.

Fine-Grained Image Classification General Classification +2

VV-Net: Voxel VAE Net with Group Convolutions for Point Cloud Segmentation

1 code implementation 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2020 Hsien-Yu Meng, Lin Gao, Yu-Kun Lai, Dinesh Manocha

Our approach results in a good volumetric representation that effectively tackles noisy point cloud datasets and is more robust for learning.

Graphics

Learning Bidirectional LSTM Networks for Synthesizing 3D Mesh Animation Sequences

no code implementations4 Oct 2018 Yi-Ling Qiao, Lin Gao, Yu-Kun Lai, Shihong Xia

In this paper, we present a novel method for learning to synthesize 3D mesh animation sequences with long short-term memory (LSTM) blocks and mesh-based convolutional neural networks (CNNs).

Graphics

A Deep Learning Driven Active Framework for Segmentation of Large 3D Shape Collections

no code implementations17 Jul 2018 David George, Xianghua Xie, Yu-Kun Lai, Gary KL Tam

First, we propose a fast and relatively accurate feature-based deep learning model to provide dataset-wide segmentation predictions.

Active Learning Segmentation +1

HDFD - A High Deformation Facial Dynamics Benchmark for Evaluation of Non-Rigid Surface Registration and Classification

no code implementations9 Jul 2018 Gareth Andrews, Sam Endean, Roberto Dyke, Yu-Kun Lai, Gwenno Ffrancon, Gary KL Tam

In this paper, we present a novel facial dynamic dataset HDFD which addresses the gap of existing datasets, including 4D funny faces with substantial non-isometric deformation, and 4D visual-audio faces of spoken phrases in a minority language (Welsh).

General Classification

Content-Sensitive Supervoxels via Uniform Tessellations on Video Manifolds

no code implementations CVPR 2018 Ran Yi, Yong-Jin Liu, Yu-Kun Lai

We propose an efficient Lloyd-like method with a splitting-merging scheme to compute a uniform tessellation on M, which induces the CSS in X. Theoretically our method has a good competitive ratio O(1).

Weakly Supervised Coupled Networks for Visual Sentiment Analysis

1 code implementation CVPR 2018 Jufeng Yang, Dongyu She, Yu-Kun Lai, Paul L. Rosin, Ming-Hsuan Yang

The second branch utilizes both the holistic and localized information by coupling the sentiment map with deep features for robust classification.

General Classification Robust classification +1

CartoonGAN: Generative Adversarial Networks for Photo Cartoonization

7 code implementations CVPR 2018 Yang Chen, Yu-Kun Lai, Yong-Jin Liu

Two novel losses suitable for cartoonization are proposed: (1) a semantic content loss, which is formulated as a sparse regularization in the high-level feature maps of the VGG network to cope with substantial style variation between photos and cartoons, and (2) an edge-promoting adversarial loss for preserving clear edges.

Generative Adversarial Network Real-to-Cartoon translation
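
The two losses named above can be written compactly: the semantic content loss is a sparse (L1) penalty between high-level VGG feature maps of the photo and its cartoonized output, and the edge-promoting adversarial loss additionally treats edge-smoothed cartoons as fakes so that the generator learns to keep crisp edges. The sketch below assumes a pretrained VGG feature extractor and a logit-output discriminator, and simplifies the published formulation.

    import torch
    import torch.nn.functional as F

    def content_loss(vgg, photo, generated):
        """Semantic content loss: L1 between high-level VGG feature maps."""
        return F.l1_loss(vgg(generated), vgg(photo))

    def discriminator_loss(D, cartoon, edge_smoothed, generated):
        """Edge-promoting adversarial loss for D: real cartoons are real;
        generated images AND edge-smoothed cartoons count as fake."""
        real = D(cartoon)
        fake_gen = D(generated.detach())
        fake_edge = D(edge_smoothed)
        return (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
                + F.binary_cross_entropy_with_logits(fake_gen, torch.zeros_like(fake_gen))
                + F.binary_cross_entropy_with_logits(fake_edge, torch.zeros_like(fake_edge)))

    def generator_adv_loss(D, generated):
        out = D(generated)
        return F.binary_cross_entropy_with_logits(out, torch.ones_like(out))

The generator is trained on the adversarial term plus a weighted content term, which keeps the cartoonized output faithful to the input photo while adopting the cartoon style.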

Alive Caricature from 2D to 3D

1 code implementation CVPR 2018 Qianyi Wu, Juyong Zhang, Yu-Kun Lai, Jianmin Zheng, Jianfei Cai

Caricature is an art form that expresses subjects in abstract, simple and exaggerated view.

Caricature

Mesh-based Autoencoders for Localized Deformation Component Analysis

no code implementations13 Sep 2017 Qingyang Tan, Lin Gao, Yu-Kun Lai, Jie Yang, Shihong Xia

Spatially localized deformation components are very useful for shape analysis and synthesis in 3D geometry processing.

Graphics

Sparse Data Driven Mesh Deformation

no code implementations5 Sep 2017 Lin Gao, Yu-Kun Lai, Jie Yang, Ling-Xiao Zhang, Leif Kobbelt, Shihong Xia

This along with a suitably chosen deformation basis including spatially localized deformation modes leads to significant advantages, including more meaningful, reliable, and efficient deformations because fewer and localized deformation modes are applied.

Graphics

Automatic Semantic Style Transfer using Deep Convolutional Neural Networks and Soft Masks

1 code implementation31 Aug 2017 Huihuang Zhao, Paul L. Rosin, Yu-Kun Lai

This paper presents an automatic image synthesis method to transfer the style of an example image to a content image.

Image Generation Style Transfer

Learning to Rank Retargeted Images

no code implementations CVPR 2017 Yang Chen, Yong-Jin Liu, Yu-Kun Lai

Observing that it is challenging even for human subjects to give consistent scores for retargeting results of different source images, in this paper we propose a learning-based OQA method that predicts the ranking of a set of retargeted images with the same source image.

Image Retargeting Learning-To-Rank

Robust Non-Rigid Registration with Reweighted Position and Transformation Sparsity

no code implementations15 Mar 2017 Kun Li, Jingyu Yang, Yu-Kun Lai, Daoliang Guo

Non-rigid registration is challenging because it is ill-posed with high degrees of freedom and is thus sensitive to noise and outliers.

Position
