Search Results for author: Yong-Jin Liu

Found 43 papers, 20 papers with code

TNANet: A Temporal-Noise-Aware Neural Network for Suicidal Ideation Prediction with Noisy Physiological Data

no code implementations • 23 Jan 2024 • Niqi Liu, Fang Liu, Wenqi Ji, Xinxin Du, Xu Liu, Guozhen Zhao, Wenting Mu, Yong-Jin Liu

Current methods predominantly focus on image and text data or address artificially introduced noise, neglecting the complexities of natural noise in time-series analysis.

Binary Classification Photoplethysmography (PPG) +2

Text-Image Conditioned Diffusion for Consistent Text-to-3D Generation

no code implementations • 19 Dec 2023 • Yuze He, Yushi Bai, Matthieu Lin, Jenny Sheng, Yubin Hu, Qi Wang, Yu-Hui Wen, Yong-Jin Liu

By lifting the pre-trained 2D diffusion models into Neural Radiance Fields (NeRFs), text-to-3D generation methods have made great progress.

Text to 3D

Generalized Label-Efficient 3D Scene Parsing via Hierarchical Feature Aligned Pre-Training and Region-Aware Fine-tuning

1 code implementation • 1 Dec 2023 • Kangcheng Liu, Yong-Jin Liu, Kai Tang, Ming Liu, Baoquan Chen

Deep neural network models have achieved remarkable progress in 3D scene understanding while trained in the closed-set setting and with full labels.

Contrastive Learning Few-Shot Learning +2

SMaRt: Improving GANs with Score Matching Regularity

no code implementations • 30 Nov 2023 • Mengfei Xia, Yujun Shen, Ceyuan Yang, Ran Yi, Wenping Wang, Yong-Jin Liu

In this work, we revisit the mathematical foundations of GANs and theoretically reveal that the native adversarial loss for GAN training is insufficient to fix the problem of subsets of the generated data manifold, with positive Lebesgue measure, lying outside the real data manifold.
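
To make that idea concrete, the sketch below shows one plausible way a score-based regularizer could be attached to a standard generator loss; `score_net`, `lambda_sm`, and the exact form of the term are illustrative assumptions, not the paper's formulation.

```python
import torch

def generator_loss_with_score_regularizer(fake, disc, score_net, t, lambda_sm=0.1):
    """Illustrative sketch (not the paper's exact loss): add a term whose
    gradient pushes generated samples along a pretrained score model's
    estimate of grad_x log p_t(x), i.e. back toward the real data manifold."""
    adv = torch.nn.functional.softplus(-disc(fake)).mean()  # non-saturating GAN loss
    with torch.no_grad():
        direction = score_net(fake, t)  # hypothetical pretrained score-model handle
    # d/d(fake) of -(direction * fake).sum() is -direction, so gradient
    # descent moves `fake` along `direction` without backprop through score_net.
    reg = -(direction * fake).sum() / fake.shape[0]
    return adv + lambda_sm * reg
```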

Augmenting Unsupervised Reinforcement Learning with Self-Reference

no code implementations • 16 Nov 2023 • Andrew Zhao, Erle Zhu, Rui Lu, Matthieu Lin, Yong-Jin Liu, Gao Huang

Our approach achieves state-of-the-art results in terms of Interquartile Mean (IQM) performance and Optimality Gap reduction on the Unsupervised Reinforcement Learning Benchmark for model-free methods, recording an 86% IQM and a 16% Optimality Gap.
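
For reference, the Interquartile Mean reported here is conventionally computed as the mean of the middle 50% of scores across runs; a minimal NumPy sketch:

```python
import numpy as np

def interquartile_mean(scores):
    """Mean of the middle 50% of values: drop the bottom and top
    quartiles, then average what remains (robust to outlier runs)."""
    s = np.sort(np.asarray(scores, dtype=float).ravel())
    n = len(s)
    return float(s[n // 4 : n - n // 4].mean())

print(interquartile_mean([0.1, 0.8, 0.82, 0.85, 0.9, 2.0]))  # ignores 0.1 and 2.0
```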

Attribute reinforcement-learning +1

Towards More Accurate Diffusion Model Acceleration with A Timestep Aligner

no code implementations • 14 Oct 2023 • Mengfei Xia, Yujun Shen, Changsong Lei, Yu Zhou, Ran Yi, Deli Zhao, Wenping Wang, Yong-Jin Liu

By viewing the generation of diffusion models as a discretized integrating process, we argue that the quality drop is partly caused by applying an inaccurate integral direction to a timestep interval.
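
As a worked illustration of that view (notation mine, not the paper's): a sampler discretizes the reverse-time generation process, and a drift evaluated at a single timestep stands in for the direction over a whole interval:

```latex
x_{t_{i+1}} \;=\; x_{t_i} + \int_{t_i}^{t_{i+1}} v(x_t, t)\,\mathrm{d}t
\;\approx\; x_{t_i} + (t_{i+1} - t_i)\, v\!\left(x_{t_i},\, t_i\right)
```

Here $v$ is the learned drift; since the single evaluation point $t_i$ approximates the average direction over $[t_i, t_{i+1}]$, choosing (aligning) where in the interval $v$ is evaluated controls how inaccurate that direction is.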

Denoising

MMPI: a Flexible Radiance Field Representation by Multiple Multi-plane Images Blending

no code implementations • 30 Sep 2023 • Yuze He, Peng Wang, Yubin Hu, Wang Zhao, Ran Yi, Yong-Jin Liu, Wenping Wang

In this paper, we explore the potential of MPI and show that MPI can synthesize high-quality novel views of complex scenes with diverse camera distributions and view directions, not limited to simple forward-facing scenes.

Autonomous Driving Novel View Synthesis

DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models

no code implementations • 30 Sep 2023 • Zhiyao Sun, Tian Lv, Sheng Ye, Matthieu Gaetan Lin, Jenny Sheng, Yu-Hui Wen, MinJing Yu, Yong-Jin Liu

The generation of stylistic 3D facial animations driven by speech poses a significant challenge as it requires learning a many-to-many mapping between speech, style, and the corresponding natural facial motion.

FF-LOGO: Cross-Modality Point Cloud Registration with Feature Filtering and Local to Global Optimization

no code implementations • 16 Sep 2023 • Nan Ma, Mohan Wang, Yiheng Han, Yong-Jin Liu

We propose FF-LOGO, a cross-modality point cloud registration framework with feature filtering and local-to-global optimization.

Feature Correlation Point Cloud Registration

Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement

1 code implementation • 14 Sep 2023 • Sheng Ye, Yubin Hu, Matthieu Lin, Yu-Hui Wen, Wang Zhao, Yong-Jin Liu, Wenping Wang

To enhance the normal priors, we introduce a simple yet effective image sharpening and denoising technique, coupled with a network that estimates the pixel-wise uncertainty of the predicted surface normal vectors.
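
As one plausible instantiation of such a "simple" sharpen-and-denoise preprocessing step (an assumption; the paper's exact recipe may differ), an unsharp-mask pass after light denoising could look like:

```python
import cv2

def sharpen_and_denoise(img_bgr, sigma=1.0, amount=1.5):
    """Illustrative sketch (assumed, not the paper's exact technique):
    lightly denoise, then apply an unsharp mask to restore edges before
    feeding the image to a normal-prediction network."""
    den = cv2.fastNlMeansDenoisingColored(img_bgr, None, 3, 3, 7, 21)
    blur = cv2.GaussianBlur(den, (0, 0), sigma)
    return cv2.addWeighted(den, 1.0 + amount, blur, -amount, 0)
```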

Denoising Indoor Scene Reconstruction

ExpeL: LLM Agents Are Experiential Learners

1 code implementation • 20 Aug 2023 • Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, Gao Huang

The recent surge in research interest in applying large language models (LLMs) to decision-making tasks has flourished by leveraging the extensive world knowledge embedded in LLMs.

Decision Making Transfer Learning +1

O$^2$-Recon: Completing 3D Reconstruction of Occluded Objects in the Scene with a Pre-trained 2D Diffusion Model

1 code implementation • 18 Aug 2023 • Yubin Hu, Sheng Ye, Wang Zhao, Matthieu Lin, Yuze He, Yu-Hui Wen, Ying He, Yong-Jin Liu

In this paper, we propose a novel framework, empowered by a 2D diffusion-based in-painting model, to reconstruct complete surfaces for the hidden parts of objects.

3D Reconstruction Blocking

A Mixture of Surprises for Unsupervised Reinforcement Learning

1 code implementation • 13 Oct 2022 • Andrew Zhao, Matthieu Gaetan Lin, Yangguang Li, Yong-Jin Liu, Gao Huang

However, both strategies rely on a strong assumption: the entropy of the environment's dynamics is either high or low.

reinforcement-learning Reinforcement Learning (RL) +1

KRF: Keypoint Refinement with Fusion Network for 6D Pose Estimation

1 code implementation • 7 Oct 2022 • Irvin Haozhe Zhan, Yiheng Han, Yu-Ping Wang, Long Zeng, Yong-Jin Liu

The CIKP method introduces color information into registration and registers the point cloud around each keypoint to increase stability.

6D Pose Estimation

Continuously Controllable Facial Expression Editing in Talking Face Videos

no code implementations • 17 Sep 2022 • Zhiyao Sun, Yu-Hui Wen, Tian Lv, Yanan Sun, Ziyang Zhang, Yaoyuan Wang, Yong-Jin Liu

In this paper, we propose a high-quality facial expression editing method for talking face videos, allowing the user to control the target emotion in the edited video continuously.

Image-to-Image Translation Video Generation

ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild

1 code implementation • 19 Jul 2022 • Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, Yong-Jin Liu

In addition, our method retains reasonable camera-pose accuracy on fully static scenes and consistently outperforms strong state-of-the-art dense-correspondence-based methods with end-to-end deep learning, demonstrating the potential of dense indirect methods based on optical flow and point trajectories.

Motion Segmentation Optical Flow Estimation +1

Dynamic Neural Textures: Generating Talking-Face Videos with Continuously Controllable Expressions

no code implementations • 13 Apr 2022 • Zipeng Ye, Zhiyao Sun, Yu-Hui Wen, Yanan Sun, Tian Lv, Ran Yi, Yong-Jin Liu

In this paper, we propose a method to generate talking-face videos with continuously controllable expressions in real-time.

Video Generation

PD-Flow: A Point Cloud Denoising Framework with Normalizing Flows

1 code implementation • 11 Mar 2022 • Aihua Mao, Zihui Du, Yu-Hui Wen, Jun Xuan, Yong-Jin Liu

By considering noisy point clouds as a joint distribution of clean points and noise, the denoised results can be derived from disentangling the noise counterpart from latent point representation, and the mapping between Euclidean and latent spaces is modeled by normalizing flows.
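
A minimal sketch of that disentanglement idea, assuming a hypothetical invertible `flow` with forward/inverse passes and a latent whose last `noise_dims` channels capture the noise factor:

```python
import torch

def flow_denoise(noisy_points, flow, noise_dims):
    """Sketch of flow-based denoising: map points to the latent space,
    zero the sub-space assumed to encode noise, and invert the flow.
    `flow.forward`/`flow.inverse` are hypothetical bijective mappings."""
    z = flow.forward(noisy_points)     # Euclidean -> latent
    z[..., -noise_dims:] = 0.0         # disentangle: drop the noise factor
    return flow.inverse(z)             # latent -> denoised points
```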

Denoising Disentanglement

Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data

1 code implementation • 8 Feb 2022 • Ran Yi, Yong-Jin Liu, Yu-Kun Lai, Paul L. Rosin

In this paper, we propose a novel method to automatically transform face photos into portrait drawings using unpaired training data, with two new features: our method can (1) learn to generate high-quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.

Video-based Facial Micro-Expression Analysis: A Survey of Datasets, Features and Algorithms

no code implementations • 30 Jan 2022 • Xianye Ben, Yi Ren, Junping Zhang, Su-Jing Wang, Kidiyo Kpalma, Weixiao Meng, Yong-Jin Liu

Unlike the conventional facial expressions, micro-expressions are involuntary and transient facial expressions capable of revealing the genuine emotions that people attempt to hide.

A Confidence-based Iterative Solver of Depths and Surface Normals for Deep Multi-view Stereo

1 code implementation • ICCV 2021 • Wang Zhao, Shaohui Liu, Yi Wei, Hengkai Guo, Yong-Jin Liu

Experimental results on ScanNet and RGB-D Scenes V2 demonstrate state-of-the-art performance of the proposed deep MVS system on multi-view depth estimation, with our proposed solver consistently improving the depth quality over both conventional and deep learning based MVS pipelines.

Depth Estimation

E-ffective: A Visual Analytic System for Exploring the Emotion and Effectiveness of Inspirational Speeches

no code implementations • 28 Oct 2021 • Kevin Maher, Zeyuan Huang, Jiancheng Song, Xiaoming Deng, Yu-Kun Lai, Cuixia Ma, Hao Wang, Yong-Jin Liu, Hongan Wang

We further studied the usability of the system with both novice and expert speakers, who used it to assist analysis of inspirational speech effectiveness.

PU-Flow: a Point Cloud Upsampling Network with Normalizing Flows

1 code implementation • 13 Jul 2021 • Aihua Mao, Zihui Du, Junhui Hou, Yaqi Duan, Yong-Jin Liu, Ying He

Point cloud upsampling aims to generate dense point clouds from given sparse ones, which is a challenging task due to the irregular and unordered nature of point sets.

Autoregressive Stylized Motion Synthesis With Generative Flow

no code implementations • CVPR 2021 • Yu-Hui Wen, Zhipeng Yang, Hongbo Fu, Lin Gao, Yanan Sun, Yong-Jin Liu

Motion style transfer is an important problem in many computer graphics and computer vision applications, including human animation, games, and robotics.

Motion Style Transfer Style Transfer

AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis

1 code implementation • ICCV 2021 • Yudong Guo, Keyu Chen, Sen Liang, Yong-Jin Liu, Hujun Bao, Juyong Zhang

Generating a high-fidelity talking-head video that fits the input audio sequence is a challenging problem that has received considerable attention recently.

Talking Face Generation

NPRportrait 1.0: A Three-Level Benchmark for Non-Photorealistic Rendering of Portraits

no code implementations • 1 Sep 2020 • Paul L. Rosin, Yu-Kun Lai, David Mould, Ran Yi, Itamar Berger, Lars Doyle, Seungyong Lee, Chuan Li, Yong-Jin Liu, Amir Semmo, Ariel Shamir, Minjung Son, Holger Winnemoller

Despite the recent upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer, the state of performance evaluation in this field is limited, especially compared to the norms in the computer vision and machine learning communities.

Style Transfer

Learning to Accelerate Decomposition for Multi-Directional 3D Printing

1 code implementation • 17 Mar 2020 • Chen-Ming Wu, Yong-Jin Liu, Charlie C. L. Wang

Different printing directions are employed in different regions to fabricate a model with dramatically less support (or even no support in many cases). To obtain an optimized decomposition, a large beam width needs to be used in the search algorithm, leading to very time-consuming computation.

3D-CariGAN: An End-to-End Solution to 3D Caricature Generation from Face Photos

1 code implementation • 15 Mar 2020 • Zipeng Ye, Mengfei Xia, Yanan Sun, Ran Yi, MinJing Yu, Juyong Zhang, Yu-Kun Lai, Yong-Jin Liu

The most challenging issue for our system is that the source domain of face photos (characterized by normal 2D faces) is significantly different from the target domain of 3D caricatures (characterized by 3D exaggerated face shapes and textures).

Caricature

Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose

1 code implementation • 24 Feb 2020 • Ran Yi, Zipeng Ye, Juyong Zhang, Hujun Bao, Yong-Jin Liu

In this paper, we address this problem by proposing a deep neural network model that takes an audio signal A of a source person and a very short video V of a target person as input, and outputs a synthesized high-quality talking face video with personalized head pose (making use of the visual information in V), expression and lip synchronization (by considering both A and V).

3D Face Animation Video Generation

A Configuration-Space Decomposition Scheme for Learning-based Collision Checking

no code implementations • 17 Nov 2019 • Yiheng Han, Wang Zhao, Jia Pan, Zipeng Ye, Ran Yi, Yong-Jin Liu

Motion planning for robots with high degrees of freedom (DOFs) is an important problem in robotics, with sampling-based methods in the configuration space C being one popular solution.

BIG-bench Machine Learning Motion Planning +1

Attention-aware Multi-stroke Style Transfer

1 code implementation • CVPR 2019 • Yuan Yao, Jianqiang Ren, Xuansong Xie, Weidong Liu, Yong-Jin Liu, Jun Wang

Neural style transfer has drawn considerable attention from both academic and industrial fields.

Style Transfer

Efficient sparse semismooth Newton methods for the clustered lasso problem

no code implementations • 22 Aug 2018 • Meixia Lin, Yong-Jin Liu, Defeng Sun, Kim-Chuan Toh

Based on the new formulation, we derive an efficient procedure for its computation.

Content-Sensitive Supervoxels via Uniform Tessellations on Video Manifolds

no code implementations • CVPR 2018 • Ran Yi, Yong-Jin Liu, Yu-Kun Lai

We propose an efficient Lloyd-like method with a splitting-merging scheme to compute a uniform tessellation on the video manifold M, which induces the content-sensitive supervoxels (CSS) in the video X. Theoretically, our method has a good competitive ratio O(1).
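
For intuition, a single Lloyd relaxation step in plain Euclidean space looks like the sketch below; the paper's version runs on the video manifold M and adds splitting and merging of cells.

```python
import numpy as np

def lloyd_step(points, seeds):
    """One Lloyd step: assign each point to its nearest seed, then move
    every seed to the centroid of its cell. Euclidean illustration only."""
    d2 = ((points[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    new_seeds = seeds.copy()
    for k in range(len(seeds)):
        cell = points[labels == k]
        if len(cell):                      # keep empty cells where they are
            new_seeds[k] = cell.mean(axis=0)
    return new_seeds
```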

CartoonGAN: Generative Adversarial Networks for Photo Cartoonization

7 code implementations • CVPR 2018 • Yang Chen, Yu-Kun Lai, Yong-Jin Liu

Two novel losses suitable for cartoonization are proposed: (1) a semantic content loss, which is formulated as a sparse regularization in the high-level feature maps of the VGG network to cope with substantial style variation between photos and cartoons, and (2) an edge-promoting adversarial loss for preserving clear edges.
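
A hedged sketch of the first loss: an L1 (sparse) penalty between high-level VGG feature maps of the photo and its cartoonized output. The layer cut-off and the weights-loading call are illustrative assumptions.

```python
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen feature extractor up to conv4_4 (index 25 of vgg19().features).
vgg = vgg19(weights="IMAGENET1K_V1").features[:26].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def semantic_content_loss(photo, cartoonized):
    """L1 distance between VGG feature maps: a sparse regularization
    that tolerates the large style gap between photos and cartoons."""
    return F.l1_loss(vgg(cartoonized), vgg(photo))
```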

Generative Adversarial Network Real-to-Cartoon translation

Learning to Rank Retargeted Images

no code implementations • CVPR 2017 • Yang Chen, Yong-Jin Liu, Yu-Kun Lai

Observing that it is challenging even for human subjects to give consistent scores for retargeting results of different source images, in this paper we propose a learning-based OQA method that predicts the ranking of a set of retargeted images with the same source image.

Image Retargeting Learning-To-Rank
