Search Results for author: Junhui Liu

Found 10 papers, 4 papers with code

TKwinFormer: Top k Window Attention in Vision Transformers for Feature Matching

no code implementations • 29 Aug 2023 • Yun Liao, Yide Di, Hao Zhou, Kaijun Zhu, Mingyu Lu, Yijia Zhang, Qing Duan, Junhui Liu

Local feature matching remains a challenging task, primarily due to difficulties in matching sparse keypoints and low-texture regions.
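This listing gives no implementation details, but the title points to attention restricted to the top-k highest-scoring positions. Below is a minimal, hypothetical PyTorch sketch of generic top-k attention; the function name, the `top_k` parameter, and the masking scheme are assumptions, and the paper's actual window-based formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def topk_attention(q, k, v, top_k=16):
    """Hypothetical top-k attention: each query attends only to its top_k
    highest-scoring keys; all other scores are masked out before softmax.
    q, k, v: (batch, seq_len, dim) tensors; assumes seq_len >= top_k."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5  # (B, Nq, Nk)
    top_vals, top_idx = scores.topk(top_k, dim=-1)         # keep top_k keys per query
    masked = torch.full_like(scores, float("-inf"))
    masked.scatter_(-1, top_idx, top_vals)                 # restore only the kept scores
    attn = F.softmax(masked, dim=-1)                       # masked entries become zero
    return attn @ v                                        # (B, Nq, dim)
```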

Preserving background sound in noise-robust voice conversion via multi-task learning

no code implementations • 6 Nov 2022 • Jixun Yao, Yi Lei, Qing Wang, Pengcheng Guo, Ziqian Ning, Lei Xie, Hai Li, Junhui Liu, Danming Xie

Background sound is an informative form of art that helps provide a more immersive experience in real-application voice conversion (VC) scenarios.

Multi-Task Learning • Voice Conversion

ClothFormer: Taming Video Virtual Try-on in All Module

1 code implementation • 26 Apr 2022 • Jianbin Jiang, Tan Wang, He Yan, Junhui Liu

Moreover, there are two other key challenges: 1) how to generate accurate warping when occlusions appear in the clothing region; and 2) how to generate clothes and non-target body parts (e.g., arms, neck) in harmony with the complicated background. To address them, we propose a novel video virtual try-on framework, ClothFormer, which successfully synthesizes realistic, harmonious, and spatio-temporally consistent results in complicated environments.

Optical Flow Estimation • Virtual Try-on

Migrating Face Swap to Mobile Devices: A lightweight Framework and A Supervised Training Solution

1 code implementation • 13 Apr 2022 • Haiming Yu, Hao Zhu, Xiangju Lu, Junhui Liu

In this work, we propose MobileFSGAN, a novel lightweight GAN for face swap that can run on mobile devices with far fewer parameters while achieving competitive performance.

Attribute • Image Generation

ClothFormer: Taming Video Virtual Try-On in All Module

no code implementations • CVPR 2022 • Jianbin Jiang, Tan Wang, He Yan, Junhui Liu

Moreover, there are two other key challenges: 1) how to generate accurate warping when occlusions appear in the clothing region; and 2) how to generate clothes and non-target body parts (e.g., arms, neck) in harmony with the complicated background. To address them, we propose a novel video virtual try-on framework, ClothFormer, which successfully synthesizes realistic, harmonious, and spatio-temporally consistent results in complicated environments.

Optical Flow Estimation • Virtual Try-on

Boundary Content Graph Neural Network for Temporal Action Proposal Generation

no code implementations • ECCV 2020 • Yueran Bai, Yingying Wang, Yunhai Tong, Yang Yang, Qiyue Liu, Junhui Liu

To address this issue, we propose a novel Boundary Content Graph Neural Network (BC-GNN) to model the insightful relations between the boundary and the action content of temporal proposals via graph neural networks. (A rough illustrative sketch follows this entry.)

Action Detection • Action Understanding +1
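The BC-GNN snippet above only names the central idea. As a rough illustration of message passing between boundary and content nodes of a proposal graph, here is a toy PyTorch layer; the bipartite layout, the node/edge definitions, and all names are assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class BoundaryContentLayer(nn.Module):
    """Toy message-passing layer over a bipartite graph: boundary-node features
    are updated from connected content nodes and vice versa. Purely
    illustrative; not the BC-GNN definition from the paper."""

    def __init__(self, dim):
        super().__init__()
        self.to_boundary = nn.Linear(dim, dim)  # transforms content-to-boundary messages
        self.to_content = nn.Linear(dim, dim)   # transforms boundary-to-content messages

    def forward(self, boundary, content, adj):
        # boundary: (Nb, D), content: (Nc, D), adj: (Nb, Nc) 0/1 adjacency matrix
        b_new = torch.relu(boundary + adj @ self.to_boundary(content))
        c_new = torch.relu(content + adj.t() @ self.to_content(boundary))
        return b_new, c_new
```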

FASPell: A Fast, Adaptable, Simple, Powerful Chinese Spell Checker Based On DAE-Decoder Paradigm

1 code implementation • WS 2019 • Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, Junhui Liu

We propose FASPell, a Chinese spell checker based on a new paradigm that consists of a denoising autoencoder (DAE) and a decoder. (A schematic sketch of this pipeline follows the entry.)

Chinese Spell Checking • Denoising +1
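The FASPell snippet describes the paradigm only at the level of "DAE plus decoder". A schematic sketch of such a pipeline is shown below; the candidate/confidence interface, the similarity function, and both thresholds are placeholders, not FASPell's actual decoder.

```python
def dae_decoder_spellcheck(sentence, dae_candidates, similarity,
                           conf_threshold=0.8, sim_threshold=0.6):
    """Schematic DAE-decoder spell checking: a denoising autoencoder proposes a
    candidate character (with a confidence) for each position, and the decoder
    accepts a substitution only when confidence and character similarity are
    both high enough. `dae_candidates` and `similarity` are assumed callables."""
    corrected = list(sentence)
    for pos, (cand, conf) in enumerate(dae_candidates(sentence)):
        original = sentence[pos]
        if (cand != original and conf >= conf_threshold
                and similarity(original, cand) >= sim_threshold):
            corrected[pos] = cand
    return "".join(corrected)
```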

Cartoon Face Recognition: A Benchmark Dataset

1 code implementation • 31 Jul 2019 • Yi Zheng, Yifan Zhao, Mengyuan Ren, He Yan, Xiangju Lu, Junhui Liu, Jia Li

Recent years have witnessed increasing attention to cartoon media, driven by strong demand from industrial applications.

Domain Adaptation • Face Detection +4
