1 code implementation • 24 Jun 2024 • Xueyu Liu, Guangze Shi, Rui Wang, Yexin Lai, Jianan Zhang, Lele Sun, Quan Yang, Yongfei Wu, Ming Li, Weixia Han, Wen Zheng
Experimental results on our collected 2,538 TEM images confirm that GBMSeg achieves superior segmentation performance with a Dice similarity coefficient (DSC) of 87.27% using only one labeled reference image in a training-free manner, outperforming recently proposed one-shot or few-shot methods.
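For reference, the Dice similarity coefficient used above is a standard overlap metric between a predicted and a ground-truth mask. A minimal sketch of how it is typically computed (the function name and smoothing term are illustrative, not taken from the paper):

```python
import numpy as np

def dice_similarity_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2 * |A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))
```

A DSC of 0.8727 on a mask pair corresponds to the 87.27% reported above.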
no code implementations • 29 May 2023 • Wen Zheng, Natasa Milic-Frayling, Ke Zhou
We guide the model training through a Contextual Knowledge Learning (CKL) process that involves latent vectors for the context and the knowledge, respectively.
no code implementations • 3 Feb 2023 • Chaowei Fang, Dingwen Zhang, Wen Zheng, Xue Li, Le Yang, Lechao Cheng, Junwei Han
We set up novel evaluation benchmarks based on a series of testing sets with evolving distributions.
Ranked #66 on Long-tail Learning on CIFAR-100-LT (ρ=100)
no code implementations • 20 Nov 2022 • Yige Yuan, Bingbing Xu, HuaWei Shen, Qi Cao, Keting Cen, Wen Zheng, Xueqi Cheng
Guided by the bound, we design a graph contrastive learning (GCL) framework named InfoAdv with enhanced generalization ability, which jointly optimizes the generalization metric and InfoMax to strike a balance between pretext-task fitting and generalization to downstream tasks.
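The joint objective can be pictured as a weighted sum of an InfoMax surrogate (e.g., InfoNCE between two augmented graph views) and a generalization term. The sketch below only illustrates that structure; the `generalization_metric` placeholder and the weight `lam` are assumptions for illustration, not InfoAdv's actual formulation:

```python
import torch
import torch.nn.functional as F

def infonce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoMax surrogate: InfoNCE between node embeddings of two graph views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # similarity between every pair of nodes
    labels = torch.arange(z1.size(0))          # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

def joint_objective(z1, z2, generalization_metric, lam: float = 0.1) -> torch.Tensor:
    """Hypothetical joint loss: fit the pretext task while rewarding generalization."""
    return infonce(z1, z2) + lam * generalization_metric(z1, z2)
```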
1 code implementation • 19 Feb 2022 • Xin Wen, Peng Xiang, Zhizhong Han, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Yu-Shen Liu
It moves each point of the incomplete input to obtain a complete point cloud, where the total distance of the point moving paths (PMPs) should be the shortest (see the sketch after this entry).
Ranked #1 on Point Cloud Completion on Completion3D
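A minimal sketch of the idea described above: the network predicts a displacement for every input point, and the loss combines a reconstruction term with a penalty on the total length of the moving paths. The Chamfer reconstruction term and the weighting `lam` are standard choices assumed here, not PMP-Net's exact multi-step formulation:

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                              # pairwise distances (N, M)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def pmp_style_loss(partial: torch.Tensor, displacement: torch.Tensor,
                   complete_gt: torch.Tensor, lam: float = 0.01) -> torch.Tensor:
    """Move each input point along a predicted path and penalize the total path length."""
    predicted = partial + displacement                 # completed cloud built from moved points
    path_length = displacement.norm(dim=1).mean()      # average point-moving-path length
    return chamfer_distance(predicted, complete_gt) + lam * path_length
```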
1 code implementation • 18 Feb 2022 • Peng Xiang, Xin Wen, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Zhizhong Han
Our insight into the detailed geometry is to introduce a skip-transformer in the snowflake point deconvolution (SPD) to learn the point splitting patterns that best fit the local regions (a simplified sketch follows this entry).
Ranked #5 on Point Cloud Completion on ShapeNet
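A rough illustration of the point-splitting idea only: each parent point is split into several child points by predicting small offsets from per-point features. The actual SPD conditions these offsets on a skip-transformer over the previous deconvolution step, which is not reproduced here; the layer sizes and splitting factor below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PointSplitting(nn.Module):
    """Split every parent point into `up_factor` children via learned 3D offsets."""
    def __init__(self, feat_dim: int = 128, up_factor: int = 2):
        super().__init__()
        self.up_factor = up_factor
        self.offset_mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 64), nn.ReLU(),
            nn.Linear(64, 3 * up_factor),              # one 3D offset per child point
        )

    def forward(self, points: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) parent coordinates; feats: (N, feat_dim) per-point features
        offsets = self.offset_mlp(torch.cat([points, feats], dim=1)).view(-1, self.up_factor, 3)
        children = points.unsqueeze(1) + offsets        # children stay close to their parent
        return children.reshape(-1, 3)                  # denser point cloud (N * up_factor, 3)
```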
3 code implementations • NeurIPS 2021 • Mingcong Liu, Qiang Li, Zekui Qin, Guoxin Zhang, Pengfei Wan, Wen Zheng
Specifically, we first train a self-supervised style encoder on a generic artistic dataset to extract the representations of arbitrary styles.
2 code implementations • ICCV 2021 • Peng Xiang, Xin Wen, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Zhizhong Han
However, previous methods usually suffered from the discrete nature of point clouds and the unstructured prediction of points in local regions, which makes it hard to reveal fine local geometric details of the complete shape.
no code implementations • CVPR 2022 • Zhaoqing Wang, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, Nannan Wang, Mingming Gong, Tongliang Liu
By considering the spatial correspondence, dense self-supervised representation learning has achieved superior performance on various dense prediction tasks.
no code implementations • 8 May 2021 • Yumeng Zhang, Li Chen, Yufeng Liu, Xiaoyan Guo, Wen Zheng, Junhai Yong
Deep learning methods have achieved excellent performance in pose estimation, but their lack of robustness causes the predicted keypoints to change drastically between similar images.
no code implementations • ICCV 2021 • Zijian Yu, Xuhui Li, Huijuan Huang, Wen Zheng, Li Chen
Image matting refers to the estimation of the opacity of foreground objects.
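Concretely, matting estimates the per-pixel alpha in the classical compositing equation I = αF + (1 − α)B, where F and B are the foreground and background colors. A brief sketch of that definition (the compositing equation is standard, not specific to this paper):

```python
import numpy as np

def composite(alpha: np.ndarray, foreground: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Classical compositing I = alpha * F + (1 - alpha) * B, applied per pixel.
    Matting inverts this relation, estimating alpha (and often F) from the image alone."""
    a = alpha[..., None]                                # broadcast (H, W) alpha over RGB channels
    return a * foreground + (1.0 - a) * background
```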
1 code implementation • CVPR 2021 • Xin Wen, Zhizhong Han, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Yu-Shen Liu
We provide a comprehensive evaluation in experiments, which shows that our model with the learned bidirectional geometry correspondence outperforms state-of-the-art unpaired completion methods.
1 code implementation • CVPR 2021 • Xingyu Chen, Yufeng Liu, Chongyang Ma, Jianlong Chang, Huayan Wang, Tian Chen, Xiaoyan Guo, Pengfei Wan, Wen Zheng
In the root-relative mesh recovery task, we exploit semantic relations among joints to generate a 3D mesh from the extracted 2D cues.
Ranked #18 on 3D Hand Pose Estimation on FreiHAND
1 code implementation • CVPR 2021 • Xin Wen, Peng Xiang, Zhizhong Han, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Yu-Shen Liu
As a result, the network learns a strict and unique point-level correspondence, which captures the detailed topology and structure relationships between the incomplete shape and the complete target, and thus improves the quality of the predicted complete shape.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wen Zheng, Natasa Milic-Frayling, Ke Zhou
This paper is concerned with improving dialogue generation models through injection of knowledge, e.g., content relevant to the post that can increase the quality of responses.
no code implementations • ECCV 2020 • Tian Chen, Shijie An, Yuan Zhang, Chongyang Ma, Huayan Wang, Xiaoyan Guo, Wen Zheng
Monocular depth estimation plays a crucial role in 3D recognition and understanding.
1 code implementation • 7 Jun 2020 • Qiang Li, Zekui Qin, Wenbo Zhang, Wen Zheng
Visual object tracking aims to estimate the location of an arbitrary target in a video sequence given its initial bounding box.
no code implementations • 11 Sep 2019 • Yumeng Zhang, Li Chen, Yufeng Liu, Junhai Yong, Wen Zheng
During training, the relation between these common characteristics and the 3D pose, learned from fully annotated synthetic datasets, helps the network recover the 3D pose on weakly labeled real-world datasets with the aid of 2D annotations and depth images.
no code implementations • 27 Feb 2019 • Hongmin Xu, Qiang Li, Wenbo Zhang, Wen Zheng
Multi-Style Transfer (MST) aims to capture the high-level visual vocabulary of different styles and to express these vocabularies in a joint model that can transfer each specific style.
2 code implementations • ICCV 2019 • Xiong Zhang, Qiang Li, Hong Mo, Wenbo Zhang, Wen Zheng
In this paper, we present a HAnd Mesh Recovery (HAMR) framework to tackle the problem of reconstructing the full 3D mesh of a human hand from a single RGB image.
no code implementations • CVPR 2016 • Qiaosong Wang, Wen Zheng, Robinson Piramuthu
We propose an unsupervised bottom-up saliency detection approach that exploits a novel graph structure and background priors.