Search Results for author: Changsheng Lu

Found 6 papers, 3 papers with code

Few-shot Shape Recognition by Learning Deep Shape-aware Features

no code implementations • 3 Dec 2023 • Wenlong Shi, Changsheng Lu, Ming Shao, Yinjie Zhang, Siyu Xia, Piotr Koniusz

Thirdly, we propose a decoding module that incorporates supervision from shape masks and edges and aligns the original and reconstructed shape features, enforcing the learned features to be more shape-aware.

Image Reconstruction
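
To make the decoding idea above concrete, here is a minimal PyTorch sketch, assuming a 256-channel feature map and binary mask/edge targets; `ShapeAwareDecoder`, its heads, and the equal loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (PyTorch): a decoding head that supervises shape masks
# and edges while aligning original and reconstructed shape features.
# Names, shapes, and loss weights are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeAwareDecoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mask_head = nn.Conv2d(feat_dim, 1, kernel_size=1)   # shape mask logits
        self.edge_head = nn.Conv2d(feat_dim, 1, kernel_size=1)   # shape edge logits
        self.reconstruct = nn.Conv2d(feat_dim, feat_dim, kernel_size=3, padding=1)

    def forward(self, feats, gt_mask, gt_edge):
        recon = self.reconstruct(feats)                 # reconstructed shape features
        mask_loss = F.binary_cross_entropy_with_logits(self.mask_head(recon), gt_mask)
        edge_loss = F.binary_cross_entropy_with_logits(self.edge_head(recon), gt_edge)
        align_loss = F.mse_loss(recon, feats)           # align original vs. reconstructed
        return mask_loss + edge_loss + align_loss       # combined shape-aware objective
```
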

From Saliency to DINO: Saliency-guided Vision Transformer for Few-shot Keypoint Detection

no code implementations • 6 Apr 2023 • Changsheng Lu, Hao Zhu, Piotr Koniusz

Unlike current deep keypoint detectors that are trained to recognize a limited number of body parts, few-shot keypoint detection (FSKD) attempts to localize any keypoints, including novel or base keypoints, depending on the reference samples.

Keypoint Detection
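
As a rough illustration of the FSKD setting described above, the sketch below localizes a keypoint by correlating a support (reference) descriptor with a query feature map and taking the similarity peak; the function name, feature size, and plain cosine-similarity matching are assumptions, not the paper's model.

```python
# Hypothetical sketch (PyTorch): few-shot keypoint localization by correlating
# a support (reference) keypoint descriptor with a query feature map.
# This is a simplified illustration, not the FSKD network from the paper.
import torch
import torch.nn.functional as F

def localize_keypoint(query_feats, support_vec):
    """query_feats: (C, H, W) features of the query image.
    support_vec: (C,) descriptor sampled at a reference keypoint."""
    q = F.normalize(query_feats.flatten(1), dim=0)       # (C, H*W), unit-norm columns
    s = F.normalize(support_vec, dim=0)                  # (C,), unit-norm descriptor
    sim = (s @ q).view(*query_feats.shape[1:])           # (H, W) cosine similarity map
    y, x = divmod(int(sim.argmax()), sim.shape[1])       # peak = predicted keypoint
    return x, y, sim

# Usage with random tensors (assumed 256-d features on a 32x32 grid):
feats = torch.randn(256, 32, 32)
ref = torch.randn(256)
x, y, _ = localize_keypoint(feats, ref)
```
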

Few-shot Keypoint Detection with Uncertainty Learning for Unseen Species

1 code implementation • CVPR 2022 • Changsheng Lu, Piotr Koniusz

Current non-rigid object keypoint detectors perform well on a chosen species and set of body parts, and require a large number of labelled keypoints for training.

Fine-Grained Visual Recognition • Keypoint Detection +1

Industrial Scene Text Detection with Refined Feature-attentive Network

1 code implementation • 25 Oct 2021 • Tongkun Guan, Chaochen Gu, Changsheng Lu, Jingzheng Tu, Qi Feng, Kaijie Wu, Xinping Guan

Then, an attentive refinement network guided by the attention map is developed to rectify the location deviation of candidate boxes.

Scene Text Detection • Text Detection
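
A minimal sketch of the refinement idea above, assuming an attention map and candidate boxes are already available: features are re-weighted by the attention map, ROI-aligned per box, and used to regress location corrections. The module name, pooling size, and additive offset parameterization are hypothetical, not the paper's network.

```python
# Hypothetical sketch (PyTorch): refining candidate text boxes with an
# attention map, loosely following the idea of rectifying location deviations.
# The module, pooling size, and offset parameterization are assumptions.
import torch
import torch.nn as nn
import torchvision.ops as ops

class AttentiveBoxRefiner(nn.Module):
    def __init__(self, feat_dim=256, pool=7):
        super().__init__()
        self.pool = pool
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim * pool * pool, 256), nn.ReLU(),
            nn.Linear(256, 4),                      # (dx1, dy1, dx2, dy2) corrections
        )

    def forward(self, feats, attn, boxes):
        """feats: (1, C, H, W); attn: (1, 1, H, W) in [0, 1]; boxes: (N, 4) xyxy."""
        weighted = feats * attn                     # emphasize text-attentive regions
        rois = ops.roi_align(weighted, [boxes], output_size=self.pool)
        deltas = self.fc(rois)                      # per-box location corrections
        return boxes + deltas                       # rectified candidate boxes
```
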

Arc-support Line Segments Revisited: An Efficient and High-quality Ellipse Detection

3 code implementations • 8 Oct 2018 • Changsheng Lu, Siyu Xia, Ming Shao, Yun Fu

Over the years, many ellipse detection algorithms have sprung up and been studied broadly, yet detecting ellipses accurately and efficiently in real-world images remains a challenge.

Clustering
