Search Results for author: Fangyi Chen

Found 6 papers, 4 papers with code

Enhanced Training of Query-Based Object Detection via Selective Query Recollection

2 code implementations · CVPR 2023 · Fangyi Chen, Han Zhang, Kai Hu, Yu-Kai Huang, Chenchen Zhu, Marios Savvides

This paper investigates a phenomenon where query-based object detectors mispredict at the last decoding stage while predicting correctly at an intermediate stage.

Tasks: Attribute, Object (+2)

Unitail: Detecting, Reading, and Matching in Retail Scene

no code implementations · 1 Apr 2022 · Fangyi Chen, Han Zhang, Zaiwang Li, Jiachen Dou, Shentong Mo, Hao Chen, Yongxin Zhang, Uzair Ahmed, Chenchen Zhu, Marios Savvides

To make full use of computer vision technology in stores, the actual needs arising from the characteristics of the retail scene must be considered.

Tasks: Benchmarking, Dense Object Detection (+2)

Semantic Relation Reasoning for Shot-Stable Few-Shot Object Detection

no code implementations · CVPR 2021 · Chenchen Zhu, Fangyi Chen, Uzair Ahmed, Zhiqiang Shen, Marios Savvides

In this work, we investigate utilizing this semantic relation together with the visual information and introduce explicit relation reasoning into the learning of novel object detection.

Tasks: Few-Shot Object Detection, Novel Object Detection (+2)

Solving Missing-Annotation Object Detection with Background Recalibration Loss

2 code implementations · 12 Feb 2020 · Han Zhang, Fangyi Chen, Zhiqiang Shen, Qiqi Hao, Chenchen Zhu, Marios Savvides

In this paper, we introduce a solution called Background Recalibration Loss (BRL) that automatically re-calibrates the loss signals according to a pre-defined IoU threshold and the input image.

Tasks: Object, Object Detection (+1)

Soft Anchor-Point Object Detection

2 code implementations · ECCV 2020 · Chenchen Zhu, Fangyi Chen, Zhiqiang Shen, Marios Savvides

In this work, we boost the performance of the anchor-point detector over its key-point counterparts while maintaining the speed advantage.

Tasks: Dense Object Detection, Feature Selection (+2)
