no code implementations • 20 Nov 2024 • Kuiran Wang, Xuehui Yu, Wenwen Yu, Guorong Li, Xiangyuan Lan, Qixiang Ye, Jianbin Jiao, Zhenjun Han
The bounding box is then used as the input to single object trackers.
no code implementations • 23 Jul 2024 • Pengfei Chen, Lingxi Xie, Xinyue Huo, Xuehui Yu, Xiaopeng Zhang, Yingfei Sun, Zhenjun Han, Qi Tian
The Segment Anything model (SAM) has shown a generalized ability to group image pixels into patches, but applying it to semantic-aware segmentation still faces major challenges.
1 code implementation • 7 Jun 2024 • Xuehui Yu, Mhairi Dunion, Xin Li, Stefano V. Albrecht
To improve RL generalisation to different tasks, we first introduce Skill-aware Mutual Information (SaMI), an optimisation objective that aids in distinguishing context embeddings according to skills, thereby equipping RL agents with the ability to identify and execute different skills across tasks.
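The skill-contrastive idea behind an objective like SaMI can be illustrated with a standard InfoNCE bound on mutual information, which pulls context embeddings of the same skill together and pushes embeddings of different skills apart. This is a minimal NumPy sketch under that assumption; the function and parameter names are illustrative, not the paper's code:

```python
import numpy as np

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE) lower bound on mutual information.

    `anchor` and `positive` are context embeddings from the same skill;
    `negatives` are embeddings from other skills. Minimising this loss
    maximises the bound, separating embeddings by skill.
    """
    def cosine(a, b):
        # cosine similarity between two embedding vectors
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive pair should dominate
```

When the anchor aligns with its positive, the loss is near zero; swapping the positive and a negative makes it large, which is the behaviour a skill-aware embedding objective relies on.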
no code implementations • 3 Jun 2024 • Xuehui Yu, Yi Guan, Rujia Shen, Xin Li, Chen Tang, Jingchi Jiang
To tackle these issues, we introduce the Causal Prompting Reinforcement Learning (CPRL) framework, designed for highly suboptimal and resource-constrained online scenarios.
2 code implementations • 30 Jan 2024 • Xuehui Yu, Pengfei Chen, Kuiran Wang, Xumeng Han, Guorong Li, Zhenjun Han, Qixiang Ye, Jianbin Jiao
CPR reduces the semantic variance by selecting a semantic centre point in a neighbourhood region to replace the initial annotated point.
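The neighbourhood-selection step described above can be sketched as picking the highest-scoring pixel within a small radius of the initial click. This is an illustrative sketch only, with hypothetical names, not CPR's actual implementation:

```python
import numpy as np

def refine_point(score_map, point, radius=2):
    """Replace an annotated point with the semantic centre of its
    neighbourhood: the highest-scoring pixel within `radius`.

    score_map: 2-D array of per-pixel semantic scores (illustrative).
    point:     (row, col) of the initial annotated point.
    """
    y, x = point
    h, w = score_map.shape
    # clip the neighbourhood window to the map boundaries
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    window = score_map[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (y0 + dy, x0 + dx)
```

Selecting the local maximum of a semantic score map reduces the variance introduced by annotators clicking anywhere inside an object.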
no code implementations • 18 Jan 2024 • Zipeng Wang, Xuehui Yu, Xumeng Han, Wenwen Yu, Zhixun Huang, Jianbin Jiao, Zhenjun Han
Nevertheless, weakly supervised semantic segmentation methods are proficient in utilizing intra-class feature consistency to capture the boundary contours of the same semantic regions.
1 code implementation • CVPR 2024 • Zhaoyang Wei, Pengfei Chen, Xuehui Yu, Guorong Li, Jianbin Jiao, Zhenjun Han
In this paper, we introduce a cost-effective category-specific segmenter using SAM.
1 code implementation • 6 Dec 2023 • Xumeng Han, Longhui Wei, Xuehui Yu, Zhiyang Dou, Xin He, Kuiran Wang, Zhenjun Han, Qi Tian
The recent Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model, showcasing potent zero-shot generalization and flexible prompting.
no code implementations • 22 Nov 2023 • Guangming Cao, Xuehui Yu, Wenwen Yu, Xumeng Han, Xue Yang, Guorong Li, Jianbin Jiao, Zhenjun Han
In this study, we introduce P2RBox, which employs point prompts to generate rotated box (RBox) annotations for oriented object detection.
1 code implementation • ICCV 2023 • Di Wu, Pengfei Chen, Xuehui Yu, Guorong Li, Zhenjun Han, Jianbin Jiao
Object detection with inaccurate bounding box supervision has attracted broad interest, owing to the expense of high-quality annotations and the occasional inevitability of low annotation quality (e.g., tiny objects).
no code implementations • 15 Sep 2022 • Xuehui Yu, Jingchi Jiang, Xinmiao Yu, Yi Guan, Xue Li
Complex systems are ubiquitous in the real world and tend to have complicated and poorly understood dynamics.
3 code implementations • 14 Jul 2022 • Pengfei Chen, Xuehui Yu, Xumeng Han, Najmul Hassan, Kai Wang, Jiachen Li, Jian Zhao, Humphrey Shi, Zhenjun Han, Qixiang Ye
However, the performance gap between point supervised object detection (PSOD) and bounding box supervised detection remains large.
2 code implementations • CVPR 2022 • Xuehui Yu, Pengfei Chen, Di Wu, Najmul Hassan, Guorong Li, Junchi Yan, Humphrey Shi, Qixiang Ye, Zhenjun Han
In this study, we propose a POL method using coarse point annotations, relaxing the supervision signals from accurate key points to freely spotted points.
no code implementations • 31 Dec 2021 • Xuehui Yu, Di Wu, Qixiang Ye, Jianbin Jiao, Zhenjun Han
As a result, we propose a point self-refinement approach that iteratively updates point annotations in a self-paced way.
2 code implementations • 7 Jul 2021 • Xumeng Han, Xuehui Yu, Guorong Li, Jian Zhao, Gang Pan, Qixiang Ye, Jianbin Jiao, Zhenjun Han
While extensive research has focused on the framework design and loss function, this paper shows that sampling strategy plays an equally important role.
no code implementations • 6 Feb 2021 • Nan Jiang, Xuehui Yu, Xiaoke Peng, Yuqi Gong, Zhenjun Han
Detecting tiny objects (e.g., less than 20 × 20 pixels) in large-scale images is an important yet open problem.
1 code implementation • 21 Jan 2021 • Nan Jiang, Kuiran Wang, Xiaoke Peng, Xuehui Yu, Qiang Wang, Junliang Xing, Guorong Li, Jian Zhao, Guodong Guo, Zhenjun Han
The release of such a large-scale dataset could be a useful first step in research on tracking UAVs.
no code implementations • 4 Nov 2020 • Yuqi Gong, Xuehui Yu, Yao Ding, Xiaoke Peng, Jian Zhao, Zhenjun Han
We propose a novel concept, fusion factor, to control information that deep layers deliver to shallow layers, for adapting FPN to tiny object detection.
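A fusion factor of this kind can be sketched as a scalar weight on the top-down pathway of an FPN: the upsampled deep feature is scaled before being added to the lateral shallow feature. This is a minimal NumPy sketch under that assumption; the names and the nearest-neighbour upsampling are illustrative, not the paper's implementation:

```python
import numpy as np

def top_down_fuse(deep_feat, shallow_feat, fusion_factor=0.5):
    """Fuse a deep FPN level into a shallow one, weighted by a
    fusion factor that controls how much deep-layer information
    the shallow layer receives.
    """
    # nearest-neighbour 2x upsample of the deep feature map
    upsampled = deep_feat.repeat(2, axis=0).repeat(2, axis=1)
    return shallow_feat + fusion_factor * upsampled
```

Setting the factor below 1 attenuates deep semantics in shallow layers, which is the kind of control the fusion factor provides when adapting an FPN to tiny objects.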
1 code implementation • 16 Sep 2020 • Xuehui Yu, Zhenjun Han, Yuqi Gong, Nan Jiang, Jian Zhao, Qixiang Ye, Jie Chen, Yuan Feng, Bin Zhang, Xiaodi Wang, Ying Xin, Jingwei Liu, Mingyuan Mao, Sheng Xu, Baochang Zhang, Shumin Han, Cheng Gao, Wei Tang, Lizuo Jin, Mingbo Hong, Yuchao Yang, Shuiwang Li, Huan Luo, Qijun Zhao, Humphrey Shi
The 1st Tiny Object Detection (TOD) Challenge aims to encourage research into novel and accurate methods for tiny object detection in wide-view images, with a current focus on tiny person detection.
2 code implementations • 23 Dec 2019 • Xuehui Yu, Yuqi Gong, Nan Jiang, Qixiang Ye, Zhenjun Han
In this paper, we introduce a new benchmark, referred to as TinyPerson, opening up a promising direction for tiny object detection at long distance and against massive backgrounds.