no code implementations • 8 Mar 2022 • Quan Cui, Bingchen Zhao, Zhao-Min Chen, Borui Zhao, RenJie Song, Jiajun Liang, Boyan Zhou, Osamu Yoshie
This work simultaneously considers the discriminability and transferability properties of deep representations in the typical supervised learning task, i.e., image classification.
1 code implementation • 17 Dec 2021 • Quan Cui, Boyan Zhou, Yu Guo, Weidong Yin, Hao Wu, Osamu Yoshie
However, these works require a tremendous amount of data and computational resources (e.g., billion-level web data and hundreds of GPUs), which prevents researchers with limited resources from reproducing and further exploring them.
1 code implementation • 21 Apr 2021 • Xin Huang, Xinxin Wang, Wenyu Lv, Xiaying Bai, Xiang Long, Kaipeng Deng, Qingqing Dang, Shumin Han, Qiwen Liu, Xiaoguang Hu, dianhai yu, Yanjun Ma, Osamu Yoshie
To address these two concerns, we comprehensively evaluate a collection of existing refinements to improve the performance of PP-YOLO while keeping the inference time almost unchanged.
1 code implementation • CVPR 2021 • Zheng Ge, Songtao Liu, Zeming Li, Osamu Yoshie, Jian Sun
Recent advances in label assignment in object detection mainly seek to independently define positive/negative training samples for each ground-truth (gt) object (a rough sketch of such a per-gt assignment scheme follows this entry).
Ranked #50 on Object Detection on COCO test-dev
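As a rough illustration of the conventional per-gt assignment scheme that this entry contrasts against, the Python sketch below marks anchors positive or negative purely by IoU thresholds against the ground-truth boxes. The helper names (`pairwise_iou`, `assign_labels_per_gt`) and the thresholds are illustrative assumptions, not the global assignment strategy proposed in the paper.

```python
import numpy as np

def pairwise_iou(boxes_a, boxes_b):
    """IoU between every box in `boxes_a` and every box in `boxes_b` (xyxy format)."""
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    lt = np.maximum(boxes_a[:, None, :2], boxes_b[None, :, :2])   # top-left of intersection
    rb = np.minimum(boxes_a[:, None, 2:], boxes_b[None, :, 2:])   # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def assign_labels_per_gt(anchors, gts, pos_iou=0.5, neg_iou=0.4):
    """Independent, per-gt assignment: 1 = positive, 0 = negative, -1 = ignore."""
    ious = pairwise_iou(anchors, gts)            # shape (num_anchors, num_gts)
    max_iou = ious.max(axis=1)
    labels = np.full(len(anchors), -1, dtype=int)
    labels[max_iou < neg_iou] = 0
    labels[max_iou >= pos_iou] = 1
    return labels

anchors = np.array([[0, 0, 10, 10], [5, 5, 20, 20]], dtype=float)
gts = np.array([[0, 0, 9, 9]], dtype=float)
print(assign_labels_per_gt(anchors, gts))        # -> [1 0]
```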
no code implementations • 13 Feb 2021 • Anqing Jiang, LiangYao Chen, Osamu Yoshie
Machine learning, especially deep learning, is dramatically changing the methods associated with optical thin-film inverse design.
1 code implementation • 12 Jan 2021 • Zheng Ge, JianFeng Wang, Xin Huang, Songtao Liu, Osamu Yoshie
A joint loss is then defined as the weighted summation of the classification (cls) and regression (reg) losses and serves as the assigning indicator.
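A minimal sketch of such a loss-aware indicator, assuming hypothetical per-anchor loss tensors `loss_cls` and `loss_reg` and a weighting factor `lam`; the exact weighting and selection rule in the paper may differ.

```python
import torch

def assignment_cost(loss_cls, loss_reg, lam=1.0):
    """Joint loss used as the assigning indicator: lower cost = better candidate anchor.

    loss_cls, loss_reg: per-anchor classification / regression losses, shape (num_anchors,).
    `lam` weights the regression term.
    """
    return loss_cls + lam * loss_reg

# Example: pick the k anchors with the smallest joint loss as positives.
loss_cls = torch.rand(100)
loss_reg = torch.rand(100)
cost = assignment_cost(loss_cls, loss_reg, lam=1.5)
topk_pos = torch.topk(cost, k=10, largest=False).indices
```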
no code implementations • ECCV 2020 • Quan Cui, Qing-Yuan Jiang, Xiu-Shen Wei, Wu-Jun Li, Osamu Yoshie
Retrieving content-relevant images from a large-scale fine-grained dataset could suffer from intolerably slow query speed and highly redundant storage cost, due to the high-dimensional real-valued embeddings which aim to distinguish subtle visual differences of fine-grained objects.
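To make the speed/storage argument concrete, the sketch below contrasts real-valued embeddings with compact binary codes compared by Hamming distance; the 48-bit code length and the sign-based binarization are illustrative assumptions, not the hashing method proposed in this paper.

```python
import numpy as np

def binarize(embeddings):
    """Turn real-valued embeddings into compact binary codes via their sign."""
    return (embeddings > 0).astype(np.uint8)

def hamming_search(query_code, db_codes, topk=5):
    """Rank database items by Hamming distance to the query code."""
    dist = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dist)[:topk]

rng = np.random.default_rng(0)
db = binarize(rng.standard_normal((10000, 48)))   # 48-bit codes instead of 48 floats per image
query = binarize(rng.standard_normal(48))
print(hamming_search(query, db))                  # indices of the 5 nearest codes
```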
no code implementations • 23 May 2020 • Zheng Ge, Zequn Jie, Xin Huang, Chengzheng Li, Osamu Yoshie
The first imbalance lies in the large number of low-quality RPN proposals, which makes the R-CNN module (i.e., the post-classification layers) become highly biased towards the negative proposals in the early training stage.
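To illustrate the scale of this imbalance, the sketch below shows the standard fixed-ratio RoI subsampling used in Faster R-CNN-style training, where negatives still dominate the sampled batch when positives are scarce; this is background context, not the remedy proposed in the paper.

```python
import numpy as np

def sample_proposals(labels, batch_size=512, pos_fraction=0.25, rng=None):
    """Standard fixed-ratio RoI subsampling (as in Faster R-CNN).

    Even with a target positive fraction, a proposal pool with very few
    positives yields a batch dominated by negatives.
    """
    rng = rng or np.random.default_rng()
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    n_pos = min(len(pos), int(batch_size * pos_fraction))
    n_neg = min(len(neg), batch_size - n_pos)
    keep = np.concatenate([rng.choice(pos, n_pos, replace=False),
                           rng.choice(neg, n_neg, replace=False)])
    return keep

labels = np.concatenate([np.ones(20), np.zeros(1980)])   # ~1% positives, typical early in training
print(len(sample_proposals(labels)))                     # 512 sampled RoIs, mostly negatives
```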
no code implementations • CVPR 2020 • Xin Huang, Zheng Ge, Zequn Jie, Osamu Yoshie
To acquire the visible parts, a novel Paired-Box Model (PBM) is proposed to simultaneously predict the full and visible boxes of a pedestrian.
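An illustrative sketch of a detection head that jointly regresses a full-body box and a visible-part box per RoI, in the spirit of the paired prediction described above; the layer sizes and module structure are assumptions, not the PBM architecture from the paper.

```python
import torch
import torch.nn as nn

class PairedBoxHead(nn.Module):
    """Two parallel regressors on shared RoI features: full box and visible box."""

    def __init__(self, in_dim=256):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.full_box = nn.Linear(256, 4)     # full-body box (x, y, w, h)
        self.visible_box = nn.Linear(256, 4)  # visible-part box (x, y, w, h)

    def forward(self, roi_feats):
        h = self.shared(roi_feats)
        return self.full_box(h), self.visible_box(h)

head = PairedBoxHead()
full, visible = head(torch.randn(8, 256))     # 8 RoIs -> paired box predictions
```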
no code implementations • 16 Mar 2020 • Zheng Ge, Zequn Jie, Xin Huang, Rong Xu, Osamu Yoshie
PS-RCNN first detects slightly occluded or non-occluded objects with an R-CNN module (referred to as P-RCNN), and then suppresses the detected instances with human-shaped masks so that the features of heavily occluded instances can stand out (a simplified sketch of this suppression step follows this entry).
Ranked #2 on Object Detection on WiderPerson
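A simplified sketch of the suppression step referenced in this entry: feature-map regions covered by already-detected persons are zeroed out before a second detection pass. A plain rectangular mask stands in for the paper's human-shaped masks, and the tensor shapes are illustrative assumptions.

```python
import numpy as np

def suppress_detected(features, boxes, scale=1.0):
    """Zero out feature-map regions covered by detected persons so that
    remaining (heavily occluded) instances stand out for a second pass."""
    masked = features.copy()
    h, w = features.shape[-2:]
    for x1, y1, x2, y2 in (boxes * scale).astype(int):
        x1, y1 = max(x1, 0), max(y1, 0)
        x2, y2 = min(x2, w), min(y2, h)
        masked[..., y1:y2, x1:x2] = 0.0
    return masked

feat = np.random.rand(256, 100, 168)                           # C x H x W feature map
dets = np.array([[10, 20, 40, 90], [60, 15, 85, 95]], dtype=float)
feat_for_second_pass = suppress_detected(feat, dets)
```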
no code implementations • 7 Dec 2018 • Anqing Jiang, Osamu Yoshie, LiangYao Chen
This model can converge to the global optimum of the optical thin-film structure, which will greatly improve the design efficiency of multi-layer films.