Search Results for author: Zhuoran Yu

Found 9 papers, 2 papers with code

Scale-Equalizing Pyramid Convolution for Object Detection

2 code implementations • CVPR 2020 • Xinjiang Wang, Shilong Zhang, Zhuoran Yu, Litong Feng, Wayne Zhang

Inspired by this, a convolution across pyramid levels is proposed in this study, termed pyramid convolution, which is a modified 3-D convolution.

Object Detection +1
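To make the pyramid-convolution idea concrete, here is a minimal PyTorch sketch of a convolution shared across FPN levels. The module name, the 256-channel default, the assumption that adjacent levels differ in resolution by exact factors of 2, and the use of three 2-D kernels to emulate one 3-D kernel along the scale axis are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidConv(nn.Module):
    """Sketch of a convolution that mixes information across FPN levels.

    Three 2-D convolutions stand in for a 3-tap kernel along the pyramid
    ("scale") dimension: one for the current level, one for the finer level
    below (stride 2 to match resolution), one for the coarser level above
    (followed by upsampling). Hyper-parameters are illustrative.
    """

    def __init__(self, channels: int = 256):
        super().__init__()
        self.conv_same = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_lower = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.conv_upper = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feats):           # feats: list of [N, C, H_i, W_i], fine -> coarse
        outs = []
        for i, f in enumerate(feats):
            out = self.conv_same(f)
            if i > 0:                   # finer neighbour: downsample via stride-2 conv
                out = out + self.conv_lower(feats[i - 1])
            if i < len(feats) - 1:      # coarser neighbour: conv then upsample
                up = self.conv_upper(feats[i + 1])
                out = out + F.interpolate(up, size=f.shape[-2:], mode="nearest")
            outs.append(out)
        return outs
```

The stride-2 kernel handles the finer neighbour and nearest-neighbour upsampling handles the coarser one, so the three responses can simply be summed at each level.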

Group R-CNN for Weakly Semi-supervised Object Detection with Points

1 code implementation • CVPR 2022 • Shilong Zhang, Zhuoran Yu, Liyang Liu, Xinjiang Wang, Aojun Zhou, Kai Chen

The core of this task is to train a point-to-box regressor on well-labeled images that can be used to predict credible bounding boxes for each point annotation.

Object Detection · Representation Learning +1
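As a toy illustration of the point-to-box training signal described above, the sketch below regresses the distances from an annotated point to the four edges of its ground-truth box. The MLP head, feature dimension, and L1 loss are simplifying assumptions and not the actual Group R-CNN architecture.

```python
import torch
import torch.nn as nn

class PointToBoxRegressor(nn.Module):
    """Toy stand-in for a point-to-box regressor: given a feature vector
    sampled at an annotated point, predict (l, t, r, b) distances from the
    point to the four box edges. The real Group R-CNN head is considerably
    more involved; this only illustrates the training signal."""

    def __init__(self, in_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 4),
        )

    def forward(self, point_feats):             # [N, in_dim]
        return self.mlp(point_feats).exp()      # positive edge distances

def point_to_box_loss(pred_dists, points, gt_boxes):
    """L1 loss between predicted and true point-to-edge distances.
    points: [N, 2] (x, y); gt_boxes: [N, 4] (x1, y1, x2, y2)."""
    x, y = points[:, 0], points[:, 1]
    x1, y1, x2, y2 = gt_boxes.unbind(dim=1)
    target = torch.stack([x - x1, y - y1, x2 - x, y2 - y], dim=1)
    return nn.functional.l1_loss(pred_dists, target)
```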

Scale Calibrated Training: Improving Generalization of Deep Networks via Scale-Specific Normalization

no code implementations • 31 Aug 2019 • Zhuoran Yu, Aojun Zhou, Yukun Ma, Yudian Li, Xiaohan Zhang, Ping Luo

Experiment results show that SCT improves the accuracy of a single ResNet-50 on ImageNet by 1.7% and 11.5% when testing on image sizes of 224 and 128, respectively.

Data Augmentation · Image Classification +1
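One way to read "scale-specific normalization" is to keep a separate set of BatchNorm statistics per training resolution and switch between them depending on the input scale. The sketch below follows that reading; the resolution list, the set_scale switch, and the default scale are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class ScaleSpecificBN(nn.Module):
    """Sketch of scale-specific normalization: one BatchNorm per training
    resolution, selected before forwarding a batch of that resolution."""

    def __init__(self, channels: int, scales=(224, 128)):
        super().__init__()
        self.scales = list(scales)
        self.bns = nn.ModuleList(nn.BatchNorm2d(channels) for _ in scales)

    def set_scale(self, scale: int):
        self.active = self.scales.index(scale)
        return self

    def forward(self, x):
        return self.bns[getattr(self, "active", 0)](x)
```

In a full model, every such layer would be switched to the batch's resolution (e.g., set_scale(128)) before the forward pass, so each scale accumulates its own running statistics.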

CrossMatch: Improving Semi-Supervised Object Detection via Multi-Scale Consistency

no code implementations • 29 Sep 2021 • Zhuoran Yu, Yen-Cheng Liu, Chih-Yao Ma, Zsolt Kira

Inspired by the fact that teacher/student pseudo-labeling approaches result in a weak and sparse gradient signal due to the difficulty of confidence thresholding, CrossMatch leverages multi-scale feature extraction in object detection.

Object Detection +2
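The sketch below shows the general shape of a cross-scale consistency term on an unlabeled image: dense detection logits from two input scales are aligned and pushed to agree. The KL form of the loss and treating the downscaled view as the target are assumptions; the paper's actual pairing of FPN levels across scales is not reproduced here.

```python
import torch
import torch.nn.functional as F

def cross_scale_consistency(logits_hi, logits_lo):
    """Sketch of a multi-scale consistency loss for an unlabeled image.

    logits_hi: [N, C, H, W]     dense class logits from the full-resolution input
    logits_lo: [N, C, H/2, W/2] dense class logits from the downscaled input
    """
    logits_lo = F.interpolate(logits_lo, size=logits_hi.shape[-2:],
                              mode="bilinear", align_corners=False)
    p_hi = F.log_softmax(logits_hi, dim=1)
    p_lo = F.softmax(logits_lo, dim=1).detach()   # treat one view as the target
    return F.kl_div(p_hi, p_lo, reduction="batchmean")
```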

EnergyMatch: Energy-based Pseudo-Labeling for Semi-Supervised Learning

no code implementations • 13 Jun 2022 • Zhuoran Yu, Yin Li, Yong Jae Lee

However, it has been shown that softmax-based confidence scores in deep networks can be arbitrarily high for samples far from the training data, and thus, the pseudo-labels for even high-confidence unlabeled samples may still be unreliable.

Out-of-Distribution Detection
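As the title suggests, the remedy is to base pseudo-label selection on an energy score rather than softmax confidence. A minimal sketch of that idea follows; the temperature and threshold values are illustrative assumptions, not the paper's settings.

```python
import torch

def energy_score(logits, temperature: float = 1.0):
    """Free-energy score E(x) = -T * logsumexp(f(x)/T); lower values
    indicate the sample looks more like the training distribution."""
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

def select_pseudo_labels(logits, energy_threshold: float = -8.0):
    """Sketch of energy-based pseudo-label selection: keep unlabeled
    samples whose energy falls below a threshold (an illustrative value),
    instead of thresholding softmax confidence."""
    energy = energy_score(logits)
    mask = energy < energy_threshold
    pseudo = logits.argmax(dim=1)
    return pseudo[mask], mask
```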

InPL: Pseudo-labeling the Inliers First for Imbalanced Semi-supervised Learning

no code implementations • 13 Mar 2023 • Zhuoran Yu, Yin Li, Yong Jae Lee

Without relying on model confidence, we propose to measure whether an unlabeled sample is likely to be "in-distribution", i.e., close to the current training data.

Out-of-Distribution Detection
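A FixMatch-style sketch of the "inliers first" rule, assuming weak/strong augmented views and some in-distribution score (for instance, the negative energy from the sketch above). The score, threshold, and loss form are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def inpl_style_unlabeled_loss(logits_weak, logits_strong, in_dist_score,
                              threshold: float = 0.0):
    """Sketch of an 'inliers first' unlabeled loss: pseudo-labels come from
    the weakly augmented view, but a sample only contributes when its
    in-distribution score (not its confidence) clears a threshold."""
    pseudo = logits_weak.argmax(dim=1).detach()
    mask = (in_dist_score > threshold).float()
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).mean()
```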

Denoising and Selecting Pseudo-Heatmaps for Semi-Supervised Human Pose Estimation

no code implementations • 29 Sep 2023 • Zhuoran Yu, Manchen Wang, Yanbei Chen, Paolo Favaro, Davide Modolo

First, we introduce a denoising scheme to generate reliable pseudo-heatmaps as targets for learning from unlabeled data.

Denoising · Pose Estimation +1
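One plausible instantiation of "denoising" a teacher-predicted keypoint heatmap is to keep only confident peaks and re-render them as clean Gaussians. The sketch below follows that reading; the peak threshold and sigma are illustrative assumptions, not the paper's actual scheme.

```python
import torch

def denoise_pseudo_heatmap(heatmap, peak_threshold: float = 0.3, sigma: float = 2.0):
    """If the peak of a teacher heatmap is confident enough, replace the
    noisy map with a clean Gaussian at the peak location; otherwise drop
    the keypoint. heatmap: [H, W] tensor of scores in [0, 1].
    Returns (clean_heatmap, keep_flag)."""
    h, w = heatmap.shape
    peak_val, flat_idx = heatmap.flatten().max(dim=0)
    if peak_val < peak_threshold:
        return torch.zeros_like(heatmap), False
    py, px = divmod(flat_idx.item(), w)
    ys = torch.arange(h, device=heatmap.device, dtype=heatmap.dtype).view(-1, 1)
    xs = torch.arange(w, device=heatmap.device, dtype=heatmap.dtype).view(1, -1)
    clean = torch.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
    return clean, True
```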

Diversify, Don't Fine-Tune: Scaling Up Visual Recognition Training with Synthetic Images

no code implementations • 4 Dec 2023 • Zhuoran Yu, Chenchen Zhu, Sean Culatana, Raghuraman Krishnamoorthi, Fanyi Xiao, Yong Jae Lee

We present a new framework leveraging off-the-shelf generative models to generate synthetic training images, addressing multiple challenges: class name ambiguity, lack of diversity in naive prompts, and domain shifts.

Domain Generalization · Text-to-Image Generation
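A minimal sketch of the prompt-diversification step, assuming an off-the-shelf Stable Diffusion checkpoint accessed through the diffusers library. The model id, prompt templates, class name, and sampling settings are all assumptions; the paper's class-name disambiguation, filtering, and scaling pipeline is not reproduced here.

```python
# Generate synthetic training images for one class by varying prompt style
# and context, using an off-the-shelf text-to-image model (assumed here).
import itertools
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

class_name = "tiger cat"                     # a disambiguated class name (example)
contexts = ["in a garden", "on a city street", "indoors at night"]
styles = ["a photo of", "a close-up photo of", "a wide-angle photo of"]

for i, (style, ctx) in enumerate(itertools.product(styles, contexts)):
    prompt = f"{style} a {class_name} {ctx}"
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"synthetic_{class_name.replace(' ', '_')}_{i}.png")
```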
