Search Results for author: Yun-Hao Cao

Found 10 papers, 8 papers with code

On Improving the Algorithm-, Model-, and Data-Efficiency of Self-Supervised Learning

no code implementations 30 Apr 2024 Yun-Hao Cao, Jianxin Wu

In this paper, we propose an efficient single-branch SSL method based on non-parametric instance discrimination, aiming to improve the algorithm, model, and data efficiency of SSL.

Self-Supervised Learning

Towards Better Vision-Inspired Vision-Language Models

no code implementations CVPR 2024 Yun-Hao Cao, Kaixiang Ji, Ziyuan Huang, Chuanyang Zheng, Jiajia Liu, Jian Wang, Jingdong Chen, Ming Yang

In this paper, we present a vision-inspired vision-language connection module, dubbed VIVL, which efficiently exploits vision cues for VL models.

Three Guidelines You Should Know for Universally Slimmable Self-Supervised Learning

1 code implementation CVPR 2023 Yun-Hao Cao, Peiqin Sun, Shuchang Zhou

We propose universally slimmable self-supervised learning (dubbed US3L) to achieve better accuracy-efficiency trade-offs for deploying self-supervised models across different devices.

Instance Segmentation Object Detection +3

Synergistic Self-supervised and Quantization Learning

1 code implementation 12 Jul 2022 Yun-Hao Cao, Peiqin Sun, Yechang Huang, Jianxin Wu, Shuchang Zhou

In this paper, we propose a method called synergistic self-supervised and quantization learning (SSQL) to pretrain quantization-friendly self-supervised models, facilitating downstream deployment.

Quantization Self-Supervised Learning

Worst Case Matters for Few-Shot Recognition

1 code implementation 13 Mar 2022 Minghao Fu, Yun-Hao Cao, Jianxin Wu

Few-shot recognition learns a recognition model with very few (e.g., 1 or 5) images per category, and current few-shot learning methods focus on improving the average accuracy over many episodes.

Few-Shot Image Classification Few-Shot Learning

Training Vision Transformers with Only 2040 Images

2 code implementations 26 Jan 2022 Yun-Hao Cao, Hao Yu, Jianxin Wu

Vision Transformers (ViTs) are emerging as an alternative to convolutional neural networks (CNNs) for visual recognition.

Inductive Bias

A Random CNN Sees Objects: One Inductive Bias of CNN and Its Applications

1 code implementation 17 Jun 2021 Yun-Hao Cao, Jianxin Wu

That is, a CNN has an inductive bias to naturally focus on objects, named Tobias ("The object is at sight") in this paper.

Inductive Bias Object +3

Rethinking Self-Supervised Learning: Small is Beautiful

1 code implementation 25 Mar 2021 Yun-Hao Cao, Jianxin Wu

Self-supervised learning (SSL), in particular contrastive learning, has made great progress in recent years.

Contrastive Learning Self-Supervised Learning

Rethinking the Route Towards Weakly Supervised Object Localization

1 code implementation CVPR 2020 Chen-Lin Zhang, Yun-Hao Cao, Jianxin Wu

Weakly supervised object localization (WSOL) aims to localize objects with only image-level labels.

Ranked #2 on Weakly-Supervised Object Localization on CUB-200-2011 (Top-1 Localization Accuracy metric)

General Classification Object +1

Neural Random Subspace

1 code implementation 18 Nov 2019 Yun-Hao Cao, Jianxin Wu, Hanchen Wang, Joan Lasenby

The random subspace method, known as the pillar of random forests, is good at making precise and robust predictions.

Deep Learning Representation Learning
