Search Results for author: Haohang Xu

Found 11 papers, 5 papers with code

Masked Autoencoders are Robust Data Augmentors

1 code implementation • 10 Jun 2022 • Haohang Xu, Shuangrui Ding, Xiaopeng Zhang, Hongkai Xiong, Qi Tian

Specifically, MRA (Mask-Reconstruct Augmentation) consistently improves performance on supervised, semi-supervised, and few-shot classification.
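
As the title suggests, the idea is to mask out part of an image and let a pretrained masked autoencoder repaint the hidden regions, using the result as a strong but semantics-preserving augmentation. A minimal sketch, assuming a model with the reference MAE interface (forward returning (loss, pred, mask) per patch, plus patchify/unpatchify helpers); the paper's actual procedure may differ in its details:

```python
import torch

def mask_reconstruct_augment(images, mae, mask_ratio=0.75):
    # Hedged sketch: `mae` is assumed to follow the reference MAE interface,
    # where forward(imgs, mask_ratio) returns (loss, pred, mask) with pred
    # and mask defined per patch, plus patchify/unpatchify helpers.
    with torch.no_grad():
        _, pred, mask = mae(images, mask_ratio=mask_ratio)
        patches = mae.patchify(images)
        mask = mask.unsqueeze(-1)  # (N, L) -> (N, L, 1); 1 marks a masked patch
        # Keep visible patches from the input, fill masked ones with the
        # MAE reconstruction, then fold the patches back into an image.
        mixed = patches * (1 - mask) + pred * mask
        return mae.unpatchify(mixed)
```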

Image Augmentation • Image Classification +1

Semantic-Aware Generation for Self-Supervised Visual Representation Learning

1 code implementation • 25 Nov 2021 • Yunjie Tian, Lingxi Xie, Xiaopeng Zhang, Jiemin Fang, Haohang Xu, Wei Huang, Jianbin Jiao, Qi Tian, Qixiang Ye

In this paper, we propose a self-supervised visual representation learning approach that involves both generative and discriminative proxies; we focus on the former by requiring the target network to recover the original image from its mid-level features.
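
A minimal sketch of the generative proxy described above, assuming a hypothetical encoder.forward_features that returns per-stage feature maps and a small upsampling decoder; this illustrates the idea, not the paper's implementation:

```python
import torch.nn.functional as F

def generative_proxy_loss(encoder, decoder, images, mid_stage=2):
    # Hypothetical interface: `encoder.forward_features` is assumed to
    # return a list of per-stage feature maps; `decoder` is a lightweight
    # upsampling head mapping a mid-level feature map back to pixels.
    feats = encoder.forward_features(images)
    recon = decoder(feats[mid_stage])          # reconstruct the input image
    return F.mse_loss(recon, images)           # generative proxy objective
```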

Representation Learning • Semantic Segmentation

Motion-aware Contrastive Video Representation Learning via Foreground-background Merging

1 code implementation • CVPR 2022 • Shuangrui Ding, Maomao Li, Tianyu Yang, Rui Qian, Haohang Xu, Qingyi Chen, Jue Wang, Hongkai Xiong

To alleviate such bias, we propose Foreground-background Merging (FAME) to deliberately compose the moving foreground region of the selected video onto the static background of others.
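
The merging step can be illustrated with a crude frame-difference foreground mask; the paper derives its mask more carefully, so treat this as a sketch of the idea only:

```python
import torch

def fame_merge(video_a, video_b, thresh=0.1):
    # Sketch of foreground-background merging: estimate a rough
    # moving-foreground mask for video_a from frame differences, then
    # paste that foreground onto the static background of video_b.
    # Both videos are (T, C, H, W) tensors of the same size.
    diff = (video_a[1:] - video_a[:-1]).abs().mean(dim=1, keepdim=True)
    motion = diff.mean(dim=0, keepdim=True)    # temporal average, (1, 1, H, W)
    fg_mask = (motion > thresh).float()        # 1 where motion is large
    return fg_mask * video_a + (1 - fg_mask) * video_b
```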

Action Recognition • Contrastive Learning +1

Bag of Instances Aggregation Boosts Self-supervised Distillation

1 code implementation • ICLR 2022 • Haohang Xu, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Xinggang Wang, Wenrui Dai, Hongkai Xiong, Qi Tian

Here, a bag of instances refers to a set of similar samples constructed by the teacher and grouped within a bag; the goal of distillation is to aggregate compact representations over the student with respect to the instances in a bag.
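
A hedged sketch of the aggregation objective: the teacher's embedding space defines a bag of nearest neighbours for an anchor, and the student is trained to place probability mass on the whole bag. The names and the InfoNCE-style form are illustrative assumptions, not the paper's exact loss:

```python
import torch.nn.functional as F

def bag_aggregation_loss(student_feats, teacher_feats, anchor_idx,
                         bag_size=4, tau=0.2):
    # student_feats, teacher_feats: (N, D) embeddings of the same batch.
    t = F.normalize(teacher_feats, dim=1)
    s = F.normalize(student_feats, dim=1)
    sim = t[anchor_idx] @ t.t()                  # teacher similarities, (N,)
    bag = sim.topk(bag_size + 1).indices[1:]     # nearest neighbours, excluding self
    logits = (s[anchor_idx] @ s.t()) / tau       # student similarities, (N,)
    # Maximise the probability mass the student assigns to the bag members.
    log_prob = logits.log_softmax(dim=-1)
    return -log_prob[bag].mean()
```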

Contrastive Learning • Self-Supervised Learning

Multi-dataset Pretraining: A Unified Model for Semantic Segmentation

no code implementations • 8 Jun 2021 • Bowen Shi, Xiaopeng Zhang, Haohang Xu, Wenrui Dai, Junni Zou, Hongkai Xiong, Qi Tian

This is achieved by first pretraining the network with the proposed pixel-to-prototype contrastive loss over multiple datasets, regardless of their taxonomy labels, and then fine-tuning the pretrained model on a specific dataset as usual.
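
A pixel-to-prototype contrastive loss can be sketched as a cross-entropy between pixel embeddings and class prototypes; the shapes and temperature below are illustrative assumptions rather than the paper's exact formulation:

```python
import torch.nn.functional as F

def pixel_to_prototype_loss(pixel_embed, prototypes, labels, tau=0.1):
    # pixel_embed: (N, D) embeddings of N sampled pixels.
    # prototypes: (K, D) class prototypes shared across datasets.
    # labels: (N,) prototype index for each pixel, in [0, K).
    pixel_embed = F.normalize(pixel_embed, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = pixel_embed @ prototypes.t() / tau   # (N, K) similarities
    # Pull each pixel toward its own prototype, push from all others.
    return F.cross_entropy(logits, labels)
```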

Semantic Segmentation

Seed the Views: Hierarchical Semantic Alignment for Contrastive Representation Learning

no code implementations • 4 Dec 2020 • Haohang Xu, Xiaopeng Zhang, Hao Li, Lingxi Xie, Hongkai Xiong, Qi Tian

In this paper, we propose a hierarchical semantic alignment strategy that expands the views generated from a single image to cross-sample and multi-level representations, and models the invariance to semantically similar images in a hierarchical way.
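
The cross-sample part of the strategy can be sketched as expanding a query's positive set with its nearest neighbours from a memory queue; the multi-level part would apply the same alignment at several feature depths. A hypothetical helper:

```python
import torch.nn.functional as F

def cross_sample_positives(queue_feats, query, k=4):
    # queue_feats: (M, D) embeddings held in a memory queue.
    # query: (D,) embedding of the current view.
    # Returns indices of the k most similar queue entries, which are
    # treated as extra positives alongside the query's own augmented view.
    q = F.normalize(query, dim=0)
    bank = F.normalize(queue_feats, dim=1)
    sim = bank @ q                     # (M,) cosine similarities
    return sim.topk(k).indices
```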

Contrastive Learning • Representation Learning +2

K-Shot Contrastive Learning of Visual Features with Multiple Instance Augmentations

no code implementations • 27 Jul 2020 • Haohang Xu, Hongkai Xiong, Guo-Jun Qi

In this paper, we propose K-Shot Contrastive Learning (KSCL) of visual features, which applies multiple augmentations to investigate the sample variations within individual instances.
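
The core device is simply drawing K independent augmentations of one instance, so the contrastive objective sees a distribution of within-instance variations rather than a single positive pair; a minimal sketch with a hypothetical augment callable:

```python
import torch

def k_shot_views(image, augment, k=8):
    # Draw K independent augmentations of the same instance.
    # `augment` is any stochastic transform, e.g. a torchvision pipeline;
    # the stacked views then feed a contrastive loss as K positives.
    return torch.stack([augment(image) for _ in range(k)])  # (K, C, H, W)
```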

Contrastive Learning

FLAT: Few-Shot Learning via Autoencoding Transformation Regularizers

no code implementations • 29 Dec 2019 • Haohang Xu, Hongkai Xiong, Guo-Jun Qi

To this end, we present a novel regularization mechanism that learns how feature representations change under a distribution of transformations, without using the labels of data examples.
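
One way to realise such a regularizer, in the spirit of autoencoding transformations, is to ask a small head to recover the transformation parameters from the pair of feature representations; the interface below (transform(x, t_params), a decoder_t head) is a hypothetical sketch:

```python
import torch
import torch.nn.functional as F

def transformation_regularizer(encoder, decoder_t, x, t_params, transform):
    # Apply a sampled transformation t to the input, then estimate t's
    # parameters from the original and transformed features. No class
    # labels are involved, so this works in the few-shot regime.
    z = encoder(x)                                  # (N, D)
    z_t = encoder(transform(x, t_params))           # (N, D)
    pred = decoder_t(torch.cat([z, z_t], dim=1))    # estimate t from (z, z_t)
    return F.mse_loss(pred, t_params)
```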

Data Augmentation • Few-Shot Learning +1

AETv2: AutoEncoding Transformations for Self-Supervised Representation Learning by Minimizing Geodesic Distances in Lie Groups

no code implementations • 16 Nov 2019 • Feng Lin, Haohang Xu, Houqiang Li, Hongkai Xiong, Guo-Jun Qi

For this reason, we should use the geodesic to characterize how an image transforms along the manifold of a transformation group, and adopt its length to measure the deviation between transformations.
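
For matrix transformation groups, this geodesic length can be sketched as the norm of the Lie-algebra element log(g1^{-1} g2), in contrast to a Euclidean difference of matrix entries; a minimal numerical illustration, not the paper's exact metric:

```python
import numpy as np
from scipy.linalg import logm, inv

def geodesic_distance(g1, g2):
    # g1, g2: square matrices representing transformations in a Lie group
    # (e.g. 3x3 projective or affine transforms). The matrix logarithm of
    # g1^{-1} g2 lives in the Lie algebra; its norm measures the geodesic
    # deviation between the two transformations along the group manifold.
    delta = logm(inv(g1) @ g2)
    return np.linalg.norm(delta)
```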

Representation Learning • Self-Supervised Learning
