Search Results for author: Quan Feng

Found 4 papers, 0 papers with code

Deep Semi-supervised Learning with Double-Contrast of Features and Semantics

no code implementations • 28 Nov 2022 • Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou

Therefore, a more realistic strategy is to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data.
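The SSL strategy described above can be sketched as a combined objective: a supervised loss on the small labeled set plus an unsupervised consistency term on the large unlabeled pool. The function names and the mean-squared consistency term below are illustrative stand-ins, not the paper's actual double-contrast formulation.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Supervised loss on the small labeled set.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def consistency(p_weak, p_strong):
    # Unsupervised term on unlabeled data: predictions under two
    # augmentations of the same input should agree (generic stand-in
    # for a contrastive/consistency objective).
    return np.mean((p_weak - p_strong) ** 2)

def ssl_loss(probs_labeled, labels, p_weak, p_strong, lam=1.0):
    # lam balances the labeled and unlabeled terms.
    return cross_entropy(probs_labeled, labels) + lam * consistency(p_weak, p_strong)
```

With only a handful of labeled examples, the unlabeled term supplies most of the training signal, which is what makes the setting "more realistic" than fully supervised learning.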

Representation Learning

Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts

no code implementations • 11 Apr 2022 • Jiayu Yao, Qingyuan Wu, Quan Feng, Songcan Chen

Self-supervised learning (SSL), as a newly emerging unsupervised representation learning paradigm, generally follows a two-stage learning pipeline: 1) learning invariant and discriminative representations with auto-annotation pretext(s), then 2) transferring the representations to assist downstream task(s).
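The two-stage pipeline above can be sketched as follows. The one-layer encoder, the random "pretrained" weights, and the least-squares downstream head are hypothetical simplifications to keep the sketch short; a real SSL pipeline would optimize a pretext loss (rotation prediction, contrastive learning, ...) in stage 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Representation network (a single ReLU layer for illustration).
    return np.maximum(x @ W, 0.0)

def pretrain_encoder(dim_in, dim_rep):
    # Stage 1: learn the encoder with an auto-annotated pretext task.
    # Stand-in: random weights in place of actual pretext optimization.
    return rng.normal(size=(dim_in, dim_rep))

def fit_downstream_head(x, y, W):
    # Stage 2: transfer -- freeze the encoder and fit a linear head
    # on the downstream labels via least squares.
    feats = encode(x, W)
    head, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return head

x = rng.normal(size=(32, 8))   # inputs
y = rng.normal(size=(32, 1))   # downstream targets
W = pretrain_encoder(8, 16)
head = fit_downstream_head(x, y, W)
pred = encode(x, W) @ head
```

The key property of the pipeline is the separation of concerns: stage 1 needs no human labels, and stage 2 reuses the frozen representation for the labeled downstream task.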

Representation Learning • Self-Supervised Learning

Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task

no code implementations • 7 Jan 2022 • Quan Feng, Songcan Chen

Multi-task learning aims to improve model performance by transferring and exploiting common knowledge among tasks.
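A common way to realize "common knowledge among tasks" is hard parameter sharing: a shared trunk produces one representation, and each task attaches its own head. This is a generic illustration of the idea, not the specific mechanism of the paper.

```python
import numpy as np

def shared_trunk(x, W_shared):
    # Common knowledge: these parameters are reused by every task.
    return np.tanh(x @ W_shared)

def task_head(h, W_task):
    # Task-specific parameters on top of the shared representation.
    return h @ W_task

rng = np.random.default_rng(1)
W_shared = rng.normal(size=(4, 8))
heads = {
    "task_a": rng.normal(size=(8, 2)),  # e.g. a 2-class task
    "task_b": rng.normal(size=(8, 3)),  # e.g. a 3-class task
}

x = rng.normal(size=(5, 4))
h = shared_trunk(x, W_shared)
outputs = {name: task_head(h, W) for name, W in heads.items()}
```

Gradients from every task's loss flow into `W_shared`, which is where the knowledge transfer happens.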

Multi-Task Learning

Learning Twofold Heterogeneous Multi-Task by Sharing Similar Convolution Kernel Pairs

no code implementations • 29 Jan 2021 • Quan Feng, Songcan Chen

However, to the best of our knowledge, there has been limited study of the twofold heterogeneous MTL (THMTL) scenario, in which the input and output spaces are both inconsistent or heterogeneous.
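The paper's title suggests sharing similar convolution kernel pairs across tasks. A minimal sketch of the underlying idea, under the assumption that similarity is measured by cosine distance between flattened kernels, is to find the most similar cross-task kernel pair as a candidate for weight sharing; the function names are hypothetical.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two flattened conv kernels.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def most_similar_pair(kernels_a, kernels_b):
    # Scan all cross-task kernel pairs and return the most similar one;
    # such a pair is a candidate for sharing a single set of weights.
    return max(((i, j, cosine(ka, kb))
                for i, ka in enumerate(kernels_a)
                for j, kb in enumerate(kernels_b)),
               key=lambda t: t[2])

rng = np.random.default_rng(2)
kernels_a = rng.normal(size=(4, 3, 3))  # task A: four 3x3 conv kernels
kernels_b = rng.normal(size=(4, 3, 3))  # task B: four 3x3 conv kernels
kernels_b[2] = kernels_a[1]             # plant one identical pair
i, j, sim = most_similar_pair(kernels_a, kernels_b)
```

Because the inputs and outputs of the two tasks can differ entirely, only these kernel-level weights are shared, not whole layers or heads.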

Multi-Task Learning
