Search Results for author: Tiexin Qin

Found 4 papers, 3 papers with code

LibFewShot: A Comprehensive Library for Few-shot Learning

1 code implementation • 10 Sep 2021 • Wenbin Li, Chuanqi Dong, Pinzhuo Tian, Tiexin Qin, Xuesong Yang, Ziyi Wang, Jing Huo, Yinghuan Shi, Lei Wang, Yang Gao, Jiebo Luo

Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmark datasets with multiple backbone architectures to assess common pitfalls and the effects of different training tricks.

Data Augmentation • Few-Shot Image Classification • +1

Diversity Helps: Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation

1 code implementation • 13 Apr 2020 • Tiexin Qin, Wenbin Li, Yinghuan Shi, Yang Gao

Importantly, we highlight the value of distribution diversity in augmentation-based pretext few-shot tasks: it effectively alleviates overfitting and helps the few-shot model learn more robust feature representations (a toy sketch of such a pretext task follows this entry).

Data Augmentation • Few-Shot Learning
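
As a rough illustration of the idea in the excerpt above, the sketch below builds an augmentation-based pretext few-shot task from unlabeled images, inducing a distribution shift between the support and query sets by augmenting them differently. The specific operations, function names, and the weak/strong split are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch (not the paper's method): each unlabeled image becomes
# its own pseudo-class; support and query views get different augmentations
# so the query distribution is shifted relative to the support distribution.
import numpy as np

def weak_aug(x, rng):
    # e.g. a random horizontal flip
    return x[:, ::-1] if rng.random() < 0.5 else x

def strong_aug(x, rng):
    # e.g. flip plus additive noise, to shift the query distribution
    return weak_aug(x, rng) + rng.normal(0, 0.1, size=x.shape)

def make_pretext_task(unlabeled, n_way=5, rng=None):
    """Sample n_way unlabeled images and turn each into a pseudo-class."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(unlabeled), size=n_way, replace=False)
    support, query, labels = [], [], []
    for label, i in enumerate(idx):
        support.append(weak_aug(unlabeled[i], rng))   # support: weak view
        query.append(strong_aug(unlabeled[i], rng))   # query: shifted view
        labels.append(label)
    return np.stack(support), np.stack(query), np.array(labels)

unlabeled = [np.random.rand(32, 32) for _ in range(100)]
s, q, y = make_pretext_task(unlabeled)
print(s.shape, q.shape, y)  # (5, 32, 32) (5, 32, 32) [0 1 2 3 4]
```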

Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation

no code implementations • 22 Feb 2020 • Tiexin Qin, Ziyuan Wang, Kelei He, Yinghuan Shi, Yang Gao, Dinggang Shen

Conventional data augmentation, realized by performing simple pre-processing operations (e.g., rotation, crop, etc.), has been validated for its advantage in enhancing performance in medical image segmentation (a minimal sketch of such operations follows this entry).

Data Augmentation • Medical Image Segmentation • +1
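
For concreteness, here is a minimal sketch of the kind of simple pre-processing operations the excerpt above mentions, applied jointly to an image and its segmentation mask so both receive the same spatial transform. It assumes NumPy and SciPy; the function name, crop size, and angle range are illustrative, not taken from the paper.

```python
# Minimal sketch of conventional augmentation for segmentation: the image
# and its label mask must be rotated and cropped identically.
import numpy as np
from scipy import ndimage

def random_rotate_crop(image, mask, max_angle=15, crop=96, rng=None):
    """Rotate image and mask by the same random angle, then take a random crop."""
    if rng is None:
        rng = np.random.default_rng()
    angle = rng.uniform(-max_angle, max_angle)
    # order=1 interpolates the image; order=0 keeps the mask's integer labels intact
    img = ndimage.rotate(image, angle, reshape=False, order=1, mode="nearest")
    msk = ndimage.rotate(mask, angle, reshape=False, order=0, mode="nearest")
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return img[top:top + crop, left:left + crop], msk[top:top + crop, left:left + crop]

# Example on a fake 128x128 scan and binary mask.
image = np.random.rand(128, 128).astype(np.float32)
mask = (image > 0.5).astype(np.uint8)
aug_image, aug_mask = random_rotate_crop(image, mask)
print(aug_image.shape, aug_mask.shape)  # (96, 96) (96, 96)
```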

Automatic Data Augmentation by Learning the Deterministic Policy

1 code implementation • 18 Oct 2019 • Yinghuan Shi, Tiexin Qin, Yong Liu, Jiwen Lu, Yang Gao, Dinggang Shen

By introducing a unified optimization goal, DeepAugNet combines data augmentation and deep model training in an end-to-end manner, realized by simultaneously training a hybrid architecture of a dueling deep Q-learning algorithm and a surrogate deep model (a sketch of a dueling Q-network head follows this entry).

Data Augmentation • Q-Learning
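
The excerpt above mentions a dueling deep Q-learning component. Below is a hedged PyTorch sketch of a generic dueling Q-network head, where each discrete action could index one augmentation operation; the layer sizes, state dimension, and action count are illustrative assumptions, not DeepAugNet's actual architecture.

```python
# Generic dueling Q-network head (illustrative; not DeepAugNet's architecture).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim=64, n_actions=8, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)
        a = self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

# Each action could index one augmentation operation (rotate, crop, ...);
# the agent would pick the operation with the highest Q-value for the state.
q = DuelingQNet()
state = torch.randn(1, 64)
print(q(state).argmax(dim=-1))  # index of the chosen augmentation action
```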
