Search Results for author: Xinzhe Li

Found 8 papers, 5 papers with code

Precipitation Prediction Using an Ensemble of Lightweight Learners

1 code implementation · 30 Nov 2023 · Xinzhe Li, Sun Rui, Yiming Niu, Yao Liu

Specifically, the framework consists of a precipitation predictor with multiple lightweight heads (learners) and a controller that combines the outputs from these heads.

Ensemble Learning
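The predictor-plus-controller setup described in the snippet above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the "learner" heads are stand-in linear maps and the controller is a toy softmax-scoring function; all shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes; the paper does not specify these here.
n_heads, feat_dim, out_dim = 3, 8, 4

# Each lightweight "learner" head is sketched as a single linear map.
heads = [rng.standard_normal((feat_dim, out_dim)) for _ in range(n_heads)]

def controller_weights(features):
    """Toy controller: scores each head from the input, softmax-normalizes."""
    scores = np.array([features @ h.mean(axis=1) for h in heads])
    e = np.exp(scores - scores.max())
    return e / e.sum()

def ensemble_predict(features):
    preds = np.stack([features @ h for h in heads])  # (n_heads, out_dim)
    w = controller_weights(features)                 # (n_heads,) sums to 1
    return np.tensordot(w, preds, axes=1)            # controller-weighted combination

x = rng.standard_normal(feat_dim)
y = ensemble_predict(x)
print(y.shape)
```

The point of the sketch is only the wiring: several cheap heads produce candidate outputs, and a separate controller decides how to mix them per input.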

One-for-All: Towards Universal Domain Translation with a Single StyleGAN

no code implementations · 22 Oct 2023 · Yong Du, Jiahui Zhan, Shengfeng He, Xinzhe Li, Junyu Dong, Sheng Chen, Ming-Hsuan Yang

In this paper, we propose a novel translation model, UniTranslator, for transforming representations between visually distinct domains under conditions of limited training data and significant visual differences.

Translation

Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data

1 code implementation · 2 Jul 2023 · Xinzhe Li, Ming Liu, Shang Gao

This paper addresses the ethical concerns arising from the use of unauthorized public data in deep learning models and proposes a novel solution.

Question Answering · Text Classification +1

Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise?

1 code implementation · 27 Jun 2023 · Xinzhe Li, Ming Liu, Shang Gao

For Pretrained Language Models (PLMs), their susceptibility to noise has recently been linked to subword segmentation.

Segmentation
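The fragility hinted at above can be illustrated with a toy greedy longest-match (WordPiece-style) subword segmenter. The vocabulary and the `##` continuation convention here are illustrative only, not taken from any specific PLM: a single character transposition makes the known word unmatchable and shatters it into character-level fragments.

```python
def segment(word, vocab):
    """Greedy longest-match subword segmentation (WordPiece-style sketch)."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            cand = ("##" if i else "") + word[i:j]
            if cand in vocab:
                pieces.append(cand)
                i = j
                break
        else:
            # No vocabulary piece matches: fall back to a single character.
            pieces.append(("##" if i else "") + word[i])
            i += 1
    return pieces

vocab = {"language"}                 # toy vocabulary, purely illustrative
clean = segment("language", vocab)   # one intact subword
noisy = segment("lnaguage", vocab)   # transposition -> character fragments
print(clean)
print(noisy)
```

The clean word stays a single token, while the corrupted form degrades into eight single-character pieces, which is the kind of segmentation shift the paper links to noise susceptibility.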

A Survey on Out-of-Distribution Evaluation of Neural NLP Models

no code implementations · 27 Jun 2023 · Xinzhe Li, Ming Liu, Shang Gao, Wray Buntine

Adversarial robustness, domain generalization and dataset biases are three active lines of research contributing to out-of-distribution (OOD) evaluation of neural NLP models.

Adversarial Robustness · Domain Generalization

Learning to Self-Train for Semi-Supervised Few-Shot Classification

1 code implementation · NeurIPS 2019 · Xinzhe Li, Qianru Sun, Yaoyao Liu, Shibao Zheng, Qin Zhou, Tat-Seng Chua, Bernt Schiele

On each task, we train a few-shot model to predict pseudo labels for the unlabeled data, then iterate self-training steps over the labeled and pseudo-labeled data, following each step with fine-tuning.

Classification · General Classification +1
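The self-training loop described above can be sketched in miniature. This is not the paper's meta-learned model: the few-shot learner is replaced by a nearest-centroid classifier on toy 2-D data, and the "fine-tuning" step is approximated by pulling the centroids back toward the labeled data; all of that is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class task: a small labeled set and a larger unlabeled set.
x_lab = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(2, 0.3, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
x_unl = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])

def fit_centroids(x, y):
    return np.stack([x[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, x):
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

centroids = fit_centroids(x_lab, y_lab)
for _ in range(3):  # a few self-training rounds
    pseudo = predict(centroids, x_unl)          # 1. pseudo-label unlabeled data
    x_all = np.vstack([x_lab, x_unl])
    y_all = np.concatenate([y_lab, pseudo])
    centroids = fit_centroids(x_all, y_all)     # 2. retrain on labeled + pseudo-labeled
    # 3. crude stand-in for the fine-tuning step: mix back toward labeled-only centroids
    centroids = 0.5 * centroids + 0.5 * fit_centroids(x_lab, y_lab)

acc = (predict(centroids, x_unl) == np.array([0] * 20 + [1] * 20)).mean()
print(acc)
```

The structure mirrors the description: pseudo-label, retrain on the union, then fine-tune on the labeled data, repeated per task.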
