Semi-Supervised Text Classification

22 papers with code • 2 benchmarks • 2 datasets

Semi-supervised text classification trains a text classifier from a small set of labelled documents together with a much larger pool of unlabelled text, aiming to improve accuracy when annotation is scarce or expensive.

Most implemented papers

Adversarial Training Methods for Semi-Supervised Text Classification

tensorflow/models 25 May 2016

We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself.
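
The mechanism is easiest to see in code: take the gradient of the loss with respect to the embedded input, then re-run the forward pass on the perturbed embeddings. A minimal PyTorch sketch, assuming a hypothetical `model` that maps already-embedded inputs of shape `(batch, seq_len, emb_dim)` to logits; this illustrates the idea rather than reproducing the tensorflow/models implementation.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, embeds, labels, epsilon=1.0):
    """FGSM-style adversarial training applied to word embeddings (sketch).

    `model` mapping embedded inputs to logits is an assumed interface,
    not the paper's exact implementation.
    """
    embeds = embeds.detach().requires_grad_(True)
    loss = F.cross_entropy(model(embeds), labels)
    grad, = torch.autograd.grad(loss, embeds)
    # Perturb along the gradient, L2-normalized per example.
    norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
    perturb = epsilon * grad / norm.view(-1, 1, 1)
    # Re-run the forward pass on the perturbed embeddings.
    return F.cross_entropy(model(embeds + perturb.detach()), labels)
```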

Deconvolutional Paragraph Representation Learning

dreasysnail/deconv_paragraph_represention NeurIPS 2017

Learning latent representations from long text sequences is an important first step in many natural language processing applications.
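
The paper's approach pairs a convolutional encoder with a deconvolutional (transposed-convolution) decoder, so a paragraph is compressed to a fixed-size latent code and reconstructed without an autoregressive decoder. A heavily simplified PyTorch sketch; the class name and all layer sizes are illustrative, not the repository's configuration.

```python
import torch
import torch.nn as nn

class DeconvParagraphAE(nn.Module):
    """Sketch: conv encoder to a fixed-size latent, deconv decoder back
    to the embedded sequence. Sizes are illustrative, not the paper's."""
    def __init__(self, emb_dim=300, latent_dim=500, seq_len=60):
        super().__init__()
        # A whole-sequence filter collapses the paragraph to length 1.
        self.conv = nn.Conv1d(emb_dim, latent_dim, kernel_size=seq_len)
        self.deconv = nn.ConvTranspose1d(latent_dim, emb_dim, kernel_size=seq_len)

    def forward(self, embeds):                  # embeds: (batch, emb_dim, seq_len)
        latent = torch.relu(self.conv(embeds))  # (batch, latent_dim, 1)
        return self.deconv(latent)              # reconstructed (batch, emb_dim, seq_len)
```

Training minimizes reconstruction error on the embeddings, and the fixed-size `latent` then serves as the paragraph representation for semi-supervised classifiers.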

Did You Really Just Have a Heart Attack? Towards Robust Detection of Personal Health Mentions in Social Media

emory-irlab/PHM2017 26 Feb 2018

The first, critical task for these applications is classifying whether a personal health event was mentioned, which we call the personal health mention (PHM) problem.

Adversarial Dropout for Recurrent Neural Networks

sungraepark/adversarial_dropout_text_classification 22 Apr 2019

Successfully processing sequential data such as text and speech requires improved generalization performance from recurrent neural networks (RNNs).
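
The proposed regularizer replaces additive input noise with an adversarially chosen dropout mask. A rough PyTorch sketch of that idea, assuming a hypothetical `model(x, mask=...)` interface that multiplies one hidden layer by the given element-wise mask; the paper constrains the adversarial mask more carefully than this illustration.

```python
import torch
import torch.nn.functional as F

def adversarial_dropout_loss(model, x, hidden_dim, delta=0.05):
    """Sketch: drop the hidden units whose removal most increases
    divergence from the clean prediction. `model(x, mask=...)` is an
    assumed interface, not the repository's API.
    """
    with torch.no_grad():
        clean = F.softmax(model(x, mask=None), dim=-1)
    # Start from a keep-everything mask, relaxed so it is differentiable.
    mask = torch.ones(x.size(0), hidden_dim, requires_grad=True)
    kl = F.kl_div(F.log_softmax(model(x, mask=mask), dim=-1), clean,
                  reduction="batchmean")
    grad, = torch.autograd.grad(kl, mask)
    # Dropping unit j changes KL by roughly -grad[j]; drop the fraction
    # `delta` of units with the most negative gradient.
    k = max(1, int(delta * hidden_dim))
    idx = grad.topk(k, dim=1, largest=False).indices
    adv_mask = torch.ones_like(mask).scatter(1, idx, 0.0)
    return F.kl_div(F.log_softmax(model(x, mask=adv_mask), dim=-1), clean,
                    reduction="batchmean")
```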

Semi-Supervised Learning with Normalizing Flows

izmailovpavel/flowgmm ICML 2020

Normalizing flows transform a latent distribution through an invertible neural network for a flexible and pleasingly simple approach to generative modelling, while preserving an exact likelihood.
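
In the semi-supervised variant (FlowGMM), the latent prior is a Gaussian mixture with one component per class: labelled examples maximize their class component, unlabelled examples maximize the marginal. A sketch of that objective, where `flow(x)` returning `(z, log_det_jacobian)` and a `gmm` object exposing `component_log_prob` and `log_weights` are assumed interfaces, not the izmailovpavel/flowgmm API.

```python
import torch

def flow_log_likelihood(flow, gmm, x, label=None):
    """Exact log-likelihood via the change of variables formula, with a
    per-class Gaussian mixture as the latent prior (sketch)."""
    z, log_det = flow(x)                   # invertible transform
    comp = gmm.component_log_prob(z)       # (batch, num_classes)
    if label is None:
        # Unlabelled: marginalize over classes.
        log_pz = torch.logsumexp(comp + gmm.log_weights, dim=1)
    else:
        # Labelled: use the component of the observed class.
        log_pz = (comp.gather(1, label.unsqueeze(1)).squeeze(1)
                  + gmm.log_weights[label])
    return log_pz + log_det                # exact log p(x) / log p(x, y)
```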

MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification

GT-SALT/MixText ACL 2020

This paper presents MixText, a semi-supervised learning method for text classification, which uses our newly designed data augmentation method called TMix.
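
TMix is mixup applied to hidden states rather than raw text: two examples are encoded up to an intermediate layer, interpolated, and the mixed representation continues through the remaining layers with a correspondingly mixed label. A small sketch of the interpolation step (the surrounding encoder plumbing is omitted):

```python
import torch

def tmix(hidden_a, hidden_b, y_a, y_b, alpha=0.75):
    """Sketch of TMix: interpolate same-layer hidden states of two
    examples and mix their label distributions with the same coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1 - lam)  # keep the mix anchored to example a
    mixed_hidden = lam * hidden_a + (1 - lam) * hidden_b
    mixed_label = lam * y_a + (1 - lam) * y_b
    return mixed_hidden, mixed_label
```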

DisCo: Distilled Student Models Co-training for Semi-supervised Text Mining

litesslhub/disco 20 May 2023

Many text mining models are constructed by fine-tuning a large deep pre-trained language model (PLM) on downstream tasks.
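
DisCo's remedy is to distill the PLM into several small students and have them co-train on unlabelled text. A minimal sketch of one cross-supervision step between two students; the symmetric averaged KL term is an illustrative choice, not DisCo's exact objective.

```python
import torch.nn.functional as F

def co_training_step(student_a, student_b, unlabelled_batch):
    """Sketch: each distilled student learns from the other's (detached)
    soft prediction on the same unlabelled batch."""
    logits_a = student_a(unlabelled_batch)
    logits_b = student_b(unlabelled_batch)
    loss_a = F.kl_div(F.log_softmax(logits_a, dim=-1),
                      F.softmax(logits_b.detach(), dim=-1),
                      reduction="batchmean")
    loss_b = F.kl_div(F.log_softmax(logits_b, dim=-1),
                      F.softmax(logits_a.detach(), dim=-1),
                      reduction="batchmean")
    return (loss_a + loss_b) / 2
```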

Rethinking Semi-supervised Learning with Language Models

amzn/pretraining-or-self-training 22 May 2023

Semi-supervised learning (SSL) is a popular setting aiming to effectively utilize unlabelled data to improve model performance in downstream natural language processing (NLP) tasks.
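
One of the SSL strategies the paper weighs against continued pretraining is self-training with pseudo-labels. A compact sketch of the pseudo-labelling step, assuming batches are plain tensors of token ids; the confidence threshold and loader interface are illustrative.

```python
import torch
import torch.nn.functional as F

def pseudo_label(model, unlabelled_loader, threshold=0.9):
    """Sketch: keep only confident predictions on unlabelled text as
    pseudo-labels for another round of fine-tuning."""
    model.eval()
    kept = []
    with torch.no_grad():
        for batch in unlabelled_loader:
            probs = F.softmax(model(batch), dim=-1)
            conf, label = probs.max(dim=-1)
            mask = conf >= threshold  # discard low-confidence examples
            kept.append((batch[mask], label[mask]))
    return kept
```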

Variational Pretraining for Semi-supervised Text Classification

allenai/vampire ACL 2019

We accompany this paper with code to pretrain and use VAMPIRE embeddings in downstream tasks.
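
VAMPIRE pretrains a variational autoencoder on unlabelled bag-of-words counts and plugs the resulting document code into downstream classifiers as frozen features. A sketch of the inference half only; the class name and dimensions are illustrative, not the allenai/vampire API.

```python
import torch
import torch.nn as nn

class BowVAEEncoder(nn.Module):
    """Sketch of the VAMPIRE idea: the mean head of a VAE pretrained on
    bag-of-words counts, used as frozen downstream features. The KL
    term and decoder of the full VAE are omitted here."""
    def __init__(self, vocab_size=30000, latent_dim=64):
        super().__init__()
        self.hidden = nn.Linear(vocab_size, 512)
        self.mu = nn.Linear(512, latent_dim)

    def forward(self, bow_counts):            # (batch, vocab_size)
        h = torch.relu(self.hidden(bow_counts))
        return self.mu(h)                     # latent features for downstream use
```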