Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019).
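As a minimal sketch of this task-description idea, the snippet below uses the Hugging Face transformers zero-shot pipeline, which encodes the description in an NLI hypothesis template. This is a convenient stand-in rather than the exact method of Radford et al.; the model name, example text, candidate labels, and template are all illustrative assumptions.

```python
from transformers import pipeline

# NLI-based zero-shot classification: the "task description" lives in the
# hypothesis template instead of a raw LM prompt (assumed backbone:
# facebook/bart-large-mnli; any NLI-finetuned model would work).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The battery died after two hours of use.",        # illustrative input
    candidate_labels=["positive", "negative"],         # illustrative labels
    hypothesis_template="The sentiment of this review is {}.",
)
print(result["labels"][0])  # label with the highest entailment score
```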
Text classification tends to be difficult when labeled data are scarce or when the model must adapt to previously unseen classes.
Ranked #1 on Multi-Domain Sentiment Classification on ARSC
Our work aims to make it possible to classify an entire corpus of unlabeled documents with a human-in-the-loop approach: the content owner manually labels only one or two documents per category, and the rest are classified automatically.
Therefore, we should be able to learn a general representation of each class in the support set and then compare it to new queries.
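A minimal sketch of that idea, assuming each class in the support set is summarized by the mean of its document embeddings (a prototypical-network-style prototype) and each query is assigned to the nearest prototype. The 2-D embeddings and labels below are synthetic placeholders; any sentence encoder could supply real ones, which also fits the one-or-two-labels-per-category workflow described above.

```python
import numpy as np

def prototypes(support):
    """Collapse each class's support embeddings into one mean prototype."""
    return {label: embs.mean(axis=0) for label, embs in support.items()}

def classify(query, protos):
    """Assign a query embedding to the class with the nearest prototype."""
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

# Synthetic 2-D embeddings standing in for real sentence-encoder output.
support = {
    "sports":  np.array([[0.9, 0.1], [0.8, 0.2]]),  # two labeled documents
    "finance": np.array([[0.1, 0.9]]),              # a single labeled document
}
print(classify(np.array([0.85, 0.15]), prototypes(support)))  # -> "sports"
```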
Ranked #1 on Few-Shot Text Classification on ODIC 5-way (10-shot)
Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category.
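As a concrete sketch of the standard evaluation protocol, the snippet below samples one N-way K-shot episode (e.g., the 5-way, 10-shot setting cited above) from a corpus keyed by label. The `{label: [texts]}` layout and the query-set size are assumptions for illustration.

```python
import random

def sample_episode(data, n_way=5, k_shot=10, n_query=5):
    """Draw one N-way K-shot episode from a {label: [texts]} corpus.

    Returns K labeled support examples per sampled class plus a
    disjoint query set that the few-shot classifier must label.
    """
    classes = random.sample(sorted(data), n_way)
    support, query = [], []
    for label in classes:
        texts = random.sample(data[label], k_shot + n_query)
        support += [(text, label) for text in texts[:k_shot]]
        query += [(text, label) for text in texts[k_shot:]]
    return support, query
```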