RGL: A Simple yet Effective Relation Graph Augmented Prompt-based Tuning Approach for Few-Shot Learning

ACL ARR January 2022  ·  Anonymous

Pre-trained language models (PLMs), which carry generic knowledge, can be a good starting point for adapting to downstream applications. However, it is difficult to generalize PLMs to new tasks given only a limited number of labeled samples. In this work, we show that our Relation Graph augmented Learning (RGL) method obtains better performance on few-shot natural language understanding tasks. During learning, RGL constructs a relation graph based on label consistency between samples in the same batch, and learns to solve the resultant node classification and link prediction problems on the relation graph. In this way, RGL fully exploits the limited supervised information, which boosts tuning effectiveness. Extensive experiments on benchmark tasks show that RGL consistently improves the performance of prompt-based tuning strategies.
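
The abstract does not spell out the exact training objective, so the following is only a rough PyTorch sketch of one plausible reading of the link-prediction part: the ground-truth relation graph connects batch samples whose labels agree, and predicted edge scores are trained toward that graph. The cosine-similarity scoring head, the helper name relation_graph_loss, and the loss weight alpha are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a batch-level relation-graph (link-prediction) loss.
# Assumptions (not from the paper): cosine similarity as the edge scorer,
# BCE on off-diagonal edges, and a weight `alpha` mixing it with the
# usual prompt-tuning classification loss.
import torch
import torch.nn.functional as F


def relation_graph_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Link-prediction loss on the label-consistency graph of one batch.

    embeddings: (B, D) sample representations from the PLM.
    labels:     (B,) integer class labels.
    """
    # Ground-truth relation graph: edge (i, j) exists iff labels agree.
    adjacency = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()  # (B, B)

    # Predicted edge scores from pairwise cosine similarity (assumption).
    normed = F.normalize(embeddings, dim=-1)
    scores = normed @ normed.t()  # values in [-1, 1]

    # Binary cross-entropy between predicted edges and the label graph,
    # ignoring the trivial self-loops on the diagonal.
    mask = ~torch.eye(labels.size(0), dtype=torch.bool, device=labels.device)
    probs = (scores + 1) / 2  # map similarity to (0, 1) for BCE
    return F.binary_cross_entropy(probs[mask], adjacency[mask])


if __name__ == "__main__":
    emb = torch.randn(8, 128, requires_grad=True)  # stand-in PLM features
    y = torch.randint(0, 2, (8,))                  # few-shot labels
    cls_loss = torch.tensor(0.0)                   # usual prompt-tuning loss (placeholder)
    alpha = 0.5                                    # relation-loss weight (assumption)
    total = cls_loss + alpha * relation_graph_loss(emb, y)
    total.backward()
```

In this reading, the relation-graph term acts as an auxiliary regularizer: every labeled pair in the batch contributes supervision, not just each sample individually, which is why it can help in the few-shot regime.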
