Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding

23 May 2022 · Rishabh Bhardwaj, Amrita Saha, Steven C. H. Hoi

Prompt Tuning (PT) has been largely successful as a parameter-efficient way of conditioning large-scale pre-trained language models towards a downstream task. More recently, soft prompt tuning has learned a fixed set of task-specific continuous vectors, i.e., soft tokens that remain static across task samples. A fixed prompt, however, may not generalize well to the diverse kinds of inputs the task comprises. With this motivation, we propose a novel way of prompting, Vector-quantized Input-contextualized Prompt Tuning, or VIP. Essentially, VIP focuses on two aspects: i) input-adaptation, i.e., input-specific contextualization of the soft tokens; and ii) vector quantization, where we pass the contextualized tokens through a quantizer that reduces representation variance by sampling prompts from a compact latent space. Over a wide range of natural language understanding tasks (SuperGLUE, QA, Relation Classification, NER, NLI), our proposed VIP framework beats the PT model by a margin of 1.19%. Additionally, on out-of-domain QA and multi-task setups over 4 different tasks spanning 12 domains, we find that VIP outperforms PT by 0.75%.
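To make the two ingredients of VIP concrete, below is a minimal PyTorch sketch of how input-contextualized soft prompts can be combined with a vector-quantized codebook. The abstract does not specify the architecture, so the choices here are assumptions: the contextualizer is modeled as cross-attention from the soft tokens to the input embeddings, the quantizer uses a VQ-VAE-style nearest-neighbour lookup with a straight-through gradient and commitment loss, and the class names (`VectorQuantizer`, `VIPPrompt`), codebook size, and prompt length are hypothetical illustrative defaults rather than the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient (VQ-VAE-style assumption)."""

    def __init__(self, codebook_size: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, z):                          # z: (batch, n_prompt, dim)
        flat = z.reshape(-1, z.size(-1))           # (batch * n_prompt, dim)
        # Squared L2 distance from each contextualized token to every codebook entry.
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx).view_as(z)
        # Straight-through estimator: gradients flow to z as if quantization were identity.
        z_q_st = z + (z_q - z).detach()
        # Codebook + commitment losses, as in standard vector quantization.
        vq_loss = F.mse_loss(z_q, z.detach()) + F.mse_loss(z, z_q.detach())
        return z_q_st, vq_loss


class VIPPrompt(nn.Module):
    """Hypothetical sketch: contextualize learnable soft prompts on the input, then quantize them."""

    def __init__(self, n_prompt=20, dim=768, codebook_size=1000, n_heads=8):
        super().__init__()
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)
        # Assumed contextualizer: each soft token attends over the input embeddings.
        self.contextualizer = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.quantizer = VectorQuantizer(codebook_size, dim)

    def forward(self, input_embeds):               # input_embeds: (batch, seq_len, dim)
        b = input_embeds.size(0)
        prompts = self.soft_prompt.unsqueeze(0).expand(b, -1, -1)
        ctx, _ = self.contextualizer(prompts, input_embeds, input_embeds)
        quantized, vq_loss = self.quantizer(ctx)
        # Prepend the quantized, input-specific prompts to the (frozen) LM's input embeddings.
        return torch.cat([quantized, input_embeds], dim=1), vq_loss
```

In this reading, only the soft prompts, the contextualizer, and the codebook are trained, while the backbone language model stays frozen as in standard prompt tuning; the `vq_loss` term would be added to the task loss so the codebook provides a compact latent space that limits the variance of the input-adapted prompts.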
