Prompt Engineering
56 papers with code • 10 benchmarks • 9 datasets
Prompt engineering is the process of designing and refining the prompts used to elicit text from language models such as GPT-3. The goal is to improve the quality and relevance of the generated text by carefully crafting prompts that draw the desired responses from the model.
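A minimal sketch of the idea, using the Hugging Face transformers text-generation pipeline with GPT-2 as a freely runnable stand-in for GPT-3; the task and prompt wording are illustrative assumptions, not a prescribed recipe:

```python
from transformers import pipeline

# GPT-2 stands in here for larger models such as GPT-3 (assumption for illustration).
generator = pipeline("text-generation", model="gpt2")

# A bare prompt leaves the task underspecified.
bare = "Translate: cheese"

# An engineered prompt states the task, format, and target explicitly.
engineered = (
    "Translate English to French.\n"
    "English: sea otter\n"
    "French: loutre de mer\n"
    "English: cheese\n"
    "French:"
)

for prompt in (bare, engineered):
    out = generator(prompt, max_new_tokens=10, do_sample=False)
    print(repr(out[0]["generated_text"]))
```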
Prompt engineering involves several steps, including selecting an appropriate model and its generation parameters, designing the prompt's format and structure, choosing the task and any demonstration or training examples, and, in prompt-tuning approaches, fine-tuning the model with the selected prompt and data. A simple few-shot template for the design step is sketched below.
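As a concrete illustration of the prompt-design step, this small sketch assembles an instruction, a few demonstrations, and the query into one few-shot prompt; the helper name and example data are hypothetical:

```python
def build_few_shot_prompt(instruction, demonstrations, query):
    """Hypothetical helper: joins an instruction, labeled examples,
    and the query into a single few-shot prompt string."""
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each movie review as positive or negative.",
    [("A moving, beautifully shot film.", "positive"),
     ("Two hours I will never get back.", "negative")],
    "The plot was thin but the acting carried it.",
)
print(prompt)
```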
Prompt engineering is a crucial step in working with language models, because the prompt strongly influences the quality and effectiveness of the model's responses. By carefully designing and refining prompts, researchers and developers can improve the accuracy and relevance of the model's output, making it more useful for a wide range of applications, including chatbots, machine translation, and content creation.
Libraries
Use these libraries to find Prompt Engineering models and implementations.
Most implemented papers
Learning Transferable Visual Models From Natural Language Supervision
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.
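CLIP instead learns from natural-language supervision and classifies zero-shot by scoring images against prompt templates such as "a photo of a {label}". A hedged sketch using the Hugging Face CLIP implementation; the label set and blank test image are illustrative assumptions:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative labels; a blank image stands in for real input.
labels = ["cat", "dog", "car"]
prompts = [f"a photo of a {label}" for label in labels]  # the prompt template
image = Image.new("RGB", (224, 224))

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
print(logits.softmax(dim=-1))  # probability over the prompted labels
```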
Multitask Prompted Training Enables Zero-Shot Task Generalization
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020).
GPT Understands, Too
On the SuperGLUE benchmark, GPTs achieve performance comparable to, and sometimes better than, similar-sized BERTs in supervised learning.
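The paper's P-tuning replaces hand-written prompts with trainable continuous prompt embeddings. A simplified sketch of that idea, assuming a frozen backbone (P-tuning additionally passes the prompt vectors through an LSTM encoder, omitted here; all dimensions are illustrative):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Simplified continuous prompt: learnable vectors prepended to the
    input token embeddings while the language model stays frozen."""

    def __init__(self, n_prompt_tokens, embed_dim):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)

    def forward(self, token_embeds):  # token_embeds: (batch, seq, dim)
        batch = token_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeds], dim=1)

# Usage: only the prompt parameters receive gradients during training.
soft = SoftPrompt(n_prompt_tokens=20, embed_dim=768)
dummy = torch.randn(2, 16, 768)  # stand-in for frozen-LM token embeddings
print(soft(dummy).shape)         # torch.Size([2, 36, 768])
```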
Learning to Prompt for Vision-Language Models
Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks.
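The paper's Context Optimization (CoOp) applies the soft-prompt idea to CLIP: shared learnable context vectors are concatenated with each class-name embedding before the frozen text encoder. A sketch under that reading, with illustrative dimensions and stand-in class-name embeddings:

```python
import torch
import torch.nn as nn

class CoOpStylePrompt(nn.Module):
    """Sketch of CoOp-style prompt learning: learnable context vectors
    shared across classes, prepended to frozen class-name embeddings."""

    def __init__(self, n_ctx, embed_dim, class_name_embeds):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)
        self.class_name_embeds = class_name_embeds  # (n_classes, n_tok, dim), frozen

    def forward(self):
        n_classes = self.class_name_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        return torch.cat([ctx, self.class_name_embeds], dim=1)

names = torch.randn(3, 4, 512)  # stand-in class-name token embeddings
print(CoOpStylePrompt(16, 512, names)().shape)  # torch.Size([3, 20, 512])
```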
Conditional Prompt Learning for Vision-Language Models
With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets.
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.
Ask Me Anything: A simple strategy for prompting language models
Prompting is a brittle process wherein small modifications to the prompt can cause large variations in the model predictions, and therefore significant effort is dedicated towards designing a painstakingly "perfect prompt" for a task.
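The paper's remedy is to aggregate the outputs of multiple imperfect prompts rather than hand-tune one; AMA combines them with weak supervision, but a plain majority vote, shown here, is the simplest stand-in. The classify function below is a hypothetical placeholder for an LM call:

```python
from collections import Counter

def classify(prompt):
    """Hypothetical stand-in for an LM call; returns a fixed-rule label
    so the sketch runs without a model."""
    return "positive" if "good" in prompt else "negative"

def majority_vote(templates, text):
    # Ask the same question under several prompt formats and keep the
    # most common answer, reducing sensitivity to any single wording.
    votes = [classify(t.format(text=text)) for t in templates]
    return Counter(votes).most_common(1)[0][0]

templates = [
    "Is this review positive or negative? {text}",
    "Review: {text}\nSentiment:",
    "Does the author like the film? {text}",
]
print(majority_vote(templates, "A good, heartfelt film."))
```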
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning.
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
GPT-3 shows the remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds of billions of tokens of data.
ActionCLIP: A New Paradigm for Video Action Recognition
To handle the deficiency of label texts and make use of tremendous web data, we propose a new paradigm based on this multimodal learning framework for action recognition, which we dub "pre-train, prompt and fine-tune".