Instruction Following

154 papers with code • 0 benchmarks • 8 datasets

Instruction following is the task of training models, typically large language models, to carry out tasks specified in natural-language instructions and to generalize to instructions unseen during training.

Most implemented papers

Self-Instruct: Aligning Language Models with Self-Generated Instructions

tatsu-lab/stanford_alpaca 20 Dec 2022

Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations.
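
At its core, Self-Instruct is a bootstrapping loop: prompt the model with a few seed tasks, ask it to invent new instructions, filter near-duplicates, and grow the pool. Below is a minimal sketch of that loop; the generate function is a placeholder for any text-completion API, and the similarity filter uses SequenceMatcher as a stand-in for the paper's ROUGE-L check.

```python
import random
from difflib import SequenceMatcher

def generate(prompt: str) -> str:
    """Placeholder for a text-completion call (assumed, not the authors' code)."""
    raise NotImplementedError

def too_similar(candidate: str, pool: list[str], threshold: float = 0.7) -> bool:
    # Reject new instructions that overlap heavily with the existing pool.
    return any(SequenceMatcher(None, candidate, existing).ratio() > threshold
               for existing in pool)

def self_instruct(seed_instructions: list[str], target_size: int = 100) -> list[str]:
    pool = list(seed_instructions)
    while len(pool) < target_size:
        # Prompt the model with a few in-context examples drawn from the current pool.
        examples = random.sample(pool, k=min(6, len(pool)))
        prompt = ("Come up with a new task instruction.\n"
                  + "\n".join(f"Instruction: {ex}" for ex in examples)
                  + "\nInstruction:")
        candidate = generate(prompt).strip()
        if candidate and not too_similar(candidate, pool):
            pool.append(candidate)  # keep only sufficiently novel instructions
    return pool
```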

Habitat: A Platform for Embodied AI Research

facebookresearch/habitat-sim ICCV 2019

We present Habitat, a platform for research in embodied artificial intelligence (AI).

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention

opengvlab/llama-adapter 28 Mar 2023

We present LLaMA-Adapter, a lightweight adaptation method to efficiently fine-tune LLaMA into an instruction-following model.
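
The "zero-init attention" idea is that learnable prompt tokens are injected into the upper layers, but their contribution is scaled by a gate initialized to zero, so training starts from the frozen model's behavior. The PyTorch sketch below is a simplified illustration with shapes and naming of my own choosing, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ZeroInitGatedPrompt(nn.Module):
    """Simplified sketch: learnable prompt tokens whose attention output is
    scaled by a zero-initialized gate, so the adapter starts as a no-op."""

    def __init__(self, dim: int, num_prompts: int = 10):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: no effect at step 0

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, dim) activations of a frozen transformer layer.
        batch = hidden.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        # Cross-attention from the sequence to the prompt tokens.
        scale = hidden.size(-1) ** 0.5
        attn = torch.softmax(hidden @ prompts.transpose(1, 2) / scale, dim=-1)
        prompt_out = attn @ prompts
        # The (tanh-squashed) gate controls how much adapter signal is injected.
        return hidden + torch.tanh(self.gate) * prompt_out
```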

QLoRA: Efficient Finetuning of Quantized LLMs

artidoro/qlora NeurIPS 2023

Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU.
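
In practice, QLoRA-style finetuning combines 4-bit NF4 quantization of the frozen base weights with low-rank adapters kept in higher precision. The sketch below uses the Hugging Face transformers, peft, and bitsandbytes libraries, which implement the paper's method; the model name and hyperparameters here are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "huggyllama/llama-7b"  # illustrative; any causal LM works

# 4-bit NF4 quantization with double quantization, as described in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Low-rank adapters are the only trainable parameters; the 4-bit base stays frozen.
lora_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```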

Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction

clic-lab/ciff EMNLP 2018

We propose to decompose instruction execution to goal prediction and action generation.
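
Concretely, the decomposition means one module maps (instruction, observation) to a predicted goal location, and a second maps (observation, predicted goal) to low-level actions. The PyTorch sketch below shows that two-stage interface with simplified encoders of my own choosing, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GoalPredictor(nn.Module):
    """Maps an instruction embedding and an image feature map to goal logits."""
    def __init__(self, text_dim: int = 256, img_channels: int = 64):
        super().__init__()
        self.project = nn.Linear(text_dim, img_channels)
        self.head = nn.Conv2d(img_channels, 1, kernel_size=1)

    def forward(self, instruction: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # instruction: (B, text_dim); image_feats: (B, C, H, W)
        text = self.project(instruction)[:, :, None, None]   # (B, C, 1, 1)
        fused = image_feats * text                            # language-conditioned features
        return self.head(fused).squeeze(1)                    # (B, H, W) goal logits

class ActionGenerator(nn.Module):
    """Maps the predicted goal map plus current features to a discrete action."""
    def __init__(self, img_channels: int = 64, num_actions: int = 4):
        super().__init__()
        self.policy = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(img_channels + 1, num_actions))

    def forward(self, image_feats: torch.Tensor, goal_map: torch.Tensor) -> torch.Tensor:
        stacked = torch.cat([image_feats, goal_map.unsqueeze(1)], dim=1)  # (B, C+1, H, W)
        return self.policy(stacked)                                       # (B, num_actions) logits
```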

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks

allenai/natural-instructions 16 Apr 2022

This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions -- training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones.
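
The key point is the evaluation protocol: whole tasks (not just examples) are held out, and models are prompted with the task definition plus an instance input. A small sketch of that split and prompt format, under assumed field names, follows.

```python
import random

def split_tasks(task_names: list[str], eval_fraction: float = 0.1, seed: int = 0):
    """Hold out entire tasks to measure cross-task generalization."""
    rng = random.Random(seed)
    shuffled = task_names[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cut], shuffled[cut:]  # training tasks, unseen evaluation tasks

def format_prompt(definition: str, instance_input: str) -> str:
    """Declarative instruction (task definition) followed by the instance to solve."""
    return f"Definition: {definition}\n\nInput: {instance_input}\nOutput:"

train_tasks, eval_tasks = split_tasks([f"task{i:04d}" for i in range(1600)])
print(len(train_tasks), "training tasks;", len(eval_tasks), "held-out tasks")
```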

WizardLM: Empowering Large Language Models to Follow Complex Instructions

nlpxucan/wizardlm 24 Apr 2023

In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLMs instead of humans.
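
The underlying recipe, Evol-Instruct, repeatedly rewrites an existing instruction into a harder (in-depth) or different (in-breadth) one using the LLM itself. The sketch below is schematic: llm is a placeholder for any completion API, and the rewriting prompts are simplified assumptions rather than the paper's exact templates.

```python
import random

DEPTH_PROMPTS = [
    "Rewrite the instruction to add one more constraint or requirement:",
    "Rewrite the instruction so it requires multi-step reasoning:",
    "Rewrite the instruction to involve a concrete, more complex input:",
]
BREADTH_PROMPT = "Write a brand-new instruction in the same domain but on a different topic:"

def llm(prompt: str) -> str:
    """Placeholder for a chat/completion API call (assumed, not the authors' code)."""
    raise NotImplementedError

def evolve(instruction: str) -> str:
    # Occasionally widen the topic (in-breadth); otherwise deepen difficulty (in-depth).
    template = BREADTH_PROMPT if random.random() < 0.25 else random.choice(DEPTH_PROMPTS)
    return llm(f"{template}\n\n{instruction}").strip()

def evol_instruct(seed_pool: list[str], rounds: int = 4) -> list[str]:
    pool = list(seed_pool)
    for _ in range(rounds):
        # Each round evolves every instruction once, steadily raising average complexity.
        pool.extend(evolve(inst) for inst in list(pool))
    return pool
```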

LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model

zrrskywalker/llama-adapter 28 Apr 2023

This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset.

L-Eval: Instituting Standardized Evaluation for Long Context Language Models

openlmlab/leval 20 Jul 2023

Recently, there has been growing interest in extending the context length of large language models (LLMs), aiming to effectively process long single-turn inputs or conversations with longer histories.

InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4

waltonfuture/InstructionGPT-4 23 Aug 2023

To achieve this, we first propose several metrics to assess the quality of multimodal instruction data.
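
The recipe is data selection: score every multimodal instruction example with a few quality metrics, combine the scores, and fine-tune only on the top-ranked subset (200 examples here). The sketch below shows the selection step with placeholder metric functions; the specific metrics and weights are assumptions for illustration, not the paper's exact set.

```python
from dataclasses import dataclass

@dataclass
class Example:
    instruction: str
    response: str
    image_path: str

def clip_alignment_score(ex: Example) -> float:
    """Placeholder: image-text alignment score (e.g. from a CLIP model)."""
    raise NotImplementedError

def response_quality_score(ex: Example) -> float:
    """Placeholder: scalar quality score for the response (e.g. from a reward model)."""
    raise NotImplementedError

def select_top_k(dataset: list[Example], k: int = 200,
                 weights: tuple[float, float] = (0.5, 0.5)) -> list[Example]:
    # Rank every example by a weighted combination of the metrics, keep the best k.
    def combined(ex: Example) -> float:
        return weights[0] * clip_alignment_score(ex) + weights[1] * response_quality_score(ex)
    return sorted(dataset, key=combined, reverse=True)[:k]
```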