Instruction Following

240 papers with code • 1 benchmark • 11 datasets

Instruction following is the task of training and evaluating models that correctly carry out natural-language instructions, spanning text-only NLP tasks as well as multimodal and embodied settings.

Most implemented papers

Self-Instruct: Aligning Language Models with Self-Generated Instructions

tatsu-lab/stanford_alpaca 20 Dec 2022

Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations.
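
The pipeline behind this result is a bootstrapping loop: seed instructions serve as in-context demonstrations, the model proposes new instructions, and near-duplicates are filtered out before joining the pool. The sketch below illustrates that loop under stated assumptions: `generate` is a stub for a real LLM API, a difflib ratio stands in for the paper's ROUGE-L filter, and the prompt format, threshold, and budget are all illustrative.

```python
import random
from difflib import SequenceMatcher

seed_instructions = [
    "Translate the given sentence into French.",
    "Summarize the article in one sentence.",
    "List three antonyms for the given word.",
]

def generate(prompt: str) -> str:
    # Stand-in for an LLM completion call (assumption); swap in a real model API.
    return "Explain the given concept to a five-year-old."

def too_similar(candidate: str, pool: list[str], threshold: float = 0.7) -> bool:
    # The paper filters near-duplicates with ROUGE-L; difflib is a simple stand-in.
    return any(SequenceMatcher(None, candidate, p).ratio() > threshold for p in pool)

pool = list(seed_instructions)
for _ in range(200):  # bounded generation budget; the paper grows the pool far larger
    demos = random.sample(pool, k=min(3, len(pool)))
    prompt = ("Come up with a new task instruction.\n"
              + "\n".join(f"Task: {d}" for d in demos) + "\nTask:")
    candidate = generate(prompt).strip()
    if candidate and not too_similar(candidate, pool):
        pool.append(candidate)  # a second pass generates input-output instances per instruction

print(f"{len(pool)} instructions after filtering")
```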

Habitat: A Platform for Embodied AI Research

facebookresearch/habitat-sim ICCV 2019

We present Habitat, a platform for research in embodied artificial intelligence (AI).

QLoRA: Efficient Finetuning of Quantized LLMs

artidoro/qlora NeurIPS 2023

Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU.
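
The recipe combines a 4-bit NF4-quantized frozen base model with trainable low-rank adapters. A minimal sketch using Hugging Face transformers, peft, and bitsandbytes follows; the base model ID, rank, and target modules are illustrative choices, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "huggyllama/llama-7b"  # illustrative base model

# 4-bit NF4 quantization with double quantization, as in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; only these weights train.
lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```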

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention

opengvlab/llama-adapter 28 Mar 2023

We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model.
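
The core mechanism is a set of learnable adaption prompts attended to through a gating factor initialized at zero, so training starts exactly from the frozen model's behavior. The module below is a simplified sketch of that idea (the paper gates only the softmax scores of the prompt positions inside the frozen attention, whereas this reuses one attention module for both paths); all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class ZeroInitPromptAttention(nn.Module):
    """Sketch of zero-init attention: a learnable prompt is attended to through
    a gate initialized at zero, so at the start of training the output equals
    the frozen attention's output."""

    def __init__(self, dim: int = 512, n_heads: int = 8, prompt_len: int = 10):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.prompt = nn.Parameter(torch.randn(1, prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init gating factor

    def forward(self, x):
        # Ordinary self-attention over the tokens (stands in for the frozen block).
        out, _ = self.attn(x, x, x)
        # Attention from the tokens to the learnable prompt, scaled by the gate.
        prompt = self.prompt.expand(x.size(0), -1, -1)
        prompt_out, _ = self.attn(x, prompt, prompt)
        return out + torch.tanh(self.gate) * prompt_out

x = torch.randn(2, 16, 512)
print(ZeroInitPromptAttention()(x).shape)  # torch.Size([2, 16, 512])
```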

Visual Instruction Tuning

haotian-liu/LLaVA NeurIPS 2023

Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field.
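
Architecturally, the released model connects a frozen vision encoder to the LLM through a trainable projection whose outputs are prepended to the text embeddings as visual tokens. A minimal sketch of that wiring, with illustrative dimensions:

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Sketch of LLaVA-style wiring: patch features from a frozen vision encoder
    are projected into the LLM's embedding space and prepended to the text
    token embeddings. The first LLaVA release used a single linear projection."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, image_feats, text_embeds):
        # image_feats: (batch, n_patches, vision_dim); text_embeds: (batch, seq, llm_dim)
        visual_tokens = self.proj(image_feats)
        return torch.cat([visual_tokens, text_embeds], dim=1)

connector = VisionLanguageConnector()
seq = connector(torch.randn(1, 256, 1024), torch.randn(1, 32, 4096))
print(seq.shape)  # torch.Size([1, 288, 4096])
```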

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks

allenai/natural-instructions 16 Apr 2022

This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions -- training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones.
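
The evaluation protocol is a cross-task split: models train on the definitions and instances of some tasks and are tested on tasks they never saw. A sketch of that split over the repo's JSON task files follows; the local path is an assumption, and the field names ("Definition", "Instances") follow the task-file format in allenai/natural-instructions.

```python
import json
import random
from pathlib import Path

# Each task file carries a "Definition" (the instruction) and "Instances"
# (input/output pairs). The directory path below is an assumption.
task_dir = Path("natural-instructions/tasks")
task_files = sorted(task_dir.glob("task*.json"))

random.seed(0)
random.shuffle(task_files)
split = int(0.8 * len(task_files))
train_tasks, eval_tasks = task_files[:split], task_files[split:]  # unseen tasks held out

def format_example(task_file: Path) -> tuple[str, str]:
    task = json.loads(task_file.read_text())
    instance = random.choice(task["Instances"])
    prompt = f"{task['Definition'][0]}\n\nInput: {instance['input']}\nOutput:"
    return prompt, instance["output"][0]

if train_tasks:
    prompt, target = format_example(train_tasks[0])
    print(prompt[:200], "->", target)
```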

Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction

clic-lab/ciff EMNLP 2018

We propose to decompose instruction execution into goal prediction and action generation.
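
The decomposition yields two separately trainable modules: one maps the instruction and observation to a spatial goal, the other maps that goal to actions. A toy sketch of the two-stage structure, with all architectures and dimensions invented for illustration:

```python
import torch
import torch.nn as nn

class GoalPredictor(nn.Module):
    """Maps an instruction encoding plus a visual observation to a
    distribution over spatial goal locations."""
    def __init__(self, text_dim: int = 256, obs_channels: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(obs_channels, 16, kernel_size=3, padding=1)
        self.text_proj = nn.Linear(text_dim, 16)
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, text, obs):
        fused = self.conv(obs) + self.text_proj(text)[:, :, None, None]
        logits = self.head(torch.relu(fused))     # (batch, 1, H, W)
        return logits.flatten(1).softmax(dim=-1)  # goal distribution over cells

class ActionGenerator(nn.Module):
    """Maps the predicted goal distribution to an action distribution."""
    def __init__(self, grid_cells: int = 32 * 32, n_actions: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(grid_cells, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))

    def forward(self, goal_dist):
        return self.mlp(goal_dist).softmax(dim=-1)

text, obs = torch.randn(1, 256), torch.randn(1, 3, 32, 32)
actions = ActionGenerator()(GoalPredictor()(text, obs))
print(actions.shape)  # torch.Size([1, 4])
```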

Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following

ziyuguo99/point-bind_point-llm 1 Sep 2023

We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D images, language, audio, and video.
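
The alignment is learned contrastively: a point-cloud encoder is trained so its embeddings match frozen embeddings of the paired image, text, or audio in a joint space such as ImageBind's. A toy sketch with an invented encoder and an InfoNCE-style loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    """Toy point-cloud encoder (per-point MLP + max pool); the real model
    builds on a pretrained 3D backbone. Dimensions are illustrative."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, embed_dim))

    def forward(self, points):                    # (batch, n_points, 3)
        return self.mlp(points).max(dim=1).values

def alignment_loss(point_emb, anchor_emb, temperature: float = 0.07):
    # InfoNCE between point-cloud embeddings and frozen embeddings of the
    # paired modality; matching pairs along the diagonal are the positives.
    p = F.normalize(point_emb, dim=-1)
    a = F.normalize(anchor_emb, dim=-1)
    logits = p @ a.t() / temperature
    targets = torch.arange(len(p))
    return F.cross_entropy(logits, targets)

points = torch.randn(8, 1024, 3)
anchors = torch.randn(8, 512)  # stand-in for frozen joint-space features
print(alignment_loss(PointEncoder()(points), anchors))
```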

WizardLM: Empowering Large Language Models to Follow Complex Instructions

nlpxucan/wizardlm 24 Apr 2023

In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLMs instead of humans.
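
The paper's Evol-Instruct method repeatedly rewrites an instruction with an LLM, either deepening it (more constraints, more reasoning) or broadening it (new instructions in the same domain), and discards failed evolutions. A sketch of that loop follows; the templates are condensed paraphrases and `generate` is a stub for a real model call.

```python
import random

# Rewriting templates in the spirit of Evol-Instruct; the paper's actual
# prompts are longer and paired with an elimination step for bad evolutions.
IN_DEPTH = [
    "Add one more constraint or requirement to this instruction:\n{instruction}",
    "Rewrite this instruction so it requires multi-step reasoning:\n{instruction}",
    "Replace general concepts in this instruction with more specific ones:\n{instruction}",
]
IN_BREADTH = [
    "Write a brand-new instruction in the same domain but rarer in form:\n{instruction}",
]

def generate(prompt: str) -> str:
    # Stand-in for an LLM call (assumption); replace with a real model API.
    return "evolved: " + prompt.splitlines()[-1]

def evolve(instruction: str, rounds: int = 3) -> list[str]:
    evolved = [instruction]
    for _ in range(rounds):
        template = random.choice(IN_DEPTH + IN_BREADTH)
        candidate = generate(template.format(instruction=evolved[-1])).strip()
        if candidate:  # the paper also eliminates degenerate or failed evolutions
            evolved.append(candidate)
    return evolved

print(evolve("Write a poem about the sea."))
```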

LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model

zrrskywalker/llama-adapter 28 Apr 2023

This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset.
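
The strategy trains two disjoint parameter groups on disjoint data: visual-projection parameters on image-text pairs, and late-layer adaption prompts (plus unfrozen biases and scales) on language instruction data, so neither objective overwrites the other. The loop below is only a schematic of that alternation, with stand-in parameters and losses.

```python
import torch

# Illustrative stand-ins for the two disjoint parameter groups: one updated
# by image-text alignment batches, the other by instruction batches.
visual_params = [torch.nn.Parameter(torch.randn(512, 512) * 0.02)]    # e.g. visual projection
instruct_params = [torch.nn.Parameter(torch.randn(1, 10, 512) * 0.02)]  # e.g. adaption prompts

opt_visual = torch.optim.AdamW(visual_params, lr=1e-4)
opt_instruct = torch.optim.AdamW(instruct_params, lr=1e-4)

for step in range(6):  # placeholder loop alternating the two data sources
    if step % 2 == 0:  # image-text alignment batch
        loss = (visual_params[0] ** 2).mean()      # stand-in for the alignment loss
        opt_visual.zero_grad(); loss.backward(); opt_visual.step()
    else:              # language instruction batch
        loss = (instruct_params[0] ** 2).mean()    # stand-in for the instruction loss
        opt_instruct.zero_grad(); loss.backward(); opt_instruct.step()
```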