We present LLaMA-Adapter, a lightweight adaptation method to efficiently fine-tune LLaMA into an instruction-following model.
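A minimal sketch of one lightweight adapter-style scheme of this kind, assuming a small set of learnable prompt vectors injected into a frozen attention layer through a zero-initialized gate so that training starts from the pretrained behavior; the class name, shapes, and gating choice are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GatedPromptAttention(nn.Module):
    """Frozen attention layer augmented with learnable, zero-gated adaption prompts."""
    def __init__(self, dim: int, n_heads: int, n_prompts: int = 10):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Learnable prompt vectors: the only new trainable parameters here.
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)
        # Zero-initialized gate: the adapter contributes nothing at step 0.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base, _ = self.attn(x, x, x)                        # original self-attention path
        p = self.prompts.expand(x.size(0), -1, -1)
        prompt_out, _ = self.attn(x, p, p)                  # attention over the prompts
        return base + torch.tanh(self.gate) * prompt_out    # gated residual injection

x = torch.randn(2, 16, 64)
layer = GatedPromptAttention(dim=64, n_heads=4)
print(layer(x).shape)  # torch.Size([2, 16, 64])
```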
Recent large language models (LLMs) in the general domain, such as ChatGPT, have shown remarkable success in following instructions and producing human-like responses.
Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets.
To replicate the success of text-to-image (T2I) generation, recent works employ large-scale video datasets to train a text-to-video (T2V) generator.
We propose PAniC-3D, a system to reconstruct stylized 3D character heads directly from illustrated (p)ortraits of (ani)me (c)haracters.
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage during training.
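A minimal sketch of continuous prompt tuning as described in this sentence, assuming a Hugging Face causal LM as the frozen backbone; the model name, prompt length, and helper function are placeholders. Only the soft prompt embeddings would receive gradients, which is what keeps per-task storage small.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder backbone for illustration
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name)
for p in lm.parameters():          # freeze the entire language model
    p.requires_grad_(False)

n_prompt, dim = 20, lm.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)  # per-task trainable prompt

def forward_with_prompt(input_ids):
    tok_emb = lm.get_input_embeddings()(input_ids)                # (B, T, dim)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt, tok_emb], dim=1)           # prepend continuous prompt
    return lm(inputs_embeds=inputs_embeds)

batch = tok(["Translate to French: Hello"], return_tensors="pt")
out = forward_with_prompt(batch["input_ids"])
print(out.logits.shape)
```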
However, current research rarely studies the impact of different amounts of instruction data on model performance, especially in real-world use cases.
To address these challenges, we introduce a system that can jointly optimize distributed execution and gradient checkpointing plans.
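For context, a small illustration of gradient checkpointing on its own, the per-layer recomputation that such a planner would schedule; the joint optimization of distributed execution and checkpointing plans itself is not shown. This uses PyTorch's built-in checkpoint utility; layer sizes are arbitrary.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

blocks = nn.ModuleList(
    nn.Sequential(nn.Linear(1024, 1024), nn.GELU()) for _ in range(8)
)

def forward(x: torch.Tensor, use_checkpoint: bool = True) -> torch.Tensor:
    for blk in blocks:
        if use_checkpoint:
            # Activations inside blk are not stored; they are recomputed during
            # backward, trading extra compute for lower peak memory.
            x = checkpoint(blk, x, use_reentrant=False)
        else:
            x = blk(x)
    return x

x = torch.randn(32, 1024, requires_grad=True)
forward(x).sum().backward()
```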
In this work, we investigate the problem of creating high-fidelity 3D content from only a single image.
We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs.
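A sketch of the smoothing step behind this idea as it is commonly described: per-channel activation outliers are migrated into the weights with a scale s_j = max|X_j|^alpha / max|W_j|^(1-alpha), which leaves the product X @ W mathematically unchanged while making both activations and weights easier to quantize to 8 bits. Tensor shapes and the helper name are assumptions for illustration.

```python
import torch

def smooth(X: torch.Tensor, W: torch.Tensor, alpha: float = 0.5):
    """X: (tokens, in_features) calibration activations; W: (in_features, out_features)."""
    act_max = X.abs().amax(dim=0).clamp(min=1e-5)      # per-input-channel activation range
    w_max = W.abs().amax(dim=1).clamp(min=1e-5)        # per-input-channel weight range
    s = (act_max ** alpha) / (w_max ** (1 - alpha))    # smoothing scale per channel
    X_s = X / s                                        # activations become flatter
    W_s = W * s.unsqueeze(1)                           # weights absorb the scale
    return X_s, W_s

X = torch.randn(128, 64) * torch.rand(64) * 10         # activations with outlier channels
W = torch.randn(64, 32)
X_s, W_s = smooth(X, W)
# The product is preserved up to numerical error, so smoothing is equivalence-preserving
# and standard 8-bit quantizers can then be applied to X_s and W_s.
print(torch.allclose(X @ W, X_s @ W_s, atol=1e-3))
```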