Fine-Tuning

MixLoRA is a parameter-efficient fine-tuning (PEFT) method that constructs a resource-efficient sparse Mixture-of-Experts (MoE) model on top of LoRA adapters.

Source: MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts
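For intuition, here is a minimal PyTorch sketch of the core idea: a token-level top-k router dispatches among several LoRA experts that all share one frozen base projection. The class and parameter names (`LoRAExpert`, `MixLoRALinear`, `num_experts`, `top_k`) are hypothetical illustrations, not the authors' implementation; the paper's full design differs in details such as where the adapters are inserted and how the router is regularized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One LoRA adapter: low-rank update scale * B(A(x))."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)
        self.B = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.B.weight)  # adapters start as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x)) * self.scale


class MixLoRALinear(nn.Module):
    """Sparse MoE layer: LoRA experts routed per token over one frozen base linear."""

    def __init__(self, base: nn.Linear, num_experts: int = 8, top_k: int = 2, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.experts = nn.ModuleList(
            LoRAExpert(base.in_features, base.out_features, rank)
            for _ in range(num_experts)
        )
        self.router = nn.Linear(base.in_features, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_in); flatten batch/sequence dims before calling
        out = self.base(x)                    # shared frozen computation
        gate_logits = self.router(x)          # (num_tokens, num_experts)
        weights, idx = torch.topk(gate_logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the selected experts
        delta = torch.zeros_like(out)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e         # tokens routed to expert e at slot k
                if mask.any():
                    delta[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out + delta


# Example: wrap a (stand-in) pretrained projection
layer = MixLoRALinear(nn.Linear(768, 768), num_experts=8, top_k=2)
y = layer(torch.randn(16, 768))
```

Because the dense base projection is computed once and only the selected low-rank experts run per token, trainable parameters and compute stay close to plain LoRA while adding MoE capacity.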


Tasks

Task                 Papers  Share
Multi-Task Learning  1       50.00%
Quantization         1       50.00%

Components

Component    Type
Transformer  Transformers
