This is the implementation of "Full Parameter Fine-Tuning for Large Language Models with Limited Resources".
In this work, we propose a new optimizer, LOw-Memory Optimization (LOMO), which fuses the gradient computation and the parameter update into one step to reduce memory usage. Our approach enables full parameter fine-tuning of a 7B model on a single RTX 3090, or of a 65B model on a single machine with 8×RTX 3090 GPUs, each with 24GB of memory.
- torch
- deepspeed
- transformers
- peft
- wandb
The minimum dependency is PyTorch; the other packages are needed to reproduce the results in our paper.
We provide code for fine-tuning Large Language Models (LLMs) using three different approaches: LOMO, LoRA, and LoRA + LOMO.
- For full parameter fine-tuning using LOMO, the implementation is in `src/lomo_trainer.py`, and you can run:
deepspeed --master_port "$port" --include localhost:"$CUDA_VISIBLE_DEVICES" src/train_lomo.py config/args_lomo.yaml
- For LoRA and LoRA + LOMO, the implementation is in `src/lomo_lora_trainer.py`, and you can run:
deepspeed --master_port "$port" --include localhost:"$CUDA_VISIBLE_DEVICES" src/train_lomo_lora.py config/args_lomo_lora.yaml
In the code, we have included the `lora_only` argument in `src/arguments.py`, which controls whether to use LoRA only or LoRA + LOMO. Please note that when `lora_only` is set to `True`, the arguments related to LOMO will have no effect.
In addition, we provide a simple `run.sh` script for convenience. You can execute the code with the following command:
bash run.sh
For data processing, we currently only provide the six SuperGLUE datasets mentioned in the paper. If you wish to use new datasets, please modify the `Dataset` and `DataCollator` accordingly.
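As a rough starting point, a new dataset and collator might look like the sketch below. This is not the repository's actual `Dataset`/`DataCollator`; the class names and the `input_ids`/`labels` fields are illustrative assumptions following common PyTorch/Hugging Face conventions, and the real interfaces in the code may differ.

```python
import torch
from dataclasses import dataclass
from torch.utils.data import Dataset


class MyTaskDataset(Dataset):
    """Illustrative dataset wrapping a list of pre-tokenized examples."""

    def __init__(self, examples):
        # Each example is assumed to be a dict with "input_ids" and "labels".
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]


@dataclass
class MyTaskDataCollator:
    """Illustrative collator that pads each batch to its longest sequence."""
    pad_token_id: int = 0
    label_pad_id: int = -100  # commonly ignored by the loss function

    def __call__(self, batch):
        # Assumes "labels" is aligned with "input_ids" in every example.
        max_len = max(len(ex["input_ids"]) for ex in batch)
        input_ids, labels = [], []
        for ex in batch:
            pad = max_len - len(ex["input_ids"])
            input_ids.append(ex["input_ids"] + [self.pad_token_id] * pad)
            labels.append(ex["labels"] + [self.label_pad_id] * pad)
        return {
            "input_ids": torch.tensor(input_ids, dtype=torch.long),
            "labels": torch.tensor(labels, dtype=torch.long),
        }
```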
For evaluation, we currently only provide the `eval_step` code for multiple-choice QA and generation tasks. If you have other requirements, please modify the `eval_step` code in the `LOMOTrainer` or `LOMOLoRATrainer` accordingly and provide the necessary `compute_metrics` to the trainer.
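For example, a minimal `compute_metrics` for multiple-choice QA could simply report accuracy. The signature below is an assumption made for illustration; check how the trainer actually invokes `compute_metrics` before reusing it.

```python
def compute_metrics(predictions, labels):
    """Hypothetical metric function: exact-match accuracy over answer choices."""
    correct = sum(int(pred == gold) for pred, gold in zip(predictions, labels))
    return {"accuracy": correct / max(len(labels), 1)}
```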
We provide the sampled datasets used in our experiments here.
Due to limited computational resources, we report the highest results obtained from experiments conducted with the same random seed (42). We acknowledge this limitation in our work and plan to conduct repeated experiments in the next version to address it.
Feel free to raise issues if you have any questions.
Our implementation relies on injecting hook functions into PyTorch's backward pass. As depicted in the figure, we register a customized hook function for each parameter. When the gradient of a parameter is computed (prior to writing it to the `.grad` attribute), its corresponding hook function is invoked. For more information about hook functions and the backward pass of the autograd graph, please refer to PyTorch's documentation. In summary, during the backward pass, the autograd engine goes through each tensor and its `grad_fn`, writes the gradient into the `.grad` attribute, and then passes on to the next tensor.
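As a quick standalone illustration of these hook semantics (a toy example, not code from this repository), note that inside the hook the `.grad` attribute of the tensor has not been written yet:

```python
import torch

w = torch.randn(3, requires_grad=True)

def hook(grad):
    # Called when the gradient w.r.t. `w` is computed; w.grad is still None here.
    print("inside hook:", grad, "| w.grad =", w.grad)
    return grad  # returning the grad unchanged leaves the backward pass intact

w.register_hook(hook)
(w * 2.0).sum().backward()
print("after backward: w.grad =", w.grad)  # now filled with 2.0
```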
Our customized hook function scans all the parameters, updating each parameter whose `.grad` attribute is not empty and then clearing and freeing that `.grad` attribute. Since the hook function for a parameter is called before its `.grad` attribute is set, the `.grad` attribute of the last parameter in the autograd graph is not ready when the last hook function is invoked. Therefore, we perform an additional scan to update the last parameter.
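The following is a minimal sketch of this mechanism, assuming plain SGD updates. It is not the repository's LOMO implementation (which additionally handles mixed precision, gradient normalization/clipping, and DeepSpeed integration), and the class and method names here are made up for illustration.

```python
import torch


class FusedSGDSketch:
    """Toy version of the fused backward/update idea described above."""

    def __init__(self, model: torch.nn.Module, lr: float = 1e-3):
        self.model = model
        self.lr = lr
        self.grad_func = self._make_hook()
        # Register the same hook on every trainable parameter; it fires as
        # soon as that parameter's gradient has been computed.
        for p in self.model.parameters():
            if p.requires_grad:
                p.register_hook(self.grad_func)

    def _make_hook(self):
        def func(grad):
            # Scan all parameters: any parameter whose .grad is already
            # populated gets an immediate SGD update, and its gradient is
            # freed so that full gradients never coexist in memory.
            for p in self.model.parameters():
                if p.requires_grad and p.grad is not None:
                    p.data.add_(p.grad, alpha=-self.lr)
                    p.grad = None
            return grad
        return func

    def fused_backward(self, loss):
        loss.backward()
        # The hook for a parameter runs before its own .grad is written, so
        # the last parameter(s) in the autograd graph still hold gradients
        # after backward(); one extra scan updates and frees them.
        self.grad_func(None)
```

Usage would then look roughly like `opt = FusedSGDSketch(model)` followed by `opt.fused_backward(loss)` in place of the usual `loss.backward()`, `optimizer.step()`, `optimizer.zero_grad()` sequence.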
@article{lv2023full,
title={Full Parameter Fine-tuning for Large Language Models with Limited Resources},
author={Lv, Kai and Yang, Yuqing and Liu, Tengxiao and Gao, Qinghui and Guo, Qipeng and Qiu, Xipeng},
journal={arXiv preprint arXiv:2306.09782},
year={2023}
}