add finetune code for qwen2audio #61
base: main
Conversation
Thank you for contributing the fine-tuning code. (I had previously implemented Qwen2-Audio fine-tuning based on the TRL library; recently I found some free time and wanted to open a PR, only to discover you had already submitted one.)
Right, that's why I added a warning.
Full fine-tuning with the script runs out of memory (OOM) on a single A800 with 80 GB. How much GPU memory is required at most for full fine-tuning?
So can we do LoRA or full fine-tuning of Qwen2-Audio now? Is there any documentation (in English) on how to prepare a dataset and train? Thank you!
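On dataset preparation: below is a minimal sketch of how a single chat-format training sample could be built with the Hugging Face processor. The checkpoint id, audio path, and label handling are assumptions for illustration; this PR's own data format may differ.

```python
# Sketch only: assumes the transformers Qwen2-Audio integration; the checkpoint
# id and audio path are placeholders.
import librosa
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")

# One chat-format sample: an audio clip plus the expected assistant answer.
conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "example.wav"},   # placeholder path
        {"type": "text", "text": "What is said in this clip?"},
    ]},
    {"role": "assistant", "content": "Transcription goes here."},
]
text = processor.apply_chat_template(conversation, tokenize=False)
audio, _ = librosa.load("example.wav", sr=processor.feature_extractor.sampling_rate)
inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True)
# For supervised fine-tuning, labels are typically the input_ids with the prompt
# portion masked to -100 so only the assistant reply contributes to the loss.
```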
Has anyone tried fine-tuning? Could you share how much data and compute it requires? Also, has anyone succeeded with LoRA fine-tuning? Thanks in advance.
I haven't looked into that; you could try LoRA.
LoRA fine-tuning of Qwen2-Audio on 4×A800: each card uses about 20–21 GB, with total consumption around 44 GB.
Could you share your environment configuration?
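For those considering LoRA as suggested above, here is a minimal sketch of wrapping Qwen2-Audio with a PEFT `LoraConfig`. The rank, alpha, and target modules are illustrative assumptions, not the settings used in the reports in this thread.

```python
# Sketch only: LoRA setup with the PEFT library; hyperparameters are guesses.
from peft import LoraConfig, get_peft_model
from transformers import Qwen2AudioForConditionalGeneration

model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # Attention projection layers of the language model; adjust as needed.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters remain trainable
```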
Thanks for open-sourcing this excellent work. I was recently looking into related topics and added training code along the way:
Fine-tune the model using DeepSpeed and Accelerate, supporting multi-machine, multi-node training and LoRA.
Code: https://github.com/Lollipop/Qwen2-Audio/blob/main/finetune/run.sh
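For context, a minimal sketch of what an Accelerate-driven fine-tuning loop can look like. The dataloader, learning rate, and dtype below are assumptions, and the linked run.sh and its DeepSpeed configuration may differ in detail.

```python
# Sketch only: assumes a DataLoader that yields processor outputs including `labels`.
import torch
from accelerate import Accelerator
from transformers import Qwen2AudioForConditionalGeneration


def train(train_loader, num_epochs=1):
    accelerator = Accelerator()  # DeepSpeed / multi-node settings come from `accelerate launch`
    model = Qwen2AudioForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2-Audio-7B-Instruct", torch_dtype=torch.bfloat16
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)

    model.train()
    for _ in range(num_epochs):
        for batch in train_loader:
            outputs = model(**batch)              # batch: input_ids, input_features, labels, ...
            accelerator.backward(outputs.loss)
            optimizer.step()
            optimizer.zero_grad()
```

Such a script would typically be started with `accelerate launch --config_file <config>.yaml train.py`, which is where the DeepSpeed and multi-node options are supplied.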