I am trying to fine-tune a model using FastLanguageModel, and I only want to train the last few layers. When I pass the target module/layer, it throws an error saying that only "accepted_modules" are allowed. The accepted modules are: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`.
Is there any way I can train only the last few layers using unsloth?
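For context, the closest workaround I've tried is to attach LoRA adapters to all of the accepted modules and then freeze the adapter weights outside the last few blocks with plain PyTorch. A minimal sketch, not an official Unsloth API: the model name and `LAST_N` cutoff are placeholders, and it assumes Llama-style `layers.<i>` parameter names.

```python
from unsloth import FastLanguageModel
import re

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",  # placeholder; use your model
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters to all of the accepted modules first.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

# Then freeze the LoRA weights everywhere except the last LAST_N blocks.
LAST_N = 4  # hypothetical cutoff: keep only the final 4 blocks trainable
num_layers = model.config.num_hidden_layers
for name, param in model.named_parameters():
    if "lora_" not in name:
        continue  # base weights are already frozen by PEFT
    match = re.search(r"layers\.(\d+)\.", name)
    if match and int(match.group(1)) < num_layers - LAST_N:
        param.requires_grad = False
```

PEFT's own `LoraConfig` also exposes a `layers_to_transform` argument that restricts adapters to specific block indices; whether Unsloth's `get_peft_model` forwards it may depend on the version, so it's worth checking the signature in your install.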
When I do that, the unsloth library throws these warnings:

> Not an error, but Unsloth cannot patch MLP layers with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
> Not an error, but Unsloth cannot patch Attention layers with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
> Not an error, but Unsloth cannot patch O projection layer with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
> Unsloth 2024.7 patched 32 layers with 4 QKV layers, 4 O layers and 4 MLP layers.

Is it okay to go ahead and fine-tune the model? What effect would these warnings have on training speed, model performance, etc.?
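To see which projections were actually patched versus skipped (the warning mentions two causes: no LoRA adapter on the layer, or a bias term), I've been inspecting the modules directly. A quick diagnostic, assuming `model` is the PEFT-wrapped model from above; the `lora_A` and `base_layer` attributes are PEFT internals and may vary across versions:

```python
proj_names = ("q_proj", "k_proj", "v_proj", "o_proj",
              "gate_proj", "up_proj", "down_proj")

for name, module in model.named_modules():
    if name.endswith(proj_names):
        has_lora = hasattr(module, "lora_A")  # PEFT attaches lora_A/lora_B to adapted layers
        base = getattr(module, "base_layer", module)  # newer PEFT wraps the original Linear
        has_bias = getattr(base, "bias", None) is not None
        print(f"{name}: lora={has_lora}, bias={has_bias}")
```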