Fine-tuning multiple LoRA adapters on a single GPU can run into OOM errors. Parameters such as batch_size and cutoff_len need to be tuned carefully, but even then OOM cannot be completely avoided. Would it be possible to run a tool beforehand that suggests a reference (or best) configuration for users based on their data? A rough sketch of the idea is below.
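For illustration only, here is a minimal sketch of what such a pre-flight "config advisor" could look like. It assumes a simple linear memory model (activations scale with batch_size * cutoff_len) and uses made-up constants for a 7B-class model in fp16 with LoRA; the function names (estimate_vram_gb, suggest_config), the default hidden_size/num_layers, and the activation_factor are all hypothetical and would need calibration against real measurements, not part of any existing tool.

```python
# Rough sketch of a pre-flight config advisor for LoRA fine-tuning.
# All constants and the memory model are coarse assumptions for illustration;
# a real tool would calibrate them per model and training framework.

import itertools

def estimate_vram_gb(batch_size: int,
                     cutoff_len: int,
                     hidden_size: int = 4096,
                     num_layers: int = 32,
                     bytes_per_value: int = 2,      # assume fp16/bf16
                     base_model_gb: float = 14.0,   # assumed weights + LoRA optimizer state
                     activation_factor: float = 12.0) -> float:
    """Very coarse VRAM estimate: a fixed model cost plus activations
    that grow linearly with batch_size * cutoff_len."""
    activations = (batch_size * cutoff_len * hidden_size
                   * num_layers * activation_factor * bytes_per_value)
    return base_model_gb + activations / 1024**3

def suggest_config(gpu_memory_gb: float,
                   candidate_batch_sizes=(1, 2, 4, 8, 16),
                   candidate_cutoff_lens=(512, 1024, 2048, 4096),
                   safety_margin: float = 0.9):
    """Return the (batch_size, cutoff_len) pairs whose estimated usage
    stays under a safety margin of the available GPU memory."""
    budget = gpu_memory_gb * safety_margin
    fits = [(b, c) for b, c in itertools.product(candidate_batch_sizes,
                                                 candidate_cutoff_lens)
            if estimate_vram_gb(b, c) <= budget]
    # Prefer configurations with more tokens per step (batch_size * cutoff_len).
    return sorted(fits, key=lambda bc: bc[0] * bc[1], reverse=True)[:5]

if __name__ == "__main__":
    # Example: suggest settings for a 24 GB GPU.
    for batch_size, cutoff_len in suggest_config(24.0):
        print(f"batch_size={batch_size}, cutoff_len={cutoff_len}, "
              f"~{estimate_vram_gb(batch_size, cutoff_len):.1f} GB estimated")
```

Even a heuristic like this would only give a starting point; the tool could refine its suggestion by running a short dry-run step on the user's actual data and measuring peak memory.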