Add OPT model implementation, OPT model loading from Hugging Face, and OPT training with FSDP #4
Conversation
A couple of points to ensure future extensibility, otherwise LGTM
@@ -19,8 +19,9 @@
models_parallelize_fns = {
    "llama2": parallelize_llama,
    "llama3": parallelize_llama,
+   'opt': parallelize_llama,
Could we rename parallelize_llama to parallelize_decoder_only? Reusing a llama-named function for OPT looks like a bug even though it isn't.
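A minimal sketch of the suggested rename, assuming the registry shape shown in the diff above; the function body and signature here are placeholders, not the project's actual implementation:

```python
from typing import Any, Callable, Dict


def parallelize_decoder_only(model: Any, *args: Any, **kwargs: Any) -> Any:
    """Parallelize a generic decoder-only transformer (llama2/3, OPT, ...).

    Placeholder body; the real function would apply FSDP/tensor-parallel
    sharding to the model. Renaming it makes clear that the strategy is
    architecture-family-wide, not llama-specific.
    """
    return model


# One generic entry point shared by all decoder-only architectures,
# so the 'opt' mapping no longer looks like a copy-paste bug.
models_parallelize_fns: Dict[str, Callable[..., Any]] = {
    "llama2": parallelize_decoder_only,
    "llama3": parallelize_decoder_only,
    "opt": parallelize_decoder_only,
}
```

With this naming, adding another decoder-only model is just a new dict entry pointing at the same function.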
Example run output