In the function train_one_epoch in src/training/train.py, lines 156 to 162 read as shown below:
    losses = loss(**inputs, **inputs_no_accum, output_dict=True)
    del inputs
    del inputs_no_accum
    total_loss = sum(losses.values())
    losses["loss"] = total_loss

    backward(total_loss, scaler)
Shouldn't we take the average of the loss over the gradient accumulation steps (i.e., divide it by the number of accumulated batches) before calling backward()?
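For illustration, a minimal sketch of what that change could look like, reusing the backward(total_loss, scaler) helper from the snippet above and assuming args.accum_freq holds the configured number of accumulation steps (both names taken from context, not confirmed here):

    losses = loss(**inputs, **inputs_no_accum, output_dict=True)
    del inputs
    del inputs_no_accum
    total_loss = sum(losses.values())
    losses["loss"] = total_loss

    # Assumed change: divide by the accumulation step count so the gradients
    # summed over accum_freq micro-batches match a single large-batch update.
    backward(total_loss / args.accum_freq, scaler)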
Potentially, but I'm not totally sure. I think a test would be useful here, i.e., comparing runs with and without the scaling against the non-accumulation baseline.
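A standalone sketch of that kind of check, using a toy MSE model rather than the actual CLIP loss (all names here are illustrative; since the contrastive loss in this repo is not a simple per-sample mean, this only demonstrates the mechanics, and the real comparison would need to run the actual training loop):

    import torch

    torch.manual_seed(0)
    model = torch.nn.Linear(4, 1)
    data = torch.randn(8, 4)
    target = torch.randn(8, 1)
    loss_fn = torch.nn.MSELoss()

    def accum_grad_norm(scale_by_steps, accum_steps=4):
        # Accumulate gradients over micro-batches, optionally dividing each
        # micro-batch loss by the number of accumulation steps.
        model.zero_grad()
        for x, y in zip(data.chunk(accum_steps), target.chunk(accum_steps)):
            loss = loss_fn(model(x), y)
            if scale_by_steps:
                loss = loss / accum_steps
            loss.backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()]).norm().item()

    # Non-accumulated baseline: one backward pass over the full batch.
    model.zero_grad()
    loss_fn(model(data), target).backward()
    baseline = torch.cat([p.grad.flatten() for p in model.parameters()]).norm().item()

    print("baseline:       ", baseline)
    print("with scaling:   ", accum_grad_norm(True))   # should match the baseline
    print("without scaling:", accum_grad_norm(False))  # roughly accum_steps times larger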