NaN errors during canal_pretrain process #10
Sorry for bothering you. I get NaN errors during the canal_pretrain process.
From another issue, https://github.com/AImageLab-zip/alveolar_canal/issues/7,
I found that my predictions become NaN during training. Could you help me find the problem?

Comments
Hi @puppy2000, I'm sorry to hear that you have run into trouble during network training. The issue you linked may have a different cause, since it involves the DiceLoss rather than the JaccardLoss. When NaNs appear, it can be challenging to identify the specific operation that produced them; one approach is to debug all the operations performed before the NaNs occur. Even if an epoch has executed successfully, we cannot rule out that the NaNs stem from the generated data, because random patches are extracted from the original volume. Please double-check that both preds and gt are free of NaNs before the self.loss() call. Examining the JaccardLoss code, I noticed that it uses eps = 1e-6 to prevent NaNs in the division. While this works fine in float32, it may cause issues in float16, where 1 + 1e-6 evaluates to 1. I will try to run the entire pipeline again myself as soon as possible. If you come across any new findings, please let me know.
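For reference, here is a minimal PyTorch sketch of the two checks suggested above. assert_finite is a hypothetical helper (not part of the repository), and the preds / gt / self.loss names are taken from the comment; the commented-out lines show where such a check would sit in a training step:

```python
import torch

def assert_finite(name: str, t: torch.Tensor) -> None:
    """Raise early if a tensor already contains NaNs or Infs."""
    if torch.isnan(t).any():
        raise ValueError(f"{name} contains NaNs")
    if torch.isinf(t).any():
        raise ValueError(f"{name} contains Infs")

# Hypothetical training-step excerpt, placed right before the loss call:
# assert_finite("preds", preds)
# assert_finite("gt", gt)
# loss = self.loss(preds, gt)

# float16 cannot represent an eps of 1e-6: adding it rounds back to 1.
eps = 1e-6
one = torch.tensor(1.0, dtype=torch.float16)
print(one + eps == one)                 # tensor(True)
print(torch.finfo(torch.float16).eps)   # ~0.000977, far larger than 1e-6
```

If the float16 rounding turns out to be the culprit, a larger eps (e.g. 1e-4) or computing the loss in float32 would be possible workarounds.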
Hi, I double-checked the code and set batch_size = 1 to debug. This time no NaN errors occurred, and the network seems to train correctly, as you can see in this picture.