
NaN errors during canal_pretrain process #10

Open
puppy2000 opened this issue Jun 27, 2023 · 2 comments

@puppy2000

Sorry for bothering you. I get NaN errors during the canal_pretrain process.
[screenshot: NaN error during training]
From another issue, https://github.com/AImageLab-zip/alveolar_canal/issues/7, I found that my predictions become NaN during training. Could you help me find the problem?

@LucaLumetti (Collaborator)

Hi @puppy2000, I'm sorry to hear that you have run into some trouble during network training.

The issue you have linked could have a different cause, as the DiceLoss is employed there instead of the JaccardLoss.

When NaNs appear, it can be challenging to identify the specific operation that produced them. One approach is to debug all the operations performed before the NaNs first occur.
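One option (a debugging sketch of my own, not code from this repository) is PyTorch's anomaly detection, which makes autograd raise at the first backward operation that returns NaNs and prints a traceback pointing to the responsible forward op:

```python
import torch

# Enable only while debugging: anomaly detection slows training down, but
# autograd will raise at the first backward op that returns NaN values and
# include a traceback of the forward operation that produced them.
torch.autograd.set_detect_anomaly(True)

x = torch.tensor([-1.0], requires_grad=True)
loss = torch.sqrt(x).sum()   # sqrt of a negative number -> NaN
loss.backward()              # RuntimeError naming 'SqrtBackward0'
```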

Even if an epoch has been executed successfully, we cannot rule out the possibility that the NaNs stem from the generated data, because random patches are extracted from the original volume. Please double-check that both preds and gt do not contain any NaNs before the self.loss() call.
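A minimal sketch of such a check (the helper is mine, and preds/gt/self.loss are just the names used above, which may differ from the actual code):

```python
import torch

def assert_finite(name: str, t: torch.Tensor) -> None:
    """Raise with a descriptive message if a tensor contains NaN or Inf."""
    if not torch.isfinite(t).all():
        n_nan = torch.isnan(t).sum().item()
        n_inf = torch.isinf(t).sum().item()
        raise RuntimeError(f"{name} contains {n_nan} NaN and {n_inf} Inf values")

# Illustrative placement right before the loss computation:
# assert_finite("preds", preds)
# assert_finite("gt", gt)
# loss = self.loss(preds, gt)
```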

Upon examining the JaccardLoss code, I noticed that I'm using eps = 1e-6 to prevent NaNs in the division. While this should work fine in float32, it may cause issues in float16, where 1 - 1e-6 = 1.
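This is easy to verify with a quick standalone check (a sketch, not code from the repository): float16 has a machine epsilon of roughly 1e-3, so an eps of 1e-6 is simply swallowed when combined with values around 1.

```python
import torch

one = torch.tensor(1.0, dtype=torch.float16)
print(one - 1e-6 == one)               # tensor(True): the eps has no effect
print(torch.finfo(torch.float16).eps)  # ~0.000977
print(torch.finfo(torch.float32).eps)  # ~1.19e-07
```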

I will try to execute the entire pipeline myself again as soon as possible. If you come across any new developments or findings, please let me know.

@puppy2000 (Author) commented Jun 28, 2023


Hi, I double-checked the code and set batch_size = 1 to debug. This time no NaN errors occurred, and the network seems to train correctly, as you can see in this picture:
[screenshot: training output with batch_size = 1]
So I wonder if it's because random patches are extracted from the original volume, and with DataParallel some bad examples cause the NaN errors. Maybe it is caused by badly generated labels, because I observed that some of the generated labels are not very good. I will check the code further.
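In case it is useful, here is a hypothetical one-off scan of the training set (the dataset object and the (image, label) batch layout are assumptions, not the repository's actual interface) to flag patches with non-finite values or empty labels:

```python
import torch
from torch.utils.data import DataLoader

def scan_dataset(dataset) -> None:
    """Flag patches whose image/label contain NaN/Inf or whose label is empty."""
    loader = DataLoader(dataset, batch_size=1, shuffle=False)
    for idx, (image, label) in enumerate(loader):
        if not torch.isfinite(image).all():
            print(f"sample {idx}: non-finite values in the image patch")
        if not torch.isfinite(label).all():
            print(f"sample {idx}: non-finite values in the label patch")
        elif label.sum() == 0:
            print(f"sample {idx}: empty label (all background)")
```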
