DSIFN accuracy #5
Hi @WesleyZhang1991, yes, we do change the training strategy when training on the DSIFN dataset. As mentioned in the paper, instead of randomly cropping 256x256 images from the 512x512 images during training, we observed much-improved performance and more stable training when the network is trained on a dataset created by cutting the original 512x512 images into non-overlapping 256x256 patches. So, for the DSIFN dataset, we first created 256x256 non-overlapping patches from the original training data and then trained the network as we did for the LEVIR-CD dataset. You can find the dataset-processing files for DSIFN and LEVIR inside the data_preparation folder: ChangeFormer/data_preparation/
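The non-overlapping patching described above can be sketched as follows. This is a minimal illustration operating on in-memory NumPy arrays, not the repository's actual data_preparation code; the function name `make_patches` is hypothetical:

```python
import numpy as np

def make_patches(img, patch=256):
    """Split an HxW(xC) array into non-overlapping patch x patch tiles.

    Assumes H and W are exact multiples of `patch`, so a 512x512 image
    yields four 256x256 tiles with no pixel reused across tiles.
    """
    h, w = img.shape[:2]
    tiles = []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            tiles.append(img[top:top + patch, left:left + patch])
    return tiles
```

Because every pixel appears in exactly one patch, each training epoch sees the full image content once, unlike random cropping, which resamples overlapping regions and leaves others unseen.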
In the experiments with our own models, we suffer from severe overfitting on DSIFN-CD. The foreground IoU reaches about 53% (similar to BIT) and then drops to about 50% (or even worse) with more training. The non-overlapping 256x256 patch strategy did not work in our case. I notice that the training log of your code does not show overfitting: the test accuracy (e.g., iou_1) improves stably until 200 epochs. I have two questions: Looking forward to your reply
Hi @WesleyZhang1991,
I hope this answers your question. Best,
Hi @wgcban, please check whether the DSIFN-CD test data is merged into the training data and used for training, since the train and test splits share the same naming format, e.g., '1_0.png' appears in both.
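The leakage check requested above can be done at the filename level with a quick script. This is a sketch; `find_leaked_names` is a hypothetical helper, and a filename collision only proves leakage if the images themselves are identical, so a stronger check would also hash file contents:

```python
from pathlib import Path

def find_leaked_names(train_dir, test_dir, pattern="*.png"):
    """Return filenames that appear in both the train and test splits."""
    train = {p.name for p in Path(train_dir).glob(pattern)}
    test = {p.name for p in Path(test_dir).glob(pattern)}
    return sorted(train & test)
```

An empty result means the splits are disjoint by name; any returned entries (such as '6_3.png' mentioned later in this thread) should be inspected and removed from the training set before retraining.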
Hello, I am trying to reproduce your model code for BIT-CD on my side, but the results look incorrect on the WHU-CD and DSIFN-CD datasets: they come out much higher than your reported results. On the LEVIR-CD dataset, however, everything looks normal. Do I need to change the pre-training?
Hi wgcban. |
Yes @YangFanghan |
Images in the test set also exist in the train set of your DSIFN dataset, e.g., 6_3. This is the reason for the high accuracy.
Hi wgcban,
I notice that on the DSIFN-CD dataset you report much higher accuracy than BIT [4] (IoU from 52.97% for BIT to 76.48% for ChangeFormer). On another dataset, LEVIR-CD, the difference is not as large as on DSIFN-CD. Could you please explain the main source of the large improvement on DSIFN-CD? e.g., training strategy, data augmentation, model structure...
Thanks
Wesley