Trying to reproduce BiC #35
It does seem good to me. I have found some performance loss when using recent PyTorch versions (no idea why...), have you tried with torch==1.2.0?
Oh really? I am using torch 1.7.1. I will try with an older version and we'll see! Quite weird, though...
At least, the version had a significant impact on PODNet NME (#31 (comment)).
Emmm, have you fixed the problem? I have the same problem...
Hello, no, I haven't looked into the problem. Have you tried using torch==1.2.0 as I suggested? If you are just looking to add more baselines to your paper, I'd suggest you try WA instead (also see DER's repo). Like BiC it's a classifier recalibration, but simpler and more efficient. PS: if you find something that improves my version of BiC, please share it with us.
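For reference, the WA recalibration mentioned above (Weight Aligning, from "Maintaining Discrimination and Fairness in Class Incremental Learning") rescales the new-class rows of the final classifier so their average weight norm matches the old-class rows. This is a minimal numpy sketch of the idea, not the repo's implementation; the function name and signature are my own:

```python
import numpy as np

def weight_align(weights, num_old_classes):
    """Weight Aligning (WA) sketch: rescale new-class classifier rows so
    their mean L2 norm matches that of the old-class rows.

    weights: (num_classes, feature_dim) final-layer weight matrix,
             rows [0, num_old_classes) are old classes, the rest are new.
    Returns a rescaled copy; the input is left untouched.
    """
    old_norms = np.linalg.norm(weights[:num_old_classes], axis=1)
    new_norms = np.linalg.norm(weights[num_old_classes:], axis=1)
    gamma = old_norms.mean() / new_norms.mean()  # alignment factor
    aligned = weights.copy()
    aligned[num_old_classes:] *= gamma  # shrink (or grow) new-class rows
    return aligned
```

Unlike BiC, this needs no extra validation set and no extra training: it is a single post-hoc rescaling of the classifier after each incremental step.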
Thanks for your reply. I tried both:
1. python==3.6.13, torch==1.2.0, result in last step:
2. python==3.8.5, torch==1.8.1, result in last step:
It seems like there is not much difference. Emmmmmm, both of their incremental accuracies in the last step are about 0.50, which is lower than the result reported in the paper.
Hello, dear author. It seems that your BiC implementation does not train the parameters of the bias layer, or perhaps I just could not find where it does. Can you give me a suggestion? Thanks a lot.
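For context on what those bias parameters are: in the BiC paper, a bias correction layer applies `alpha * z + beta` to the new-class logits only (old-class logits pass through), and `alpha`, `beta` are the only parameters trained in the second stage, on a small balanced validation set with the rest of the network frozen. A minimal numpy sketch of the forward pass (the class name and shapes are my own, not the repo's API):

```python
import numpy as np

class BiasLayer:
    """Sketch of BiC's bias correction layer.

    New-class logits are transformed as alpha * z + beta; old-class
    logits are passed through unchanged. In BiC's second training
    stage, alpha and beta are the only trainable parameters.
    """
    def __init__(self):
        self.alpha = 1.0  # initialized so the layer starts as identity
        self.beta = 0.0

    def forward(self, logits, new_class_mask):
        """logits: (batch, num_classes); new_class_mask: (num_classes,) bool."""
        out = logits.copy()
        out[:, new_class_mask] = self.alpha * out[:, new_class_mask] + self.beta
        return out
```

If the bias parameters were never updated, the layer would stay at its identity initialization (`alpha=1, beta=0`) and have no effect, which is one quick thing to check when debugging a BiC run.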
Hello Arthur,
Congratulations on your contributions to IL.
I am trying to reproduce the BiC results running the following command:
python -minclearn --options options/bic/bic_cifar100.yaml options/data/cifar100_3orders.yaml --increment 10 --initial-increment 50 --fixed-memory --temperature 2 --data-path data/ --device 0
The average incremental accuracy I am getting is ~51%, which is ~5% lower than that reported in the paper. Is there anything wrong with the command?
Thank you in advance :)