We observe a consistent performance drop when training AdaMixer with mmcv_full==1.3.5, especially with longer training schedules. This issue may also affect other versions with mmcv_full>1.3.3.
For correct reproduction, please use mmcv_full==1.3.3. We are actively investigating the cause, and more information will be posted in this issue.
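As a quick sanity check before launching training, you can verify the installed mmcv version at runtime. The snippet below is a minimal sketch, assuming mmcv-full was installed the standard way (e.g. `pip install mmcv-full==1.3.3`); it is not part of the AdaMixer codebase.

```python
# Sketch: check that the pinned mmcv version is installed before training.
# Assumes a standard mmcv-full install, e.g.:
#   pip install mmcv-full==1.3.3
import mmcv

EXPECTED = "1.3.3"  # version recommended above for correct reproduction

if mmcv.__version__ != EXPECTED:
    raise RuntimeError(
        f"Found mmcv {mmcv.__version__}, but {EXPECTED} is recommended; "
        "versions >1.3.3 have shown a consistent performance drop."
    )
print(f"mmcv {mmcv.__version__} OK")
```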
I reproduced adamixer_r50_1x_coco.py using mmcv_full==1.3.9 and mmcv_full==1.3.3, respectively. Both yield 42.3 mAP, which is 0.4 points lower than the number (42.7 mAP) reported in the paper. Is a 0.4-point gap normal experimental noise? I ran the experiments on 8 V100 GPUs.
The gap (0.4 AP) is a bit large in my opinion, but it is still acceptable. I reproduced adamixer_r50_1x_coco.py on 8 Titan XP GPUs once in my university lab and got 42.5 mAP (#5). Could you please provide the training log for more detailed information?