Prerequisite
Environment
mmcv==2.1.0
mmdet==3.3.0
mmdet3d==1.4.0
mmengine==0.10.5
Reproduces the problem - code sample
During training, I want the learning rate of the image_backbone to remain at 0.1 times the base learning rate. Therefore, I set the following in the configuration file:
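A minimal sketch of the usual mmengine pattern for scaling a submodule's learning rate with lr_mult; the optimizer type, base lr, and weight decay below are illustrative assumptions, and only the img_backbone/lr_mult entry comes from this report:

optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='AdamW', lr=lr, weight_decay=0.01),  # assumed values
    # custom_keys are matched against parameter names; every parameter whose
    # name contains 'img_backbone' gets its lr scaled by lr_mult when the
    # optimizer is built
    paramwise_cfg=dict(
        custom_keys={
            'img_backbone': dict(lr_mult=0.1),
        }))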
And set the param_scheduler:
param_scheduler = [
    # learning rate scheduler
    # During the first 8 epochs, learning rate increases from lr to lr * 100
    # during the next 12 epochs, learning rate decreases from lr * 100 to lr
    dict(
        type='CosineAnnealingLR',
        T_max=8,
        eta_min=lr * 100,
        begin=0,
        end=8,
        by_epoch=True,
        convert_to_iter_based=True),
    dict(
        type='CosineAnnealingLR',
        T_max=12,
        eta_min=lr,
        begin=8,
        end=20,
        by_epoch=True,
        convert_to_iter_based=True),
    # momentum scheduler
    # During the first 8 epochs, momentum increases from 0 to 0.85 / 0.95
    # during the next 12 epochs, momentum increases from 0.85 / 0.95 to 1
    dict(
        type='CosineAnnealingMomentum',
        T_max=8,
        eta_min=0.85 / 0.95,
        begin=0,
        end=8,
        by_epoch=True,
        convert_to_iter_based=True),
    dict(
        type='CosineAnnealingMomentum',
        T_max=12,
        eta_min=1,
        begin=8,
        end=20,
        by_epoch=True,
        convert_to_iter_based=True)
]
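A quick way to confirm the multiplier is applied when the optimizer is built (a sketch; optim_wrapper here stands for the constructed mmengine OptimWrapper instance, and the group order depends on the model):

# Print each param group's lr right after the optim wrapper is built;
# the groups matched by the 'img_backbone' custom key should show 0.1 * lr.
for i, group in enumerate(optim_wrapper.optimizer.param_groups):
    print(i, group['lr'])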
Reproduces the problem - command or script
Consistent with the above.
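Presumably the standard mmdet3d training entry point, e.g. (the config filename is hypothetical):

python tools/train.py configs/my_config.py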
Reproduces the problem - error message
At the beginning, the learning rate of img_backbone is indeed 0.1 times the base learning rate:
However, img_backbone's learning rate slowly caught up during the training process:
It looks like lr_mult only works at the beginning to set the learning rate. How can I make lr_mult work throughout the training process?
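This catching-up is what one would expect if eta_min is an absolute target shared by all param groups: each group is annealed from its own initial lr toward the same lr * 100, so the initial 10x gap shrinks to nothing by the end of the phase. A standalone illustration (the base lr value is made up, and the closed-form cosine curve stands in for mmengine's recursive update):

import math

lr = 1e-4            # assumed base learning rate
eta_min = lr * 100   # shared absolute target, as in the config above
T_max = 8

def cosine(base, t):
    # value at epoch t when annealing from `base` toward `eta_min`
    return eta_min + (base - eta_min) * (1 + math.cos(math.pi * t / T_max)) / 2

for t in range(T_max + 1):
    ratio = cosine(0.1 * lr, t) / cosine(lr, t)
    print(f'epoch {t}: backbone/base lr ratio = {ratio:.3f}')  # drifts from 0.1 to 1.0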
Additional information
I think that after adding lr_mult, the learning rate of the image backbone during the entire training process should be 0.1 times the base learning rate.
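One possible direction, stated as an assumption to verify rather than a confirmed fix: mmengine's CosineAnnealingLR also accepts eta_min_ratio in place of eta_min, which anneals each param group toward a multiple of its own initial value instead of a shared absolute target, so a group built with lr_mult=0.1 would keep that ratio. A sketch of the first scheduler rewritten this way:

dict(
    type='CosineAnnealingLR',
    T_max=8,
    # target is 100x each group's *own* initial lr, rather than the
    # absolute lr * 100 shared by every group
    eta_min_ratio=100,
    begin=0,
    end=8,
    by_epoch=True,
    convert_to_iter_based=True)

# the second phase would similarly use eta_min_ratio=1 to return each
# group to its own initial lr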