Thank you for your great work!
We noticed something that does not behave as expected during training: the generator's cycle_consistency_loss keeps increasing. According to the paper, this loss forces the generator to produce reconstructed images that more closely resemble the original image, so we expected it to decrease over the course of training. Instead, it actually increased while the other generator losses were decreasing. Below is a visualization of the losses.
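For reference, by cycle consistency loss we mean the usual L1 penalty between the original image and the image mapped back from the generated one. Below is a minimal PyTorch-style sketch of the quantity we are plotting; the dummy generator, tensor shapes, and AU dimension of 17 are placeholders, not this repo's actual code:

import torch
import torch.nn as nn

# Stand-in generator: the real G maps (image, target AU vector) -> image of the
# same shape; a single conv layer here just keeps the sketch runnable.
class DummyGenerator(nn.Module):
    def __init__(self, au_dim=17):
        super().__init__()
        self.conv = nn.Conv2d(3 + au_dim, 3, kernel_size=3, padding=1)

    def forward(self, img, au):
        # Tile the AU vector over the spatial dims and concatenate with the image.
        au_map = au[:, :, None, None].expand(-1, -1, img.size(2), img.size(3))
        return torch.tanh(self.conv(torch.cat([img, au_map], dim=1)))

G = DummyGenerator()
x_real  = torch.rand(4, 3, 128, 128)   # original images
au_orig = torch.rand(4, 17)            # AU labels of the originals
au_tgt  = torch.rand(4, 17)            # target AU labels

x_fake = G(x_real, au_tgt)                         # original -> generated
x_rec  = G(x_fake, au_orig)                        # generated -> reconstruction
loss_cyc = torch.mean(torch.abs(x_real - x_rec))   # this is the term that grows in our run instead of shrinking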
To show the problem more concretely, here are 3 groups of images in the order 'original-generated-reconstruction'. We found that some new features appear in the generated image but fail to be recovered in the reconstruction (e.g. the front hair in the second group). This is obviously not what we hope for:
So we would like to ask whether the cycle consistency loss also increased during your training. Do you have any suggestions for this phenomenon?
@csh589 Hello, I trained on 29000 pictures and got bad results like this. Any advice? Could you share your parameters? And how many pictures are in your dataset?
import argparse

parser = argparse.ArgumentParser()

# train args
parser.add_argument('--face_dir', dest='face_dir', default='../data_prepare/_data/dataFile_all/',
                    help='folder where the face images are stored')
parser.add_argument('--au_pkl_dir', dest='au_pkl_dir', default='../data_prepare/_data/dataFile_all.pkl',
                    help='.pkl file storing the AU labels')
parser.add_argument('--batch_size', dest='batch_size', type=int, default=25, help='training batch size')
parser.add_argument('--epoch', dest='epoch', type=int, default=30, help='number of training epochs')
parser.add_argument('--lambda_D_img', dest='lambda_D_img', type=float, default=1, help='')        # image adversarial loss weight
parser.add_argument('--lambda_D_au', dest='lambda_D_au', type=float, default=4000, help='')       # AU regression loss weight
parser.add_argument('--lambda_D_gp', dest='lambda_D_gp', type=float, default=10, help='')         # gradient penalty weight
parser.add_argument('--lambda_cyc', dest='lambda_cyc', type=float, default=10, help='')           # cycle-consistency loss weight
parser.add_argument('--lambda_mask', dest='lambda_mask', type=float, default=0.1, help='')        # attention mask sparsity weight
parser.add_argument('--lambda_mask_smooth', dest='lambda_mask_smooth', type=float, default=1e-5, help='')  # attention mask smoothness weight
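For context, this is how I understand these weights entering a GANimation-style objective; the term names and placeholder values below are only illustrative, not this repo's exact code:

# Illustrative placeholders for the per-term losses the training graph would
# compute; only the weighting pattern matters in this sketch.
g_adv, g_au, cyc = 1.0, 0.001, 0.2      # generator: adversarial, AU regression, cycle-consistency terms
mask_l1, mask_tv = 0.5, 0.3             # attention-mask sparsity and smoothness terms
d_adv, d_au, d_gp = 1.0, 0.001, 0.05    # discriminator: adversarial, AU regression, gradient penalty terms

# Weights taken from the argparse defaults above.
lambda_D_img, lambda_D_au, lambda_D_gp = 1.0, 4000.0, 10.0
lambda_cyc, lambda_mask, lambda_mask_smooth = 10.0, 0.1, 1e-5

loss_G = (lambda_D_img * g_adv + lambda_D_au * g_au + lambda_cyc * cyc
          + lambda_mask * mask_l1 + lambda_mask_smooth * mask_tv)
loss_D = lambda_D_img * d_adv + lambda_D_au * d_au + lambda_D_gp * d_gp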