Hi, thank you for your research.
MCNET produces more reliable results than other face animation models. I like it.
Do you have any plan to share a 512-size model?
If not, do you have any guide or advice for training a 512-size model? (e.g., number of keypoints, training time or epochs, or anything regarding config.yaml)
Best regards.
Hi, this model is very easy to extend to 512x512. However, a high-quality dataset is needed. Once you have a proper dataset, you just need to modify the parameters in the config file, e.g., the image size.
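For reference, a minimal sketch of the kind of config.yaml changes this implies is below. The parameter names (frame_shape, num_kp, batch_size, num_epochs) follow common FOMM-style configs and are assumptions; the actual keys and defaults in the MCNET config.yaml may differ, so compare against the released 256 config.

```yaml
# Hypothetical config.yaml fragment for 512x512 training.
# Key names are assumed from FOMM-style configs; adjust to match
# the repo's released 256x256 config.
dataset_params:
  frame_shape: [512, 512, 3]   # was [256, 256, 3] for the released model
model_params:
  common_params:
    num_kp: 15                 # assumption: more keypoints may help at higher resolution
train_params:
  batch_size: 4                # reduce to fit GPU memory at 512x512
  num_epochs: 100              # expect longer training than at 256x256
```

Note that a model trained this way will not be weight-compatible with the released 256 checkpoint, since layer shapes change with the input resolution.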
Hi @harlanhong,
Is this explanation for training?
I cannot use your pretrained model for 512x512 inference, right? (I actually tried, but got a runtime error about mismatched sizes between the input and the model weights.)