Hi,

I'm planning to create a custom dataset for a specific domain to train on this network, and I had a couple of questions about how to go about it. The plan is to structure the dataset as follows:

1. Obtain the videos, extract their frames, and place the frames in the HR folders (see the extraction sketch after this list).
2. Use generate_mod_LR_bic.py to generate the corresponding LR frames.
3. Use create_lmdb_mp.py, adjusted to my dataset, to generate the LMDB files.
4. Edit train_zsm.yml for my dataset and run train.py.
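For step 1, this is a minimal frame-extraction sketch using OpenCV. The paths, per-clip folder layout, and zero-padded file naming are my assumptions; I'd match whatever layout generate_mod_LR_bic.py and the LMDB script expect:

```python
import os
import cv2

def extract_frames(video_path, out_dir):
    """Dump every frame of a video as zero-padded PNGs into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:08d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# e.g. one sub-folder per clip under the HR root (hypothetical paths)
n = extract_frames("clip0001.mp4", "datasets/mydata/HR/clip0001")
print(f"wrote {n} frames")
```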
Questions:
Do the HR images need to be a specific size? If they are different sizes, what would I need to change in the configuration files?
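For context, this is roughly the part of train_zsm.yml I expect I'd have to adjust. The option names GT_size / LQ_size and the values are guesses based on mmsr/EDVR-style configs, not verified against this repo:

```yaml
datasets:
  train:
    # training patch sizes; with scale 4, presumably GT_size = 4 * LQ_size
    LQ_size: 32
    GT_size: 128
```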
I would have thought that, since this network solves the STVSR task, there should also be GT (HR) frames corresponding to the intermediate frames that ZSM generates, i.e. the input (LR) sequence should be at a lower frame rate than the GT sequence. But that doesn't seem to be the case. Is this correct? Should the LR and HR folders contain the same number of files? (See the sketch below for the relationship I'm assuming.)
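Here is the frame-rate relationship I'm assuming, sketched in Python; I haven't verified this against the dataloader. The idea would be that both folders hold the full sequence (e.g. a Vimeo-90K-style septuplet) and the loader feeds only every other LR frame to the network, so no separate low-FPS folder is needed:

```python
# GT: the full 7-frame sequence; LR input: every other frame of it.
gt_frames = ["im1.png", "im2.png", "im3.png", "im4.png",
             "im5.png", "im6.png", "im7.png"]   # 7 GT frames
lr_frames = gt_frames[::2]                      # im1, im3, im5, im7
assert len(gt_frames) == 2 * len(lr_frames) - 1
```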
How do I train for a specific interpolation factor? Or is this even possible? Is the default interpolation factor 2?
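For reference, this is the frame-count arithmetic I'm assuming for a temporal interpolation factor of 2 (one synthesized frame between each consecutive input pair); the helper is purely illustrative:

```python
def n_output_frames(n_input, factor=2):
    """Frames produced when (factor - 1) intermediate frames are
    synthesized between each consecutive pair of input frames."""
    return factor * n_input - (factor - 1)

print(n_output_frames(4))  # 4 LR inputs -> 7 HR outputs (a septuplet)
```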