Hello, I have a folder with ~5,400 JPG files, and I want to train a StyleGAN2-ADA model on a custom dataset made from these images. The problem is that I am working in a Google Colab notebook with only 40 GB of free disk space (I'm from Spain and Google Colab Pro isn't available here yet...). After some preprocessing to crop and resize the images to 1024x1024, the folder size is approximately 400 MB.
However, when converting the JPEGs to TFRecords to create the dataset using:
!python dataset_tool.py create_from_images ./datasets/{dataset_name} {dataset_path}
the TFRecord dataset grows to more than 40 GB, and the free Google Colab notebook runs out of disk space.
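For context, here is my rough back-of-the-envelope estimate of where the size comes from. I'm assuming dataset_tool.py stores decoded uint8 RGB pixels uncompressed and also writes lower-resolution copies of each image, so treat the numbers as approximate:

```python
# Rough estimate, assuming dataset_tool.py stores decoded uint8 RGB pixels
# (uncompressed) and also writes lower-resolution copies of every image.
num_images = 5400
bytes_per_image = 1024 * 1024 * 3            # ~3 MiB of raw RGB per 1024x1024 image
full_res = num_images * bytes_per_image      # full-resolution data alone
with_lower_lods = full_res * 4 / 3           # lower resolutions add roughly 1/3 more

print(f"full resolution only: {full_res / 2**30:.1f} GiB")        # ~15.8 GiB
print(f"with lower-resolution copies: {with_lower_lods / 2**30:.1f} GiB")  # ~21.1 GiB
```

Even the full-resolution data alone is far larger than the ~400 MB of JPEGs, and with TFRecord overhead plus everything else on the VM the disk fills up quickly.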
Is there any way to fix this, or to change the dataset_tool.py code so that it stores the image bytes in the TFRecord JPEG-encoded, the way the JPG files themselves are?
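Something along these lines is what I have in mind, just as a sketch using plain tf.io calls rather than the repo's actual TFRecordExporter (the 'data' key and the single-file layout are my own placeholders, not the exact format training/dataset.py expects):

```python
import glob
import tensorflow as tf

def write_jpeg_tfrecord(image_dir, tfrecord_path):
    """Write the already-compressed JPEG bytes into a TFRecord (sketch only)."""
    with tf.io.TFRecordWriter(tfrecord_path) as writer:
        for path in sorted(glob.glob(f'{image_dir}/*.jpg')):
            with open(path, 'rb') as f:
                jpeg_bytes = f.read()  # keep the compressed bitstream as-is
            example = tf.train.Example(features=tf.train.Features(feature={
                'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
            }))
            writer.write(example.SerializeToString())
```

The training-side parser would then need something like tf.io.decode_jpeg() instead of reading raw bytes, which is the part I'm not sure how to wire into the repo.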