In the tutorial you transform the images to 224 x 224. Why is that? Is it because the model was trained with 224x224 images, so the model may not be able to label larger images?
Yes, that is the specific input size the model used in the example takes, because that is the size it was trained on.
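For illustration, here is a minimal preprocessing sketch in Python/TensorFlow (the tutorial may use a different language or API; the file name and the [0, 1] normalization are assumptions you should check against the model's documentation):

```python
import tensorflow as tf

# "example.jpg" is a placeholder; substitute the tutorial's own image.
image = tf.io.decode_jpeg(tf.io.read_file("example.jpg"), channels=3)

# Resize to the 224x224 spatial size the model was trained on.
# tf.image.resize returns float32.
image = tf.image.resize(image, [224, 224])

# Scale pixels to [0, 1]; some models instead expect [-1, 1] or
# ImageNet mean/std normalization, so check the model's docs.
image = image / 255.0

# Add a batch dimension: final shape is (1, 224, 224, 3).
batch = tf.expand_dims(image, axis=0)
```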
Each model has something called a signature that tells you the dimensions and types it takes as input and produces as output. Currently I recommend using the saved_model_cli script to help work that out. The specifics of how the input and output are encoded into the TFTensor vary from model to model, so you have to read the model's documentation to find out what is expected.
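For example, assuming the model is in SavedModel format (the directory path is a placeholder):

```sh
# Print every tag-set and signature, with input/output names, dtypes, and shapes:
saved_model_cli show --dir /path/to/saved_model --all

# Or inspect just the default serving signature:
saved_model_cli show --dir /path/to/saved_model \
    --tag_set serve --signature_def serving_default
```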