
Why transform to 224 x 224 #19

Open

pjfalbe opened this issue Aug 16, 2023 · 1 comment

Comments

pjfalbe commented Aug 16, 2023

In the tutorial you transform the images to 224 x 224. Why is that? Is it because the model was trained with 224 x 224 images, so the model may not be able to label larger images?

zmughal (Member) commented Aug 16, 2023

Yes, that is the specific input size that the model used in the example accepts, and it is the size it was trained with.

Each model has something called a signature that tells you the dimensions and types it accepts as input and produces as output. Currently I recommend using the saved_model_cli script to help work that out. The specifics of how the input and output are encoded into the TFTensor vary from model to model, so you have to read the model documentation to find out what is expected.
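
For example, a minimal sketch of inspecting a SavedModel with saved_model_cli (here `./model` is a placeholder for whatever directory contains the model's `saved_model.pb`):

```sh
# List the tag-sets contained in the SavedModel directory.
saved_model_cli show --dir ./model

# Dump everything, including each signature's input/output dtypes and shapes.
# A 224 x 224 RGB image classifier will typically report an input shape
# along the lines of (-1, 224, 224, 3), where -1 is the batch dimension.
saved_model_cli show --dir ./model --all
```

The `serving_default` signature is usually the one relevant for inference, and its reported shape tells you what size to resize your images to before building the input TFTensor.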
