A few questions about the implementation #25

Open
jewes opened this issue Jun 6, 2018 · 1 comment

Comments


jewes commented Jun 6, 2018

Hello abhaydoke09,
Thanks for sharing your nice work. I have a few questions about the implementation:

  1. The default input image size is 448x448, which makes the shape of the last convolution layer (conv5_3) (-1, 28, 28, 512). In the following line, is the 784 the result of 28x28? If the input image size is reduced to 224x224, the shape of conv5_3 becomes (-1, 14, 14, 512); should the 784 then be changed to 196 (which is 14x14)? (See the sketch after this list.)
     self.phi_I = tf.divide(self.phi_I, 784.0)
  2. It looks like the number of parameters in the last fully connected layer is always 512*512*num_classes, no matter what the input image size is. Is this expected?
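
For context, here is a minimal sketch of how I understand the bilinear-pooling head is wired up (my own reconstruction, not a copy of the repository's exact code; the names conv5_3 and phi_I just mirror the ones used there). It shows that the divisor is the number of spatial positions h*w (784 for a 28x28 map, 196 for 14x14), while the classifier weight matrix is (512*512, num_classes) regardless of input size:

    import tensorflow as tf

    def bilinear_head(conv5_3, num_classes):
        # conv5_3: (batch, h, w, 512) feature map from VGG-16
        _, h, w, c = conv5_3.get_shape().as_list()
        n_pos = h * w                                # 28*28 = 784, or 14*14 = 196

        x = tf.reshape(conv5_3, [-1, n_pos, c])      # (batch, h*w, 512)
        phi_I = tf.matmul(x, x, transpose_a=True)    # (batch, 512, 512): sum of outer products
        phi_I = tf.divide(phi_I, float(n_pos))       # average over spatial positions
        phi_I = tf.reshape(phi_I, [-1, c * c])       # (batch, 512*512)

        # signed square-root and L2 normalisation, as described in the Bilinear-CNN paper
        y = tf.multiply(tf.sign(phi_I), tf.sqrt(tf.abs(phi_I) + 1e-12))
        z = tf.nn.l2_normalize(y, axis=1)

        # classifier weights have shape (512*512, num_classes) no matter what h and w are
        return tf.keras.layers.Dense(num_classes)(z)

If this sketch is right, then for 224x224 inputs only n_pos changes (784 -> 196); the last layer keeps 512*512*num_classes parameters because the outer product always yields a 512x512 matrix.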

Again, thanks for the nice work!


reckdk commented Jan 15, 2019

As these lines show, '784' is the result of 28x28, which corresponds to the 'feature dimension c' in the Bilinear-CNN paper.
Please refer to #7 and these lines for more info.
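
One way to avoid hard-coding the 784 (just a suggestion, not what the repository currently does) is to derive the divisor from the feature map's shape, so the same pooling code handles both 448x448 and 224x224 inputs:

    import tensorflow as tf

    def spatial_positions(features):
        # features: (batch, h, w, c); returns h*w as a float (784 for 28x28, 196 for 14x14)
        shape = tf.shape(features)
        return tf.cast(shape[1] * shape[2], tf.float32)

    # hypothetical usage inside the pooling step:
    # self.phi_I = tf.divide(self.phi_I, spatial_positions(self.conv5_3))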
