I was searching for STN implementations on GitHub and came across yours. I have a few queries about using an STN purely as an attention mechanism, with a fixed isotropic scale (say 0.5) and a localization network that predicts only the translation parameters (tx and ty).
Queries:
If I use your Spatial Transformer module as-is, will it work for this case?
Should the localization network then predict only the two parameters (tx and ty)?
I expand theta from shape (2,) to (2, 3) with the following steps (a minimal code sketch follows below):
a. outer product: [tx, ty]^T * [0, 0, 1] --> [[0, 0, tx], [0, 0, ty]]
b. add the fixed scale: [[0, 0, tx], [0, 0, ty]] + [[0.5, 0, 0], [0, 0.5, 0]] --> [[0.5, 0, tx], [0, 0.5, ty]]
Will this construction still be differentiable?
The resulting theta is then fed into the spatial transformer network.
Will this work?
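To make the question concrete, here is a minimal sketch of the construction I mean, assuming a PyTorch-style STN built on affine_grid / grid_sample; the localization-network layers, sizes, and the class name are placeholders, not your implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch only: fixed isotropic scale 0.5, localization net predicts (tx, ty).
class TranslationOnlySTN(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        # Toy localization network (layer sizes are assumptions).
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),  # predicts [tx, ty]
        )

    def forward(self, x):
        t = self.loc(x)                                   # (N, 2)
        # Step a: outer product with [0, 0, 1] puts tx, ty in the last column.
        base = t.new_tensor([0.0, 0.0, 1.0])
        theta = t.unsqueeze(-1) * base                    # (N, 2, 3)
        # Step b: add the fixed isotropic scale 0.5 on the diagonal.
        scale = t.new_tensor([[0.5, 0.0, 0.0],
                              [0.0, 0.5, 0.0]])
        theta = theta + scale                             # (N, 2, 3)
        # Both steps are ordinary tensor ops, so gradients flow back to (tx, ty).
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```

With the scale held fixed, the network only learns where to place the half-size attention window, which is the behaviour I'm asking about.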