Why are Conv 3x3x3 and Conv 1x1x1 used in the 3D LKA Block? #31

Open · xiaogege1210 opened this issue Oct 20, 2024 · 1 comment

@xiaogege1210

Hello, could you please explain why Conv 3x3x3 and Conv 1x1x1 are used in the 3D LKA Block instead of continuing to use Layer Norm and MLP as in the 2D LKA Block?
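To make the question concrete, here is a rough sketch of how I currently picture the two block designs (the class names, the `nn.Identity` placeholders, and the exact placement of the convolutions are my own guesses for illustration, not taken from your code):

```python
import torch
import torch.nn as nn

class LKABlock2D(nn.Module):
    """2D-style block as I understand it: pre-norm residuals with
    LayerNorm -> attention, then LayerNorm -> MLP."""
    def __init__(self, dim, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.Identity()  # placeholder for the 2D (deformable) LKA attention
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):  # x: (B, N, C) token layout
        x = x + self.attn(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x

class LKABlock3D(nn.Module):
    """3D-style block as I understand it: Conv 3x3x3 and Conv 1x1x1
    around the attention, in place of the LayerNorm/MLP pair."""
    def __init__(self, dim):
        super().__init__()
        self.conv3 = nn.Conv3d(dim, dim, kernel_size=3, padding=1)  # Conv 3x3x3
        self.attn = nn.Identity()  # placeholder for the 3D (deformable) LKA attention
        self.conv1 = nn.Conv3d(dim, dim, kernel_size=1)             # Conv 1x1x1

    def forward(self, x):  # x: (B, C, D, H, W) volumetric layout
        return x + self.conv1(self.attn(self.conv3(x)))
```

Is this roughly the intended structure, and if so, what motivated dropping the norm/MLP pair in the volumetric case?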

@xiaogege1210 (Author)

Just to confirm: the different strategy for the 3D LKA that you mention in the paper refers to replacing LN (Layer Normalization) and MLP (Multi-Layer Perceptron) with 3x3x3 and 1x1x1 convolutions, right? I also noticed in the paper that a separate deformable convolution layer is introduced after the depth-wise convolution in the 3D LKA. Is there a structural diagram I could take a look at?
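In the meantime, this is a minimal sketch of how I picture that attention module from the text (the kernel sizes follow the common 2D LKA decomposition, and a plain grouped `Conv3d` stands in for the deformable layer since PyTorch has no built-in 3D deformable convolution; all of this is my assumption, not your implementation):

```python
import torch
import torch.nn as nn

class DeformableLKA3D(nn.Module):
    """Guessed structure: depth-wise conv -> (separate) deformable conv ->
    depth-wise dilated conv -> 1x1x1 conv, used as an attention map."""
    def __init__(self, dim):
        super().__init__()
        self.dw_conv = nn.Conv3d(dim, dim, kernel_size=5, padding=2, groups=dim)
        # Stand-in for a 3D deformable convolution, placed after the depth-wise conv.
        self.deform_conv = nn.Conv3d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.dw_d_conv = nn.Conv3d(dim, dim, kernel_size=7, padding=9,
                                   dilation=3, groups=dim)
        self.pw_conv = nn.Conv3d(dim, dim, kernel_size=1)  # Conv 1x1x1

    def forward(self, x):  # x: (B, C, D, H, W)
        attn = self.dw_conv(x)
        attn = self.deform_conv(attn)  # the extra deformable layer in question
        attn = self.dw_d_conv(attn)
        attn = self.pw_conv(attn)
        return x * attn  # element-wise attention weighting, as in LKA
```

A diagram would help me verify whether the deformable layer really sits at that position in the chain.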
