Hello, could you please explain why Conv 3x3x3 and Conv 1x1x1 are used in the 3D LKA Block instead of continuing to use Layer Norm and MLP as in the 2D LKA Block?
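For context, here is a minimal PyTorch sketch of how such a conv-based 3D LKA block might look. This is only my reading of the question's description, not the repository's implementation; the kernel sizes, dilation, and block layout are assumptions.

```python
import torch
import torch.nn as nn

class LKA3D(nn.Module):
    """3D Large Kernel Attention: depth-wise conv -> depth-wise dilated
    conv -> 1x1x1 conv, used as a multiplicative attention gate.
    Kernel sizes and dilation here are illustrative assumptions."""
    def __init__(self, dim):
        super().__init__()
        self.dw_conv = nn.Conv3d(dim, dim, kernel_size=5, padding=2, groups=dim)
        self.dw_d_conv = nn.Conv3d(dim, dim, kernel_size=7, padding=9,
                                   dilation=3, groups=dim)
        self.pw_conv = nn.Conv3d(dim, dim, kernel_size=1)

    def forward(self, x):
        attn = self.pw_conv(self.dw_d_conv(self.dw_conv(x)))
        return x * attn

class LKABlock3D(nn.Module):
    """One reading of the question: a 3x3x3 conv before the LKA and a
    1x1x1 conv after it stand in for the LayerNorm/MLP pair of the 2D
    block. The exact layout is a hypothetical illustration."""
    def __init__(self, dim):
        super().__init__()
        self.conv1 = nn.Conv3d(dim, dim, kernel_size=3, padding=1)
        self.lka = LKA3D(dim)
        self.conv2 = nn.Conv3d(dim, dim, kernel_size=1)

    def forward(self, x):
        return x + self.conv2(self.lka(self.conv1(x)))

# quick shape check on a small volume
x = torch.randn(1, 16, 8, 32, 32)
assert LKABlock3D(16)(x).shape == x.shape
```

A common motivation for this swap is cost: LayerNorm and MLP layers become expensive on 3D feature maps, while convolutions keep memory and compute manageable, though the authors' exact reasoning is what this issue asks about.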
The different strategy for the 3D LKA mentioned in the paper refers to replacing LN (Layer Normalization) and MLP (Multi-Layer Perceptron) with 3x3x3 and 1x1x1 convolutions, right? Additionally, I noticed that the paper introduces a separate deformable convolution layer after the depth-wise convolution in the 3D LKA. Is there a structural diagram available that I could look at?
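Pending a diagram, here is a hedged sketch of the layer ordering being asked about, with the deformable convolution placed directly after the depth-wise convolution. This is not the authors' code; in particular, PyTorch/torchvision provide only a 2D deformable convolution (torchvision.ops.DeformConv2d), so the 3D deformable layer is represented by a plain Conv3d stand-in below.

```python
import torch
import torch.nn as nn

class DeformableLKA3D(nn.Module):
    """Sketch of the ordering asked about: a deformable convolution
    inserted directly after the depth-wise convolution. torchvision
    only ships a 2D deformable conv (torchvision.ops.DeformConv2d),
    so `self.deform_conv` below is a plain Conv3d STAND-IN that marks
    where a custom or third-party 3D deformable op would go."""
    def __init__(self, dim):
        super().__init__()
        self.dw_conv = nn.Conv3d(dim, dim, kernel_size=5, padding=2, groups=dim)
        # deformable layer goes here, right after the depth-wise conv
        self.deform_conv = nn.Conv3d(dim, dim, kernel_size=3, padding=1)  # stand-in
        self.dw_d_conv = nn.Conv3d(dim, dim, kernel_size=7, padding=9,
                                   dilation=3, groups=dim)
        self.pw_conv = nn.Conv3d(dim, dim, kernel_size=1)

    def forward(self, x):
        attn = self.dw_conv(x)
        attn = self.deform_conv(attn)   # deformable sampling step
        attn = self.pw_conv(self.dw_d_conv(attn))
        return x * attn

x = torch.randn(1, 16, 8, 32, 32)
assert DeformableLKA3D(16)(x).shape == x.shape
```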