Thanks for this great work. In the paper you report accuracy with both the SA and AS blocks but without the co-attention fusion module, and I wonder how you obtained that result. Did you attach a direct FC layer to the outputs of the attention modules, e.g. something like the sketch below? How can we replicate that result?
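To make the question concrete, here is a minimal PyTorch sketch of what I imagine such an ablation could look like: the two attention streams are pooled and passed straight to a single FC classifier, with no co-attention fusion in between. The module names, pooling choice, and output size are my own assumptions, not taken from your code.

```python
# Hypothetical ablation head: a direct FC layer on the attention outputs,
# bypassing the co-attention fusion module. All names/shapes are assumptions.
import torch
import torch.nn as nn

class AttentionOnlyHead(nn.Module):
    def __init__(self, dim=512, num_classes=3129):
        super().__init__()
        # Concatenate the two pooled streams and classify with one FC layer
        self.fc = nn.Linear(dim * 2, num_classes)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, seq_len, dim) outputs of the SA and AS blocks
        pooled_a = feat_a.mean(dim=1)  # mean-pool over the sequence dimension
        pooled_b = feat_b.mean(dim=1)
        return self.fc(torch.cat([pooled_a, pooled_b], dim=-1))
```

Is this roughly what was done, or was a different pooling/projection used before the classifier?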