Issues: idiap/fast-transformers
TypeError: canonicalize_version() got an unexpected keyword argument 'strip_trailing_zero' (#132, opened Aug 28, 2024 by luispintoc)
Speed of linear attention slower than the attention implemented in pytorch (#130, opened Jun 24, 2024 by yzeng58)
[WinError 2] The system cannot find the file specified: build_ext (#129, opened Feb 29, 2024 by cliffordkleinsr)
Understanding how to define key, query and value for the cross attention calculation (#119, opened Dec 18, 2022 by neuronphysics)
How is the causal mask constructed when training a batched model with linear causal attention? (#109, opened Nov 26, 2021 by Howuhh)
local_dot_product_cuda fails when queries and keys have different lengths (#98, opened Aug 8, 2021 by tridao)
Linear Transformers are Fast Weight Memory Systems [new-attention] (#70, opened Mar 10, 2021 by angeloskath)
Feature request: L2 self-attention [new-attention] (#60, opened Jan 19, 2021 by ketyi)