
Add Sageattention backend #10532

Open · wants to merge 7 commits into base: main
Conversation

flozi00 (Contributor) commented Nov 21, 2024

github-actions (bot) commented

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

Signed-off-by: Florian Zimmermeister <[email protected]>
flozi00 (Contributor, Author) commented Nov 21, 2024

@simon-mo @mgoin
What do you think about the deps? Copying the kernels into vLLM, or using the pip / repo install?
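
For context, here is a minimal sketch of what the pip-install route might look like, assuming the `sageattention` package exposes a `sageattn(q, k, v, tensor_layout=..., is_causal=...)` entry point as described in its upstream README. The wrapper below is illustrative, not this PR's actual code:

```python
import torch

try:
    from sageattention import sageattn  # pip install sageattention
    HAS_SAGEATTENTION = True
except ImportError:
    HAS_SAGEATTENTION = False


def sage_attention_forward(
    q: torch.Tensor,  # [batch, num_heads, seq_len, head_dim]
    k: torch.Tensor,
    v: torch.Tensor,
    is_causal: bool = True,
) -> torch.Tensor:
    """Thin wrapper a backend could dispatch to when the package is present."""
    if not HAS_SAGEATTENTION:
        raise RuntimeError(
            "sageattention is not installed; try `pip install sageattention`."
        )
    # "HND" tells the kernel the tensors are laid out (batch, heads, seq, dim).
    return sageattn(q, k, v, tensor_layout="HND", is_causal=is_causal)
```

Keeping the import optional like this would let the backend stay a soft dependency, which is one argument for the pip route over vendoring the kernels.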

simon-mo (Collaborator) commented

Generally, if the attention backend has a pip-installable package, we prefer that. But before investigating further, I'm curious whether there's a performance benefit for LLMs. Can you run some benchmarks (e.g. benchmark throughput for Llama 8B)? Thank you!
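
For reference, a rough sketch of such a throughput check using vLLM's offline Python API; the model name, prompt count, and token budget below are illustrative assumptions, not numbers from this PR:

```python
import time
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B")  # any Llama 8B checkpoint
prompts = ["Hello, my name is"] * 256
params = SamplingParams(temperature=0.8, max_tokens=128)

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

# Count generated tokens across all requests to get decode throughput.
generated = sum(len(out.outputs[0].token_ids) for out in outputs)
print(f"{generated / elapsed:.1f} generated tokens/s ({elapsed:.1f}s total)")
```

Running this once with the default attention backend and once with the SageAttention backend enabled would show whether the kernels translate into end-to-end gains.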
