WIP: Add vLLM support to ChatQnA + DocSum Helm charts #610

Draft
wants to merge 7 commits into base: main

Conversation

@eero-t (Contributor) commented Nov 25, 2024

Description

Add vLLM support to ChatQnA + DocSum Helm app charts.

Similarly to what is already done in the Agent component, these charts now have tgi.enabled and vllm.enabled flags for selecting which LLM serving backend is used.
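
For illustration, a values override selecting the vLLM backend might look roughly like the sketch below. Only the tgi.enabled and vllm.enabled flags come from this PR; the file name and the model key are assumptions made for the example (the model ID is the one used elsewhere in this thread):

```yaml
# values-vllm.yaml -- sketch only, not verbatim from the charts
tgi:
  enabled: false            # disable the default TGI serving backend
vllm:
  enabled: true             # serve the LLM with vLLM instead
  # Model selection follows the chart's existing conventions;
  # the key name below is illustrative only.
  LLM_MODEL_ID: meta-llama/Meta-Llama-3-8B-Instruct
```

Such an override would then be applied the usual way, e.g. with helm install -f values-vllm.yaml.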

Notes:

  • Using vLLM with the DocSum app requires building the llm-docsum-vllm wrapper image from the GenAIComps repo, as that image is currently missing from DockerHub (see the sketch after this list)
  • The ChatQnA vLLM variant will still use HuggingFace TEI for embedding & reranking
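
As a rough sketch of the DocSum case (the llm-uservice subchart name and image keys are illustrative assumptions, not taken from this PR; only the opea/llm-docsum-vllm image name appears in this thread), pointing the chart at a locally built wrapper image could look like:

```yaml
# Sketch only: DocSum with vLLM and a locally built wrapper image.
# Subchart and key names are illustrative; check the chart's values.yaml.
tgi:
  enabled: false
vllm:
  enabled: true
llm-uservice:
  image:
    repository: opea/llm-docsum-vllm   # built locally from GenAIComps
    tag: latest
```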

Issues

Fixes #608 partially.

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds new functionality)

New dependencies

Tests

Manual testing on top of "main" HEAD / v1.1 images.

@eero-t marked this pull request as draft November 25, 2024 18:04
@eero-t (Contributor, Author) commented Nov 25, 2024

Setting as draft. I have tested that DocSum with Gaudi vLLM works and that the ChatQnA Helm chart can be installed, but because v1.1 image pulls are currently taking so long on my test node, I haven't been able to test ChatQnA with Gaudi vLLM properly yet.

The vLLM CPU version would also need testing before merging this (I'm hoping somebody else here could check at least DocSum with CPU vLLM).

@eero-t (Contributor, Author) commented Nov 26, 2024

CI issues:

  • LLM-uservice: openai.NotFoundError: Error code: 404 - {'object': 'error', 'message': 'The model meta-llama/Meta-Llama-3-8B-Instruct does not exist.', 'type': 'NotFoundError', 'param': None, 'code': 404}
    • Pre-existing bug in the CI/repo: the Helm chart refers to a model that is not present in CI => I can fix it
  • DocSum: 100.83.111.229:5000/opea/llm-docsum-vllm:latest: not found
    • Bug in OPEA image creation: that image is missing from DockerHub & CI => somebody else needs to fix that

@eero-t (Contributor, Author) commented Nov 26, 2024

This overlaps partly with #403.

Otherwise the service throws an exception due to a None variable value.

Signed-off-by: Eero Tamminen <[email protected]>
@eero-t (Contributor, Author) commented Nov 29, 2024

Added HPA support for ChatQnA / vLLM.
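
The actual HPA wiring is not shown in this conversation; as a rough illustration only (the key names below are assumptions, not taken from this PR), enabling autoscaling for the vLLM backend might look something like:

```yaml
# Sketch only: enable HPA for the vLLM-based ChatQnA deployment.
# Key names are illustrative; check the chart's values.yaml for the real ones.
global:
  horizontalPodAutoscaler:
    enabled: true      # create HPA resources for the serving components
tgi:
  enabled: false
vllm:
  enabled: true        # autoscaling then targets the vLLM deployment
```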

Signed-off-by: Eero Tamminen <[email protected]>
For now vLLM replaces just TGI, but since it also supports embedding,
TEI embedding/reranking may also be replaceable later on.

Signed-off-by: Eero Tamminen <[email protected]>

Successfully merging this pull request may close these issues:

  • Sync vLLM support from Examples repo k8s manifests to Helm charts