[Docs] Add dedicated tool calling page to docs #10554
Signed-off-by: mgoin <[email protected]>
@K-Mistele to review?
> `--chat-template examples/tool_chat_template_llama3_json.jinja`
>
> Next make a request to extract structured data using function calling:
"to extract structured data" doesn't quite seem right; that would be more appropriate e.g. for guided decoding / JSON mode. Maybe, "Next, make a request to the model that should result in it using the available tools"?
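The kind of request the reviewer is describing could be sketched as follows. The `get_weather` tool, the model name, and the request shape are illustrative assumptions (an OpenAI-compatible chat completions payload), not content from this PR:

```python
# Sketch of a chat completion request that lets the model decide when to
# call a tool (tool_choice="auto"). The get_weather tool and model name
# are hypothetical placeholders.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request_body = {
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": tools,
    "tool_choice": "auto",
}

# With the `openai` client this payload would be sent as:
#   client.chat.completions.create(**request_body)
print(request_body["tool_choice"])  # -> auto
```

With `tool_choice="auto"`, the model may either answer directly or return a structured tool call for the client to execute.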
> - Making a request with `tool_choice="auto"`
> - Handling the structured response and executing the corresponding function
>
> You can also specify a particular function using named function calling by setting `tool_choice={"type": "function", "function": {"name": "get_weather"}}`.
Worth noting here that this will use the guided decoding backend, so the first time it is used there will be several seconds of latency (or more) while the FSM is compiled, before it is cached.
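The client side of this flow could be sketched as below. The `get_weather` function, the dispatch table, and the arguments string are illustrative assumptions; the tool-call message mirrors the OpenAI chat completion shape rather than a captured server response:

```python
import json

# Hypothetical local implementation of the tool; the model only emits the
# call, the client executes it.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Forcing the named function via tool_choice, as in the quoted docs text:
tool_choice = {"type": "function", "function": {"name": "get_weather"}}

# A tool call shaped like the OpenAI chat completion format; the arguments
# string is illustrative, not captured from a real server.
tool_call = {
    "id": "call_0",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'},
}

# Dispatch the structured call to the matching local function.
dispatch = {"get_weather": get_weather}
args = json.loads(tool_call["function"]["arguments"])
result = dispatch[tool_call["function"]["name"]](**args)
print(result)  # -> Sunny in Berlin
```

The result would then be sent back to the model in a `role: "tool"` message to continue the conversation.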
> ## Named Function Calling
> vLLM supports named function calling in the chat completion API by default. It does so using Outlines through guided decoding, so it will work with any supported model. You are guaranteed a validly-parsable function call, but not necessarily a high-quality one.
For best results, we recommend ensuring that the expected output format / schema is specified in the prompt, so that the model's intended generation is aligned with the schema it is being forced to generate by the guided decoding backend.
For tool use, this should be handled by the chat template if it is already designed to support tool use.
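The suggestion above, putting the schema in the prompt so the model's intent matches what guided decoding enforces, could be sketched like this. The schema and prompt wording are illustrative assumptions, standing in for what a tool-aware chat template would normally render:

```python
import json

# Illustrative tool schema; in practice a tool-aware chat template would
# render this into the prompt automatically.
schema = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Embed the schema in the system prompt so the model's intended output
# is aligned with what the guided decoding backend will force it to emit.
system_prompt = (
    "You may call the following tool. Respond with a JSON object matching "
    "its parameters schema:\n" + json.dumps(schema, indent=2)
)
print("get_weather" in system_prompt)  # -> True
```

Without this, guided decoding can still force a parsable call, but the model may be "fighting" the constraint rather than producing the arguments it intended.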
Also adds a quickstart example for automatic function calling