[BUG]: Error with Ollama AI provider "unmarshal: invalid character 'p' after top-level value" #1229
Comments
Hi! Have you tried calling the backend directly at http://localhost:11434/v1 without using k8sgpt? The error might indicate that your backend isn't responding correctly.
@matthisholleville I have actually found the issue. The baseUrl has to be the one this call targets:

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'
```

FYI I am using ollama 0.3.0.
Great! We could indeed validate the URL during the configuration or before calling the AI.
@matthisholleville FYI I opened a PR on the docs: k8sgpt-ai/docs#119
```
k8sgpt auth remove --backends ollama
```
Checklist
Affected Components
K8sGPT Version
v0.3.40
Kubernetes Version
v1.26.3
Host OS and its Version
Fedora 40
Steps to reproduce
I configured an Ollama backend by following the documentation at https://docs.k8sgpt.ai/reference/providers/backend/ and running ...
(I tried with llama2 as well but the model should not matter here)
Then I just followed the Getting Started guide by creating the broken pod, but analyzing it with the ollama backend returns the following error:
I tried the same with Azure OpenAI and everything works as expected.
Expected behaviour
Using ollama as the backend returns an explanation of the broken pod rather than an unmarshal error.
Actual behaviour
No response
Additional Information
No response