
Add option to use a local LLM, such as Ollama #58

Closed
iplayfast opened this issue Mar 27, 2024 · 2 comments

iplayfast commented Mar 27, 2024

It should be as simple as the following.

From the user's side:

  1. Install Ollama: https://ollama.com/
  2. Run Ollama to load a model (e.g. ollama run llama2).

From your code side:

  1. Update UChatGPTSetting.dfm to include Ollama in the provider list.
  2. If the selected provider is Ollama, set edt_Url to 'http://localhost:11434/v1/chat/completions'.
  3. Allow the user to specify their model, or query Ollama to see which models are currently installed (see the sketch after this list).
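The plugin itself is written in Delphi, but as a language-neutral illustration, here is a minimal Python sketch (an assumption for clarity, not the plugin's actual code) of the two HTTP calls the settings page would need: listing locally installed models via Ollama's GET /api/tags endpoint, and sending a chat request to the OpenAI-compatible POST /v1/chat/completions endpoint on Ollama's default port 11434.

```python
import requests

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default local port


def list_local_models():
    """Return the names of models currently pulled into the local Ollama
    instance, using Ollama's native GET /api/tags endpoint."""
    resp = requests.get(f"{OLLAMA_BASE}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]


def chat(model, prompt):
    """Send a single-turn chat request through Ollama's OpenAI-compatible
    /v1/chat/completions endpoint and return the assistant's reply."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    resp = requests.post(
        f"{OLLAMA_BASE}/v1/chat/completions", json=payload, timeout=120
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    models = list_local_models()
    print("Installed models:", models)
    if models:
        print(chat(models[0], "Say hello in one sentence."))
```

Because /v1/chat/completions mirrors OpenAI's request and response shape, pointing the existing ChatGPT request code at the Ollama URL (with the model name taken from the user's selection) should require few other changes.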
AliDehbansiahkarbon (Owner) commented

Hi, an offline solution has been a concern of mine since the beginning, but I was waiting for a simple, powerful, easy-to-install-and-use option.
I'm currently playing with https://opencopilot.so.

Still, your suggestion seems promising; I will probably add yours first.
Thank you.

AliDehbansiahkarbon (Owner) commented Apr 3, 2024

Ollama.com is supported.
