
stop should either be excluded, set to [] or utilized. Setting it to null caused errors #344

Open
foxbg opened this issue Jan 31, 2024 · 4 comments


foxbg commented Jan 31, 2024

In the request, "stop" is set to null. "stop" should either be excluded, set to [], or set to an actual stop sequence.

{"messages": [{"role": "system", "content": "ChatDev is a software company powered by multiple intelligent agents, such as chief executive officer, chief human resources officer, chief product officer, chief technology officer, etc, with a multi-agent organizational structure and the mission of 'changing the digital world through programming'.\n......"}, {"role": "user", "content": "ChatDev ..."}], "model": "gpt-3.5-turbo-16k", "frequency_penalty": 0.0, "logit_bias": {}, "max_tokens": 15821, "n": 1, "presence_penalty": 0.0, "stop": null, "stream": false, "temperature": 0.2, "top_p": 1.0, "user": ""}HTTP/1.0 200 OK

Ref: LostRuins/koboldcpp#643
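To make the requested fix concrete, here is a minimal sketch of client-side sanitization, assuming an OpenAI-style chat completion request built in Python. The payload and endpoint URL below are illustrative placeholders, not ChatDev's actual code:

```python
import json
import urllib.request

# Request payload as reported in this issue (abbreviated); "stop": None
# serializes to "stop": null, which some backends reject.
payload = {
    "messages": [{"role": "user", "content": "ChatDev ..."}],
    "model": "gpt-3.5-turbo-16k",
    "max_tokens": 1024,
    "temperature": 0.2,
    "stop": None,
}

# Drop any None-valued keys so the server never sees "stop": null.
payload = {k: v for k, v in payload.items() if v is not None}

# The URL is a placeholder for whatever local OpenAI-compatible server is in use.
req = urllib.request.Request(
    "http://localhost:5001/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```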

LostRuins commented

Also "max_tokens": 15821 is not a good idea. It should ideally be half or less of your maximum context length, 1k is a good value.

thinkwee (Collaborator) commented May 7, 2024

Could you please try the latest version of ChatDev and provide more background information? I don't know the relationship between ChatDev and koboldcpp.

foxbg (Author) commented Jun 5, 2024

Hi,

Here is what I get with the latest version:

....
"model": "gpt-3.5-turbo-16k", "frequency_penalty": 0.0, "logit_bias": {}, "max_tokens": 15654, "n": 1, "presence_penalty": 0.0, "stop": null, "stream": false, "temperature": 0.2, "top_p": 1.0, "user": ""}
....
Processing Prompt [BLAS] (729 / 729 tokens)
Generating (6 / 15654 tokens)
(EOS token triggered! ID:2)
CtxLimit: 736/16384, Process:8.03s (11.0ms/T = 90.77T/s), Generate:4.20s (699.8ms/T = 1.43T/s), Total:12.23s (0.49T/s)
must be str, not NoneType

The relationship is that koboldcpp is AI text-generation software for GGML and GGUF models, built on llama.cpp. I'm using it to run a local model.
As noted above, the issue is most probably with "stop": null.
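For what it's worth, here is a guess at the failure mode behind "must be str, not NoneType", with the defensive normalization a server could apply. This is an illustrative sketch, not koboldcpp's actual code:

```python
def normalize_stop(stop):
    """Accept None, a single string, or a list of strings as "stop"."""
    if stop is None:
        return []          # treat null the same as an empty stop list
    if isinstance(stop, str):
        return [stop]
    return list(stop)

# Without normalization, code that passes the raw value into a string
# method raises exactly this kind of error, e.g.:
#   "hello".endswith(None)
#   TypeError: endswith first arg must be str or a tuple of str, not NoneType

assert normalize_stop(None) == []
assert normalize_stop("###") == ["###"]
assert normalize_stop(["\n", "###"]) == ["\n", "###"]
```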

LostRuins commented

Where is the prompt? I don't see you sending any prompt.
