Adding decode(errors='ignore') to the output made no difference.
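A likely cause of the replacement character is a multi-byte UTF-8 character whose bytes are split across token boundaries during streaming: decoding each chunk independently produces U+FFFD (and `errors='ignore'` simply drops the partial bytes instead of repairing them). The sketch below, a hypothetical minimal reproduction not taken from the issue, simulates such a split and shows how Python's incremental decoder buffers the partial sequence correctly.

```python
import codecs

# "笔记本通常支持" ("laptops usually support") in UTF-8; split inside the
# first character's three-byte sequence to simulate a token boundary.
data = "笔记本通常支持".encode("utf-8")
chunk1, chunk2 = data[:2], data[2:]

# Naive per-chunk decoding: the truncated sequence in chunk1 and the
# stray continuation byte starting chunk2 each become U+FFFD (�).
naive = (chunk1.decode("utf-8", errors="replace")
         + chunk2.decode("utf-8", errors="replace"))

# An incremental decoder carries the partial bytes over to the next
# call, so the character is reassembled intact.
dec = codecs.getincrementaldecoder("utf-8")()
buffered = dec.decode(chunk1) + dec.decode(chunk2) + dec.decode(b"", final=True)
```

Here `naive` contains replacement characters while `buffered` reproduces the original string, which matches the symptom of only the first character of 笔记本 being garbled.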
Please provide steps to reproduce. Do you have the generated token ids?
Same problem; it went away after updating ollama.
Model Series
Qwen2.5
What are the models used?
Qwen2.5-3B-Instruct
What is the scenario where the problem happened?
Using TensorRT for model acceleration
Is this a known issue?
Information about environment
None
Log output
Description
For example, when answering the question "What is the support status for 4K monitors, and what resolutions and refresh rates are available?", the model replies: "�记本通常支持通过HDMI、DP等接口连接4K显示器并输出相应的分辨率和刷新率。" (Translation: "[La]ptops usually support connecting a 4K monitor through interfaces such as HDMI and DP and outputting the corresponding resolution and refresh rate." The first character of 笔记本, "laptop", is garbled into a replacement character.)