fix some doc issues (#1067)
* llama.cpp doc fixes

* fix some zh translation issues

* Update llama.cpp.po

---------

Co-authored-by: Ren Xuancheng <[email protected]>
imba-tjd and jklj077 authored Nov 11, 2024
1 parent a912d23 commit f45f6b4
Showing 4 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion docs/locales/zh_CN/LC_MESSAGES/framework/Langchain.po
@@ -32,7 +32,7 @@ msgstr "基础用法"

#: ../../source/framework/Langchain.rst:11 b93bd8165fbe4340970f3942884a91dd
msgid "The implementation process of this project includes loading files -> reading text -> segmenting text -> vectorizing text -> vectorizing questions -> matching the top k most similar text vectors with the question vectors -> incorporating the matched text as context along with the question into the prompt -> submitting to the Qwen2.5-7B-Instruct to generate an answer. Below is an example:"
msgstr "您可以仅使用您的文档配合``langchain``来构建一个问答应用。该项目的实现流程包括加载文件 -> 阅读文本 -> 文本分段 -> 文本向量化 -> 问题向量化 -> 将最相似的前k个文本向量与问题向量匹配 -> 将匹配的文本作为上下文连同问题一起纳入提示 -> 提交给Qwen2.5-7B-Instruct生成答案。以下是一个示例:"
msgstr "您可以仅使用您的文档配合 ``langchain`` 来构建一个问答应用。该项目的实现流程包括加载文件 -> 阅读文本 -> 文本分段 -> 文本向量化 -> 问题向量化 -> 将最相似的前k个文本向量与问题向量匹配 -> 将匹配的文本作为上下文连同问题一起纳入提示 -> 提交给Qwen2.5-7B-Instruct生成答案。以下是一个示例:"

#: ../../source/framework/Langchain.rst:95 db8fe123a81d481c91f22710ead3993a
msgid "After loading the Qwen2.5-7B-Instruct model, you should specify the txt file for retrieval."
4 changes: 2 additions & 2 deletions docs/locales/zh_CN/LC_MESSAGES/run_locally/llama.cpp.po
@@ -495,8 +495,8 @@ msgid "Enter interactive mode. You can interrupt model generation and append new
msgstr "进入互动模式。你可以中断模型生成并添加新文本。"

#: ../../source/run_locally/llama.cpp.md:309 fa961800b1584d93b9315ae358c0d70d
msgid "-i or --interactive-first"
msgstr "-i 或 --interactive-first"
msgid "-if or --interactive-first"
msgstr "-if 或 --interactive-first"

#: ../../source/run_locally/llama.cpp.md:309 ec896aaf5dfc44f99f2033044df8f4a0
msgid "Immediately wait for user input. Otherwise, the model will run at once and generate based on the prompt."
2 changes: 1 addition & 1 deletion docs/locales/zh_CN/LC_MESSAGES/run_locally/ollama.po
@@ -74,7 +74,7 @@ msgstr "用Ollama运行你自己的GGUF文件"

#: ../../source/run_locally/ollama.md:34 a45b6bcaab944f00ae23384aaf4bebfe
msgid "Sometimes you don't want to pull models and you just want to use Ollama with your own GGUF files. Suppose you have a GGUF file of Qwen2.5, `qwen2.5-7b-instruct-q5_0.gguf`. For the first step, you need to create a file called `Modelfile`. The content of the file is shown below:"
msgstr "有时您可能不想拉取模型,而是希望直接使用自己的GGUF文件来配合Ollama。假设您有一个名为`qwen2.5-7b-instruct-q5_0.gguf`的Qwen2.5的GGUF文件。在第一步中,您需要创建一个名为`Modelfile``的文件。该文件的内容如下所示:"
msgstr "有时您可能不想拉取模型,而是希望直接使用自己的GGUF文件来配合Ollama。假设您有一个名为`qwen2.5-7b-instruct-q5_0.gguf`的Qwen2.5的GGUF文件。在第一步中,您需要创建一个名为`Modelfile`的文件。该文件的内容如下所示:"

#: ../../source/run_locally/ollama.md:97 0300ccc8902641e689c5214717fb588d
msgid "Then create the ollama model by running:"
6 changes: 3 additions & 3 deletions docs/source/run_locally/llama.cpp.md
Expand Up @@ -175,12 +175,12 @@ We provide a series of GGUF models in our Hugging Face organization, and to sear

Download the GGUF model that you want with `huggingface-cli` (you need to install it first with `pip install huggingface_hub`):
```bash
-huggingface-cli download <model_repo> <gguf_file> --local-dir <local_dir> --local-dir-use-symlinks False
+huggingface-cli download <model_repo> <gguf_file> --local-dir <local_dir>
```

For example:
```bash
-huggingface-cli download Qwen/Qwen2.5-7B-Instruct-GGUF qwen2.5-7b-instruct-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download Qwen/Qwen2.5-7B-Instruct-GGUF qwen2.5-7b-instruct-q5_k_m.gguf --local-dir .
```

This will download the Qwen2.5-7B-Instruct model in GGUF format quantized with the scheme Q5_K_M.
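Once the download finishes, the file can be sanity-checked with a one-off generation (a minimal sketch, assuming a `llama-cli` binary built from llama.cpp is available in the current directory):

```bash
# Generate up to 64 tokens from a short test prompt with the downloaded GGUF file
./llama-cli -m ./qwen2.5-7b-instruct-q5_k_m.gguf -p "Hello, my name is" -n 64
```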
@@ -306,7 +306,7 @@ We use some new options here:

:`-sp` or `--special`: Show the special tokens.
:`-i` or `--interactive`: Enter interactive mode. You can interrupt model generation and append new texts.
-:`-i` or `--interactive-first`: Immediately wait for user input. Otherwise, the model will run at once and generate based on the prompt.
+:`-if` or `--interactive-first`: Immediately wait for user input. Otherwise, the model will run at once and generate based on the prompt.
:`-p` or `--prompt`: In interactive mode, it is the contexts based on which the model predicts the continuation.
:`--in-prefix`: String to prefix user inputs with.
:`--in-suffix`: String to suffix after user inputs with.
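Combined, these options might look as follows (a sketch rather than the doc's exact command; the ChatML-style prefix/suffix strings assume a Qwen instruct model, and `-e` tells llama-cli to interpret the `\n` escapes):

```bash
# Start an interactive session that waits for user input before generating
./llama-cli -m ./qwen2.5-7b-instruct-q5_k_m.gguf \
    -sp -if -e \
    -p "You are a helpful assistant." \
    --in-prefix "<|im_start|>user\n" \
    --in-suffix "<|im_end|>\n<|im_start|>assistant\n"
```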
