Since the release of version 0.13, the core functionality of code completion has become largely stable. This marks an opportune moment to consolidate the current implementation and begin exploring further enhancements.
Prompt construction
Tabby generates code completion requests using the following information:
prefix: the code that precedes the cursor.
suffix: the code that follows the cursor.
lsp definitions: definitions of the K identifiers preceding the cursor, retrieved through the IDE's internal LSP support.
recently changed content: lines that have been recently modified (potentially from different files) within the current workspace.
repository-level relevant code snippets: candidate snippets are identified using the prefix and scored with a combination of BM25 and embedding-based search (see the sketch after this list).
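The issue doesn't spell out how the BM25 and embedding scores are combined. As a purely illustrative sketch, one plausible blend is a weighted sum of a max-normalized BM25 score and an embedding cosine similarity; every name below is hypothetical, not Tabby's actual code:

```rust
/// Hypothetical snippet type; Tabby's real scoring pipeline is not shown
/// in this issue, so this only illustrates blending the two signals.
struct Snippet {
    text: String,
    bm25_score: f32,      // lexical relevance from a BM25 index
    embedding_score: f32, // cosine similarity to the prefix embedding, assumed in [0, 1]
}

/// Rank snippets by a weighted sum of max-normalized BM25 and embedding scores.
fn rank_snippets(mut snippets: Vec<Snippet>, bm25_weight: f32) -> Vec<Snippet> {
    let max_bm25 = snippets
        .iter()
        .map(|s| s.bm25_score)
        .fold(f32::EPSILON, f32::max); // avoid division by zero
    let blended = |s: &Snippet| {
        bm25_weight * (s.bm25_score / max_bm25) + (1.0 - bm25_weight) * s.embedding_score
    };
    snippets.sort_by(|a, b| blended(b).partial_cmp(&blended(a)).unwrap());
    snippets
}

fn main() {
    let ranked = rank_snippets(
        vec![
            Snippet { text: "fn parse()".into(), bm25_score: 7.2, embedding_score: 0.35 },
            Snippet { text: "fn tokenize()".into(), bm25_score: 3.1, embedding_score: 0.90 },
        ],
        0.5, // equal weight to lexical and semantic relevance
    );
    println!("best match: {}", ranked[0].text);
}
```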
Note that the prefix and suffix are guaranteed to be included in the LLM inference request whenever they exist. The lsp definitions, recently changed content, and repository-level relevant code snippets, however, must fit within the context window, currently set to 1536 tokens, and are filled in the priority order listed above.
To ensure low latency, we currently use a relatively conservative maximum of 64 decoding tokens.
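To make the assembly concrete, here is a hedged sketch of priority-ordered filling under the numbers above; the function names and the whitespace-based token count are illustrative stand-ins, not Tabby's actual code:

```rust
// Hedged sketch of priority-ordered prompt filling; real tokenization
// would use the model's tokenizer rather than whitespace splitting.

const CONTEXT_WINDOW_TOKENS: usize = 1536; // budget for the whole prompt
const MAX_DECODING_TOKENS: usize = 64;     // conservative cap on generated tokens

/// Crude stand-in for real tokenization.
fn approx_tokens(text: &str) -> usize {
    text.split_whitespace().count()
}

/// Prefix and suffix are always included; extra context segments
/// (lsp definitions, recently changed content, snippets, in that
/// order) are added only while they fit in the remaining budget.
fn select_extra_context<'a>(prefix: &str, suffix: &str, extras: &[&'a str]) -> Vec<&'a str> {
    let mut budget = CONTEXT_WINDOW_TOKENS
        .saturating_sub(approx_tokens(prefix) + approx_tokens(suffix));
    let mut selected = Vec::new();
    for &extra in extras {
        let cost = approx_tokens(extra);
        if cost <= budget {
            budget -= cost;
            selected.push(extra);
        }
    }
    selected
}

fn main() {
    let extras = ["fn helper() { ... }", "// recently edited line", "fn similar() { ... }"];
    let chosen = select_extra_context("let x =", "; return x;", &extras);
    println!("{} extra segments fit; decoding capped at {MAX_DECODING_TOKENS} tokens", chosen.len());
}
```

This sketch fills greedily, so a large low-priority segment can be skipped while smaller ones later still fit; whether Tabby skips or truncates oversized segments isn't stated here.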
Agent:
Can the number of context lines support a dynamic mechanism that expands to cover a complete definition or function? Would this enhance the model's understanding?
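One way this suggestion could work, sketched under the assumption that the agent can query symbol ranges (e.g. from an LSP documentSymbol response): pick the smallest definition enclosing the cursor and take its full line range instead of a fixed window. All names below are hypothetical:

```rust
/// Rough sketch of the dynamic-context idea: rather than a fixed line
/// count, expand to the smallest definition (function, class, ...)
/// that encloses the cursor, using symbol ranges such as those an
/// LSP documentSymbol request might return.
#[derive(Clone, Copy, Debug)]
struct LineRange {
    start: usize, // first line of the definition, inclusive
    end: usize,   // one past the last line, exclusive
}

fn enclosing_definition(cursor_line: usize, symbols: &[LineRange]) -> Option<LineRange> {
    symbols
        .iter()
        .filter(|r| r.start <= cursor_line && cursor_line < r.end)
        .min_by_key(|r| r.end - r.start) // innermost (smallest) match
        .copied()
}

fn main() {
    // e.g. a module spanning lines 0..100 containing a function at 40..55
    let symbols = [LineRange { start: 0, end: 100 }, LineRange { start: 40, end: 55 }];
    println!("{:?}", enclosing_definition(42, &symbols)); // the 40..55 function
}
```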
Completion preview:
Configure a minimum length for completions before they are shown.
Sometimes when I press Enter and then want to press Tab, Tabby suggests `})` at the same moment, which I then have to remove. It would be nice not to suggest completions shorter than 3 characters, or to make this threshold configurable.
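A minimal sketch of the requested filter, assuming a client-side check with a configurable threshold (the names are made up for illustration):

```rust
/// Drop completions whose visible (non-whitespace) length falls
/// below a user-configurable minimum.
fn passes_min_length(completion: &str, min_chars: usize) -> bool {
    completion.chars().filter(|c| !c.is_whitespace()).count() >= min_chars
}

fn main() {
    // With min_chars = 3, a bare "})" suggestion would be suppressed.
    assert!(!passes_min_length("})", 3));
    assert!(passes_min_length("return x + y;", 3));
    println!("filter behaves as expected");
}
```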
Under construction
Ideas
Server
Agent
IDE / Extensions