Repository Context

The repository context connects Tabby to a source code repository (plain Git, GitHub, GitLab, etc.). Tabby fetches the source code from the repository, parses it into an AST, and stores it in the index. During LLM inference, this context is used for code completion as well as for chat and search.

Adding a Repository through the Admin UI

  1. Navigate to the Integrations > Repository Providers page.
  2. Click Create to begin the process of adding a repository provider.
  • For Git, you only need to fill in the name and the URL of the repository.


    Local repositories are supported via the file:// protocol. If Tabby runs in a Docker container, you must make the directory accessible with the --volume flag and use the path as seen inside the container (see the sketch after this list).

  • For GitHub / GitLab, a personal access token is required to access private repositories.

    • Check the instructions in the corresponding tab to create a token.
    • Once the token is set, you can add the repository by selecting it from the dropdown list.
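
For example, suppose Tabby runs in Docker and the repository lives at /home/users/repository_a on the host. A minimal sketch of the mount, with model and device flags omitted for brevity (adjust the image and flags to your own setup):

docker run -it -p 8080:8080 \
  --volume $HOME/.tabby:/data \
  --volume /home/users/repository_a:/data/repository_a \
  tabbyml/tabby serve

Inside the container the repository sits at /data/repository_a, so the provider URL must use that internal path: file:///data/repository_a.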

Adding a Repository through the Configuration File

~/.tabby/config.toml is Tabby's configuration file. You can add repositories there as well, and it is the only way to add a repository in Tabby OSS.

~/.tabby/config.toml
[[repositories]]
name = "tabby"
git_url = "https://github.com/TabbyML/tabby.git"

# Git over the SSH protocol.
[[repositories]]
name = "CTranslate2"
git_url = "git@github.com:OpenNMT/CTranslate2.git"

# local directory is also supported!
[[repositories]]
name = "repository_a"
git_url = "file:///home/users/repository_a"

Verifying the Repository Provider

Once connected, an indexing job starts automatically. You can check its status on the Information > Jobs page.

You can also visit the Code Browser page to view the connected repository.


Internal: Vector Index

When a document is added, it is converted into vectors that help quickly find relevant context. During searches and chats, queries and messages are converted into vectors in the same way to locate the most similar documents.
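
The mechanics can be illustrated with a toy sketch. This is illustrative only: embed() is a hypothetical stand-in for a real embedding model such as Nomic-Embed-Text, and Tabby's actual index is implemented differently.

# Toy sketch of vector retrieval; embed() stands in for a real embedding model.
import hashlib
import math

def embed(text, dim=64):
    # Hypothetical embedding: hash character trigrams into an L2-normalized vector.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

# Indexing: store each document next to its vector.
docs = [
    "fn fetch_repository(git_url: &str)",
    "class EmbeddingClient",
    "def cosine_similarity(a, b)",
]
index = [(doc, embed(doc)) for doc in docs]

# Querying: embed the query the same way, then rank by cosine similarity
# (a plain dot product, since the vectors are normalized).
query = embed("compute cosine similarity between vectors")
best = max(index, key=lambda item: sum(q * d for q, d in zip(query, item[1])))
print(best[0])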

Using the default embedding model

The default embedding model is "Nomic-Embed-Text", a high-performing open model with a large token context window.

Currently, "Nomic-Embed-Text" is the only supported local embedding model.
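
If you want to pin it explicitly, a hedged sketch of the corresponding section in ~/.tabby/config.toml (this mirrors the [model.embedding.http] layout shown below and assumes the model is referenced by its model_id):

[model.embedding.local]
model_id = "Nomic-Embed-Text"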

Using a remote embedding model provider

You can also use a remote embedding model provider by adding a new section to the ~/.tabby/config.toml file.

[model.embedding.http]
kind = "openai/embedding"
api_endpoint = "https://api.openai.com"
api_key = "sk-..."
model_name = "text-embedding-3-small"

The following embedding model providers are supported:

  • openai/embedding
  • voyageai/embedding
  • llama.cpp/embedding
  • ollama/embedding
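
For instance, a hedged sketch for pointing Tabby at a local Ollama server (assuming Ollama's default port 11434 and that the nomic-embed-text model has already been pulled):

[model.embedding.http]
kind = "ollama/embedding"
model_name = "nomic-embed-text"
api_endpoint = "http://localhost:11434"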