
⁉️ Frequently Asked Questions

How much VRAM does an LLM consume?

By default, Tabby operates in int8 mode with CUDA, requiring approximately 8GB of VRAM for CodeLlama-7B.

For ROCm the actual limits are currently largely untested, but the same CodeLlama-7B appears to use 8GB of VRAM on an AMD Radeon™ RX 7900 XTX as well, according to the ROCm monitoring tools.
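
If you want to verify this on your own hardware, you can watch VRAM usage while Tabby is serving. The commands below are illustrative and assume the standard vendor tools (nvidia-smi for NVIDIA, rocm-smi for ROCm) are installed:

```bash
# NVIDIA: show total and currently used VRAM per GPU
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv

# AMD / ROCm: show VRAM usage
rocm-smi --showmeminfo vram
```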

What GPUs are required for reduced-precision inference (e.g., int8)?
  • int8: Compute Capability >= 7.0 or Compute Capability 6.1
  • float16: Compute Capability >= 7.0
  • bfloat16: Compute Capability >= 8.0

To determine the mapping between your GPU model and its compute capability, please visit this page.
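
On NVIDIA hardware with a reasonably recent driver, you can also query the compute capability directly; this is an optional shortcut, not part of Tabby itself:

```bash
# Prints each visible GPU's name and compute capability, e.g. "NVIDIA GeForce RTX 3090, 8.6"
nvidia-smi --query-gpu=name,compute_cap --format=csv
```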

How can I utilize multiple NVIDIA GPUs?

Tabby only supports using a single GPU per instance. To utilize multiple GPUs, you can start multiple Tabby instances and set CUDA_VISIBLE_DEVICES (for CUDA) or HIP_VISIBLE_DEVICES (for ROCm) accordingly.
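
A minimal sketch of this setup, assuming two NVIDIA GPUs; the model name and ports are placeholders, so adjust them to your deployment:

```bash
# One Tabby instance per GPU, each bound to its own port
CUDA_VISIBLE_DEVICES=0 tabby serve --device cuda --model TabbyML/CodeLlama-7B --port 8080 &
CUDA_VISIBLE_DEVICES=1 tabby serve --device cuda --model TabbyML/CodeLlama-7B --port 8081 &
```

For ROCm, replace CUDA_VISIBLE_DEVICES with HIP_VISIBLE_DEVICES and --device cuda with --device rocm.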

My AMD device isn't supported by ROCm

If a similar GPU is supported by ROCm, you can use the HSA_OVERRIDE_GFX_VERSION environment variable to make ROCm treat your device as that GPU.

For example, you can set it to 10.3.0 for RDNA2 cards and to 11.0.0 for RDNA3 cards.
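
A sketch of what this looks like on the command line, assuming an RDNA2 card and CodeLlama-7B as an example model:

```bash
# Make ROCm treat the card as a supported RDNA2 (gfx1030) device
HSA_OVERRIDE_GFX_VERSION=10.3.0 tabby serve --device rocm --model TabbyML/CodeLlama-7B
```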

How can I convert my own model for use with Tabby?

Since version 0.5.0, Tabby's inference runs entirely on llama.cpp, so any model in GGUF format can be used with Tabby. To make this more accessible, we have curated and benchmarked a set of models, available at registry-tabby.

Users are free to fork the repository to create their own registry. If a user's registry is located at https://github.com/USERNAME/registry-tabby, the model ID will be USERNAME/model.

For details on the registry format, please refer to models.json.
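
For instance, assuming your fork at https://github.com/USERNAME/registry-tabby defines a model entry named MyModel (a placeholder name), it could be referenced like this:

```bash
tabby serve --device cuda --model USERNAME/MyModel
```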

Can I use a local model with Tabby?

Tabby also supports loading a model from a local directory that follows the specification outlined in MODEL_SPEC.md.
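
As a sketch, assuming /data/models/my-model is a directory laid out according to MODEL_SPEC.md, you can point --model at it instead of a registry ID:

```bash
tabby serve --device cuda --model /data/models/my-model
```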