Docker

This guide explains how to launch Tabby using Docker.

To run Tabby with CUDA support in Docker, please install the NVIDIA Container Toolkit. Once installed, you can launch Tabby using the command below:

run.sh
docker run -d \
--name tabby \
--gpus all \
-p 8080:8080 \
-v $HOME/.tabby:/data \
registry.tabbyml.com/tabbyml/tabby \
serve \
--model StarCoder-1B \
--chat-model Qwen2-1.5B-Instruct \
--device cuda

For systems with SELinux enabled, you may need to add the `:Z` option to the volume mount:

run.sh
docker run -d \
--name tabby \
--gpus all \
-p 8080:8080 \
-v $HOME/.tabby:/data:Z \
registry.tabbyml.com/tabbyml/tabby \
serve \
--model StarCoder-1B \
--chat-model Qwen2-1.5B-Instruct \
--device cuda

After Tabby is running, you can access it at http://localhost:8080.
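
If you want to verify the server from the command line before connecting an editor, you can query the HTTP API directly. The commands below are a minimal sketch that assumes the `/v1/health` and `/v1/completions` endpoints and the request shape used by recent Tabby releases; check the API docs for your version if they differ.

# Check that the server is up and inspect the loaded models/device
curl http://localhost:8080/v1/health

# Request a sample code completion (assumed request shape; adjust for your version)
curl -X POST http://localhost:8080/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{"language": "python", "segments": {"prefix": "def fib(n):\n    ", "suffix": "\n"}}'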

To view the logs, you can use the following command:

docker logs -f tabby
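
Because `$HOME/.tabby` is mounted into the container, downloaded models and settings persist on the host. To upgrade to a newer Tabby image, you can stop and remove the container, pull the latest image, and repeat the same `docker run` command shown above; this is the standard Docker workflow rather than anything Tabby-specific.

docker stop tabby && docker rm tabby
docker pull registry.tabbyml.com/tabbyml/tabby
# then re-run the `docker run ...` command from the top of this guide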