---
layout: default
title: "Chapter 1: Getting Started and First Server"
nav_order: 1
parent: Tabby Tutorial
---

# Chapter 1: Getting Started and First Server

Welcome to Chapter 1: Getting Started and First Server. In this part of Tabby Tutorial: Self-Hosted AI Coding Assistant Architecture and Operations, you will build an intuitive mental model first, then move into concrete implementation details and practical production tradeoffs.

This chapter gets Tabby running with a clean local baseline so every later chapter can focus on architecture and operations instead of setup drift.

## Learning Goals

- choose an installation path that matches your environment
- run a first Tabby server and create an account
- connect an editor extension and verify completions
- capture baseline checks for repeatable setup

## Prerequisites

| Requirement | Why It Matters |
|---|---|
| Docker, or a host runtime for the Tabby binary | quickest path to a stable server |
| GPU optional; CPU is acceptable for initial validation | avoids blocking first-time setup |
| modern editor (VS Code, JetBrains, Vim/Neovim) | validates client integration early |
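Before pulling any images, it can help to confirm the prerequisites from the shell. The sketch below uses a hypothetical `probe` helper (not part of Tabby's tooling) built on the standard `command -v` check:

```shell
# probe: prints "present" or "missing" for a given command name.
# Hypothetical pre-flight helper for this chapter; not part of Tabby.
probe() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "present"
  else
    echo "missing"
  fi
}

echo "docker:     $(probe docker)"
echo "nvidia-smi: $(probe nvidia-smi)"   # "missing" is fine for a CPU baseline
```

A missing `nvidia-smi` only rules out the GPU path; the CPU baseline below still works.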

## Fastest Bootstrap: Docker

```bash
docker run -d \
  --name tabby \
  --gpus all \
  -p 8080:8080 \
  -v $HOME/.tabby:/data \
  registry.tabbyml.com/tabbyml/tabby \
    serve \
    --model StarCoder-1B \
    --chat-model Qwen2-1.5B-Instruct \
    --device cuda
```

On a CPU-only host, drop `--gpus all` and pass `--device cpu` instead of `--device cuda`.

Then open http://localhost:8080 and complete account registration.
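Account registration has to happen in the browser, but basic reachability can be scripted. The sketch below assumes Tabby's `/v1/health` endpoint and wraps it in a hypothetical `tabby_health` helper; adjust the path if your Tabby version differs:

```shell
# tabby_health: prints "healthy" if the server answers, "unreachable" otherwise.
# The /v1/health path is an assumption; check your server's API docs.
tabby_health() {
  if curl -sf --max-time 5 "$1/v1/health" >/dev/null 2>&1; then
    echo "healthy"
  else
    echo "unreachable"
  fi
}

tabby_health "http://localhost:8080"
```

A `healthy` result means the container is up and the HTTP listener works; registration and token steps still need the web UI.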

## Setup Validation Checklist

1. server process is running and reachable on port 8080
2. account registration completes in the web UI
3. personal token is generated on the homepage
4. editor extension connects using endpoint + token
5. inline completions appear in a real repository
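The scriptable parts of the checklist can be captured to a log so later chapters can compare against a known-good baseline. This is a sketch: the `baseline_record` name, the log path, and the `/v1/health` endpoint are all illustrative, not Tabby conventions.

```shell
# baseline_record: appends one timestamped reachability line per run,
# giving a simple record to diff when the setup drifts later.
baseline_record() {
  endpoint="${1:-http://localhost:8080}"
  status="unreachable"
  curl -sf --max-time 5 "$endpoint/v1/health" >/dev/null 2>&1 && status="healthy"
  printf '%s %s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$endpoint" "$status"
}

baseline_record >> tabby-baseline.log
```

Checklist items 2-5 (registration, token, editor completions) remain manual; only reachability is automated here.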

## Early Failure Triage

| Symptom | Likely Cause | First Fix |
|---|---|---|
| container exits quickly | model/device mismatch | switch to a smaller model and recheck runtime flags |
| extension cannot authenticate | missing/invalid token | regenerate token and update extension settings |
| slow or empty completions | model backend not healthy | verify server logs and reduce model size for baseline |
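Each row in the table starts from the same two pieces of evidence: container status and recent logs. A first-pass helper using standard Docker CLI commands (the container name `tabby` matches the `docker run` example above; `tabby_triage` is a hypothetical name):

```shell
# tabby_triage: shows container state and the last 50 log lines,
# the starting evidence for every row in the triage table.
tabby_triage() {
  echo "== container status =="
  docker ps --all --filter name=tabby
  echo "== recent logs =="
  docker logs --tail 50 tabby
}

if command -v docker >/dev/null 2>&1; then
  tabby_triage
else
  echo "docker not found"
fi
```

An exited container with a CUDA error in the logs points at the model/device row; a running container with auth errors points at the token row.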

## Summary

You now have a working Tabby deployment with at least one connected editor client.

Next: Chapter 2: Architecture and Runtime Components

## Source Code Walkthrough

### `rules/use-schema-result.yml`

The lint rule in `rules/use-schema-result.yml` enforces an error-handling convention in the codebase this chapter deploys:

```yaml
id: use-schema-result
message: Use schema::Result as API interface
severity: error
language: rust
files:
- ./ee/tabby-schema/src/**
ignores:
- ./ee/tabby-schema/src/lib.rs
- ./ee/tabby-schema/src/dao.rs
rule:
  any:
    - pattern: anyhow
      not:
        inside:
          kind: enum_variant
          stopBy: end
    - pattern: FieldResult
```

This rule matters because it flags uses of `anyhow` (outside enum variants) and `FieldResult` under `ee/tabby-schema/src`, pushing API code to return `schema::Result` instead and keeping error handling consistent across Tabby's API surface.
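The rule's YAML schema matches the ast-grep linter, so it can be checked from a repository checkout. This is a sketch that assumes the `ast-grep` CLI is installed and run from the repo root; the `run_rule_check` wrapper is a hypothetical name:

```shell
# run_rule_check: runs the lint rule against the schema crate if the
# ast-grep CLI is available. Assumes a Tabby repo checkout as cwd.
run_rule_check() {
  if command -v ast-grep >/dev/null 2>&1; then
    ast-grep scan --rule rules/use-schema-result.yml ./ee/tabby-schema/src
  else
    echo "ast-grep not installed"
  fi
}

run_rule_check || true
```

With no matches, the scan exits quietly; any `anyhow` or `FieldResult` hit in the covered paths is reported at `severity: error`.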

## How These Components Connect

```mermaid
flowchart TD
    A[severity]
```