---
hide:
  - navigation
  - footer
---
<style> .md-main .md-main__inner.md-grid { flex-direction: row-reverse; } </style>

Fine-tuning

<a href="/examples/fine-tuning/axolotl"
   class="feature-cell">
    <h3>
        Axolotl
    </h3>

    <p>
        Fine-tune Llama 4 on a custom dataset using Axolotl.
    </p>
</a>

<a href="/examples/fine-tuning/trl"
   class="feature-cell">
    <h3>
        TRL
    </h3>

    <p>
        Fine-tune Llama 3.1 8B on a custom dataset using TRL.
    </p>
</a>
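Both fine-tuning examples run as `dstack` tasks. As a rough illustration only, here is a minimal sketch of a task configuration that launches a TRL training script; the script name, Python version, and GPU size are assumptions for illustration, not taken from the linked example:

```yaml
# Hypothetical dstack task for TRL fine-tuning.
# `train.py`, the Python version, and the GPU size are placeholders.
type: task
name: trl-fine-tune

python: "3.12"

commands:
  - pip install trl
  - python train.py

resources:
  # Request a GPU with at least 80 GB of memory
  gpu: 80GB
```

The linked examples cover the full configurations, including datasets and training arguments.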

Clusters

<a href="/examples/clusters/nccl-tests"
   class="feature-cell sky">
    <h3>
        NCCL tests
    </h3>

    <p>
        Run multi-node NCCL tests with MPI
    </p>
</a>
<a href="/examples/clusters/rccl-tests"
   class="feature-cell sky">
    <h3>
        RCCL tests
    </h3>

    <p>
        Run multi-node RCCL tests with MPI
    </p>
</a>
<a href="/examples/clusters/a3mega"
   class="feature-cell sky">
    <h3>
        A3 Mega
    </h3>

    <p>
        Set up GCP A3 Mega clusters with optimized networking
    </p>
</a>
<a href="/examples/clusters/a3high"
   class="feature-cell sky">
    <h3>
        A3 High
    </h3>

    <p>
        Set up GCP A3 High clusters with optimized networking
    </p>
</a>
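The cluster examples above are multi-node runs. A minimal sketch of what a distributed `dstack` task might look like; the node count, `mpirun` invocation, and resource spec are illustrative assumptions, not the linked example's actual contents:

```yaml
# Hypothetical multi-node dstack task; the benchmark command
# and resource spec are placeholders.
type: task
name: nccl-tests

# Provision two interconnected nodes for the run
nodes: 2

commands:
  # Launch the all_reduce benchmark across all GPUs via MPI (placeholder flags)
  - mpirun --allow-run-as-root -np 16 ./build/all_reduce_perf -b 8 -e 8G -f 2

resources:
  gpu: 80GB:8
```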

Inference

<a href="/examples/inference/sglang"
   class="feature-cell sky">
    <h3>
        SGLang
    </h3>

    <p>
        Deploy DeepSeek distilled models with SGLang
    </p>
</a>
<a href="/examples/inference/vllm"
   class="feature-cell sky">
    <h3>
        vLLM
    </h3>

    <p>
        Deploy Llama 3.1 with vLLM
    </p>
</a>
<a href="/examples/inference/tgi"
   class="feature-cell sky">
    <h3>
        TGI
    </h3>

    <p>
        Deploy Llama 4 with TGI
    </p>
</a>
<a href="/examples/inference/nim"
   class="feature-cell sky">
    <h3>
        NIM
    </h3>

    <p>
        Deploy a DeepSeek distilled model with NIM
    </p>
</a>
<a href="/examples/inference/trtllm"
   class="feature-cell sky">
    <h3>
        TensorRT-LLM
    </h3>

    <p>
        Deploy DeepSeek R1 and its distilled version with TensorRT-LLM
    </p>
</a>
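Inference examples deploy as `dstack` services. As a sketch of the general shape, here is a hypothetical vLLM-based service; the model ID, port, and GPU size are illustrative assumptions, not values from the linked examples:

```yaml
# Hypothetical dstack service serving Llama 3.1 8B with vLLM.
# Model ID, port, and GPU size are placeholder assumptions.
type: service
name: llama31-vllm

python: "3.12"

commands:
  - pip install vllm
  - vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct --port 8000

# The port the service listens on
port: 8000

# Register the served model with the gateway's OpenAI-compatible endpoint
model: meta-llama/Meta-Llama-3.1-8B-Instruct

resources:
  gpu: 24GB
```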

Accelerators

<a href="/examples/accelerators/amd"
   class="feature-cell sky">
    <h3>
        AMD
    </h3>

    <p>
        Deploy and fine-tune LLMs on AMD
    </p>
</a>

<a href="/examples/accelerators/tpu"
   class="feature-cell sky">
    <h3>
        TPU
    </h3>

    <p>
        Deploy and fine-tune LLMs on TPU
    </p>
</a>

<a href="/examples/accelerators/intel"
   class="feature-cell sky">
    <h3>
        Intel Gaudi
    </h3>

    <p>
        Deploy and fine-tune LLMs on Intel Gaudi
    </p>
</a>

<a href="/examples/accelerators/tenstorrent"
   class="feature-cell sky">
    <h3>
        Tenstorrent
    </h3>

    <p>
        Deploy and fine-tune LLMs on Tenstorrent
    </p>
</a>

LLMs

<a href="/examples/llms/deepseek"
   class="feature-cell sky">
    <h3>
        DeepSeek
    </h3>

    <p>
        Deploy and train DeepSeek models
    </p>
</a>
<a href="/examples/llms/llama"
   class="feature-cell sky">
    <h3>
        Llama
    </h3>

    <p>
        Deploy Llama 4 models
    </p>
</a>

Misc

<a href="/examples/misc/docker-compose"
   class="feature-cell sky">
    <h3>
        Docker Compose
    </h3>

    <p>
        Use Docker and Docker Compose inside runs
    </p>
</a>