<a href="/examples/fine-tuning/axolotl"
class="feature-cell">
<h3>
Axolotl
</h3>
<p>
Fine-tune Llama 4 on a custom dataset using Axolotl.
</p>
</a>
<a href="/examples/fine-tuning/trl"
class="feature-cell">
<h3>
TRL
</h3>
<p>
Fine-tune Llama 3.1 8B on a custom dataset using TRL.
</p>
</a>
<a href="/examples/clusters/nccl-tests"
class="feature-cell sky">
<h3>
NCCL tests
</h3>
<p>
Run multi-node NCCL tests with MPI.
</p>
</a>
<a href="/examples/clusters/rccl-tests"
class="feature-cell sky">
<h3>
RCCL tests
</h3>
<p>
Run multi-node RCCL tests with MPI.
</p>
</a>
<a href="/examples/clusters/a3mega"
class="feature-cell sky">
<h3>
A3 Mega
</h3>
<p>
Set up GCP A3 Mega clusters with optimized networking.
</p>
</a>
<a href="/examples/clusters/a3high"
class="feature-cell sky">
<h3>
A3 High
</h3>
<p>
Set up GCP A3 High clusters with optimized networking.
</p>
</a>
<p>
Deploy DeepSeek distilled models with SGLang.
</p>
<p>
Deploy Llama 3.1 with vLLM.
</p>
<p>
Deploy Llama 4 with TGI.
</p>
<p>
Deploy a DeepSeek distilled model with NIM.
</p>
<p>
Deploy DeepSeek R1 and its distilled version with TensorRT-LLM.
</p>
<a href="/examples/accelerators/amd"
class="feature-cell sky">
<h3>
AMD
</h3>
<p>
Deploy and fine-tune LLMs on AMD.
</p>
</a>
<a href="/examples/accelerators/tpu"
class="feature-cell sky">
<h3>
TPU
</h3>
<p>
Deploy and fine-tune LLMs on TPU.
</p>
</a>
<a href="/examples/accelerators/intel"
class="feature-cell sky">
<h3>
Intel Gaudi
</h3>
<p>
Deploy and fine-tune LLMs on Intel Gaudi.
</p>
</a>
<a href="/examples/accelerators/tenstorrent"
class="feature-cell sky">
<h3>
Tenstorrent
</h3>
<p>
Deploy and fine-tune LLMs on Tenstorrent.
</p>
</a>