Commit 992b4e1

Merge pull request #3640 from AI-Hypercomputer:jacoguzo__jaxAI_link
PiperOrigin-RevId: 899826638
2 parents: a0f16f8 + 0d62d36

1 file changed: README.md (1 addition, 1 deletion)
@@ -80,7 +80,7 @@ See our guide on running MaxText in decoupled mode, without any GCP dependencies
 
 MaxText provides a library of models and demonstrates how to perform pre-training or post-training with high performance and scale.
 
-MaxText leverages [JAX AI libraries](https://docs.jaxstack.ai/en/latest/getting_started.html) and presents a cohesive and comprehensive demonstration of training at scale by using [Flax](https://flax.readthedocs.io/en/latest/) (neural networks), [Tunix](https://github.com/google/tunix) (post-training), [Orbax](https://orbax.readthedocs.io/en/latest/) (checkpointing), [Optax](https://optax.readthedocs.io/en/latest/) (optimization), and [Grain](https://google-grain.readthedocs.io/en/latest/) (dataloading).
+MaxText leverages [JAX AI libraries](https://docs.cloud.google.com/tpu/docs/jax-ai-stack) and presents a cohesive and comprehensive demonstration of training at scale by using [Flax](https://flax.readthedocs.io/en/latest/) (neural networks), [Tunix](https://github.com/google/tunix) (post-training), [Orbax](https://orbax.readthedocs.io/en/latest/) (checkpointing), [Optax](https://optax.readthedocs.io/en/latest/) (optimization), and [Grain](https://google-grain.readthedocs.io/en/latest/) (dataloading).
 
 In addition to pure text-based LLMs, we also support multi-modal training with Gemma 3, Gemma 4, and Llama 4 VLMs.
