
Commit 046e769

darthez and Maciej Rys authored
docs: added trailing slashes to urls to fix seo indexing (#159)
## Description

### Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Documentation update (improves or adds clarity to existing documentation)

### Tested on

- [ ] iOS
- [ ] Android

### Testing instructions

### Screenshots

### Related issues

### Checklist

- [x] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings

### Additional notes

Co-authored-by: Maciej Rys <maciej.rys@swmansion.com>
1 parent 289d7cb commit 046e769

2 files changed

Lines changed: 3 additions & 1 deletion


docs/docs/llms/useLLM.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ description: "Learn how to use Llama models in your React Native applications wi

  React Native ExecuTorch supports Llama 3.2 models, including quantized versions. Before getting started, you’ll need to obtain the .pte binary—a serialized model—and the tokenizer. There are various ways to accomplish this:

  - For your convienience, it's best if you use models exported by us, you can get them from our [HuggingFace repository](https://huggingface.co/software-mansion/react-native-executorch-llama-3.2). You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/constants/modelUrls.ts) shipped with our library.
- - If you want to export model by yourself, you can use a Docker image that we've prepared. To see how it works, check out [exporting Llama](./exporting-llama)
+ - If you want to export model by yourself, you can use a Docker image that we've prepared. To see how it works, check out [exporting Llama](/react-native-executorch/docs/llms/exporting-llama)
  - Follow the official [tutorial](https://github.com/pytorch/executorch/blob/fe20be98c/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md) made by ExecuTorch team to build the model and tokenizer yourself

  ## Initializing

docs/docusaurus.config.js

Lines changed: 2 additions & 0 deletions
@@ -12,6 +12,8 @@ const config = {

  baseUrl: '/react-native-executorch/',

+ trailingSlash: true,
+
  organizationName: 'software-mansion',
  projectName: 'react-native-executorch',
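For context, `trailingSlash` is a top-level Docusaurus option: with `trailingSlash: true`, every generated route ends in `/` and is emitted as `<route>/index.html`, so the URLs that static hosts like GitHub Pages actually serve match the URLs Docusaurus puts in its sitemap and internal links, which is the SEO indexing problem the commit title refers to. The sketch below shows roughly how the option sits in `docs/docusaurus.config.js`; only `baseUrl`, `trailingSlash`, `organizationName`, and `projectName` come from the diff above, while the `url` value, the type annotation, and the export style are assumptions based on a typical Docusaurus config, not taken from the repository.

```js
// Minimal sketch of docs/docusaurus.config.js around this change.
// Fields not visible in the diff (notably `url` and the export style)
// are assumptions for illustration only.

// @ts-check

/** @type {import('@docusaurus/types').Config} */
const config = {
  url: 'https://docs.swmansion.com', // assumed production origin
  baseUrl: '/react-native-executorch/',

  // Build every route as <route>/index.html and link to it with a
  // trailing slash, so crawled URLs and served URLs stay identical.
  trailingSlash: true,

  organizationName: 'software-mansion',
  projectName: 'react-native-executorch',
};

module.exports = config;
```

The companion edit in `useLLM.md` points the `exporting-llama` link at its absolute path under `baseUrl`, which appears to serve the same goal of keeping rendered links consistent with the canonical URLs.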

0 commit comments
