
Commit 98d39bb

Commit message: f
1 parent 77eb308 commit 98d39bb

11 files changed

Lines changed: 4 additions & 11 deletions

src/AI/AI-llm-architecture/0.-basic-llm-concepts.md

Lines changed: 0 additions & 1 deletion
@@ -297,4 +297,3 @@ During the backward pass:
 - **Efficiency:** Avoids redundant calculations by reusing intermediate results.
 - **Accuracy:** Provides exact derivatives up to machine precision.
 - **Ease of Use:** Eliminates manual computation of derivatives.
-

src/AI/AI-llm-architecture/1.-tokenizing.md

Lines changed: 0 additions & 1 deletion
@@ -96,4 +96,3 @@ print(token_ids[:50])
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/2.-data-sampling.md

Lines changed: 0 additions & 1 deletion
@@ -238,4 +238,3 @@ tensor([[ 367, 2885, 1464, 1807],
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/3.-token-embeddings.md

Lines changed: 0 additions & 1 deletion
@@ -216,4 +216,3 @@ print(input_embeddings.shape) # torch.Size([8, 4, 256])
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/4.-attention-mechanisms.md

Lines changed: 0 additions & 1 deletion
@@ -427,4 +427,3 @@ For another compact and efficient implementation you could use the [`torch.nn.Mu
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/5.-llm-architecture.md

Lines changed: 1 addition & 1 deletion
@@ -697,4 +697,4 @@ print("Output length:", len(out[0]))
 
 ## References
 
-- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
+- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

src/AI/AI-llm-architecture/6.-pre-training-and-loading-models.md

Lines changed: 0 additions & 1 deletion
@@ -968,4 +968,3 @@ There 2 quick scripts to load the GPT2 weights locally. For both you can clone t
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/7.0.-lora-improvements-in-fine-tuning.md

Lines changed: 1 addition & 1 deletion
@@ -60,4 +60,4 @@ def replace_linear_with_lora(model, rank, alpha):
 
 ## References
 
-- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
+- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

src/AI/AI-llm-architecture/7.1.-fine-tuning-for-classification.md

Lines changed: 1 addition & 1 deletion
@@ -113,4 +113,4 @@ You can find all the code to fine-tune GPT2 to be a spam classifier in [https://
 
 ## References
 
-- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
+- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

src/AI/AI-llm-architecture/7.2.-fine-tuning-to-follow-instructions.md

Lines changed: 1 addition & 1 deletion
@@ -103,4 +103,4 @@ You can find an example of the code to perform this fine tuning in [https://gith
 
 ## References
 
-- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
+- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
