
Commit 66789d5

Commit message: d
1 parent 1bc667b

11 files changed: 0 additions & 11 deletions

src/AI/AI-llm-architecture/0.-basic-llm-concepts.md (0 additions & 1 deletion)

@@ -298,4 +298,3 @@ During the backward pass:
 - **Accuracy:** Provides exact derivatives up to machine precision.
 - **Ease of Use:** Eliminates manual computation of derivatives.
 
-

src/AI/AI-llm-architecture/1.-tokenizing.md (0 additions & 1 deletion)

@@ -96,4 +96,3 @@ print(token_ids[:50])
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/2.-data-sampling.md (0 additions & 1 deletion)

@@ -238,4 +238,3 @@ tensor([[ 367, 2885, 1464, 1807],
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/3.-token-embeddings.md (0 additions & 1 deletion)

@@ -216,4 +216,3 @@ print(input_embeddings.shape) # torch.Size([8, 4, 256])
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/4.-attention-mechanisms.md (0 additions & 1 deletion)

@@ -429,4 +429,3 @@ For another compact and efficient implementation you could use the [`torch.nn.Mu
 
 
 
-

src/AI/AI-llm-architecture/5.-llm-architecture.md (0 additions & 1 deletion)

@@ -699,4 +699,3 @@ print("Output length:", len(out[0]))
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/6.-pre-training-and-loading-models.md (0 additions & 1 deletion)

@@ -970,4 +970,3 @@ There 2 quick scripts to load the GPT2 weights locally. For both you can clone t
 
 
 
-

src/AI/AI-llm-architecture/7.0.-lora-improvements-in-fine-tuning.md (0 additions & 1 deletion)

@@ -62,4 +62,3 @@ def replace_linear_with_lora(model, rank, alpha):
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/7.1.-fine-tuning-for-classification.md (0 additions & 1 deletion)

@@ -115,4 +115,3 @@ You can find all the code to fine-tune GPT2 to be a spam classifier in [https://
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-

src/AI/AI-llm-architecture/7.2.-fine-tuning-to-follow-instructions.md (0 additions & 1 deletion)

@@ -105,4 +105,3 @@ You can find an example of the code to perform this fine tuning in [https://gith
 
 - [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
 
-
