Commit 8a4cee7

apartsin and claude committed
Complete Phase 4c: add section 34.9, fix xrefs, regenerate TOC

- Add section-34.9: Efficient Multi-Tool Orchestration and Tool Economy
- Fix 12 broken cross-references across 5 new frontier sections
- Regenerate TOC with all 332 sections
- Mark all 10 Phase 4c frontier topics complete in tasks.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent f38188d commit 8a4cee7

7 files changed: 497 additions & 20 deletions

part-10-frontiers/module-34-emerging-architectures/section-34.7.html

Lines changed: 1 addition & 1 deletion
@@ -272,7 +272,7 @@ <h2>6. Connections to Other Chapters <span class="level-badge intermediate" titl
 
 <ul>
 <li><strong>AI safety (<a class="cross-ref" href="../../part-9-safety-strategy/module-32-safety-ethics-regulation/section-32.1.html">Section 32.1</a>).</strong> Interpretability is one of the primary tools for building safety cases for AI systems. If you can understand what a model computes and why, you can make stronger claims about its safety.</li>
-<li><strong>Fine-tuning (<a class="cross-ref" href="../../part-5-customizing-llms/module-19-fine-tuning-theory/index.html">Chapter 19</a>).</strong> Understanding what features change during fine-tuning can explain why fine-tuning sometimes causes capability regressions or unexpected behavior changes.</li>
+<li><strong>Fine-tuning (<a class="cross-ref" href="../../part-4-training-adapting/module-14-fine-tuning-fundamentals/index.html">Chapter 19</a>).</strong> Understanding what features change during fine-tuning can explain why fine-tuning sometimes causes capability regressions or unexpected behavior changes.</li>
 <li><strong>Evaluation (<a class="cross-ref" href="../../part-8-evaluation-production/module-29-evaluation-observability/section-29.1.html">Section 29.1</a>).</strong> Interpretability provides an internal complement to external evaluation. Behavioral tests tell you what the model does; interpretability tells you how and why.</li>
 <li><strong>Reasoning (<a href="section-34.5.html">Section 34.5</a>).</strong> Circuit analysis of chain-of-thought reasoning may reveal whether CoT chains are causally involved in the model's computation or are post-hoc rationalizations.</li>
 </ul>

part-10-frontiers/module-34-emerging-architectures/section-34.9.html

Lines changed: 474 additions & 0 deletions
Large diffs are not rendered by default.

part-10-frontiers/module-35-ai-society/section-35.7.html

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ <h1>Memory Architectures That Improve Execution</h1>
 
 <div class="prerequisites">
 <h3>Prerequisites</h3>
-<p>This section builds on the RAG foundations from <a class="cross-ref" href="../../part-5-retrieval-conversation/module-19-vector-databases/index.html">Chapter 19 (Vector Databases)</a> and <a class="cross-ref" href="../../part-5-retrieval-conversation/module-20-rag/index.html">Chapter 20 (RAG)</a>. It also extends the agent architecture patterns from <a class="cross-ref" href="../../part-6-agents-tool-use/module-25-agent-architectures/index.html">Chapter 25</a> and the conversation management concepts from <a class="cross-ref" href="../../part-5-retrieval-conversation/module-22-conversation-management/index.html">Chapter 22</a>.</p>
+<p>This section builds on the RAG foundations from <a class="cross-ref" href="../../part-5-retrieval-conversation/module-19-embeddings-vector-db/index.html">Chapter 19 (Vector Databases)</a> and <a class="cross-ref" href="../../part-5-retrieval-conversation/module-20-rag/index.html">Chapter 20 (RAG)</a>. It also extends the agent architecture patterns from <a class="cross-ref" href="../../part-6-agentic-ai/module-22-ai-agents/index.html">Chapter 25</a> and the conversation management concepts from <a class="cross-ref" href="../../part-5-retrieval-conversation/module-21-conversational-ai/index.html">Chapter 22</a>.</p>
 </div>
 
 <div class="callout big-picture">
@@ -261,7 +261,7 @@ <h2>5. Connections to RAG and Vector Database Architecture <span class="level-ba
 </p>
 
 <p>
-In practice, many production systems combine both: a RAG pipeline provides factual grounding from a knowledge base (as described in <a class="cross-ref" href="../../part-5-retrieval-conversation/module-20-rag/section-20.1.html">Chapter 20</a>), while a memory system provides experiential context from past interactions. The agent's prompt includes both retrieved documents and retrieved memories, giving it access to both "what is true" and "what has worked before." The vector database infrastructure (covered in <a class="cross-ref" href="../../part-5-retrieval-conversation/module-19-vector-databases/index.html">Chapter 19</a>) can serve both systems, though the indexing strategies and update patterns differ.
+In practice, many production systems combine both: a RAG pipeline provides factual grounding from a knowledge base (as described in <a class="cross-ref" href="../../part-5-retrieval-conversation/module-20-rag/section-20.1.html">Chapter 20</a>), while a memory system provides experiential context from past interactions. The agent's prompt includes both retrieved documents and retrieved memories, giving it access to both "what is true" and "what has worked before." The vector database infrastructure (covered in <a class="cross-ref" href="../../part-5-retrieval-conversation/module-19-embeddings-vector-db/index.html">Chapter 19</a>) can serve both systems, though the indexing strategies and update patterns differ.
 </p>
 
 <h2>Exercises <span class="level-badge intermediate" title="Intermediate">Intermediate</span></h2>
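The second hunk above describes a pattern worth making concrete: a single agent prompt carrying both retrieved documents ("what is true") and retrieved memories ("what has worked before"). A minimal sketch; `Snippet` and `build_prompt` are hypothetical names for illustration, not code from the book:

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    source: str  # "kb" for knowledge-base documents, "memory" for past interactions
    text: str

def build_prompt(question: str, docs: list[Snippet], memories: list[Snippet]) -> str:
    """Combine factual grounding (RAG documents) with experiential
    context (agent memories) in one prompt, as the section describes."""
    doc_block = "\n".join(f"- {d.text}" for d in docs) or "- (none)"
    mem_block = "\n".join(f"- {m.text}" for m in memories) or "- (none)"
    return (
        "Retrieved documents (knowledge base):\n" + doc_block + "\n\n"
        "Retrieved memories (past interactions):\n" + mem_block + "\n\n"
        "Question: " + question
    )
```

Both blocks could be filled from the same vector database, consistent with the section's note that one infrastructure can serve both systems with different indexing and update patterns.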

part-10-frontiers/module-35-ai-society/section-35.8.html

Lines changed: 3 additions & 3 deletions
@@ -29,7 +29,7 @@ <h1>Self-Improving and Adaptive Agents in Deployment Loops</h1>
 
 <div class="prerequisites">
 <h3>Prerequisites</h3>
-<p>This section builds on the memory architectures from <a class="cross-ref" href="section-35.7.html">Section 35.7</a>, the agent patterns from <a class="cross-ref" href="../../part-6-agents-tool-use/module-25-agent-architectures/index.html">Chapter 25</a>, and the alignment fundamentals from <a class="cross-ref" href="../../part-4-training-adapting/module-17-alignment-rlhf-dpo/index.html">Chapter 17</a>. Familiarity with prompt engineering (<a class="cross-ref" href="../../part-3-prompt-engineering/module-09-prompt-engineering-basics/index.html">Chapter 9</a>) is essential for understanding prompt optimization techniques.</p>
+<p>This section builds on the memory architectures from <a class="cross-ref" href="section-35.7.html">Section 35.7</a>, the agent patterns from <a class="cross-ref" href="../../part-6-agentic-ai/module-22-ai-agents/index.html">Chapter 25</a>, and the alignment fundamentals from <a class="cross-ref" href="../../part-4-training-adapting/module-17-alignment-rlhf-dpo/index.html">Chapter 17</a>. Familiarity with prompt engineering (<a class="cross-ref" href="../../part-3-working-with-llms/module-11-prompt-engineering/index.html">Chapter 9</a>) is essential for understanding prompt optimization techniques.</p>
 </div>
 
 <div class="callout big-picture">
@@ -142,7 +142,7 @@ <h3>1.2 Implicit Feedback from User Behavior</h3>
 <h2>2. Prompt Evolution and Self-Optimization <span class="level-badge advanced" title="Advanced">Advanced</span></h2>
 
 <p>
-Prompt engineering (covered in <a class="cross-ref" href="../../part-3-prompt-engineering/module-09-prompt-engineering-basics/index.html">Chapter 9</a>) is typically a manual, iterative process. Self-optimizing agents automate this process by using execution feedback to modify their own prompts. Two frameworks have emerged as leading approaches to automated prompt optimization.
+Prompt engineering (covered in <a class="cross-ref" href="../../part-3-working-with-llms/module-11-prompt-engineering/index.html">Chapter 9</a>) is typically a manual, iterative process. Self-optimizing agents automate this process by using execution feedback to modify their own prompts. Two frameworks have emerged as leading approaches to automated prompt optimization.
 </p>
 
 <h3>2.1 DSPy: Programmatic Prompt Optimization</h3>
@@ -241,7 +241,7 @@ <h3>3.2 Batch Reflection for Pattern Discovery</h3>
 <h2>4. Safety Guardrails for Self-Modification <span class="level-badge advanced" title="Advanced">Advanced</span></h2>
 
 <p>
-Self-improving agents introduce a unique safety concern: if the agent can modify its own behavior, it might modify away the safety constraints that were originally imposed. This is not speculative; prompt injection attacks (see <a class="cross-ref" href="../../part-9-safety-strategy/module-31-security-adversarial/section-31.2.html">Section 31.2</a>) demonstrate that adversarial inputs can cause models to ignore their instructions. A self-modifying agent that optimizes its prompt for task performance might inadvertently remove safety instructions that reduce performance on the optimization metric but are essential for safe deployment.
+Self-improving agents introduce a unique safety concern: if the agent can modify its own behavior, it might modify away the safety constraints that were originally imposed. This is not speculative; prompt injection attacks (see <a class="cross-ref" href="../../part-6-agentic-ai/module-26-agent-safety-production/section-26.2.html">Section 31.2</a>) demonstrate that adversarial inputs can cause models to ignore their instructions. A self-modifying agent that optimizes its prompt for task performance might inadvertently remove safety instructions that reduce performance on the optimization metric but are essential for safe deployment.
 </p>
 
 <h3>4.1 Bounded Optimization</h3>
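The "Bounded Optimization" subsection named in the last hunk suggests one concrete guardrail for the risk described in the changed paragraph: expose only part of the prompt to the self-optimizer, so safety instructions can never be optimized away. A minimal sketch under that assumption; the names and two-part prompt layout are illustrative, not the book's code:

```python
from dataclasses import dataclass

# Illustrative immutable safety text; a real deployment would review and version it.
SAFETY_PREAMBLE = "You must refuse unsafe requests and never reveal credentials."

@dataclass(frozen=True)
class AgentPrompt:
    mutable_body: str  # the only region a self-optimizer is allowed to rewrite

    def render(self) -> str:
        # The preamble is reattached on every render, outside the optimization loop,
        # so metric-driven updates cannot delete or weaken it.
        return SAFETY_PREAMBLE + "\n\n" + self.mutable_body

def accept_update(prompt: AgentPrompt, proposed_body: str) -> AgentPrompt:
    """Bounded optimization: a proposal replaces only the mutable body."""
    return AgentPrompt(mutable_body=proposed_body)
```

The design choice is that the optimizer never sees the full rendered prompt, only the mutable body, which structurally rules out the failure mode the paragraph warns about.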

part-10-frontiers/module-35-ai-society/section-35.9.html

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ <h1>The Future of Human-AI Collaboration</h1>
 
 <div class="prerequisites">
 <h3>Prerequisites</h3>
-<p>This section draws on the societal impact discussion from <a class="cross-ref" href="section-35.3.html">Section 35.3</a>, the safety and ethics foundations from <a class="cross-ref" href="../../part-9-safety-strategy/module-32-safety-ethics-regulation/index.html">Chapter 32</a>, and the agent architecture patterns from <a class="cross-ref" href="../../part-6-agents-tool-use/module-25-agent-architectures/index.html">Chapter 25</a>. It also connects to the alignment research from <a class="cross-ref" href="section-35.1.html">Section 35.1</a>, particularly the scalable oversight problem.</p>
+<p>This section draws on the societal impact discussion from <a class="cross-ref" href="section-35.3.html">Section 35.3</a>, the safety and ethics foundations from <a class="cross-ref" href="../../part-9-safety-strategy/module-32-safety-ethics-regulation/index.html">Chapter 32</a>, and the agent architecture patterns from <a class="cross-ref" href="../../part-6-agentic-ai/module-22-ai-agents/index.html">Chapter 25</a>. It also connects to the alignment research from <a class="cross-ref" href="section-35.1.html">Section 35.1</a>, particularly the scalable oversight problem.</p>
 </div>
 
 <div class="callout big-picture">
@@ -174,7 +174,7 @@ <h2>4. Organizational Transformation <span class="level-badge intermediate" titl
 <h3>4.1 New Roles</h3>
 
 <p>
-Several new professional roles have emerged. The <strong>AI operator</strong> manages and monitors autonomous agent systems, analogous to a site reliability engineer for AI. The <strong>prompt engineer</strong> (see <a class="cross-ref" href="../../part-3-prompt-engineering/module-09-prompt-engineering-basics/index.html">Chapter 9</a>) designs and optimizes the instructions that guide AI behavior. The <strong>AI trainer</strong> curates data, provides feedback, and evaluates outputs to improve AI systems. The <strong>human-AI workflow designer</strong> determines which tasks to automate, which to assist, and which to leave fully human, then designs the handoff points and oversight mechanisms.
+Several new professional roles have emerged. The <strong>AI operator</strong> manages and monitors autonomous agent systems, analogous to a site reliability engineer for AI. The <strong>prompt engineer</strong> (see <a class="cross-ref" href="../../part-3-working-with-llms/module-11-prompt-engineering/index.html">Chapter 9</a>) designs and optimizes the instructions that guide AI behavior. The <strong>AI trainer</strong> curates data, provides feedback, and evaluates outputs to improve AI systems. The <strong>human-AI workflow designer</strong> determines which tasks to automate, which to assist, and which to leave fully human, then designs the handoff points and oversight mechanisms.
 </p>
 
 <h3>4.2 Workflow Redesign</h3>

tasks.md

Lines changed: 13 additions & 10 deletions
@@ -81,18 +81,21 @@
 ## Phase 4c: Frontier Topics (10 candidates under evaluation)
 
 ### Group A: Engineering Frontier
-- [ ] Reliability engineering for agents under production stress
-- [ ] Observability, testing, and CI/CD for agent workflows
-- [ ] Memory architectures that improve execution
-- [ ] Efficient multi-tool orchestration and tool economy
-- [ ] Self-improving and adaptive agents in deployment loops
+- [x] Reliability engineering for agents under production stress (section-35.5)
+- [x] Observability, testing, and CI/CD for agent workflows (section-35.6)
+- [x] Memory architectures that improve execution (section-35.7)
+- [x] Efficient multi-tool orchestration and tool economy (section-34.9)
+- [x] Self-improving and adaptive agents in deployment loops (section-35.8)
 
 ### Group B: Foundational/Theoretical
-- [ ] A theory of reasoning in LLMs
-- [ ] World models and internal representations of reality
-- [ ] Memory as a computational primitive
-- [ ] Mechanistic understanding and interpretability of learned computation
-- [ ] The nature of agency: when does a model become an agent?
+- [x] A theory of reasoning in LLMs (section-34.5)
+- [x] World models and internal representations of reality (covered in existing section-34.4)
+- [x] Memory as a computational primitive (section-34.6)
+- [x] Mechanistic understanding and interpretability of learned computation (section-34.7)
+- [x] The nature of agency: when does a model become an agent? (section-34.8)
+
+### Additional
+- [x] The future of human-AI collaboration (section-35.9)
 
 ## Phase 5: Low Priority / Optional
 
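The 12 cross-reference fixes in this commit are the kind of path drift a small link checker in CI can catch before it ships. A hypothetical sketch (not part of this repository) that flags relative hrefs pointing at files that do not exist:

```python
import re
from pathlib import Path

# Matches href="..." and discards any #fragment; attribute-level, not a full HTML parse.
HREF_RE = re.compile(r'href="([^"#]+)(?:#[^"]*)?"')

def broken_links(root: Path) -> list[tuple[Path, str]]:
    """Return (page, href) pairs whose relative targets do not exist under root."""
    broken = []
    for page in root.rglob("*.html"):
        for href in HREF_RE.findall(page.read_text(encoding="utf-8")):
            if href.startswith(("http://", "https://", "mailto:")):
                continue  # only validate local, relative links
            if not (page.parent / href).resolve().exists():
                broken.append((page, href))
    return broken
```

Run over the book tree, this would flag stale module paths such as the `module-19-vector-databases` and `module-25-agent-architectures` references corrected above.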