
Commit dd502c2

Merge pull request #123 from Tuntii/docs/specialized-skills-update-9114956679939804033
docs: Add Phase 5 (Specialized Skills) and recipes for gRPC, SSR, and AI
2 parents 5671f0d + 0aa8728 commit dd502c2

File tree

9 files changed: +468 -7 lines changed

docs/.agent/docs_coverage.md

Lines changed: 5 additions & 1 deletion

```diff
@@ -16,7 +16,11 @@
 | Observability | `crates/rustapi_extras.md` | `rustapi-extras/src/telemetry` | OK |
 | **Jobs** | | | |
 | Job Queue (Crate) | `crates/rustapi_jobs.md` | `rustapi-jobs` | OK |
-| Background Jobs (Recipe) | `recipes/background_jobs.md` | `rustapi-jobs` | NEW |
+| Background Jobs (Recipe) | `recipes/background_jobs.md` | `rustapi-jobs` | OK |
+| **Integrations** | | | |
+| gRPC | `recipes/grpc_integration.md` | `rustapi-grpc` | NEW |
+| SSR | `recipes/server_side_rendering.md` | `rustapi-view` | NEW |
+| AI / TOON | `recipes/ai_integration.md` | `rustapi-toon` | NEW |
 | **Learning** | | | |
 | Structured Path | `learning/curriculum.md` | N/A | OK |
 | **Recipes** | | | |
```

docs/.agent/last_run.json

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,5 +1,5 @@
 {
   "last_processed_ref": "v0.1.335",
   "date": "2025-02-24",
-  "notes": "Added Phase 4 (Enterprise Scale) to Learning Path, created Testing recipe, and updated File Uploads recipe."
+  "notes": "Added Phase 5 (Specialized Skills) to Learning Path, and created recipes for gRPC, SSR, and AI Integration."
 }
```

docs/.agent/run_report_2025-02-24.md

Lines changed: 39 additions & 0 deletions

```diff
@@ -65,3 +65,42 @@ This run focuses on "Enterprise Scale" documentation, testing strategies, and im
 ## 4. Open Questions / TODOs
 - **Status Page**: `recipes/status_page.md` exists but might need more visibility in the Learning Path (maybe in Module 11?).
 - **Observability**: A dedicated recipe for OpenTelemetry setup would be beneficial (currently covered in crate docs).
+
+---
+
+# Docs Maintenance Run Report: 2025-02-24 (Run 3)
+
+## 1. Version Detection
+- **Repo Version**: `v0.1.335` (Unchanged)
+- **Result**: Continuing with Continuous Improvement phase.
+
+## 2. Changes Summary
+This run focuses on "Specialized Skills" covering gRPC integration, Server-Side Rendering (SSR), and AI integration (TOON).
+
+### New Content
+- **Cookbook Recipe**: `docs/cookbook/src/recipes/grpc_integration.md` - Guide for running HTTP and gRPC services side-by-side.
+- **Cookbook Recipe**: `docs/cookbook/src/recipes/server_side_rendering.md` - Guide for using `rustapi-view` with Tera templates.
+- **Cookbook Recipe**: `docs/cookbook/src/recipes/ai_integration.md` - Guide for using `rustapi-toon` for LLM-optimized responses.
+- **Learning Path Phase**: Added "Phase 5: Specialized Skills" to `docs/cookbook/src/learning/curriculum.md`.
+
+### Updates
+- **Cookbook Summary**: Added new recipes to `docs/cookbook/src/SUMMARY.md`.
+- **Docs Coverage**: Updated `docs/.agent/docs_coverage.md` to include new integrations.
+
+## 3. Improvement Details
+- **Learning Path**:
+  - Added Modules 14 (SSR), 15 (gRPC), 16 (AI Integration).
+  - Added "Phase 5 Capstone: The Intelligent Dashboard".
+- **gRPC Recipe**:
+  - Detailed `rustapi-grpc` usage with `tonic`.
+  - Example of shared shutdown logic.
+- **SSR Recipe**:
+  - Example of `View` and `Context` usage.
+  - Template structure and inheritance.
+- **AI Recipe**:
+  - Explanation of TOON format and token savings.
+  - Example of `LlmResponse` and content negotiation.
+
+## 4. Open Questions / TODOs
+- **gRPC Multiplexing**: A more advanced guide on running HTTP and gRPC on the same port using `tower` would be valuable.
+- **Tera Filters**: Documenting how to add custom filters to the Tera instance in `rustapi-view`.
```

docs/cookbook/src/SUMMARY.md

Lines changed: 3 additions & 2 deletions

```diff
@@ -38,16 +38,17 @@
 - [Background Jobs](recipes/background_jobs.md)
 - [Custom Middleware](recipes/custom_middleware.md)
 - [Real-time Chat](recipes/websockets.md)
+- [Server-Side Rendering (SSR)](recipes/server_side_rendering.md)
+- [AI Integration (TOON)](recipes/ai_integration.md)
 - [Production Tuning](recipes/high_performance.md)
 - [Resilience Patterns](recipes/resilience.md)
 - [Time-Travel Debugging (Replay)](recipes/replay.md)
 - [Deployment](recipes/deployment.md)
 - [HTTP/3 (QUIC)](recipes/http3_quic.md)
+- [gRPC Integration](recipes/grpc_integration.md)
 - [Automatic Status Page](recipes/status_page.md)
 
 - [Troubleshooting: Common Gotchas](troubleshooting.md)
 
 - [Part V: Learning & Examples](learning/README.md)
   - [Structured Curriculum](learning/curriculum.md)
-
-
```
docs/cookbook/src/learning/curriculum.md

Lines changed: 49 additions & 0 deletions

```diff
@@ -231,6 +231,55 @@ This curriculum is designed to take you from a RustAPI beginner to an advanced u
 
 ---
 
+## Phase 5: Specialized Skills
+
+**Goal:** Master integration with AI, gRPC, and server-side rendering.
+
+### Module 14: Server-Side Rendering (SSR)
+- **Prerequisites:** Phase 2.
+- **Reading:** [SSR Recipe](../recipes/server_side_rendering.md).
+- **Task:** Create a dashboard showing system status using `rustapi-view`.
+- **Expected Output:** HTML page rendered with Tera templates, displaying dynamic data.
+- **Pitfalls:** Forgetting to create the `templates/` directory.
+
+#### 🧠 Knowledge Check
+1. Which template engine does RustAPI use?
+2. How do you pass data to a template?
+3. How does template reloading work in debug mode?
+
+### Module 15: gRPC Microservices
+- **Prerequisites:** Phase 3.
+- **Reading:** [gRPC Recipe](../recipes/grpc_integration.md).
+- **Task:** Run a gRPC service alongside your HTTP API that handles internal user lookups.
+- **Expected Output:** Both servers running; HTTP endpoint calls gRPC method (simulated).
+- **Pitfalls:** Port conflicts if not configured correctly.
+
+#### 🧠 Knowledge Check
+1. Which crate provides gRPC helpers for RustAPI?
+2. Can HTTP and gRPC share the same Tokio runtime?
+3. Why might you want to run both in the same process?
+
+### Module 16: AI Integration (TOON)
+- **Prerequisites:** Phase 2.
+- **Reading:** [AI Integration Recipe](../recipes/ai_integration.md).
+- **Task:** Create an endpoint that returns standard JSON for browsers but TOON for `Accept: application/toon`.
+- **Expected Output:** `curl` requests with different headers return different formats.
+- **Pitfalls:** Not checking the `Accept` header in client code.
+
+#### 🧠 Knowledge Check
+1. What is TOON and why is it useful for LLMs?
+2. How does `LlmResponse` decide which format to return?
+3. How much token usage can TOON save on average?
+
+### 🏆 Phase 5 Capstone: "The Intelligent Dashboard"
+**Objective:** Combine SSR, gRPC, and AI features.
+**Requirements:**
+- **Backend:** Retrieve stats via gRPC from a "worker" service.
+- **Frontend:** Render a dashboard using SSR.
+- **AI Agent:** Expose a TOON endpoint for an AI agent to query the system status.
+
+---
+
 ## Next Steps
 
 * Explore the [Examples Repository](https://github.com/Tuntii/rustapi-rs-examples).
```
docs/cookbook/src/recipes/ai_integration.md

Lines changed: 108 additions & 0 deletions (new file)

# AI Integration

RustAPI offers native support for building AI-friendly APIs using the `rustapi-toon` crate. This allows you to serve optimized content for Large Language Models (LLMs) while maintaining standard JSON responses for traditional clients.

## The Problem: Token Costs

LLMs like GPT-4, Claude, and Gemini charge by the **token**. Standard JSON is verbose, containing many structural characters (`"`, `:`, `{`, `}`) that count towards this limit.

**JSON (55 tokens):**

```json
[
  {"id": 1, "role": "admin", "active": true},
  {"id": 2, "role": "user", "active": true}
]
```

**TOON (32 tokens):**

```
users[2]{id,role,active}:
1,admin,true
2,user,true
```
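To make the layout concrete, the tabular TOON shape above can be emitted in a few lines of plain Rust. This is a hypothetical illustration of the format only — `to_toon` is invented for this sketch and is not the `rustapi-toon` serializer:

```rust
// Hypothetical helper: emits the tabular TOON layout shown above.
// NOT the `rustapi-toon` API; purely an illustration of the format.
fn to_toon(name: &str, fields: &[&str], rows: &[Vec<String>]) -> String {
    // Header: name[row_count]{field1,field2,...}:
    let mut out = format!("{}[{}]{{{}}}:\n", name, rows.len(), fields.join(","));
    // Body: one comma-separated line per row, no quotes or braces.
    for row in rows {
        out.push_str(&row.join(","));
        out.push('\n');
    }
    out
}

fn main() {
    let rows = vec![
        vec!["1".to_string(), "admin".to_string(), "true".to_string()],
        vec!["2".to_string(), "user".to_string(), "true".to_string()],
    ];
    // Prints the same `users[2]{id,role,active}:` block as above.
    print!("{}", to_toon("users", &["id", "role", "active"], &rows));
}
```

The savings come entirely from dropping repeated keys and structural punctuation: field names appear once in the header instead of once per row.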
## The Solution: Content Negotiation

RustAPI uses the `Accept` header to decide which format to return:

- `Accept: application/json` -> Returns JSON.
- `Accept: application/toon` -> Returns TOON.
- `Accept: application/llm` (custom) -> Returns TOON.

This is handled automatically by the `LlmResponse<T>` type.
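The decision rule reduces to a small pure function. The sketch below is an illustrative approximation of the behaviour described above, not the crate's actual implementation (`LlmResponse<T>` encapsulates it internally):

```rust
// Illustrative sketch of the Accept-header decision rule.
// The real logic lives inside `LlmResponse<T>`; this is an approximation.
#[derive(Debug, PartialEq)]
enum WireFormat {
    Json,
    Toon,
}

fn negotiate(accept: Option<&str>) -> WireFormat {
    match accept {
        // TOON for the AI-oriented media types...
        Some(v) if v.contains("application/toon") || v.contains("application/llm") => {
            WireFormat::Toon
        }
        // ...and JSON for everything else, including a missing Accept header.
        _ => WireFormat::Json,
    }
}

fn main() {
    assert_eq!(negotiate(Some("application/toon")), WireFormat::Toon);
    assert_eq!(negotiate(Some("application/json")), WireFormat::Json);
    assert_eq!(negotiate(None), WireFormat::Json);
    println!("negotiation rules hold");
}
```

Defaulting to JSON keeps the endpoint safe for clients that send no `Accept` header at all.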
## Dependencies

```toml
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["toon"] }
serde = { version = "1.0", features = ["derive"] }
```

## Implementation

```rust,no_run
use rustapi_rs::prelude::*;
use rustapi_toon::LlmResponse; // Handles content negotiation
use serde::Serialize;

#[derive(Serialize)]
struct User {
    id: u32,
    username: String,
    role: String,
}

// Simple handler returning a list of users
#[rustapi_rs::get("/users")]
async fn get_users() -> LlmResponse<Vec<User>> {
    let users = vec![
        User { id: 1, username: "Alice".into(), role: "admin".into() },
        User { id: 2, username: "Bob".into(), role: "editor".into() },
    ];

    // LlmResponse automatically serializes to JSON or TOON
    LlmResponse(users)
}

#[tokio::main]
async fn main() {
    let app = RustApi::new().route("/users", get(get_users));

    println!("Server running on http://127.0.0.1:3000");
    app.run("127.0.0.1:3000").await.unwrap();
}
```

## Testing

**Standard Browser / Client:**

```bash
curl http://localhost:3000/users
# Returns: [{"id":1,"username":"Alice",...}]
```

**AI Agent / LLM:**

```bash
curl -H "Accept: application/toon" http://localhost:3000/users
# Returns:
# users[2]{id,username,role}:
# 1,Alice,admin
# 2,Bob,editor
```

## Providing Context to AI

When building an MCP (Model Context Protocol) server or simply feeding data to an LLM, use the TOON format to maximize the context window.

```rust,ignore
// Example: generating a prompt with TOON-formatted data
let data = get_system_status().await;
let toon_string = rustapi_toon::to_string(&data).unwrap();

let prompt = format!(
    "Analyze the following system status and report anomalies:\n\n{}",
    toon_string
);

// Send `prompt` to the OpenAI API...
```
docs/cookbook/src/recipes/grpc_integration.md

Lines changed: 130 additions & 0 deletions (new file)

# gRPC Integration

RustAPI allows you to seamlessly integrate gRPC services alongside your HTTP API, running both on the same Tokio runtime or even the same port (with proper multiplexing, though separate ports are simpler). We use the `rustapi-grpc` crate, which provides helpers for [Tonic](https://github.com/hyperium/tonic).

## Dependencies

Add the following to your `Cargo.toml`:

```toml
[dependencies]
rustapi-rs = { version = "0.1.335", features = ["grpc"] }
tonic = "0.10"
prost = "0.12"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }

[build-dependencies]
tonic-build = "0.10"
```

## Defining the Service (Proto)

Create a `proto/helloworld.proto` file:

```protobuf
syntax = "proto3";

package helloworld;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

## The Build Script

In `build.rs`:

```rust,no_run
fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/helloworld.proto")?;
    Ok(())
}
```

## Implementation

Here is how to run both servers concurrently with shared shutdown.

```rust,no_run
use rustapi_rs::prelude::*;
use rustapi_rs::grpc::{run_rustapi_and_grpc_with_shutdown, tonic};
use tonic::{Request, Response, Status};

// Import generated proto code (simplified for this example)
pub mod hello_world {
    tonic::include_proto!("helloworld");
}
use hello_world::greeter_server::{Greeter, GreeterServer};
use hello_world::{HelloReply, HelloRequest};

// --- gRPC Implementation ---
#[derive(Default)]
pub struct MyGreeter {}

#[tonic::async_trait]
impl Greeter for MyGreeter {
    async fn say_hello(
        &self,
        request: Request<HelloRequest>,
    ) -> Result<Response<HelloReply>, Status> {
        let name = request.into_inner().name;
        let reply = HelloReply {
            message: format!("Hello {} from gRPC!", name),
        };
        Ok(Response::new(reply))
    }
}

// --- HTTP Implementation ---
#[rustapi_rs::get("/health")]
async fn health() -> Json<&'static str> {
    Json("OK")
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // 1. Define the HTTP app
    let http_app = RustApi::new().route("/health", get(health));
    let http_addr = "0.0.0.0:3000";

    // 2. Define the gRPC service
    let grpc_addr = "0.0.0.0:50051".parse()?;
    let greeter = MyGreeter::default();

    println!("HTTP listening on http://{}", http_addr);
    println!("gRPC listening on grpc://{}", grpc_addr);

    // 3. Run both with shared shutdown (Ctrl+C)
    run_rustapi_and_grpc_with_shutdown(
        http_app,
        http_addr,
        tokio::signal::ctrl_c(),
        move |shutdown| {
            tonic::transport::Server::builder()
                .add_service(GreeterServer::new(greeter))
                .serve_with_shutdown(grpc_addr, shutdown)
        },
    ).await?;

    Ok(())
}
```

## How It Works

1. **Shared Runtime**: Both servers run on the same Tokio runtime, sharing thread pool resources efficiently.
2. **Graceful Shutdown**: When `Ctrl+C` is pressed, `run_rustapi_and_grpc_with_shutdown` signals both the HTTP server and the gRPC server to stop accepting new connections and finish pending requests.
3. **Simplicity**: You don't need to manually spawn tasks or manage channels for shutdown signals.
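The shared-shutdown fan-out in step 2 can be illustrated conceptually with plain `std` threads standing in for the Tokio tasks. This is an analogy only — `run_rustapi_and_grpc_with_shutdown` does this with async tasks and `run_server` below is invented for the sketch:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Conceptual stand-in for one server: loop until the shared
// shutdown flag flips, then finish up and report.
fn run_server(name: &str, shutdown: Arc<AtomicBool>) -> thread::JoinHandle<String> {
    let name = name.to_string();
    thread::spawn(move || {
        while !shutdown.load(Ordering::Relaxed) {
            thread::sleep(Duration::from_millis(10)); // "accept loop" placeholder
        }
        format!("{} stopped", name)
    })
}

fn main() {
    // One shutdown signal, fanned out to both "servers".
    let shutdown = Arc::new(AtomicBool::new(false));
    let http = run_server("http", shutdown.clone());
    let grpc = run_server("grpc", shutdown.clone());

    // Simulate Ctrl+C after some work.
    thread::sleep(Duration::from_millis(50));
    shutdown.store(true, Ordering::Relaxed);

    assert_eq!(http.join().unwrap(), "http stopped");
    assert_eq!(grpc.join().unwrap(), "grpc stopped");
    println!("both servers shut down");
}
```

The point of the analogy: both workers observe the *same* signal, so a single Ctrl+C brings the whole process down cleanly without per-server plumbing.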
## Advanced: Multiplexing

To run both HTTP and gRPC on the **same port**, you would typically use a library like `tower` to inspect the `Content-Type` header (`application/grpc` vs others) and route accordingly. However, running on separate ports (e.g., 8080 for HTTP, 50051 for gRPC) is standard practice in Kubernetes and most deployment environments.
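The heart of such a multiplexer is a per-request predicate on `Content-Type`: gRPC requests carry `application/grpc`, possibly with a suffix such as `application/grpc+proto`. A minimal sketch of just that predicate, with the `tower` wiring omitted:

```rust
// Routing predicate a same-port multiplexer would apply per request:
// gRPC traffic is identified by a Content-Type starting with
// "application/grpc" (suffixes like "+proto" or "+json" are allowed).
fn is_grpc(content_type: Option<&str>) -> bool {
    content_type
        .map(|ct| ct.starts_with("application/grpc"))
        .unwrap_or(false)
}

fn main() {
    assert!(is_grpc(Some("application/grpc")));
    assert!(is_grpc(Some("application/grpc+proto")));
    assert!(!is_grpc(Some("application/json")));
    assert!(!is_grpc(None));
    println!("routing predicate ok");
}
```

A real multiplexer would wrap this in a `tower` service that forwards matching requests to the Tonic server and everything else to the HTTP router; note that gRPC also requires HTTP/2, which is another reason separate ports are operationally simpler.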
