The "before" numbers show a server under severe stress: 25% of requests failed (likely timeouts), and p90/p95 hit the 10s timeout ceiling. After the optimizations, the server handles the same load comfortably with sub-30ms tail latency and zero failures.

To be clear: TanStack Start was not broken before these changes.

The following graphs show event-loop utilization[^elu] against throughput for each feature-focused endpoint, before and after the optimizations. Lower utilization at the same req/s means more headroom; higher req/s at the same utilization means more capacity.
For reference, the machine on which these were measured reaches 100% event-loop utilization at 100k req/s on an empty Node HTTP server.
#### 100 links per page

#### Deeply nested layout routes

#### Minimal route (baseline)

## Conclusion
The biggest gains came from removing whole categories of work from the server hot path. The general lesson is simple: throughput improves when you eliminate repeated work, allocations, and unnecessary generality in the steady state.
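The pattern behind that lesson can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: work that yields the same result on every request is hoisted out of the per-request hot path and done once at startup.

```javascript
// Illustrative sketch (hypothetical, not TanStack Start's real code):
// eliminate repeated work and allocations from the steady state.
const routeManifest = { routes: ["/", "/about", "/blog"] };

// Before: every request re-serializes the same static payload,
// allocating a fresh string and Buffer each time.
function renderBefore() {
  return Buffer.from(JSON.stringify(routeManifest));
}

// After: the manifest is serialized once at startup; every request
// reuses the same immutable Buffer, so the hot path allocates nothing.
const MANIFEST_BUF = Buffer.from(JSON.stringify(routeManifest));
function renderAfter() {
  return MANIFEST_BUF;
}

console.log(renderBefore().equals(renderAfter())); // prints true
```

Both functions return the same bytes; the difference is that the second does no serialization or allocation per call, which is exactly the kind of steady-state saving that moves the throughput curves above.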