================================================================================
hyper-stack-4j — Distributed Java LLM Inference Engine
Full Architecture Design Document
JDK 25 · Maven · Java-native · Commodity GPU Cluster
Last updated: 2026-03-14 (session 9)
================================================================================
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. VISION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
A fully Java-native distributed LLM inference engine that runs large language
models across a cluster of commodity GPUs — replacing the need for a single
expensive high-VRAM card with a network of affordable machines.
Core philosophy:
- No Python. No GIL. Real threads.
- No Spring Boot. No framework bloat.
- Commodity hardware over premium hardware
- Java distributed tooling (Hazelcast, gRPC) over NCCL/MPI
- Pipeline parallelism — LAN friendly, no InfiniBand required
- Open source, Java ecosystem first
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2. HARDWARE STACK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Compute Nodes (x16 old PCs)
GPU 4GB VRAM each — 16x4GB = 64GB total VRAM
CPU 8+ core modern (AMD/Intel)
RAM 16-32GB per node (KV cache JVM heap)
Storage NVMe SSD (fast shard loading)
Networking
NIC 10GbE (start) / 25GbE (ideal) — ~$30-100/each
Switch Managed, jumbo frames enabled — ~$200-500
Protocol RDMA — GPU to wire, bypasses CPU entirely
Total extra networking cost: ~$800-1000 for 16 machines.
Far cheaper than one 64GB GPU.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
3. MAVEN PROJECT STRUCTURE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Multi-module Maven project, JDK 25 throughout. All modules BUILD SUCCESS as of
2026-03-14 (session 9). 72+ production Java source files plus test sources; 355 @Test methods.
hyper-stack-4j/ <- parent POM
|
+-- api/ <- OpenAPI spec + generated models/interfaces
| +-- src/main/resources/openapi.yaml
| +-- src/main/proto/inference.proto
|
+-- registry/ <- NodeDescriptor, ShardPlanner, ShardMap,
| SeedScorer, InsufficientClusterVramException
+-- coordinator/ <- GenerationLoop, RequestScheduler,
| InferenceRequest, GenerationResult,
| TokenConsumer, RequestPriority,
| BatchConfig, BatchEntry,
| FaultTolerantPipeline, RetryPolicy,
| PipelineUnavailableException,
| HealthReactor, InferenceApiServer,
| SseTokenConsumer
+-- node/ <- InferencePipeline, LocalInferencePipeline,
| ForwardPassHandler, CyclicForwardPassHandler,
| CpuForwardPassHandler (real transformer math),
| GgufReader (GGUF v2/v3 binary parser),
| LlamaConfig (hyperparams from GGUF metadata),
| ActivationCodec, ActivationDtype,
| ForwardRequest, ForwardResult, ShardContext,
| NodeConfig
+-- kvcache/ <- KVCacheManager, GpuKVCache, CpuKVCache,
| PrefixCache, KVBlock, KVKey, KVCache,
| LayerRange
+-- tokenizer/ <- Tokenizer, StubTokenizer, DJLTokenizer,
| GgufTokenizer (SentencePiece BPE from GGUF),
| SimpleTokenizer,
| ChatMessage, ChatTemplate,
| ChatTemplateFormatter
+-- sampler/ <- Sampler, SamplingParams, SamplingStep,
| TemperatureStep, TopKStep, TopPStep,
| SoftmaxStep, RepetitionPenaltyStep,
| SampleStep
+-- health/ <- CircuitBreaker, CircuitState, HealthEvaluator,
| HealthEvent, HealthThresholds, NodeHealth
+-- player/ <- Model interaction layer; cluster harness + REPL
| +-- ClusterHarness <- forks 3 node JVMs; accepts optional MODEL_PATH
| +-- EmbeddedNodeServer <- gRPC NodeServiceImpl; uses CpuForwardPassHandler
| | when MODEL_PATH set, CyclicForwardPassHandler otherwise
| +-- NodeMain <- JVM entry point, prints READY:<nodeId>:<port>
| +-- ProcessPipelineClient <- InferencePipeline over real gRPC channels
| +-- ChatHistory <- conversation history for multi-turn REPL
| +-- ChatModelType <- derives chat template from GGUF path (tinyllama, llama3, etc.)
| +-- ConsoleMain <- interactive REPL (MODEL_PATH selects real model; remembers history)
| +-- LoadShardsParallelTest <- unit tests for parallel shard loading (2 tests)
| +-- ChatHistoryTest <- unit tests for ChatHistory (3 tests)
| +-- ChatModelTypeTest <- unit tests for ChatModelType (6 tests)
| Package: io.hyperstack4j.player
| Shade jar: player.jar (main: ConsoleMain)
|
+-- integration/ <- Multi-JVM cluster integration tests + live runner
+-- ModelLiveRunner <- standalone executable; runs 6 real-model checks;
| replaces TinyLlamaLiveIT; entry point for
| integration/target/player.jar (main class)
+-- InProcessClusterIT <- in-process 3-node test, zero network (6 tests)
+-- ThreeNodeClusterIT <- full 3-JVM test over real sockets (9 tests)
Package: io.hyperstack4j.integration
Shade jar: player.jar (main: ModelLiveRunner)
Depends on: player module (for ClusterHarness, ProcessPipelineClient, etc.)
Group ID: io.hyperstack4j
Artifact ID: hyper-stack-4j
Version: 0.1.0-SNAPSHOT
Key dependency versions:
java.version 25
hazelcast.version 5.4.0
grpc.version 1.63.0
protobuf.version 3.25.3
jcuda.version 12.0.0
djl.version 0.27.0
caffeine.version 3.1.8 <- ONLY cache library (see §6)
resilience4j.version 2.2.0
micrometer.version 1.13.0
javalin.version 6.3.0 <- REST HTTP server (no Spring)
openapi-generator.version 7.5.0
protobuf-maven-plugin 0.6.1 <- xolstice plugin (NOT os72)
flatbuffers.version 24.3.25
slf4j.version 2.0.13
logback.version 1.5.6
junit.version 5.10.2
mockito.version 5.12.0
assertj.version 3.26.0
REMOVED from original design (all caused transitive dependency failures):
spring-boot.version REMOVED — no Spring Boot, no Spring anything
ohc.version REMOVED — abandoned, dead NetBeans repo
ehcache.version REMOVED — JAXB transitive mess
chronicle-map.version REMOVED — same JAXB transitive mess
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
4. API MODULE — WHAT WAS BUILT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
4.1 OpenAPI 3.0 REST Spec (openapi.yaml)
Client-facing REST API served by coordinator via Javalin.
Generator: openapi-generator-maven-plugin 7.5.0, jaxrs-spec (server mode)
Output: target/generated-sources/openapi/src/gen/java
Endpoints:
POST /v1/inference blocking inference
POST /v1/inference/stream SSE token-by-token streaming
POST /v1/models load model, triggers sharding
GET /v1/models list all models
GET /v1/models/{modelId} model status + shard assignment
DELETE /v1/models/{modelId} unload model, free VRAM
GET /v1/cluster/health cluster overview
GET /v1/cluster/nodes all node statuses
GET /v1/cluster/shardmap current layer assignments
Generated models (16 classes):
ApiError, RetryableError
ChatMessage, SamplingConfig
InferenceRequest, InferenceResponse, TokenEvent
LoadModelRequest, ModelDescriptor, ModelList
LayerRange, ShardAssignment, ShardMap
NodeDescriptor, NodeList
ClusterHealth
Generated interfaces (3):
InferenceApi — implemented in coordinator
ModelsApi — implemented in coordinator
ClusterApi — implemented in coordinator
4.2 gRPC Proto (inference.proto)
Internal node-to-node communication. Never exposed to clients.
Location: api/src/main/proto/inference.proto
Plugin: org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1
+ kr.motd.maven:os-maven-plugin:1.7.1 (platform detection)
Output: target/generated-sources/protobuf/java (messages)
target/generated-sources/protobuf/grpc-java (service stubs)
Java package: io.hyperstack4j.api.grpc
Services:
InferenceService — client -> coordinator (Infer, InferStream)
NodeService — coordinator -> each GPU node
(ForwardPass, LoadShard, UnloadShard, GetNodeStatus)
RegistryService — internal registry queries
(GetShardMap, RegisterNode, RecomputeShards)
4.3 API module dependencies
io.grpc:grpc-netty-shaded
io.grpc:grpc-protobuf
io.grpc:grpc-stub
com.google.protobuf:protobuf-java
jakarta.ws.rs:jakarta.ws.rs-api:3.1.0
jakarta.annotation:jakarta.annotation-api:2.1.1
jakarta.validation:jakarta.validation-api:3.0.2
org.hibernate.validator:hibernate-validator:8.0.1.Final
io.swagger:swagger-annotations:1.6.14
com.fasterxml.jackson.core:jackson-databind:2.17.1
com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.17.1
javax.annotation:javax.annotation-api:1.3.2 (gRPC generated code)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
5. SYSTEM ARCHITECTURE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Client]
| REST (Javalin) or gRPC streaming
v
[Load Balancer] HAProxy / Nginx
|
+-------------------------+
v v
[Coordinator 1] [Coordinator 2]
LEADER STANDBY
|
+-- Javalin REST server (port 8080)
+-- Tokenizer (DJL SentencePiece)
+-- RequestScheduler (CompletableFuture, virtual threads)
+-- GenerationLoop (autoregressive)
+-- Sampler (pure Java pipeline)
+-- PrefixCache (Trie)
+-- InferencePipeline
|
| gRPC (data plane — activations)
| Hazelcast (control plane — commands, state, events)
|
===================================================
|| 10/25GbE RDMA Network ||
===================================================
| | | |
[Node 1] [Node 2] [Node 3] ... [Node 16]
Layer 0-1 Layer 2-3 Layer 4-5 Layer N
+ Embed GPU shard GPU shard + Output proj
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6. KV CACHE — REVISED DESIGN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Original three-tier design (GPU + CPU off-heap + Disk) was simplified.
DECISION: Two tiers, RAM only. No disk IO ever.
Tier 1 GPU VRAM JCuda CudaBuffer hot, active sequences
Tier 2 JVM heap Caffeine warm sequences, evicts via W-TinyLFU
Rationale:
- OHC (off-heap) : abandoned library, dead NetBeans repo blocks Maven
- Ehcache 3 : JAXB transitive dependency, same dead repo
- Chronicle Map : same transitive chain
- Disk tier : adds complexity for a use case (thousands of long sessions)
that doesn't apply to this deployment scale
- Caffeine : already in stack, pure JVM heap, GC-aware, W-TinyLFU,
zero external dependencies, bounded by -Xmx
Configuration:
kv-cache:
gpu:
capacity-fraction: 0.85
eviction: LRU
cpu:
max-bytes: 8589934592 # 8GB — tune with your -Xmx
eviction: W-TinyLFU # Caffeine default
Prefix cache (unchanged):
Trie structure, checked before every forward pass.
Win: 16 clients with same 500-token system prompt -> compute once, reuse 16x.
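The byte-bounded warm tier can be sketched in plain Java. This is an illustrative stand-in only — it uses LinkedHashMap's LRU ordering, not Caffeine's W-TinyLFU, and the class and method names are hypothetical; production wires Caffeine with maximumWeight plus a per-block weigher:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for the Tier-2 warm KV cache. The real tier uses
// Caffeine (maximumWeight + weigher, W-TinyLFU eviction); this stdlib
// version only illustrates the contract: entries are weighed in bytes and
// evicted once the configured byte budget is exceeded.
public class WarmTierSketch {
    private final long maxBytes;
    private long currentBytes = 0;
    private final LinkedHashMap<String, byte[]> map =
        new LinkedHashMap<>(16, 0.75f, true); // access-order = LRU

    public WarmTierSketch(long maxBytes) { this.maxBytes = maxBytes; }

    public void put(String kvKey, byte[] block) {
        byte[] old = map.put(kvKey, block);
        if (old != null) currentBytes -= old.length;
        currentBytes += block.length;
        // Evict least-recently-used blocks until back under the byte budget.
        var it = map.entrySet().iterator();
        while (currentBytes > maxBytes && it.hasNext()) {
            Map.Entry<String, byte[]> eldest = it.next();
            currentBytes -= eldest.getValue().length;
            it.remove();
        }
    }

    public byte[] get(String kvKey) { return map.get(kvKey); }
    public long sizeBytes() { return currentBytes; }
}
```

Caffeine replaces the manual eviction loop with its own admission policy, so frequently reused prefixes survive bursts of one-off sequences.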
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
7. REST / HTTP — REVISED DESIGN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
DECISION: No Spring Boot. Spring is too heavy for what we need.
REST API server Javalin 6.x ~1MB jar, built on Jetty directly
no annotations, no magic, explicit routing
perfect fit for Virtual Threads
Metrics scrape JDK HttpServer (built-in since Java 6) + Micrometer
zero extra dependencies
Example coordinator server setup:
Javalin app = Javalin.create().start(8080);
app.post("/v1/inference", ctx -> inferenceHandler.infer(ctx));
app.post("/v1/inference/stream", ctx -> inferenceHandler.inferStream(ctx));
app.post("/v1/models", ctx -> modelsHandler.load(ctx));
app.get("/v1/cluster/health", ctx -> clusterHandler.health(ctx));
Example metrics endpoint:
HttpServer metricsServer = HttpServer.create(new InetSocketAddress(9091), 0);
metricsServer.createContext("/metrics", exchange -> {
    byte[] response = registry.scrape().getBytes(StandardCharsets.UTF_8);
    exchange.sendResponseHeaders(200, response.length);
    try (OutputStream body = exchange.getResponseBody()) {
        body.write(response);
    }
});
metricsServer.start();
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8. ACTORS — DESIGN DECISIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8.1 MODEL REGISTRY + SHARD PLANNER
Registry placement Hazelcast distributed IMap — no god objects, no SPOF
Seed node election IMQ-inspired scoring:
score = w1*connectivity + w2*stability
+ w3*betweennessCentrality + w4*vram
Min seed nodes 2 (never 1 — no SPOF)
Sharding strategy Greedy, contiguous layer blocks, VRAM-aware
Fair distribution Each eligible node guaranteed >= 1 layer (see §8.1a)
Dynamic resharding Yes — on node join/leave
Weight format GGUF (single file, quantization-aware)
Weight parser DJL / llama.cpp JNI
Embeddings Node 1 (first in pipeline)
Output projection Last Node (last in pipeline)
Node registration Push — nodes self-register on startup
VRAM reporting JCuda self-reported, 10% headroom reserved
8.1a Fair layer distribution (ShardPlanner)
-----------------------------------------------
The greedy algorithm is capped to prevent a large-VRAM node from consuming
all remaining layers and starving later nodes. For each assignment:
remainingLayers = totalLayers - currentLayer
remainingNodes = eligible.size() - assignments.size()
maxLayers = min(layersFit, remainingLayers - (remainingNodes - 1))
endLayer = currentLayer + maxLayers
The term (remainingNodes - 1) reserves at least one layer per node still
waiting to be assigned. Without this cap, a node with abundant VRAM could
exhaust the layer budget, leaving subsequent nodes with nothing to do and
causing ShardPlanner to throw InsufficientClusterVramException even when the
cluster has sufficient total VRAM.
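The cap formula above can be sketched directly. Method and variable names are illustrative, not the real ShardPlanner API; per-node layer capacity is passed in rather than derived from VRAM:

```java
// Sketch of the capped greedy assignment from 8.1a. layersFitPerNode[i] is
// how many layers node i's VRAM can hold; the (remainingNodes - 1) term
// reserves one layer for every node still waiting to be assigned.
public class FairSplitSketch {
    public static int[] assign(int totalLayers, int[] layersFitPerNode) {
        int n = layersFitPerNode.length;
        int[] assigned = new int[n];
        int currentLayer = 0;
        for (int i = 0; i < n; i++) {
            int remainingLayers = totalLayers - currentLayer;
            int remainingNodes = n - i;
            int maxLayers = Math.min(layersFitPerNode[i],
                                     remainingLayers - (remainingNodes - 1));
            if (maxLayers < 1) {
                throw new IllegalStateException("insufficient cluster VRAM");
            }
            assigned[i] = maxLayers;
            currentLayer += maxLayers;
        }
        if (currentLayer < totalLayers) {
            throw new IllegalStateException("insufficient cluster VRAM");
        }
        return assigned;
    }
}
```

For example, 10 layers over nodes fitting {20, 4} yields 9/1 — without the cap the first node would take all 10 and the second would starve.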
8.2 COORDINATOR + SCHEDULER
Batching Static micro-batching (same-step, configurable)
Preemption Configurable — ABORT strategy first
Coordinator count 2 (leader + standby)
Leader election Hazelcast CP FencedLock
Queue full response HTTP 503 + Retry-After estimate
Client protocol REST (Javalin) + gRPC streaming
Data plane gRPC (activations)
Control plane Hazelcast (commands, state, events)
Priority queuing PriorityBlockingQueue — HIGH=3, NORMAL=1, LOW=1
Concurrency Java 25 Virtual Threads + CompletableFuture
BatchConfig:
----------------------------------------
defaults() maxBatchSize=8, batchWindowMs=50 (production)
disabled() maxBatchSize=1, batchWindowMs=0 (original per-request dispatch)
of(n, ms) custom
When disabled: every request dispatched immediately on its own virtual thread
— identical to the original RequestScheduler.
When enabled: a single background virtual thread (batch-collector) runs:
1. Block on queue.poll(batchWindowMs) — wait for first request
2. Drain up to (maxBatchSize - 1) more with non-blocking poll()
3. Dispatch the batch on a new virtual thread (batch-gen-*)
4. Each GenerationResult completes the corresponding CompletableFuture
5. Immediately resume collecting the next batch
The batch dispatch thread runs concurrently with collection — the collector
never blocks on GPU work. Multiple batches can be in-flight simultaneously.
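The collect phase of steps 1-2 can be sketched with a plain BlockingQueue. The real scheduler uses a PriorityBlockingQueue of request entries and dispatches each collected batch on a fresh virtual thread; this stand-in shows only the poll/drain pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the batch-collector loop from 8.2: block up to batchWindowMs for
// the first request, then drain up to (maxBatchSize - 1) more without blocking.
public class BatchCollectorSketch {
    public static <T> List<T> collectBatch(BlockingQueue<T> queue,
                                           int maxBatchSize,
                                           long batchWindowMs) {
        List<T> batch = new ArrayList<>();
        T first;
        try {
            first = queue.poll(batchWindowMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return batch;
        }
        if (first == null) return batch;   // window elapsed, nothing queued
        batch.add(first);
        while (batch.size() < maxBatchSize) {
            T next = queue.poll();         // non-blocking drain
            if (next == null) break;
            batch.add(next);
        }
        return batch;
    }
}
```

Because dispatch happens on another virtual thread, the collector loops straight back into collectBatch() while the previous batch is still on the GPU.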
InferencePipeline.forwardBatch() (new default method):
----------------------------------------
Default: calls forward() N times serially — all existing impls work unchanged.
Override in GpuForwardPassHandler: one CUDA batched matmul for the whole batch.
The GPU utilization gain comes entirely from this override.
GenerationLoop.generateBatch(List<BatchEntry>) (new method):
----------------------------------------
1. Encode all N prompts, resolve prefix-cache startPos per request
2. Per decode step: collect active requests, call forwardBatch() once
3. Sample, stream, and mark-done per request independently
4. Loop until all requests have hit EOS or their own maxTokens
5. Cache prefixes + evict KV blocks per request
6. Return List<GenerationResult> in entry order
Key property: a request that hits EOS at step 3 exits without blocking
the remaining requests from continuing to step 4, 5, etc.
Reactive scheduler (RequestScheduler — batching disabled):
----------------------------------------
Every request — streaming or blocking — is dispatched on its own Virtual
Thread. The public API is fully reactive via CompletableFuture.
submit(request, consumer) -> CompletableFuture<GenerationResult>
1. CompletableFuture<GenerationResult> stored in ConcurrentHashMap BEFORE queue.offer()
2. request.offer()'d into PriorityBlockingQueue (HIGH priority first)
3. Virtual thread spawned — calls generationLoop.generate()
4. On success: future.complete(result)
On failure: future.completeExceptionally(e) <- never silently swallowed
5. Returns future immediately to caller
submitAndWait(request) -> GenerationResult
Delegates to submit(request, TokenConsumer.discard()).join()
The caller blocks ONLY on its own future — N concurrent callers run fully
in parallel with no shared lock or sequential bottleneck between them.
When queue is full: QueueFullException thrown with retryAfterSeconds hint.
REST layer translates this to HTTP 503 + Retry-After header.
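The submit() flow can be condensed to a few lines. The Generation interface here is a placeholder for GenerationLoop, and the priority queue is omitted; the sketch shows the future-before-dispatch ordering and the never-swallow failure rule:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of RequestScheduler.submit() from 8.2: the future is registered
// BEFORE dispatch, generation runs on its own virtual thread, and failures
// always surface via completeExceptionally.
public class SubmitSketch {
    interface Generation { String generate(String prompt) throws Exception; }

    private final ConcurrentHashMap<String, CompletableFuture<String>> pending =
        new ConcurrentHashMap<>();
    private final Generation loop;

    public SubmitSketch(Generation loop) { this.loop = loop; }

    public CompletableFuture<String> submit(String requestId, String prompt) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(requestId, future);          // registered before dispatch
        Thread.startVirtualThread(() -> {
            try {
                future.complete(loop.generate(prompt));
            } catch (Exception e) {
                future.completeExceptionally(e); // never silently swallowed
            } finally {
                pending.remove(requestId);
            }
        });
        return future;   // caller blocks only on its own future
    }
}
```

N callers each join() their own future, so there is no shared lock between concurrent requests.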
Autoregressive loop (GenerationLoop.generate):
1. Tokenizer.encode(chatTemplate.format(messages)) -> int[] promptIds
2. kvKey = request.kvCacheKey() (sessionId if session, requestId if stateless)
3. if session: PrefixCache.findLongestPrefix(promptIds) -> startPos (skip cached tokens)
if stateless: startPos = 0
4. Prefill: pipeline.forward(kvKey, slice, p) for p in startPos..promptLen-2
5. Decode loop:
pipeline.forward(kvKey, allTokens, startPos+step) -> logits
Sampler.sample(logits, params, history) -> nextToken
if nextToken == EOS or isEosMarker(piece): break
TokenConsumer.accept(piece, tokenId, step) <- SSE / gRPC stream
6. Post-generation:
session: cachePrefix(promptIds, promptIds.length, kvKey); do NOT evict
stateless: kvCache.evict(kvKey)
8.3 KV CACHE MANAGER
See section 6 above.
8.4 HEALTH MONITOR
Node liveness Hazelcast memberRemoved event (automatic)
GPU health probe JCuda every 5s, published to Hazelcast IMap
Circuit breaker per node, wraps all forward pass calls
VRAM warning 90% (configurable) → logged, future: reduce batch size
VRAM critical 98% (configurable) → circuit open → reshard
Metrics Micrometer + Prometheus via JDK HttpServer
Admin endpoints None (Spring removed) — Prometheus scrape only
8.4a FAULT TOLERANCE
─────────────────────────────────────────────────────────────────────────────
Three new classes in coordinator handle the complete failure cascade:
RetryPolicy (record):
none() — 1 attempt, fail immediately
once() — 2 attempts, 50ms backoff (default)
aggressive() — 3 attempts, 100ms backoff (HIGH priority requests)
FaultTolerantPipeline (implements InferencePipeline):
Wraps a List<NodePipeline> — each NodePipeline is (nodeId, pipeline, CircuitBreaker).
forward() algorithm:
For each node (in order):
if !circuit.isCallPermitted() → skip (OPEN circuit)
try forward pass
success → circuit.recordSuccess(), return result
failure → circuit.recordFailure(), sleep backoff, try next node
if tried >= maxAttempts → stop
if tried == 0 → throw CIRCUIT_OPEN (all nodes blocked)
if all failed → throw RETRIES_EXHAUSTED (tried N, all threw)
forwardBatch() same policy — routes entire batch to one node for KV cache coherence.
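The forward() failover algorithm above can be sketched as follows. Node is a placeholder interface standing in for (nodeId, pipeline, CircuitBreaker); the real implementation wraps gRPC calls:

```java
import java.util.List;

// Sketch of FaultTolerantPipeline.forward() from 8.4a: skip OPEN circuits,
// try nodes in order with backoff, distinguish CIRCUIT_OPEN (nothing tried)
// from RETRIES_EXHAUSTED (everything tried failed).
public class FailoverSketch {
    interface Node {
        boolean callPermitted();        // circuit CLOSED or HALF_OPEN?
        float[] forward(float[] in);    // throws RuntimeException on failure
        void recordSuccess();
        void recordFailure();
    }

    public static float[] forward(List<Node> nodes, float[] in,
                                  int maxAttempts, long backoffMs) {
        int tried = 0;
        for (Node node : nodes) {
            if (!node.callPermitted()) continue;   // skip OPEN circuit
            if (tried >= maxAttempts) break;       // attempt budget exhausted
            tried++;
            try {
                float[] out = node.forward(in);
                node.recordSuccess();
                return out;
            } catch (RuntimeException e) {
                node.recordFailure();
                try { Thread.sleep(backoffMs); }   // backoff before next node
                catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
            }
        }
        throw new IllegalStateException(tried == 0
            ? "CIRCUIT_OPEN"
            : "RETRIES_EXHAUSTED after " + tried + " attempts");
    }
}
```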
Health hooks (called by HealthReactor, no-op if nodeId unknown):
onVramCritical(nodeId) → circuit.forceOpen()
onNodeStale(nodeId) → circuit.forceOpen()
onNodeRecovered(nodeId) → circuit.reset()
HealthReactor:
owns a HealthEvaluator + FaultTolerantPipeline + (optional) RequestScheduler
onHealthProbe(NodeHealth) → evaluator.evaluate() → for each HealthEvent:
VRAM_CRITICAL → pipeline.onVramCritical(nodeId)
NODE_STALE → pipeline.onNodeStale(nodeId)
NODE_RECOVERED → pipeline.onNodeRecovered(nodeId)
VRAM_WARNING → log only (eviction handled by KVCacheManager)
If pipeline.isFullyUnavailable() after any OPEN event → scheduler.shutdown()
onNodeRemoved(nodeId) → pipeline.onNodeStale(nodeId) + evaluator.forget(nodeId)
Wiring (production):
nodeHealthMap.addEntryListener(event -> reactor.onHealthProbe(event.getValue()), true);
GenerationLoop uses FaultTolerantPipeline as its InferencePipeline —
all forward passes go through the circuit-breaking layer transparently.
PipelineUnavailableException:
reason: CIRCUIT_OPEN | RETRIES_EXHAUSTED
attemptsMade: int
isRetryable(): true iff RETRIES_EXHAUSTED
REST layer → HTTP 503 + Retry-After header
─────────────────────────────────────────────────────────────────────────────
8.5 TOKENIZER
Library GgufTokenizer (built-in, no JNI) — primary path
DJLTokenizer (SentencePiece JNI) — alternative
Model coverage LLaMA 2/3, TinyLlama, Mistral, Gemma
Chat templates ChatTemplateFormatter.forModelType(modelId) — case-insensitive
ChatTemplate registry (ChatTemplate.BUILT_IN):
"llama3" → <|begin_of_text|>...<|eot_id|>...<|start_header_id|>assistant<|end_header_id|>
"mistral" → [INST] ... [/INST]
"gemma" → <start_of_turn>user\n...<end_of_turn>\n<start_of_turn>model\n
"chatml" → <|im_start|>role\n...<|im_end|>\n ← default fallback
"tinyllama" → <|user|>\n{content}</s>\n<|assistant|>\n ← Zephyr format
"zephyr" → (alias for tinyllama — same instance)
IMPORTANT: TinyLlama-1.1B-Chat-v1.0 is fine-tuned on the Zephyr template,
NOT ChatML. Sending ChatML tokens produces complete garbage. Always use
modelId="tinyllama" or "zephyr" for this model.
decodeToken() contract: MUST return a space-separated string, not raw
SentencePiece pieces. GgufTokenizer.decodeToken() replaces ▁ (U+2581)
with a real space. This applies in both streaming (token by token) and
batch (full decode()) paths — they must both replace ▁ independently.
Performance ~50k sentences/sec
Thread safety StubTokenizer uses HashMap + unsynchronized nextId —
NOT thread-safe for concurrent encode() calls with new
tokens. Pre-warm (or use GgufTokenizer) in production.
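The decodeToken() contract above reduces to one replace. A minimal sketch (class name hypothetical; the real implementation lives in GgufTokenizer):

```java
// SentencePiece marks a word boundary with U+2581 (lower one eighth block);
// every decode path — streaming decodeToken() and batch decode() — must
// replace it with a real space independently.
public class PieceDecodeSketch {
    public static String decodeToken(String piece) {
        return piece.replace('\u2581', ' ');
    }
}
```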
8.6 SAMPLER
Pure Java, zero external deps.
Pipeline: temperature -> topK -> topP -> softmax -> repetition penalty -> sample
Preset profiles: defaults / deterministic / creative
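A minimal sketch of the tail of that pipeline — temperature scaling, numerically stable softmax, weighted draw. Top-K, top-P, and repetition penalty are omitted for brevity; the real Sampler chains them as SamplingStep instances:

```java
import java.util.Random;

// Sketch of temperature -> softmax -> sample from 8.6. Uses the max-logit
// subtraction trick so exp() never overflows.
public class SamplerSketch {
    public static int sample(float[] logits, float temperature, Random rng) {
        double max = Double.NEGATIVE_INFINITY;
        for (float l : logits) max = Math.max(max, l / temperature);
        double[] probs = new double[logits.length];
        double sum = 0;
        for (int i = 0; i < logits.length; i++) {
            probs[i] = Math.exp(logits[i] / temperature - max); // stable softmax
            sum += probs[i];
        }
        double r = rng.nextDouble() * sum;  // weighted draw over unnormalised mass
        for (int i = 0; i < probs.length; i++) {
            r -= probs[i];
            if (r <= 0) return i;
        }
        return probs.length - 1;            // guard against rounding drift
    }
}
```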
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
9. INTEGRATION TEST INFRASTRUCTURE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
The integration module provides two test suites, both run by maven-failsafe-plugin
under mvn verify.
Real-model interaction (cluster harness, REPL, gRPC node server) lives in the
player module (io.hyperstack4j.player). The integration module depends on player
and uses its infrastructure for wiring tests.
9.1 InProcessClusterIT (fast, zero network)
Module: integration
Wires 3 StubForwardPassHandlers via LocalInferencePipeline in the same JVM.
No gRPC, no sockets. Tests the full GenerationLoop + RequestScheduler stack.
~250ms total.
9.2 ThreeNodeClusterIT (real network, real JVMs)
Module: integration
ClusterHarness (player module) forks 3 separate JVM processes via ProcessBuilder:
NodeMain JVM #1 (port 19092, -Xmx4g -XX:+UseZGC)
NodeMain JVM #2 (port 19093, -Xmx4g -XX:+UseZGC)
NodeMain JVM #3 (port 19094, -Xmx4g -XX:+UseZGC)
Each NodeMain starts an EmbeddedNodeServer (gRPC NodeServiceImpl backed by
CyclicForwardPassHandler) and prints READY:<nodeId>:<port> to stdout.
ClusterHarness reads stdout until all 3 READY signals are received.
ProcessPipelineClient implements InferencePipeline with real gRPC channels
to all 3 ports. ForwardPass calls are chained in ShardMap order.
GenerationLoop + RequestScheduler run in the coordinator (test) JVM.
Memory budget (16GB host):
3 nodes × -Xmx4g = 12GB
coordinator JVM = 2GB
OS + overhead = 2GB
------
16GB ✓
Recommended model for local testing: TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf
vocab_size=32000, hidden_dim=2048, layers=22, heads=32, size=~670MB
Layer split across 3 nodes: 8 / 7 / 7
9.3 ModelLiveRunner (real model, standalone executable — replaces TinyLlamaLiveIT)
Location: integration/src/main/java/io/hyperstack4j/integration/ModelLiveRunner.java
Main class for integration/target/player.jar (built by maven-shade-plugin).
NOT a JUnit test class. Performs the same 6 real-model validation checks that
were previously in TinyLlamaLiveIT, but as a runnable program with coloured
console output, explicit pass/fail per check, and a summary exit code.
Model resolved from the first CLI argument or $MODEL_PATH env var.
Exits 0 if all checks pass, 1 if any check fails.
Checks (in order):
1. hello_greeting
Generates up to 20 tokens (enough for Zephyr template overhead).
Strips template markers via cleanText() before checking vocabulary.
Asserts >= 1 word from extended greeting vocabulary:
{how, are, you, hello, hi, help, doing, today, there, welcome,
assist, can, i, what, do, hola, hey, greetings, good, great, nice, pleased}.
2. no_raw_sentencepiece_markers
Asserts no ▁ (U+2581) appears in any decoded token piece.
3. question_response
"What is 2 plus 2?" -> non-empty response.
4. greedy_determinism
SamplingParams.deterministic() (greedy=true), same prompt twice
-> identical generated text. Using withTemperature(0.0f) alone is
NOT sufficient — greedy=false still routes through weightedSample().
5. multi_turn_conversation
3-turn conversation. Asserts promptTokens > 20.
6. float16_parity
Asserts the F16 pipeline runs end-to-end and produces non-empty output.
Exact token match with F32 is intentionally NOT checked: FLOAT16
quantization shifts logit magnitudes enough to change the argmax
(observed: F32→"WHERE", F16→"H") — both are valid top-K continuations.
The test validates pipeline correctness, not numerical identity.
Run commands:
# Via hyper.sh (integration module, ModelLiveRunner as main)
MODEL_PATH=/path/to/model.gguf ./hyper.sh integration-live
# Direct via Maven exec
mvn exec:java -pl integration \
-Dexec.mainClass=io.hyperstack4j.integration.ModelLiveRunner \
-Dexec.args=/path/to/model.gguf
# Via shaded jar
java -jar integration/target/player.jar /path/to/model.gguf
9.4 LoadShardsParallelTest (unit tests in player module)
Module: player
2 tests using lightweight in-process gRPC servers (TrackingNodeServer).
No forked JVM processes.
all_nodes_receive_load_shard
Verifies all 3 nodes receive exactly one LoadShard RPC with correct
shard assignments (node 0 hasEmbeddings, node 2 hasOutputProjection).
load_shards_is_parallel_not_serial <- timing regression anchor
Each node sleeps 300ms in loadShard. Sequential: 3×300ms = 900ms.
Parallel: ~300ms + overhead. Test asserts elapsed < 600ms.
9.5 Concurrent request tests
Both ITs test concurrent requests via scheduler.submit() which returns
a CompletableFuture per request. Tests use CompletableFuture.allOf() to
wait for all N requests in parallel — no CountDownLatch, no polling.
Pattern:
List<CompletableFuture<GenerationResult>> futures = new ArrayList<>();
for (int i = 0; i < count; i++) {
futures.add(scheduler.submit(req_i, TokenConsumer.discard()));
}
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
.get(30, TimeUnit.SECONDS);
9.6 Run commands
# Full suite — forks 3 JVM node processes (stub mode)
mvn verify -pl integration
# In-process only
mvn verify -pl integration -Dit.test=InProcessClusterIT
# Real model live runner (ModelLiveRunner, not a JUnit IT)
java -jar integration/target/player.jar /path/to/TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf
# Unit tests for tokenizer + node only (fast after bug-fix work)
mvn test -pl tokenizer,node
# Unit tests for player module (LoadShardsParallelTest)
mvn test -pl player
# Skip ITs
mvn verify -DskipITs
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
10. ACTIVATION COMPRESSION — ITEMS 7+9
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PROBLEM:
In a pipeline-parallel cluster each node-hop ships a full activation tensor
over the network. At 70B scale (hidden_dim=8192, seq_len=2048, FLOAT32):
64 MB per hop × (nodes-1) hops = hundreds of MB per generation step.
On 10GbE (1.25 GB/s peak) that is ~51 ms per hop — a significant bottleneck.
SOLUTION (items 7 and 9 combined — same proto change):
A new ActivationDtype field on ForwardRequest / ForwardResponse selects the
wire encoding for the activation bytes. The dtype is negotiated per-request
(future: per-hop) so heterogeneous nodes can participate at their best precision.
FLOAT32 → 1× raw IEEE 754 float, 4 B/elem — lossless (default)
FLOAT16 → 2× IEEE 754 half-precision, 2 B/elem — ~0.1% relative error
INT8 → ~4× symmetric quantization, 1 B/elem — ~1% relative error
(4-byte float32 scale prefix + 1 signed byte per element)
NETWORK IMPACT (70B, hidden=8192, seq=2048):
FLOAT32 ~134 MB ~107 ms on 10GbE
FLOAT16 ~67 MB ~54 ms — saves ~53 ms per hop
INT8 ~34 MB ~27 ms — saves ~80 ms per hop
QUANTIZATION-AWARE SHARDING (item 7):
The ActivationDtype on the wire enables heterogeneous nodes:
Node A (high VRAM) → processes layers with full FLOAT32 activations
Node B (low VRAM) → receives INT8 activations, works at lower precision
Nodes that can't fit larger dtypes simply request INT8 in their shard config.
This is the quantization-aware sharding mechanism; the GGUF weight file
already encodes per-layer quantization, so the two concerns compose cleanly.
IMPLEMENTATION:
node/ActivationDtype.java
Enum: FLOAT32 | FLOAT16 | INT8
node/ActivationCodec.java (stateless, thread-safe)
encode(float[], ActivationDtype) → byte[]
decode(byte[], ActivationDtype) → float[]
FLOAT16 uses manual IEEE 754 bit manipulation (no JNI, pure Java 25):
floatToHalf(float) — handles normals, subnormals, ±∞, NaN, over/underflow
halfToFloat(short) — full round-trip
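Aside: since Java 20 the JDK itself ships Float.floatToFloat16 / Float.float16ToFloat intrinsics with the same edge-case semantics (normals, subnormals, ±∞, NaN, overflow-to-infinity), which makes a convenient reference oracle for checking the manual converter. A minimal sketch — Float16Sketch is a made-up name, not the actual ActivationCodec API:

```java
// Float16Sketch.java — illustrative cross-check, not the ActivationCodec source.
public final class Float16Sketch {

    // float -> IEEE 754 half. Delegates to the JDK intrinsic (Java 20+),
    // which already handles normals, subnormals, +/-Infinity, NaN,
    // and overflow-to-infinity.
    public static short floatToHalf(float f) {
        return Float.floatToFloat16(f);
    }

    public static float halfToFloat(short h) {
        return Float.float16ToFloat(h);
    }

    public static void main(String[] args) {
        // Typical activation magnitudes round-trip with small absolute error.
        float x = 0.1234f;
        System.out.println("roundtrip error = " + Math.abs(halfToFloat(floatToHalf(x)) - x));

        // Values beyond half-precision range (max ~65504) overflow to infinity.
        System.out.println(halfToFloat(floatToHalf(1e9f))); // prints Infinity
    }
}
```

Comparing a hand-rolled floatToHalf against the intrinsic over a sweep of bit patterns is a cheap way to validate all the special cases at once.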
INT8 uses symmetric quantisation:
scale = max(|activations|) / 127 (guard: scale=1 for all-zero arrays)
encode = round(f / scale), clamped to [-127, 127]
decode = byte × scale
Layout: [scale:float32 big-endian (4 B)][quantised bytes × N]
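The INT8 scheme above fits in a few lines; this is a hedged reconstruction from the description (class name Int8CodecSketch is invented — the real encode/decode live in ActivationCodec and dispatch on dtype):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of the symmetric INT8 layout described above:
// [scale: float32 big-endian (4 B)][one signed byte per element].
public final class Int8CodecSketch {

    public static byte[] encode(float[] values) {
        float maxAbs = 0f;
        for (float v : values) maxAbs = Math.max(maxAbs, Math.abs(v));
        float scale = maxAbs == 0f ? 1f : maxAbs / 127f; // guard: all-zero input
        ByteBuffer buf = ByteBuffer.allocate(4 + values.length)
                                   .order(ByteOrder.BIG_ENDIAN);
        buf.putFloat(scale);
        for (float v : values) {
            int q = Math.round(v / scale);                    // quantise
            buf.put((byte) Math.max(-127, Math.min(127, q))); // clamp to [-127, 127]
        }
        return buf.array();
    }

    public static float[] decode(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN);
        float scale = buf.getFloat();
        float[] out = new float[bytes.length - 4];
        for (int i = 0; i < out.length; i++) out[i] = buf.get() * scale;
        return out;
    }
}
```

Note the clamp to -127 (not -128): symmetric quantisation keeps the range mirror-image around zero so decode is a single multiply with no zero-point term.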
api/inference.proto changes:
+ enum ActivationDtype { FLOAT32=0; FLOAT16=1; INT8=2; }
+ ForwardRequest.dtype = field 9 (ActivationDtype)
+ ForwardResponse.dtype = field 5 (ActivationDtype)
integration/ProcessPipelineClient.java
New ctor: ProcessPipelineClient(nodes, vocabSize, ActivationDtype)
Before each ForwardRequest: activation = ActivationCodec.encode(floats, dtype)
After each ForwardResponse: floats = ActivationCodec.decode(bytes, dtype)
Final-node logits: always decoded as FLOAT32 (no loss on vocab distribution)
integration/EmbeddedNodeServer.java
Reads dtype from ForwardRequest proto; decodes input activation accordingly.
Encodes output activation with the same dtype (FLOAT32 for final node only).
integration/ClusterHarness.java
+ nodeAddresses() — returns List<NodeAddress> for building custom-dtype clients.
TESTS:
node/ActivationCodecTest.java (17 tests — unit)
float32_roundtrip_is_bitwise_lossless
float32_encoded_size_is_4_bytes_per_element
float16_roundtrip_has_bounded_error_for_typical_activations (max 0.002 abs)
float16_encoded_size_is_exactly_half_of_float32
float16_handles_zero_array
float16_handles_positive_and_negative_values
float16_overflow_becomes_infinity
float16_preserves_zero_and_negative_zero
float16_small_values_underflow_to_zero_gracefully
int8_roundtrip_has_bounded_error_for_typical_activations
int8_encoded_size_is_4_byte_scale_plus_1_byte_per_element
int8_gives_approximately_4x_size_reduction_vs_float32
int8_handles_all_zero_array_without_divide_by_zero
int8_preserves_sign_of_each_element
decode_null_returns_empty_array_for_all_dtypes
decode_empty_bytes_returns_empty_array_for_all_dtypes
single_element_roundtrip (parameterised across all 3 dtypes)
compression_ratios_match_expected_sizes
integration/ThreeNodeClusterIT.java (3 new tests — IT)
float16PipelineProducesSameWinnerToken (order 7)
int8PipelineProducesSameWinnerToken (order 8)
generationLoopWithFloat16Compression (order 9)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
10. FULL TOKEN GENERATION DATA FLOW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. Client sends prompt via REST POST /v1/inference/stream
2. Javalin routes to InferenceHandler
3. Coordinator receives InferenceRequest (OpenAPI model)
4. RequestScheduler.submit(request, consumer)
-> CompletableFuture registered in ConcurrentHashMap
-> Virtual thread spawned
5. GenerationLoop.generate() begins on virtual thread
6. ChatTemplateFormatter wraps messages in model-specific format
7. Tokenizer.encode() -> int[] tokens
8. PrefixCache.findLongestPrefix(tokens) -> check for shared prefix
9. pipeline.forwardFromPosition() or pipeline.forward()
10. Node 1: embedding lookup + forward layers 0-N -> activation (gRPC)
11. Node 2..N: forward their layer ranges -> pass activation via gRPC
12. Last Node: final layers + output projection -> float[vocab] logits
13. Logits returned to Coordinator via gRPC
14. Sampler: temperature -> topK -> topP -> softmax -> penalty -> sample
15. int nextToken -> Tokenizer.decodeToken() -> String piece
16. TokenConsumer.accept(piece, tokenId, step) -> SSE or gRPC stream
17. Token appended to generated sequence
18. Repeat from step 8 until EOS or maxTokens
19. future.complete(GenerationResult) — caller's join() returns
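Steps 7-18 condense into a loop skeleton like the following. Everything here is illustrative — the interfaces and the greedy argmax stand in for the real Tokenizer/Pipeline/Sampler wiring, which the steps above describe:

```java
import java.util.ArrayList;
import java.util.List;

// Toy skeleton of steps 7-18: forward, sample, stream token, append, repeat.
public final class GenerationLoopSketch {

    interface Pipeline { float[] forward(int[] tokens); }      // stands in for the gRPC hops
    interface Consumer { void accept(int tokenId, int step); } // stands in for SSE/gRPC stream

    // Greedy "sampler" for the sketch; the real chain is
    // temperature -> topK -> topP -> softmax -> penalty -> sample.
    static int argmax(float[] logits) {
        int best = 0;
        for (int i = 1; i < logits.length; i++) if (logits[i] > logits[best]) best = i;
        return best;
    }

    static List<Integer> generate(int[] promptIds, Pipeline pipeline, Consumer out,
                                  int eosId, int maxTokens) {
        List<Integer> generated = new ArrayList<>();
        int[] tokens = promptIds.clone();
        for (int step = 0; step < maxTokens; step++) {
            float[] logits = pipeline.forward(tokens); // steps 9-13
            int next = argmax(logits);                 // step 14
            if (next == eosId) break;                  // step 18 exit condition
            out.accept(next, step);                    // step 16
            generated.add(next);                       // step 17
            int[] grown = new int[tokens.length + 1];  // append, then repeat from step 8
            System.arraycopy(tokens, 0, grown, 0, tokens.length);
            grown[tokens.length] = next;
            tokens = grown;
        }
        return generated;
    }
}
```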
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
11. FULL CONFIGURATION REFERENCE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
cluster:
name: hyper-stack-4j-cluster
seed-nodes:
- 192.168.1.10:5701
- 192.168.1.11:5701
seed-node-count: 2
backup-count: 2
coordinator:
count: 2
grpc-port: 9090
http-port: 8080
metrics-port: 9091
max-queue-depth: 1000
max-batch-size: 8
preemption-enabled: true
preemption-strategy: ABORT
scheduler:
max-wait-ms: 50
priority-weights:
HIGH: 3
NORMAL: 1
LOW: 1
node:
grpc-port: 9092
device-id: 0
vram-headroom-fraction: 0.10
kv-cache:
gpu:
capacity-fraction: 0.85
eviction: LRU
cpu:
max-bytes: 8589934592 # 8GB — tune with your -Xmx
eviction: W-TinyLFU
# disk: REMOVED — RAM only
health:
probe-interval-ms: 5000
vram-warning-threshold: 0.90
vram-critical-threshold: 0.98
circuit-breaker:
failure-rate-threshold: 50
sliding-window-size: 10
wait-duration-seconds: 30
sampling:
defaults:
temperature: 0.7
top-k: 50
top-p: 0.9
repetition-penalty: 1.1
max-tokens: 512
profiles:
deterministic:
temperature: 0.1
greedy: true
creative:
temperature: 1.2
top-k: 100
top-p: 0.95
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
12. TECHNOLOGY SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Language Java 25
Build Maven (multi-module)
GPU compute JCuda / JCublas 12.x
Distributed state Hazelcast 5.x
Leader election Hazelcast CP FencedLock
Data plane gRPC + Protocol Buffers
Cluster messaging Hazelcast Topics + IMap listeners
RDMA networking jVerbs
Concurrency Java 25 Virtual Threads + CompletableFuture
REST API server Javalin 6.x (NO Spring Boot)
REST API spec OpenAPI 3.0 — jaxrs-spec generator
KV Cache L1 JCuda CudaBuffer (GPU VRAM)
KV Cache L2 Caffeine (JVM heap, W-TinyLFU)
KV Cache L3 REMOVED (no disk IO)
Circuit breaker Resilience4j
Metrics Micrometer + Prometheus (JDK HttpServer, no Spring)
Tokenizer DJL SpTokenizer (SentencePiece JNI)
Weight format GGUF
Weight parser DJL / llama.cpp JNI
Sampler Pure Java, zero external deps
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
13. BUILD STATUS (as of 2026-03-14, session 9)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
hyper-stack-4j SUCCESS (parent pom)
api SUCCESS (OpenAPI + gRPC + proto generated sources)
registry SUCCESS (11 classes: NodeDescriptor, ShardPlanner, ShardMap,
ShardAssignment, SeedScorer, ModelDescriptor,
ModelRegistry, ModelStatus, NodeStatus, NodeConfig,
QuantizationType, InsufficientClusterVramException)
tokenizer SUCCESS (8 classes: Tokenizer, StubTokenizer, DJLTokenizer,
SimpleTokenizer, GgufTokenizer, ChatMessage,
ChatTemplate, ChatTemplateFormatter)
sampler SUCCESS (9 classes: full pipeline — 8 steps, 3 preset profiles)
kvcache SUCCESS (8 classes: KVCacheManager [+invalidatePrefix], GpuKVCache,
CpuKVCache, PrefixCache, KVBlock, KVKey, KVCache, LayerRange)
health SUCCESS (6 classes: CircuitBreaker, CircuitState,
HealthEvaluator, HealthEvent, HealthThresholds,
NodeHealth)
node SUCCESS (13 classes: CpuForwardPassHandler [parallel matVec],
GgufReader, LlamaConfig, LocalInferencePipeline,
InferencePipeline, ForwardPassHandler,
CyclicForwardPassHandler, ActivationCodec,
ActivationDtype, ForwardRequest, ForwardResult,
ShardContext, NodeConfig)
coordinator SUCCESS (14 classes: GenerationLoop [session KV reuse + EOS filter],
RequestScheduler, InferenceApiServer,
SseTokenConsumer, BatchConfig, BatchEntry,
FaultTolerantPipeline, RetryPolicy,
PipelineUnavailableException, HealthReactor,
InferenceRequest [+sessionId, +ofSession, +kvCacheKey],
GenerationResult, TokenConsumer, RequestPriority)
player SUCCESS (6 main classes: ClusterHarness, EmbeddedNodeServer,
NodeMain, ProcessPipelineClient,
ChatHistory [+sessionId], ConsoleMain [+ofSession, +evictSession];
1 test class: LoadShardsParallelTest)
Shade jar: player/target/player.jar (main: ConsoleMain)
integration SUCCESS (ModelLiveRunner [standalone main, 6 checks, not JUnit];
InProcessClusterIT + ThreeNodeClusterIT;
depends on player module)
Shade jar: integration/target/player.jar (main: ModelLiveRunner)
Unit tests: 349 (across api/registry/coordinator/node/kvcache/tokenizer/
sampler/health/player — all @Test methods in *Test.java files)
+9 GenerationLoopSessionTest (session 9)
Integration: 15 (InProcessClusterIT:6 + ThreeNodeClusterIT:9)
Total @Test: 355
Failures: 0
Errors: 0
19.0 Session 7 — Player history fix (multi-turn REPL)
-----------------------------------------------
Problem: Second and later turns produced wrong or garbled output; conversation
history appeared not to work.
Root causes:
(1) GenerationLoop used prefix cache but evicted KV after each turn. Turn 2
got a prefix hit pointing at freed KV → missing context → garbage output.
(2) ConsoleMain passed modelId "model" → chatml template selected for TinyLlama,
which requires the tinyllama template.
Fixes:
- GenerationLoop.generate(): disabled prefix cache for single-request path.
Always startPos=0; full conversation re-prefilled every turn. (Correct but
O(N) — superseded by session 9.)
- ChatModelType.fromPath(path): derives template key from GGUF file name.
ConsoleMain uses it so TinyLlama-1.1B-Chat-v1.0.*.gguf gets tinyllama template.
- ChatModelTypeTest: 6 unit tests.
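The filename-to-template derivation can be sketched as below. The substring table is a guess at the matching rule — the real ChatModelType may recognise more families and key on different substrings:

```java
import java.nio.file.Path;
import java.util.Locale;

// Sketch: derive a chat-template key from a GGUF file name.
public final class ChatModelTypeSketch {

    public static String fromPath(String ggufPath) {
        String name = Path.of(ggufPath).getFileName().toString().toLowerCase(Locale.ROOT);
        // Check "tinyllama" before "llama": the shorter key is a substring
        // of the longer name, so ordering matters.
        if (name.contains("tinyllama")) return "tinyllama";
        if (name.contains("llama"))     return "llama";
        return "chatml"; // fallback when the model family is unknown
    }
}
```

With this, TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf maps to "tinyllama" instead of the chatml fallback that caused the session 7 bug.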
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
20. SESSION 9 CHANGES (2026-03-14)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
20.1 Multi-turn session KV cache reuse
-----------------------------------------------
Problem:
GenerationLoop.generate() always set startPos=0 and called kvCache.evict(requestId)
after every turn. Each REPL turn gets a fresh UUID requestId, so the pipeline's
internal KV (Map<String, float[][]> keyed by requestId) is always cold. Turn N
re-prefills the full conversation history → O(N) latency growth:
23 s → 30 s → 42 s → 64 s → 75 s observed on TinyLlama.
The session 7 fix was correct about the symptom (evicted KV + prefix hit = corrupt
output) but chose the wrong cure ("always prefill everything" instead of "use a
stable key that survives across turns").
Fix — 6 files:
InferenceRequest (coordinator)
Added nullable sessionId field. ofSession(sessionId,...) factory for multi-turn
requests. kvCacheKey() returns sessionId when present, requestId otherwise.
Existing of() factory and all existing call sites unchanged.
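The sessionId/kvCacheKey contract reads like this in miniature — a sketch with the field set trimmed to what the note discusses, not the real InferenceRequest:

```java
import java.util.UUID;

// Sketch of the kvCacheKey contract: stable per session when one exists,
// per-request otherwise.
public record InferenceRequestSketch(String requestId, String sessionId) {

    public static InferenceRequestSketch of() {                        // stateless request
        return new InferenceRequestSketch(UUID.randomUUID().toString(), null);
    }

    public static InferenceRequestSketch ofSession(String sessionId) { // multi-turn request
        return new InferenceRequestSketch(UUID.randomUUID().toString(), sessionId);
    }

    // The pipeline keys its internal KV map on this value, so two turns of
    // the same session hit the same (warm) entries.
    public String kvCacheKey() {
        return sessionId != null ? sessionId : requestId;
    }
}
```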
GenerationLoop.generate() (coordinator)
kvKey = request.kvCacheKey() — stable across turns
if (hasSession) startPos = findLongestPrefix(promptIds).matchedTokens()
prefill + decode pass kvKey to pipeline.forward() not requestId
after generation:
session: cachePrefix(promptIds, promptIds.length, kvKey)
do NOT evict — KV must survive for turn N+1
stateless: evict(kvKey) as before, no cachePrefix
NOTE: cachePrefix stores promptIds, not allTokens. allTokens contains
generated token IDs appended after the prompt. These IDs do not appear in
the next turn's formatted prompt, so the trie leaf would be unreachable and
findLongestPrefix would return a miss every time.
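The trie behaviour the NOTE relies on can be sketched as follows — a toy token-prefix trie, not the real PrefixCache (which also tracks KV block handles; this sketch only counts matched tokens):

```java
import java.util.HashMap;
import java.util.Map;

// Toy token-prefix trie: cachePrefix stores a token sequence under a cache
// key; findLongestPrefix reports how many leading tokens of a new prompt
// end at a cached leaf. A match only counts when it reaches a node where a
// cached sequence actually ended — which is why caching allTokens (whose
// tail never reappears in the next formatted prompt) would make every
// lookup a miss.
public final class PrefixCacheSketch {

    private static final class Node {
        final Map<Integer, Node> children = new HashMap<>();
        String cacheKey; // set on the node where a cached sequence ends
    }

    private final Node root = new Node();

    public void cachePrefix(int[] tokens, int length, String cacheKey) {
        Node n = root;
        for (int i = 0; i < length; i++) {
            n = n.children.computeIfAbsent(tokens[i], t -> new Node());
        }
        n.cacheKey = cacheKey;
    }

    // Number of leading tokens covered by the deepest cached leaf on the path.
    public int findLongestPrefix(int[] tokens) {
        Node n = root;
        int matched = 0, lastHit = 0;
        for (int t : tokens) {
            n = n.children.get(t);
            if (n == null) break;
            matched++;
            if (n.cacheKey != null) lastHit = matched;
        }
        return lastHit;
    }
}
```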
GenerationLoop.evictSession(sessionId) (coordinator)
New public method. Calls kvCache.evict(sessionId) and
kvCache.invalidatePrefix(sessionId). Call when conversation ends.
KVCacheManager (kvcache)
Added invalidatePrefix(cacheKey) delegating to prefixCache.invalidate().
ChatHistory (player)
Added UUID sessionId field + sessionId() accessor.
ConsoleMain.startRepl() (player)