Build a complete real-time application from scratch. Covers transports, rooms, AI streaming, MCP, clustering, channels, and production deployment.
</Card>
<Card title="Spring Boot & Quarkus" icon="setting">
First-class starters with auto-configuration, native image support, Micrometer metrics, and OpenTelemetry tracing. Tracks Spring Boot 4.0.5 and Quarkus 3.35.
**docs/src/content/docs/integrations/koog.md** (9 additions, 7 deletions)
| Capability | Supported | Notes |
|------------|-----------|-------|
|`SYSTEM_PROMPT`| ✅ | Honored by the Koog prompt builder |
|`CONVERSATION_MEMORY`| ✅ | Per-session memory threaded through `AgentExecutionContext`|
|`AGENT_ORCHESTRATION`| ✅ | Works with `@Coordinator` and `@Fleet`|
|`TOKEN_USAGE`| ✅ | The `StreamFrame.End` handler reads Koog's usage totals and emits a typed `TokenUsage` record via `session.usage()` after drain (`executeWithAgent` lines 223-232) |
|`AUDIO`| ✅ | Same path as `VISION` — `ContentPart.Audio` attached via `AttachmentContent.Binary.Base64`|
|`MULTI_MODAL`| ✅ | Combined image + audio + text inputs work on the no-tools path; tools + multi-modal degrade gracefully (the tool path wins with a WARN — `AIAgent.run(String)` only accepts plain text) |
|`PROMPT_CACHING`| ✅ | Koog 0.8.0 honors Bedrock-specific `CacheControl.Bedrock.{FiveMinutes, OneHour}` mapped from Atmosphere's portable `CacheHint`. Non-Bedrock providers silently drop the cache control — same shape Spring AI / LC4j take for OpenAI `prompt_cache_key` |
|`CANCELLATION`| ✅ |`executeWithHandle` returns an `ExecutionHandle` whose `cancel()` calls `Job.cancel()` + `Thread.interrupt()` + resolves the done-future with synthetic completion (terminal-path closure per Correctness Invariant #2) |
|`PER_REQUEST_RETRY`| ✅ | Honored via `executeWithOuterRetry`, which wraps `executeInternal` in a retry loop respecting `context.retryPolicy()`. Pre-stream transient failures retry on top of Koog's native HTTP retry |
Exclusions are **honest** — Koog declares them as absent in its `capabilities()` set so runtime-truth advertising is accurate (Correctness Invariant #5). When Koog upstream adds these surfaces in a future release, the bridge will honor them without a breaking change. No `KoogEmbeddingRuntime` ships today — if you need Koog-backed embeddings, wire a Spring AI or LangChain4j `EmbeddingModel` alongside the Koog agent runtime.
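That runtime-truth advertising contract can be sketched as a simple guard. This is an illustrative, self-contained model only: the `Capability` enum and `AgentRuntime` interface below are stand-ins, not Atmosphere's actual SPI types.

```java
import java.util.Set;

// Illustrative stand-ins for the capability-advertising contract described above.
// These are NOT Atmosphere's real SPI types.
public class CapabilityCheckSketch {

    public enum Capability { TOKEN_USAGE, VISION, EMBEDDINGS }

    public interface AgentRuntime {
        Set<Capability> capabilities();
    }

    public static void main(String[] args) {
        // A Koog-like runtime honestly omits EMBEDDINGS from its declared set.
        AgentRuntime koogLike = () -> Set.of(Capability.TOKEN_USAGE, Capability.VISION);

        if (!koogLike.capabilities().contains(Capability.EMBEDDINGS)) {
            // Callers branch on the declared set instead of trusting docs alone.
            System.out.println("wire a Spring AI or LangChain4j EmbeddingModel instead");
        }
    }
}
```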
## Samples
- [spring-boot-ai-chat](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-ai-chat) — swap to Koog by adding `atmosphere-koog` to `pom.xml`; the same `@AiEndpoint` code routes through `KoogAgentRuntime` (Atmosphere's SPI promise)
- [spring-boot-ai-classroom](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-ai-classroom) — multi-room shared AI; same one-Maven-dep swap to route through Koog
**docs/src/content/docs/integrations/quarkus.md** (4 additions, 3 deletions)
---
title: "Quarkus"
description: "Build-time processing for Quarkus 3.35.2+"
---

# Quarkus Integration

A Quarkus extension that integrates Atmosphere with Quarkus 3.35.2+. Provides build-time annotation scanning via Jandex, Arc CDI integration, and GraalVM native image support.
- **`load-on-startup`** must be > 0 for the endpoint to register at boot time (Quarkus's `UndertowDeploymentRecorder` skips on `<= 0`, unlike the Servlet spec).
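As a minimal sketch of that requirement in a `web.xml`-style registration (assuming `org.atmosphere.cpr.AtmosphereServlet` is the servlet being deployed; your registration mechanism may differ):

```xml
<servlet>
  <servlet-name>AtmosphereServlet</servlet-name>
  <servlet-class>org.atmosphere.cpr.AtmosphereServlet</servlet-class>
  <!-- Must be > 0: Quarkus's UndertowDeploymentRecorder skips registration on <= 0 -->
  <load-on-startup>1</load-on-startup>
</servlet>
```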
**Runtime coverage:** all eight framework adapters declare `PER_REQUEST_RETRY` honestly as of 4.0.43 (commit `374631e7`) — they all inherit `AbstractAgentRuntime.executeWithOuterRetry`. Each adapter stacks this on top of its own native retry layer (Spring Retry, LC4j `RetryUtils`, ADK `HttpClient`, Koog `CallRetryPolicy`, SK `OpenAIAsyncClient`). See the [per-runtime capability matrix](../../tutorial/11-ai-adapters/#per-runtime-capability-matrix).
## Samples
- [Quarkus AI Chat](https://github.com/Atmosphere/atmosphere/tree/main/samples/quarkus-ai-chat) -- five `@AiEndpoint` demos (basic streaming, retry, multi-modal, prompt caching, structured output) on Quarkus + Quarkus LangChain4j bridge, port `18810`
**docs/src/content/docs/integrations/semantic-kernel.md** (2 additions, 2 deletions)
`AgentRuntime` implementation backed by [Microsoft Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/) for Java. Semantic Kernel is Microsoft's enterprise-grade AI orchestration SDK with plugins, memory, and planners. Adding `atmosphere-semantic-kernel` as a dependency makes `@AiEndpoint` route streaming chat through an SK `ChatCompletionService` and makes `EmbeddingRuntime` use SK's `TextEmbeddingGenerationService`.
Semantic Kernel landed as the 7th `AgentRuntime` in Atmosphere 4.0.36; the 8th and 9th (AgentScope, Spring AI Alibaba) followed in 4.0.42.
## Maven Coordinates
See the `modules/semantic-kernel/README.md` exclusion note for the full trade-off.
## Samples
Semantic Kernel is wired into [spring-boot-ai-classroom](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-ai-classroom) as one of the nine swappable runtimes. Drop the `atmosphere-semantic-kernel` JAR alongside `atmosphere-ai` and the same `@AiEndpoint` code routes through SK.
**Runtime coverage:** all eight framework adapters declare `PER_REQUEST_RETRY` honestly as of 4.0.43 (commit `374631e7`) — `AbstractAgentRuntime.executeWithOuterRetry` wraps each adapter's dispatch in a retry loop respecting `context.retryPolicy(...)`, on top of each runtime's own native retry layer (Spring Retry, LC4j `RetryUtils`, ADK `HttpClient`, Koog `CallRetryPolicy`, SK `OpenAIAsyncClient`). The Built-in runtime additionally threads `context.retryPolicy()` into `OpenAiCompatibleClient.sendWithRetry` as a native override. See the [per-runtime capability matrix](../../tutorial/11-ai-adapters/#per-runtime-capability-matrix) for the full breakdown.
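The outer-retry shape can be sketched as follows. This is a minimal, self-contained illustration only: the `RetryPolicy` record and method names below are stand-ins, not the actual `AbstractAgentRuntime` API.

```java
import java.util.concurrent.Callable;

// Illustrative sketch of an "outer retry" wrapper in the spirit of
// AbstractAgentRuntime.executeWithOuterRetry. All names here are assumptions
// for illustration, not Atmosphere's real classes.
public class OuterRetrySketch {

    // Minimal stand-in for a per-request retry policy.
    public record RetryPolicy(int maxAttempts, long backoffMillis) {}

    public static <T> T executeWithOuterRetry(RetryPolicy policy, Callable<T> dispatch)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= policy.maxAttempts(); attempt++) {
            try {
                // The adapter's native dispatch (which may retry internally on its own).
                return dispatch.call();
            } catch (Exception e) {
                last = e;
                if (attempt < policy.maxAttempts()) {
                    Thread.sleep(policy.backoffMillis() * attempt); // linear backoff
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice with a transient error, succeeds on the third outer attempt.
        String result = executeWithOuterRetry(new RetryPolicy(3, 1), () -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The outer loop sits on top of whatever native retry the wrapped client performs, which matches the stacking behavior described above.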
**docs/src/content/docs/reference/ai.md** (22 additions, 12 deletions)
To switch runtimes, change a single Maven dependency — no code changes needed.
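For example, the swap is a single `<dependency>` edit in `pom.xml`. This fragment is illustrative: the `groupId` and version property are placeholders, and the artifact swapped in could equally be `atmosphere-semantic-kernel`, `atmosphere-langchain4j`, etc.

```xml
<!-- Swapping the AgentRuntime: replace this one artifact, no code changes.
     groupId and version here are illustrative placeholders. -->
<dependency>
  <groupId>org.atmosphere</groupId>
  <artifactId>atmosphere-koog</artifactId>
  <version>${atmosphere.version}</version>
</dependency>
```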
> **Spring AI Alibaba runtime — Spring Boot 3 only today.** Spring AI Alibaba `1.1.2.2` is compiled against Spring AI `1.1.2`, and `spring-ai-alibaba-graph-core-1.1.2.x` hardcodes Spring AI 1.1.2-only types like `DeepSeekAssistantMessage`, so the runtime requires Spring AI 1.1.2. Spring AI 1.1.2 itself requires Spring Boot 3 — it pins the SB3-era FQN of `RestClientAutoConfiguration`, which Spring Boot 4 ships at a renamed FQN. Drop `atmosphere-spring-ai-alibaba` into a Spring Boot 3 sample (e.g. `samples/spring-boot-ai-chat -Pspring-boot3`) and it round-trips end-to-end (verified via chrome-devtools against Ollama). A Spring Boot 4 path will become possible once Alibaba publishes a Spring AI 2.x-aligned `spring-ai-alibaba-agent-framework`. `atmosphere-agentscope` is unaffected and works on Spring Boot 4.
### Per-Request Runtime Extensions
For calls that need framework-native composition (Spring AI advisor chain, LangChain4j `AiServices`, Koog graph DSL, ADK multi-agent topology), a small per-request helper attaches the framework-native object to `AgentExecutionContext.metadata()` and the runtime applies it for that one call — no `AgentRuntime` SPI growth, no mutation of shared beans. Every helper follows the `CacheHint` pattern: `from(context)` and `attach(context, ...)` static methods, with strict type checking that throws `IllegalArgumentException` on a wrong-type slot (silent drops would mask the override never firing).
| Helper | Runtime | Slot it drives |
|--------|---------|----------------|
|`SpringAiAdvisors`| Spring AI |`ChatClient.prompt().advisors(...)` — RAG, memory, guardrails, observability (additive — multiple advisors compose into a chain) |
|`LangChain4jAiServices`| LangChain4j | Routes through caller's `AiServices`-backed interface (`TokenStream` callbacks bridged to session) — gives access to `maxSequentialToolsInvocations`, custom system message providers, etc. |
|`KoogStrategy`| Koog | Swaps default `chatAgentStrategy()` with a custom `AIAgentGraphStrategy<String, String>` from the `strategy {}` DSL |
|`AdkRootAgent`| ADK | Replaces the runtime's default `LlmAgent` with `SequentialAgent` / `ParallelAgent` / `LoopAgent` / any `BaseAgent` subclass |
|`EmbabelPromptRunner`| Embabel |`UnaryOperator<PromptRunner>` customizer applied AFTER the runtime's default wiring — stack `withTemperature` / `withModel` / `withGuardrails` on top. Atmosphere-native dispatch path only |
|`AgentScopeAgent`| AgentScope | Per-request `ReActAgent` — useful when different prompts route through different agent topologies (planner vs. quick lookup) without re-installing the runtime client |
|`SpringAiAlibabaRunnableConfig`| Spring AI Alibaba | Per-request `RunnableConfig` — Alibaba's natural per-invocation handle for `threadId` (memory thread continuation), `checkPointId` (resume), `streamMode`, metadata, store |
|`ToolLoopPolicies`| Built-in, Koog | Per-request `ToolLoopPolicy(maxIterations, OnMaxIterations)` — Built-in honors via OpenAI-compatible tool loop, Koog via `AIAgent.maxIterations`|
Example — Spring AI advisor scoped to one request:
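As a self-contained sketch of the attach-then-read shape all these helpers share: the metadata map, slot key, and `List<String>` payload below are stand-ins for illustration only (the real `SpringAiAdvisors` operates on Spring AI `Advisor` instances and Atmosphere's actual context type). It also demonstrates the strict type check described above.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Self-contained model of the per-request helper contract. The metadata map
// and slot key are stand-ins; Atmosphere's real context/advisor types differ.
public class SpringAiAdvisorsSketch {

    public static final String SLOT = "springai.advisors";

    // attach(context, ...): stash the framework-native object in request metadata.
    public static void attach(Map<String, Object> metadata, List<String> advisors) {
        metadata.put(SLOT, advisors);
    }

    // from(context): strict type check — a wrong-type slot throws instead of
    // silently dropping (a silent drop would mask the override never firing).
    @SuppressWarnings("unchecked")
    public static Optional<List<String>> from(Map<String, Object> metadata) {
        Object value = metadata.get(SLOT);
        if (value == null) return Optional.empty();
        if (!(value instanceof List)) {
            throw new IllegalArgumentException(
                "Slot " + SLOT + " holds " + value.getClass() + ", expected List");
        }
        return Optional.of((List<String>) value);
    }

    public static void main(String[] args) {
        Map<String, Object> metadata = new HashMap<>();
        attach(metadata, List.of("ragAdvisor", "memoryAdvisor"));
        // The runtime reads the slot once, for this request only.
        System.out.println(from(metadata).orElseThrow());
    }
}
```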
See the per-module READMEs for full DSL examples
and the [`ToolLoopPolicy`](https://github.com/Atmosphere/atmosphere/blob/main/modules/ai/README.md#tool-loop-policy) section.

All eight framework runtimes now ship a per-request sidecar (the four added
most recently — `SemanticKernelInvocation`, `EmbabelPromptRunner`,
`AgentScopeAgent`, `SpringAiAlibabaRunnableConfig` — close the matrix that
previously left these adapters without a per-request escape hatch). `Embabel`
also has **native streaming**: when
`StreamingPromptRunnerBuilder.streaming().generateStream()` is available the
runtime emits `Flux<String>` chunks directly to the session, with graceful
fallback to `runner.generateText(...)` when the streaming API is absent.
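That stream-when-available, block-otherwise shape can be sketched as follows. The `StreamingRunner`/`BlockingRunner` interfaces here are illustrative stand-ins, not Embabel or Atmosphere types.

```java
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch of graceful streaming fallback. The runner interfaces
// are stand-ins for this sketch, not real Embabel/Atmosphere types.
public class StreamingFallbackSketch {

    public interface BlockingRunner {
        String generateText(String prompt);
    }

    public interface StreamingRunner extends BlockingRunner {
        // Emits chunks as they arrive.
        void generateStream(String prompt, Consumer<String> onChunk);
    }

    public static void execute(BlockingRunner runner, String prompt, Consumer<String> session) {
        if (runner instanceof StreamingRunner streaming) {
            // Native streaming path: chunks go straight to the session.
            streaming.generateStream(prompt, session);
        } else {
            // Graceful fallback: one blocking call, delivered as a single chunk.
            session.accept(runner.generateText(prompt));
        }
    }

    public static void main(String[] args) {
        StreamingRunner streaming = new StreamingRunner() {
            public String generateText(String p) { return "hi there"; }
            public void generateStream(String p, Consumer<String> c) {
                List.of("hi ", "there").forEach(c);
            }
        };
        StringBuilder out = new StringBuilder();
        execute(streaming, "prompt", out::append);
        System.out.println(out); // prints "hi there"
    }
}
```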
### Model-Lifecycle Observability
- [Spring Boot AI Chat](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-ai-chat) -- built-in client with Gemini/OpenAI/Ollama
- [Spring Boot AI Tools](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-ai-tools) -- framework-agnostic `@AiTool` pipeline
- [Spring Boot AI Classroom](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-ai-classroom) -- rooms-based multi-room AI with an Expo client
- [Spring Boot RAG Chat](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-rag-chat) -- Spring AI VectorStore-backed RAG agent
- [Quarkus AI Chat](https://github.com/Atmosphere/atmosphere/tree/main/samples/quarkus-ai-chat) -- five `@AiEndpoint` demos on Quarkus + LangChain4j bridge
- [Spring Boot Dentist Agent](https://github.com/Atmosphere/atmosphere/tree/main/samples/spring-boot-dentist-agent) -- `@Agent` with tools, memory, and approval gates