For teams who wire the core into their own game or extend this repository. Normative contracts and the roadmap live in DGF_SPEC.md; this document is a practical map of the codebase and common tasks.
From zero in ~10 minutes: QUICK_START.md → RogueliteArena scene, LLM, F9. Index of all Docs: DOCS_INDEX.md.
| Step | Document / location | Why |
|---|---|---|
| 0 | QUICK_START.md, ../../_exampleGame/Docs/UNITY_SETUP.md | Quick start and step-by-step Example Game setup in Unity |
| 1 | DGF_SPEC.md §1–5, §8–9 (§9.4 — main Unity flow after LLM) | Core goals, LLM/stub, Lua, threading |
| 2 | AI_AGENT_ROLES.md | Agent roles, placement, model selection |
| 3 | LLMUNITY_SETUP_AND_MODELS.md | LLMUnity, LM Studio / OpenAI HTTP, Play Mode tests, Lua pipeline |
| 4 | ../README.md (host CoreAiUnity) | Builds, folders, DI, prompts, MessagePipe |
| 5 | GameTemplateGuides/INDEX.md | Short recipes for your title |
| 6 | ../../_exampleGame/README.md | Example game and entry points |
Principle: CoreAI.Core is portable C# with no engine-specific implementation; CoreAI.Source is the Unity layer (DI, scene, LLM adapters). Normatively fixed in DGF_SPEC §3.0.
| Assembly | Folder | Constraint |
|---|---|---|
| CoreAI.Core | Assets/CoreAI/Runtime/Core/ | No Unity (noEngineReferences). AI contracts, orchestrator, QueuedAiOrchestrator queue, session snapshot, MoonSharp sandbox, Lua parsing, envelope processor. Since v1.5.0 also owns the portable LLM pipeline: LoggingLlmClientDecorator, ToolExecutionPolicy (uses ILlmAsyncMarshaler via ICoreAISettings.ToolInvocationMarshaler), SmartToolCallingChatClient, ClientLimitedLlmClientDecorator, portable abstractions IToolCallEventPublisher, IToolExecutionNotifier, ILlmPreflightAnnotator, and ILog. |
| CoreAI.Source | Assets/CoreAiUnity/Runtime/Source/ | Unity: VContainer, MessagePipe, LLM routing (RoutingLlmClient, LlmRoutingManifest), LLMUnity/OpenAI HTTP, logging, command router, Lua bindings (report / add). Unity-side adapters: MessagePipeToolCallEventPublisher, CoreAiToolExecutionNotifier. Package com.nexoider.coreaiunity. |
| CoreAI.Tests | Assets/CoreAiUnity/Tests/EditMode/ | Edit Mode NUnit (includes UnityMainThreadLlmAsyncMarshalerEditModeTests, v1.5.14 — Edit Mode deadlock regression). |
| CoreAI.Tests.PlayMode.FastNoLlm | Assets/CoreAiUnity/Tests/PlayMode/FastNoLlm/ | Fast Play Mode: stubs, orchestrator smoke, UITK/chat panel, Lua (no loaded model / no HTTP LLM dependency where avoidable). |
| CoreAI.Tests.PlayMode.LlmVerification | Assets/CoreAiUnity/Tests/PlayMode/LlmVerification/ | Live-model probes (Ignored without backend/env). |
| CoreAI.Tests.PlayMode.Scenarios | Assets/CoreAiUnity/Tests/PlayMode/Scenarios/ | Long multi-step game scenarios (crafting workflows, merchants). Support DLLs: CoreAI.Tests.PlayMode.Shared + CoreAI.Tests.PlayMode.LlmInfra. |
| CoreAI.ExampleGame | Assets/_exampleGame/ | Demo arena; depends on Source. |
Verification: compile with dotnet build on generated *.csproj (Unity/Rider) or build from the IDE; NUnit Edit Mode / Play Mode — in Unity Test Runner (Window → General → Test Runner). The source of truth for scenarios involving UnityEngine and test assets is Test Runner, not bare dotnet test without Unity.
Rule: title gameplay logic should not “leak” into Core unless necessary. New game APIs for Lua go through IGameLuaRuntimeBindings in Source (or in the game assembly), not by editing the sandbox outside the whitelist.
The template is meant to work sensibly by default, while still allowing targeted tuning without rewriting the core.
- DI + MessagePipe + log: `CoreAILifetimeScope` registers `IGameLogger`, the `ApplyAiGameCommand` broker, `IAiGameCommandSink`.
- Orchestration: the default `IAiOrchestrationService` is `QueuedAiOrchestrator` around `AiOrchestrator`.
- Lua pipeline: `AiGameCommandRouter` marshals handling to the main thread and runs `LuaAiEnvelopeProcessor`.
- Lua limits: `LuaExecutionGuard` caps wall-clock time and “steps” (best-effort).
- Prompts: system/user chain from manifest → Resources → built-in fallback.
- Programmer versions (Lua + data overlays): in the Unity layer they are persisted to disk by default (File* store).
- World Commands: the Lua API `coreai_world_*` publishes world commands to the bus; execution runs on the main thread (see WORLD_COMMANDS.md).
- WebGL / IL2CPP: `CoreServicesInstaller` registers `IAiGameCommandSink` with an explicit factory (`MessagePipeAiCommandSink`), not `Register<MessagePipeAiCommandSink>().As<…>()`, so VContainer does not depend on constructor metadata analysis (avoids `Type does not found injectable constructor` in player builds). The Unity package includes `link.xml` preserving `MessagePipeAiCommandSink`. EditMode coverage: `CoreServicesInstallerEditModeTests`.
- LLM backend: `OpenAiHttpLlmSettings` (OpenAI-compatible HTTP) and `LlmRoutingManifest` (per-role routing).
- Prompts: `AgentPromptsManifest` (system/user overrides and custom roles).
- Logs: `GameLogSettingsAsset` (feature and level filter).
- World Commands: World Prefab Registry (spawn prefab whitelist).
Recommendation for a title: keep settings in one or two ScriptableObject assets and version them in git (no secrets).
- In the CoreAI core use `IGameLogger` and `GameLogFeature` — built-in subsystem “tags” and level filtering via `GameLogSettingsAsset` (structured categories without a separate NuGet). Unity console output goes through `FilteringGameLogger` → `UnityGameLogSink`; avoid scattering `Debug.Log` across business code.
- Serilog / NLog / Microsoft.Extensions.Logging in Unity are wired separately if you need files, Seq, Elasticsearch, etc. They are not required for core code: implement your own `IGameLogger` or replace the sink to mirror into Serilog without mixing two logging styles in one layer.
- Filtering in the Unity console: by message prefix (category from `GameLogFeature`), by `TraceId` in the orchestrator/command chain (see the host README), plus the minimum level in the log asset.
- Editor (menus, setup without DI): `CoreAIEditorLog` — the single entry point for editor messages.
Simplified runtime diagram:
```mermaid
flowchart LR
    Game["Game: IAiOrchestrationService.RunTaskAsync"]
    Orch["AiOrchestrator"]
    LLM["ILlmClient"]
    Sink["IAiGameCommandSink → MessagePipe"]
    Router["AiGameCommandRouter"]
    LuaP["LuaAiEnvelopeProcessor"]
    Lua["SecureLuaEnvironment + MoonSharp"]
    Game --> Orch
    Orch --> LLM
    Orch --> Sink
    Sink --> Router
    Router --> LuaP
    LuaP --> Lua
    LuaP -->|"error + Programmer"| Orch
```
- The game calls `IAiOrchestrationService.RunTaskAsync(AiTaskRequest)` (role, hint, `Priority`, `CancellationScope`, optional Lua repair fields, `TraceId`).
- The default implementation is `QueuedAiOrchestrator` (concurrency limit, priority, cancellation of the previous task with the same `CancellationScope`) around `AiOrchestrator`. `AiOrchestrator` assigns `TraceId`, assembles prompts, asks `IConversationContextManager` to prepare long chat history, then calls `ILlmClient.CompleteAsync`; with `IRoleStructuredResponsePolicy` for a role, one retry is allowed with a `structured_retry:` hint in user/hint. Then `ApplyAiGameCommand` is published (`AiEnvelope`, `TraceId`, …). Metrics — `IAiOrchestrationMetrics` (log under `GameLogFeature.Metrics`).
- In DI, `ILlmClient` is `LoggingLlmClientDecorator` around `RoutingLlmClient` (or a legacy single client): inside — `OpenAiChatLlmClient` / `MeaiLlmUnityClient` / `StubLlmClient` per `LlmRoutingManifest` and role. Log: `GameLogFeature.Llm` (`LLM ▶` / `LLM ◀` / `LLM ⏱`), backend line `RoutingLlmClient→OpenAiHttp`, etc. For “is this a stub?” — `LoggingLlmClientDecorator.Unwrap(client)`.
- The subscriber `AiGameCommandRouter` receives `ApplyAiGameCommand` from MessagePipe and marshals handling to the Unity main thread (`UniTask.SwitchToMainThread`), then calls `LuaAiEnvelopeProcessor.Process`: Lua is extracted from the text and executed in the sandbox with the API from `IGameLuaRuntimeBindings`; `[MessagePipe]` logs carry the same task `traceId`.
- On success / failure, `LuaExecutionSucceeded` / `LuaExecutionFailed` are published (`TraceId` preserved). For the Programmer role, on error the orchestrator is invoked again with repair context and the same `TraceId` (up to 3 attempts by default, configurable via `CoreAISettings.MaxLuaRepairRetries`).
Important: gameplay systems may subscribe to ApplyAiGameCommand and react to command types; do not parse raw LLM text outside the shared pipeline if you want consistency. For logs and timeout details, see LLMUNITY_SETUP_AND_MODELS.md §1 (CoreAI block) and timeout.
Unity main thread (short): after QueuedAiOrchestrator, async continuations often do not run on the main thread; Publish from the orchestrator may arrive from the thread pool. Any code using UnityEngine, FindObjectsByType, the scene, or UI must run only on the main thread or after explicit marshaling. The template marshals in AiGameCommandRouter; your own MessagePipe subscribers should follow the same rule. Normative text and checklist: DGF_SPEC.md §9.4.
QueuedAiOrchestrator is the default IAiOrchestrationService wrapper. It provides:
- Concurrency cap: `AiOrchestrationQueueOptions.MaxConcurrent` limits total in-flight work across non-streaming and streaming tasks.
- Priority: higher `AiTaskRequest.Priority` runs first. Equal priority is FIFO.
- Shared sync/stream priority: `RunTaskAsync` and `RunStreamingAsync` use one effective priority order; a high-priority stream is not blocked behind a lower-priority non-stream task.
- Latest-wins scopes: when a new task has the same non-empty `CancellationScope`, older active and pending work for that scope is cancelled immediately.
- Explicit stop: `CancelTasks(scope)` cancels active work and removes pending non-streaming / streaming work for that scope.
- External cancellation: a caller `CancellationToken` cancels pending work before it starts, so callers do not wait for a free LLM slot just to observe cancellation.
Beginner rule: set CancellationScope = roleId for UI/chat-style “only latest request matters” flows.
Advanced rule: use stable domain scopes (arena_wave_plan, npc:merchant:dialogue) and priority bands
for predictable gameplay scheduling.
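Putting the queue rules together, a latest-wins NPC dialogue request might look like this — a minimal sketch that assumes `AiTaskRequest` exposes settable `RoleId`, `Hint`, `Priority`, and `CancellationScope` members (check the actual type for exact names and construction style):

```csharp
// Sketch only: member names on AiTaskRequest are assumptions based on this guide.
var request = new AiTaskRequest
{
    RoleId = "npc:merchant:dialogue",           // stable domain scope, per the advanced rule
    Hint = "Player asked about rare swords.",
    Priority = 10,                              // higher runs first; equal priority is FIFO
    CancellationScope = "npc:merchant:dialogue" // latest-wins: a new task cancels older work in this scope
};
var result = await orchestrationService.RunTaskAsync(request);
```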
Chat history is not sent blindly forever. When AgentMemoryPolicy.RoleMemoryConfig.WithChatHistory is enabled, AiOrchestrator loads recent stored chat and passes it to IConversationContextManager.
The default DeterministicConversationContextManager uses the role ContextTokens budget (and the portable token budget when enabled). Fresh turns remain in LlmCompletionRequest.ChatHistory; older turns are compacted into ## Conversation Summary in the system prompt. Summaries are stored in IConversationSummaryStore: RegisterCorePortable wires InMemoryConversationSummaryStore by default (accumulation for the process); Unity’s CoreAILifetimeScope overrides with FileConversationSummaryStore for disk persistence. This compaction is deterministic and does not spend another LLM request.
Production projects can replace IConversationContextManager with an implementation that calls a backend summarizer, stores summaries per user/session/topic, or applies stricter privacy rules. Keep the output short and factual because it becomes part of every later request.
Tool calls are awaited by ToolExecutionPolicy.ExecuteSingleAsync (portable, CoreAI.Core), including async AIFunction implementations. The policy publishes tool lifecycle events through the IToolCallEventPublisher abstraction:
| Event | When |
|---|---|
| `LlmToolCallStarted` | Immediately before `AIFunction.InvokeAsync` |
| `LlmToolCallCompleted` | After successful invocation |
| `LlmToolCallFailed` | After failed invocation, exception, or missing tool |
In Unity, MessagePipeToolCallEventPublisher bridges these calls to GlobalMessagePipe. Non-Unity hosts can supply their own implementation or use NullToolCallEventPublisher.
Additionally, IToolExecutionNotifier.NotifyToolExecuted fires after each successful tool execution — in Unity this delegates to CoreAi.NotifyToolExecuted via CoreAiToolExecutionNotifier.
Both streaming and non-streaming paths in MeaiLlmClient create ToolExecutionPolicy with the same adapters, ensuring identical event sequences regardless of execution path.
Each event exposes Info: LlmToolCallInfo with TraceId, RoleId, provider CallId, ToolName, and sanitized ArgumentsJson. Use Info.CallId when correlating start/completed/failed logs, especially when providers issue several tool calls in one response.
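For example, a diagnostics component could correlate start/failure pairs by `Info.CallId`. This is an illustrative sketch using MessagePipe's global provider; the `Info` payload member names follow this section and may differ in detail:

```csharp
// Illustrative sketch: assumes GlobalMessagePipe has a provider (see the
// PlayMode bootstrap note later in this document) and that the events
// expose an Info payload as described above.
var bag = DisposableBag.CreateBuilder();

GlobalMessagePipe.GetSubscriber<LlmToolCallStarted>()
    .Subscribe(e => Debug.Log($"[ToolCall ▶] {e.Info.ToolName} call={e.Info.CallId} trace={e.Info.TraceId}"))
    .AddTo(bag);

GlobalMessagePipe.GetSubscriber<LlmToolCallFailed>()
    .Subscribe(e => Debug.LogWarning($"[ToolCall ✖] {e.Info.ToolName} call={e.Info.CallId}"))
    .AddTo(bag);

IDisposable subscription = bag.Build(); // dispose on teardown
```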
Since v1.5.0, CoreAI uses two logging interfaces:
| Interface | Package | Used by | Static access |
|---|---|---|---|
| `ILog` | CoreAI.Core (portable) | `ToolExecutionPolicy`, `SmartToolCallingChatClient`, `LoggingLlmClientDecorator` | `Log.Instance` |
| `IGameLogger` | CoreAI.Source (Unity) | `MeaiLlmClient`, `RoutingLlmClient`, Unity-side infrastructure | DI-injected |
In production, CoreServicesInstaller registers UnityLog : ILog and sets Log.Instance to that adapter. Both interfaces write to the same Unity console.
Key rule for tool-call diagnostics: the [ToolCall] per-call diagnostic line is written by ToolExecutionPolicy via ILog (Log.Instance), not IGameLogger. If a PlayMode test uses a SpyLogger : IGameLogger to capture log lines, it must also implement ILog and set Log.Instance = spy before invoking the pipeline, otherwise [ToolCall] lines are silently dropped to NullLog.
```csharp
// PlayMode test pattern: the spy implements both interfaces so the
// [ToolCall] lines written via Log.Instance are captured too.
private sealed class SpyLogger : IGameLogger, ILog { ... }

[TearDown] public void TearDown() => Log.Instance = NullLog.Instance;

// In the test body:
var spy = new SpyLogger();
Log.Instance = spy;
```

The system prompt sent to the model is not the literal string you pass to `AgentBuilder.WithSystemPrompt`. CoreAI composes the final prompt from three independent layers, in this order, in `AiPromptComposer`:
| Layer | Source | Configured by | Purpose |
|---|---|---|---|
| 1 — Universal Prefix | `ICoreAISettings.UniversalSystemPromptPrefix` (default: 4 baseline rules) | `CoreAISettingsAsset` Inspector → General → Universal Prompt Prefix | Project-wide guard rails that apply to every role (style, safety, output format). |
| 2 — Role base prompt | `AgentPromptsManifest` ScriptableObject OR `Resources/Prompts/{RoleId}.txt` OR built-in fallback string for `BuiltInAgentRoleIds` | `AgentPromptsManifest` asset | Stable per-role instructions (Creator, Programmer, PlainChat, SmartChat, Merchant, etc.). |
| 3 — Builder additional prompt | `AgentBuilder.WithSystemPrompt(...)` text | Code | Per-instance refinement (this specific NPC, this scene-bound storyteller). |
Composition order: Layer 1 + "\n\n" + Layer 2 + "\n\n" + Layer 3. Each layer is optional. If Layer 2 is missing for a custom role, only Layers 1 + 3 are used.
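The layering rule itself is simple enough to sketch — this is an illustrative reconstruction of the join described above, not the actual `AiPromptComposer` source:

```csharp
using System.Linq;

// Illustrative reconstruction of the three-layer join.
static string ComposeSystemPrompt(string? universalPrefix, string? rolePrompt, string? builderPrompt)
{
    var layers = new[] { universalPrefix, rolePrompt, builderPrompt }
        .Where(layer => !string.IsNullOrWhiteSpace(layer));
    return string.Join("\n\n", layers); // missing layers simply drop out
}
```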
Skipping the universal prefix. Roles that need a fully custom prompt (strict JSON parsers, validators) opt out per role:

```csharp
new AgentBuilder("JsonParser")
    .WithSystemPrompt("You are a strict JSON parser. Output JSON only.")
    .WithOverrideUniversalPrefix() // skips Layer 1 for this role
    .Build();
```

How to inspect the actual final prompt. Two options:
- Toggle `Log LLM Input` on `CoreAISettingsAsset` (Inspector → Debug → Log LLM Input). The composed prompt is dumped to the console for every request.
- Read `AgentTurnTrace.SystemPrompt` from the orchestrator (Unity tools and EditMode tests already do this — see `AiOrchestratorHistoryEditModeTests`).
Common confusion. A frequent first-time issue is "I wrote You are a pirate, but the agent keeps mentioning rules I never wrote." That's Layer 1 leaking through. Either edit UniversalSystemPromptPrefix on the asset, or call WithOverrideUniversalPrefix() for that single role. Editing the prefix changes behavior for every agent that does not opt out — make that edit deliberately.
Why three layers and not one big prompt? It keeps the universal rules in one place (a single asset edit propagates to every NPC), keeps role catalogues reusable across projects (Layer 2), and lets per-instance customization stay in code with the rest of the agent definition (Layer 3) — without copy-pasting the universal rules into every WithSystemPrompt call.
LlmExecutionMode is the public mode surface. One project can use a single global mode from CoreAISettingsAsset, or several modes at once through LlmRoutingManifest profiles.
| Mode | Runtime client path | When to use |
|---|---|---|
| LocalModel | `MeaiLlmUnityClient` via `LLMAgent` | Local/offline prototyping and shipped local models |
| ClientOwnedApi | `OpenAiChatLlmClient` | User/developer owns the provider key |
| ClientLimited | `ClientLimitedLlmClientDecorator` → `OpenAiChatLlmClient` | Local caps for demos or prototypes |
| ServerManagedApi | `ServerManagedLlmClient` pointed at a backend proxy | Production WebGL/multiplayer/school/SaaS deployment |
| Offline | `OfflineLlmClient` or `StubLlmClient` | Tests and builds without live model access. Conversational roles (chat, Teacher-style ids, NPC dialog) receive a single Offline Custom Response line from settings — never the full serialized UserPayload. `SourceTag == "Chat"` failures return a trimmed error string to the orchestrator caller instead of null. See COREAI_SETTINGS.md (Offline). |
RoutingLlmClient resolves a role through LlmClientRegistry, annotates LlmCompletionRequest.RoutingProfileId, and publishes LlmBackendSelected, LlmRequestStarted, LlmRequestCompleted, and LlmUsageReported via MessagePipe. Diagnostics and UI code should subscribe to those messages instead of inspecting registry internals.
Note (child LifetimeScope): those events are published with IPublisher<T> from CoreAILifetimeScope. If your title uses a child scope and a second RegisterMessagePipe(), constructor-injected ISubscriber<LlmRequestStarted> (and related types) resolved only in the child may attach to a different broker graph, so you will see no LLM telemetry despite live completions. Use GlobalMessagePipe.GetSubscriber<T>() after the parent scope has built (same provider as CoreServicesInstaller’s SetProvider), or avoid a second RegisterMessagePipe and extend the parent pipe for game-only events.
Note (PlayMode tests without a scene scope): ToolExecutionPolicy publishes LlmToolCall* via GlobalMessagePipe only if a provider exists. For package PlayMode fixtures, call GlobalMessagePipeMinimalBootstrap.EnsureInitializedForLlmDiagnostics() (or use TestAgentSetup, which invokes it in Initialize) before asserting on tool-call messages.
ServerManagedApi supports dynamic backend authorization:
```csharp
ServerManagedAuthorization.SetProvider(() => "Bearer " + authTokenStore.CurrentJwt);
```

Provider failures use `LlmErrorCode` on `LlmCompletionResult`, `LlmStreamChunk`, and `LlmRequestCompleted`, so callers can handle `QuotaExceeded`, `AuthExpired`, `RateLimited`, `BackendUnavailable`, and other stable categories without parsing error text.
For mixed routing, create profiles such as player_server, analyzer_limited, and creator_local, then map role ids to those profiles. A single request always resolves to one concrete backend, but the scene can keep multiple profiles active.
Symbol COREAI_NO_LLM (manual opt-out): the container keeps a chain with StubLlmClient / HTTP as needed — details in DGF_SPEC §5.2.
Symbol COREAI_HAS_LLMUNITY (automatic): defined via versionDefines in the asmdef when the ai.undream.llm package is installed. Code that depends on LLMUnity types (MeaiLlmUnityClient, LLMAgent, LLMManager) compiles only with this symbol. Users do not set it manually.
LLMUnity defaults (Editor / desktop player, since v1.7.4): when LocalModel / UseLlmUnity is on, ConfigurableLlmAgentProvider can auto-create a runtime LLM + LLMAgent from CoreAISettingsAsset if the scene has none (LlmUnityAutoCreateRuntimeHost, default on). GgufModelPath on the asset is applied to LLM.model before Model Manager fallback. LlmUnityAutostartLocalServer (default on) triggers a post-DI warm-up via LlmUnityAutostartEntryPoint (timeout: LlmUnityStartupTimeoutSeconds). WebGL and builds without LLMUnity keep the previous scene-based / stub paths. See LLMUNITY_SETUP_AND_MODELS.md.
Observability: GameLogFeature.Llm (LLM requests); GameLogFeature.Metrics (orchestrator metrics, not in AllBuiltIn — enable manually in the asset). Older Game Log Settings without the Llm bit are patched on OnValidate. Filtering by traceId links LLM ▶/◀ and ApplyAiGameCommand.
For streaming with tool-calling, MeaiLlmClient.CompleteStreamingAsync uses one cycle per MEAI step. Since v1.7.3, when tools are declared (Tools non-empty) and LlmCompletionRequest.BufferFullStreamingIterationWhenToolsDeclared is not true, the client uses a hybrid JSON hold (same idea as for unbound streaming): only the prefix that cannot be part of an incomplete text-shaped tool JSON is streamed live; the rest is held until extraction runs, so tool JSON does not leak into the chat. Native delta.tool_calls (Path 2) and text-shaped JSON (Path 1) both reconcile any held prefix with the cleaned assistant string and emit a suffix as LlmStreamChunk.Text when needed. Set BufferFullStreamingIterationWhenToolsDeclared = true only if a backend fragments deltas in a way that breaks hybrid hold.
By default, per-role streaming override is enabled for roles with tools (AgentMode.ToolsAndChat and AgentMode.ToolsOnly); for AgentMode.ChatOnly the standard fallback from settings remains.
CoreAIGameEntryPoint in the Unity layer is idempotent: repeated Start() does not reinitialize global CoreAIAgent and logs a warning on LogTag.Composition, guarding against accidental double composition of the scene container.
- System prompt chain: manifest (optional) → `Resources/AgentPrompts/System/<RoleId>.txt` → built-in fallback (`BuiltInAgentSystemPromptTexts`).
- Built-in roles: see `BuiltInAgentRoleIds` and `AgentRolesAndPromptsTests`.
- Custom agents: use `AgentBuilder` to create agents with unique tools. See AGENT_BUILDER.md.
- User payload: default JSON like `{"telemetry":{...},"hint":"..."}` from `GameSessionSnapshot.Telemetry`; Lua repair adds `lua_repair_generation`, `lua_error`, `fix_this_lua` (`AiPromptComposer`).
- Runtime context: register `IAiPromptContextProvider` implementations to append per-request context such as the current quest, lesson slot, learner profile, or objective under `## Runtime Context`.
- Agent memory (optional): the agent persists memory via MEAI tool calling:
  - `{"name": "memory", "arguments": {"action": "write", "content": "..."}}` — overwrite
  - `{"name": "memory", "arguments": {"action": "append", "content": "..."}}` — append
  - `{"name": "memory", "arguments": {"action": "clear"}}` — clear

  By default memory is off for all roles except Creator (see `AgentMemoryPolicy`). At Unity runtime, memory is stored under `Application.persistentDataPath/CoreAI/AgentMemory/<RoleId>.json`. For multi-user or session-scoped products, wrap a store with `ScopedAgentMemoryStoreDecorator` and provide an `IAgentMemoryScopeProvider`.
- MEAI tools on Unity (`ToolInvocationMarshaler`): since v1.5.12, `ToolExecutionPolicy` wraps MEAI `AIFunction.InvokeAsync` in `ICoreAISettings.ToolInvocationMarshaler`. The default `CoreAISettingsAsset` uses `UnityMainThreadLlmAsyncMarshaler` (`UniTask.SwitchToMainThread` in Player / packaged builds only; since v1.5.14, Edit Mode `!Application.isPlaying` skips the hop to avoid a deadlock with `Task.Wait`/`Result` on the editor managed main thread) because `SmartToolCallingChatClient` still uses `ConfigureAwait(false)` for WebGL. HTTP OpenAI traffic is handled by the portable `MeaiOpenAiChatClient` (`System.Net.Http.HttpClient`) in CoreAI.Core.
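As an example of the runtime-context hook above, a provider could surface the active quest. This is a hypothetical sketch — the real `IAiPromptContextProvider` member signature may differ, and `QuestLog` is an invented game service used only for illustration:

```csharp
// Hypothetical sketch: verify the real IAiPromptContextProvider signature.
public sealed class CurrentQuestContextProvider : IAiPromptContextProvider
{
    private readonly QuestLog _quests; // invented game service

    public CurrentQuestContextProvider(QuestLog quests) => _quests = quests;

    // Returned text is appended under "## Runtime Context"; keep it short and factual.
    public string? GetContext(string roleId) =>
        _quests.ActiveQuest is { } quest
            ? $"Current quest: {quest.Title} (step {quest.Step})"
            : null;
}
```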
CoreAI uses MessagePipe as the Unity-side integration bus. The default orchestrator flow is:
AiOrchestrator → IAiGameCommandSink → MessagePipeAiCommandSink → IPublisher<ApplyAiGameCommand> → AiGameCommandRouter
The important rule: gameplay handling must run on the Unity main thread. AiGameCommandRouter
already does UniTask.SwitchToMainThread() before processing Lua, world commands, logs, and
CommandReceived.
For UI, tutorials, simple game reactions, or debugging, use:
```csharp
AiGameCommandRouter.CommandReceived += OnAiCommand;

private void OnAiCommand(ApplyAiGameCommand cmd)
{
    // Already on the Unity main thread.
    Debug.Log(cmd.JsonPayload);
}
```

This is the easiest extension point: no direct DI or MessagePipe subscription is required, and it is safe to touch Unity objects.
For larger systems, register your own ISubscriber<ApplyAiGameCommand> subscriber in the container.
This is useful for analytics, multiplayer replication, custom command routing, save integration, or
domain-specific systems.
If you subscribe directly to MessagePipe, marshal your handler to the main thread before touching Unity APIs:
```csharp
_subscription = subscriber.Subscribe(cmd =>
{
    UniTask.Void(async () =>
    {
        await UniTask.SwitchToMainThread();
        // Safe Unity/GameObject work here.
    });
});
```

Direct MessagePipe subscribers may also run lightweight, thread-safe work without switching (for example
enqueueing telemetry), but Unity scene mutation, UI, GameObject, Transform, Animator, and most save/UI
integrations should use the main-thread path.
Prefer publishing through IAiGameCommandSink when you are inside CoreAI/agent code. Use
IPublisher<ApplyAiGameCommand> directly only in Unity integration code that is already part of the
MessagePipe composition. Keep payloads explicit (CommandTypeId, TraceId, SourceRoleId) so logs and
external subscribers can follow the agent work.
- Parsing: `AiLuaPayloadParser` (markdown → JSON `ExecuteLua`).
- Execution: `SecureLuaEnvironment`, `LuaExecutionGuard`, `LuaApiRegistry`.
- Limits: `LuaExecutionGuard` applies best-effort wall-clock and step caps (see `InstructionLimitDebugger`) so infinite Lua loops cannot hang forever.
- Default game calls in the template: `LoggingLuaRuntimeBindings` — `report(string)`, `add(a,b)`.
- Extension: register your `IGameLuaRuntimeBindings` in `CoreAILifetimeScope` (instead of or on top of the default — per project policy; avoid duplicating the interface in the container without an explicit replacement).
- World control (runtime): the built-in World Commands feature adds the Lua API `coreai_world_*` and executes commands on the Unity main thread via MessagePipe. See WORLD_COMMANDS.md.
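A custom bindings sketch, assuming `IGameLuaRuntimeBindings` registers whitelisted delegates through `LuaApiRegistry` — the exact member names are assumptions (check the interface in CoreAI.Source), and `Inventory` is an invented game service:

```csharp
// Hypothetical sketch: method shapes on IGameLuaRuntimeBindings/LuaApiRegistry
// are assumptions; only whitelisted delegates become visible to sandboxed Lua.
public sealed class InventoryLuaBindings : IGameLuaRuntimeBindings
{
    private readonly Inventory _inventory; // invented game service

    public InventoryLuaBindings(Inventory inventory) => _inventory = inventory;

    public void Register(LuaApiRegistry registry)
    {
        // Exposed to the LLM's Lua as give_item("sword_rare", 1).
        registry.Register("give_item",
            (string itemId, int count) => _inventory.Add(itemId, count));
    }
}
```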
This is a separate CoreAI file store under Application.persistentDataPath (written via File.WriteAllText and read when the store is created), not Neo SaveProvider and not the title’s shared game save.
| What | Default path |
|---|---|
| Programmer Lua versions | persistentDataPath/CoreAI/LuaScriptVersions/lua_script_versions.json |
| Data overlays | persistentDataPath/CoreAI/DataOverlayVersions/data_overlays.json |
- After restarting the game, when the container starts the store reads the JSON again: the current text (`current`) and revision history are restored; orchestrator/Lua use the loaded state.
- Android / iOS / Desktop — normal writes to the app directory; data persists across sessions until the user uninstalls the app or clears “app data”.
- WebGL — `persistentDataPath` in Unity maps to browser storage (IndexedDB / IDBFS): agent memory and chat JSON use `FileAgentMemoryStore` under `CoreAILifetimeScope` in the player too (v1.6.19+), with `CoreAi_PersistFsSync` after writes so data survives reload when `Application.Quit` does not run. Since v1.7.2, `CoreAiPersistFs.jslib` queues `FS.syncfs` so only one sync runs at a time (avoids concurrent sync warnings and related stalls). Conversation summaries for compaction stay in-memory on WebGL (see `CoreAILifetimeScope`). Users can clear site data; quota limits may apply — see the Unity documentation for your version under WebGL.
- Sync with the cloud / a single game save needs a separate integration (copy files, a custom provider, or mirroring after `RecordSuccessfulExecution`).
| Assembly | How to run | What it covers |
|---|---|---|
| CoreAI.Tests | Test Runner → Edit Mode | Prompts, stub LLM, Lua sandbox, envelope parser, LuaAiEnvelopeProcessor, repair composer, LuaProgrammerPipelineEndToEndEditModeTests (orchestrator → envelope → Lua → error → Programmer retry → success). |
| PlayMode assemblies (CoreAI.Tests.PlayMode.*) | Test Runner → Play Mode (filter by assembly) | FastNoLlm — quick stub coverage; LlmVerification — streaming/HTTP/tool/memory probes (env COREAI_OPENAI_TEST_* / LLMUNITY — see the LLMUNITY doc); Scenarios — crafting / merchant narratives. Shared helpers: Shared, LlmInfra. |
Recommendation: run Edit Mode before a PR; Play Mode when DI/scene or the HTTP client changes.
Current Edit Mode checks for recent stability fixes:
- `CoreAIGameEntryPointEditModeTests` — single-init behavior for the CoreAI facade when the entry point starts twice.
- `MeaiLlmClientEditModeTests.CompleteStreamingAsync_ToolJsonWithVisiblePrefix_KeepsPrefixAndHidesJson` — tool-call JSON does not reach the UI; visible text is preserved.
- `MeaiLlmClientEditModeTests.CompleteStreamingAsync_TooManyToolIterations_ReturnsTerminalError` — the streaming tool loop ends with a controlled error when the iteration limit is exceeded.
- Scene `RogueliteArena` (see Build Settings): `CompositionRoot` with `CoreAILifetimeScope`, `ExampleRogueliteEntry` (arena + hotkeys).
- F9 — Programmer task (demo Lua + `report`), `CoreAiLuaHotkey` component.
- Child `LifetimeScope` in the sample: `RogueliteArenaLifetimeScope` — a stub for game features with Parent = core.
Details: ../../_exampleGame/README.md.
| Task | Where to look / what to do |
|---|---|
| New agent role | Constant or string id; prompt in Resources or manifest; add a test in AgentRolesAndPromptsTests if needed. |
| New AI command type | Extend handling of ApplyAiGameCommand.CommandTypeId (new subscriber or branch in the game); do not mix with raw LLM text without a parser. |
| New Lua functions for the LLM | Implement IGameLuaRuntimeBindings; register delegates in LuaApiRegistry (whitelist). |
| World control from Lua | Use World Commands (coreai_world_*), configure CoreAiPrefabRegistryAsset and assign it on CoreAILifetimeScope. See WORLD_COMMANDS.md. |
| Change model / cloud | LLMUNITY_SETUP_AND_MODELS.md; do not commit API keys for production. |
| Multiplayer | DGF_SPEC, AI_AGENT_ROLES (placement); LLM authority on the host is the game’s responsibility. |
Use the static facade CoreAI.Api.CoreAi to manage current agent state (cancel tasks, clear memory, subscribe to tools).
If an agent is generating for a long time or its task is no longer valid, you can programmatically cancel all its current and queued tasks in QueuedAiOrchestrator:
```csharp
// Stop generation for a specific role (uses CancellationScope = roleId)
CoreAi.StopAgent("Teacher");
```

Also available directly on the orchestrator for advanced users: `_orchestrator.CancelTasks("Teacher")`.
While a reply is generating, the send button coreai-chat-send in CoreAiChatPanel automatically switches to Stop mode:
- visually turns red (`.coreai-chat-send-button-stop`);
- the button label changes from `>` to `X`;
- tooltip: `Stop generation (Esc)`.
The user can interrupt generation:
- by clicking that button again;
- with the `Esc` key while the chat is focused.
In both cases the UI calls CoreAi.StopAgent(roleId) and cancels the active request token, which safely stops the current reply and related role tasks in QueuedAiOrchestrator.
Starting with com.nexoider.coreaiunity 0.25.6, the button stays enabled during generation (stop control), busy state is set until the first await, and the UI reliably clears streaming/sending state after cancel.
Stock chat template: default floating size ~650×910 (see CoreAiChatConfig / CoreAiChat.uss), vertical scrollbar flush to the panel’s inner right edge, and optional coreai-long-request-hint (status under the typing row on long turns) — details in README_CHAT.md.
From 0.25.7, auto-creation of CoreAISettings.asset in the Editor (CoreAIBuildMenu) runs via EditorApplication.delayCall: not in the same frame as domain reload, and with an on-disk file check — a cloned Assets/Resources/CoreAISettings.asset is not replaced by an empty asset with defaults.
Reset chat history (short-term context) and/or long-term agent memory (MemoryTool):
```csharp
// Fully clear agent context (message history and memory)
CoreAi.ClearContext("Teacher");

// Clear only chat history (session context), leave agent memory intact
CoreAi.ClearContext("Teacher", clearChatHistory: true, clearLongTermMemory: false);

// Clear only long-term memory (facts, state), keep the current dialogue
CoreAi.ClearContext("Teacher", clearChatHistory: false, clearLongTermMemory: true);
```

For hooks (sounds, VFX, logging) you can subscribe to the global event for a successful tool call from the model (via MEAI):
```csharp
private void OnEnable()
{
    CoreAi.OnToolExecuted += HandleToolExecuted;
}

private void OnDisable()
{
    CoreAi.OnToolExecuted -= HandleToolExecuted;
}

private void HandleToolExecuted(string roleId, string toolName, IDictionary<string, object?>? args, object? result)
{
    Debug.Log($"Agent {roleId} used tool {toolName}!");

    // Example: react to a specific tool
    if (toolName == "spawn_item" && args != null && args.TryGetValue("item_id", out var itemId))
    {
        AudioSystem.PlaySound($"spawn_{itemId}");
    }
}
```

The built-in `CoreAiChatPanel` can append one diagnostic row per tool call when `CoreAiChatConfig.ShowToolCallsInChat` is enabled (default off). Override `CoreAiChatPanel.FormatToolExecutedForChat` for custom text.
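If you need custom row text, a minimal override could look like the following sketch. The override signature below is an assumption (it mirrors the `OnToolExecuted` handler above); verify it against `CoreAiChatPanel` in your installed package version.

```csharp
using System.Collections.Generic;

// Sketch only: the base signature of FormatToolExecutedForChat is assumed here,
// not taken from the package source.
public sealed class MyChatPanel : CoreAiChatPanel
{
    protected override string FormatToolExecutedForChat(
        string roleId, string toolName,
        IDictionary<string, object?>? args, object? result)
    {
        // Compact one-line row: tool name plus argument count.
        var argCount = args?.Count ?? 0;
        return argCount > 0 ? $"🔧 {toolName} ({argCount} args)" : $"🔧 {toolName}";
    }
}
```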
The built-in chat panel header (`CoreAiChatPanel`) has a 🗑 button — on click it clears all messages from the UI and resets short-term context (chat history) for the agent. That is the default behavior.
You can control this in code:
```csharp
// Clear UI messages + chat history (default for 🗑)
chatPanel.ClearChat();

// Full clear: chat and long-term memory
chatPanel.ClearChat(clearChatHistory: true, clearLongTermMemory: true);

// Long-term memory only, keep the current dialogue in the UI
chatPanel.ClearChat(clearChatHistory: false, clearLongTermMemory: true);
```

The rest of this section covers practical integration pain points and ways to keep CoreAI automatic but configurable.
Problem: easy to forget `CoreAILifetimeScope` (LLM backend, prompts, log settings, world prefab registry).
Simplify:
- Add an Editor menu “CoreAI → Setup → Create Default Assets” that creates:
  - `GameLogSettingsAsset` (with `Llm` and the needed features enabled)
  - `OpenAiHttpLlmSettings` (empty template)
  - `AgentPromptsManifest` (optional)
  - `CoreAiPrefabRegistryAsset` (empty whitelist)
- Add “CoreAI → Setup → Validate Scene” (checks: `CoreAILifetimeScope` present, references valid, warnings).
- Use “CoreAI → Delete All Persistent Saves...” (Editor only, not in Play Mode) to wipe `Application.persistentDataPath/CoreAI` — agent memory + persisted chat JSON, conversation summaries (desktop), Lua script versions, data overlays. Does not delete assets under `Assets/`.
Problem: “Why is the model silent?” — LLMAgent missing or HTTP off, and the core fell back to stub.
Simplify:
- Log an explicit summary at startup: `backend=stub/llmunity/http` and why.
- Show the current backend and the last request `traceId` in the UI/dashboard.
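For illustration, such a summary can be a single log line emitted where the backend is chosen. Every name below is a placeholder standing in for your actual checks, not CoreAI API:

```csharp
// Sketch: one explicit startup line makes a stub fallback diagnosable at a glance.
// hasLlmAgent / openAiHttpLlmSettings are illustrative, not CoreAI API.
bool hasLlmAgent = FindObjectOfType<LLMAgent>() != null;  // LLMUnity agent in scene?
bool hasHttpSettings = openAiHttpLlmSettings != null;     // HTTP config assigned?

string backend = hasLlmAgent ? "llmunity" : hasHttpSettings ? "http" : "stub";
string reason = backend == "stub"
    ? "no LLMAgent in scene and no HTTP settings configured"
    : "configured explicitly";
Debug.Log($"[CoreAI] backend={backend} ({reason})");
```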
Problem: commands may arrive from the thread pool.
Simplify:
- Canonize one “apply to Unity” entry point (as with `AiGameCommandRouter`) and forbid handling directly from `ISubscriber<T>` without marshaling.
- Add a small util/template `MainThreadCommandQueue` for projects without UniTask.
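A minimal sketch of such a queue (the class name comes from the bullet above; the implementation is illustrative, not part of the package): producers enqueue from any thread, and a MonoBehaviour drains it on the main thread once per frame.

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative implementation, not shipped with CoreAI.
public sealed class MainThreadCommandQueue
{
    private readonly ConcurrentQueue<Action> _queue = new();

    // Safe to call from any thread (e.g. inside an ISubscriber<T> handler).
    public void Enqueue(Action command) => _queue.Enqueue(command);

    // Call from the main thread, e.g. MonoBehaviour.Update(); the per-frame
    // cap keeps a burst of commands from stalling a single frame.
    public void Drain(int maxPerFrame = 64)
    {
        for (var i = 0; i < maxPerFrame && _queue.TryDequeue(out var command); i++)
            command();
    }
}
```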
Problem: infinite loops, API growth, Lua errors.
Simplify:
- Keep the Lua API split into small features (Versioning, World Commands, game bindings) and document each.
- Enable limits (`LuaExecutionGuard`) by default and log limit breaches as a distinct signal.
Problem: Programmer changes both code and data; fast rollback matters.
Simplify:
- Stable keys (use case id / overlay key) and a single “Versions” UI in a dashboard (original/current/history + reset).
- “Reset All” for emergency recovery.
Problem: Play Mode tests may depend on model/network.
Simplify:
- For CI: default `COREAI_NO_LLM` or the stub profile, mandatory Edit Mode run.
- For an “integration” branch: separate manual job with HTTP env and a time cap.
- Edit Mode: `CoreAI.Tests` green (prompts, Lua, parsers, envelope processor).
- Play Mode: when changing `CoreAILifetimeScope`, scenes, `OpenAiChatLlmClient`, or Play Mode tests — run `CoreAI.Tests.PlayMode.FastNoLlm` (always quick), then selectively `CoreAI.Tests.PlayMode.LlmVerification`/`Scenarios` where your change touches live LLMs or workflows.
- Secrets: do not commit API keys, `.env` files with keys, or local model paths with personal data; for CI use environment variables (see LLMUNITY_SETUP_AND_MODELS.md).
- Documentation: if contracts or flow change (DGF §3 / DI), update DGF_SPEC and this guide in the same PR if needed.
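As a sketch of the CI bullet above (shown as a dry run: the command is echoed rather than executed; drop the `echo` and point `Unity` at your CI image's editor binary in a real job — `COREAI_NO_LLM` is from this guide, the flags are standard Unity Test Runner CLI):

```shell
# Force the stub backend, then run Edit Mode tests headlessly.
export COREAI_NO_LLM=1
UNITY_CMD=(Unity -batchmode -projectPath .
  -runTests -testPlatform EditMode -testResults ./editmode-results.xml)
echo "COREAI_NO_LLM=$COREAI_NO_LLM -> ${UNITY_CMD[*]}"
```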
- UPM release (any change under `Assets/CoreAI` or `Assets/CoreAiUnity`): bump `version` in `../../CoreAI/package.json` (com.nexoider.coreai) and `../package.json` (com.nexoider.coreaiunity; dependency = core version); add entries in ../../CoreAI/CHANGELOG.md and ../CHANGELOG.md; update docs for the affected feature (root README.md / README_RU.md, DOCS_INDEX, README_CHAT, QUICK_START, etc.); if the public API changes, add tests as needed.
Record major contract changes in DGF_SPEC (version in the header). DEVELOPER_GUIDE describes the current code map; if it diverges from code, the repository wins — update the guide in the same PR.
UPM sync: the number in the README header and in QUICK_START should match the current package.json; otherwise package consumers see a stale version.
Version of this guide: 1.9.1 (May 2026) — Editor menu “CoreAI → Delete All Persistent Saves...” documents wiping `persistentDataPath/CoreAI`.
- 1.9 / UPM 1.7.4: LLMUnity runtime auto-host, `GgufModelPath` → `LLM.model`, `LlmUnityAutostartLocalServer`.
- 1.7.3: streaming hybrid hold when Tools declared; `LlmCompletionRequest.BufferFullStreamingIterationWhenToolsDeclared`.
- 1.7.2: WebGL `CoreAiPersistFs` `FS.syncfs` single-flight.
- 1.7.1: `CoreAiChatPanel` typing after buffered tool-hint marker.
- 1.7.0: `LlmStreamChunk.BufferedStreamingNoToolBinding`, `BufferedStreamingUseToolProgressHint`, `CoreAiChatConfig.StreamingToolProgressHint`.
- v1.6.19+: WebGL agent memory / chat JSON via `FileAgentMemoryStore` + `CoreAi_PersistFsSync` under `CoreAILifetimeScope`; fetch SSE jslib logs quiet by default (v1.6.19).
- Earlier: portable LLM pipeline decoupling (v1.5), MessagePipe event tests, UPM v1.5.0.