Segment lifecycle, resolution scoring, and segment analytics.
A session can contain multiple tasks. MOA tracks each task as a segment so learning is based on discrete outcomes instead of whole-session guesses.
Segments answer:
- What task was being attempted?
- Which intent was assigned, if any?
- Which tools and skills were used?
- How many turns and tokens did it cost?
- Did it resolve?
- What learning should be recorded from the outcome?
TaskSegment lives in crates/moa-core/src/types/segments.rs; persistent rows live in Postgres task_segments.
Important fields:
- id, session_id, tenant_id, segment_index
- intent_label, intent_confidence
- task_summary
- started_at, ended_at
- resolution, resolution_signal, resolution_confidence
- tools_used, skills_activated
- turn_count, token_cost
- previous_segment_id
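As a rough sketch, the row shape might look like the following Rust struct. The field names follow the docs above, but the concrete types are assumptions; the real definitions live in segments.rs.

```rust
// Hypothetical sketch of TaskSegment. Field names come from the docs;
// types (String ids, epoch-millis timestamps, f32 confidences) are guesses.
#[derive(Debug, Clone)]
pub struct TaskSegment {
    pub id: String,
    pub session_id: String,
    pub tenant_id: String,
    pub segment_index: u32,
    pub intent_label: Option<String>,
    pub intent_confidence: Option<f32>,
    pub task_summary: String,
    pub started_at: u64,                  // epoch millis; real code likely uses a datetime type
    pub ended_at: Option<u64>,            // None while the segment is active
    pub resolution: Option<String>,
    pub resolution_signal: Option<String>,
    pub resolution_confidence: Option<f32>,
    pub tools_used: Vec<String>,
    pub skills_activated: Vec<String>,
    pub turn_count: u32,
    pub token_cost: u64,
    pub previous_segment_id: Option<String>,
}
```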
ActiveSegment is the lighter projection stored in session VO state.
Query rewriting produces QueryRewriteResult:
- rewritten_query
- high-level intent
- is_new_task
- task_summary
- suggested tools and clarification metadata
When a turn is prepared, SegmentTracker uses the query rewrite metadata and session events to decide whether to:
- keep the current active segment
- create the first segment
- close the previous segment and start a new one
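The three-way decision above can be sketched as follows. The struct and enum names are illustrative; is_new_task driving segment rotation is the documented signal, and the rest of the shape is assumed.

```rust
/// Illustrative outcome of the segment-tracking decision (names assumed).
#[derive(Debug)]
pub enum SegmentAction {
    Continue,       // keep the current active segment
    StartFirst,     // create the first segment of the session
    RotateSegment,  // close the previous segment and start a new one
}

/// Hypothetical subset of the query-rewrite metadata used here.
pub struct QueryRewriteResult {
    pub rewritten_query: String,
    pub intent: Option<String>,
    pub is_new_task: bool,
    pub task_summary: String,
}

/// Toy decision rule: no active segment means start the first one;
/// otherwise rotate only when the rewrite flags a new task.
pub fn decide(has_active_segment: bool, rewrite: &QueryRewriteResult) -> SegmentAction {
    match (has_active_segment, rewrite.is_new_task) {
        (false, _) => SegmentAction::StartFirst,
        (true, true) => SegmentAction::RotateSegment,
        (true, false) => SegmentAction::Continue,
    }
}
```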
The event log records SegmentStarted and SegmentCompleted events.
New segments can be classified against active tenant intents:
- Build text from task summary and first user message.
- Embed the text with the configured embedding provider.
- Query active tenant intent centroids in Postgres.
- Accept the nearest match when cosine distance is below the configured threshold.
- Store the label and confidence on the segment.
- Append intent_classified to learning_log.
If a tenant has no active intents, classification returns no match. New tenants therefore start blank.
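The centroid-matching step can be sketched like this. The real implementation queries Postgres and uses the configured embedding provider; here the centroids are an in-memory list, the threshold is a placeholder, and deriving confidence as 1 − distance is an assumption. An empty centroid list returns no match, matching the blank-start behavior for new tenants.

```rust
/// Cosine distance: 0.0 for identical directions, up to 2.0 for opposite.
pub fn cosine_distance(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    1.0 - dot / (na * nb)
}

/// Accept the nearest centroid whose distance is below the threshold.
/// Returns (label, confidence); confidence = 1 - distance is a guess.
pub fn classify(
    embedding: &[f32],
    centroids: &[(String, Vec<f32>)],
    threshold: f32,
) -> Option<(String, f32)> {
    centroids
        .iter()
        .map(|(label, c)| (label.clone(), cosine_distance(embedding, c)))
        .filter(|(_, d)| *d < threshold)
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(label, d)| (label, 1.0 - d))
}
```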
During a turn, the orchestrator records:
- tool names used
- skill names activated
- completed turn count
- token cost
The active VO state and task_segments row stay in sync through session store calls.
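A minimal sketch of the per-turn bookkeeping, assuming tool and skill names are deduplicated while turn count and token cost accumulate. This mirrors the ActiveSegment projection in spirit; the method name and dedup behavior are assumptions.

```rust
/// Illustrative in-memory projection of the active segment's counters.
#[derive(Default, Debug)]
pub struct ActiveSegmentCounters {
    pub tools_used: Vec<String>,
    pub skills_activated: Vec<String>,
    pub turn_count: u32,
    pub token_cost: u64,
}

impl ActiveSegmentCounters {
    /// Record one completed turn. Names are deduplicated (an assumption);
    /// turn count and token cost accumulate across the segment.
    pub fn record_turn(&mut self, tools: &[&str], skills: &[&str], tokens: u64) {
        for t in tools {
            if !self.tools_used.iter().any(|x| x == t) {
                self.tools_used.push(t.to_string());
            }
        }
        for s in skills {
            if !self.skills_activated.iter().any(|x| x == s) {
                self.skills_activated.push(s.to_string());
            }
        }
        self.turn_count += 1;
        self.token_cost += tokens;
    }
}
```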
Resolution scoring combines five signal classes:
| Signal | Meaning |
|---|---|
| Tool outcome | Whether tools completed, failed, or produced useful output |
| Verification | Whether tests/checks/verification commands succeeded |
| Continuation | Whether the next user message indicates success, rework, abandonment, or a new task |
| Self-assessment | Whether the agent response claims completion or uncertainty |
| Structural | Whether turns, cost, and duration are anomalous for the tenant/intent baseline |
The scorer outputs:
- resolved
- partial
- unknown
- failed
- abandoned
Scoring phases:
- immediate: when a segment appears idle or completed
- deferred: after a later user message gives continuation evidence
- final: when cancellation or timeout closes the segment
Each score updates the segment row and appends resolution_scored to learning_log.
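One way to picture combining the five signal classes is a weighted sum mapped onto the outcome labels. The weights and thresholds below are placeholders, not MOA's real scoring rules, and abandonment is assumed to come from cancellation events rather than the weighted score.

```rust
/// The documented outcome labels.
#[derive(Debug, PartialEq)]
pub enum Resolution { Resolved, Partial, Unknown, Failed, Abandoned }

/// The five signal classes, each normalized to -1.0 (bad) .. 1.0 (good).
/// This normalization is an assumption for illustration.
pub struct Signals {
    pub tool_outcome: f32,
    pub verification: f32,
    pub continuation: f32,
    pub self_assessment: f32,
    pub structural: f32,
}

/// Toy scorer: weighted sum, then threshold into an outcome.
/// Weights and cutoffs are placeholders. Abandoned is not produced here;
/// it is assumed to come from the final (cancellation/timeout) phase.
pub fn score(s: &Signals) -> (Resolution, f32) {
    let total = 0.3 * s.tool_outcome
        + 0.25 * s.verification
        + 0.25 * s.continuation
        + 0.1 * s.self_assessment
        + 0.1 * s.structural;
    let confidence = total.abs().min(1.0);
    let outcome = if total > 0.5 {
        Resolution::Resolved
    } else if total > 0.1 {
        Resolution::Partial
    } else if total < -0.5 {
        Resolution::Failed
    } else {
        Resolution::Unknown
    };
    (outcome, confidence)
}
```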
Segment rows drive learning views:
| View | Use |
|---|---|
| skill_resolution_rates | Ranks skills by tenant/intent resolution outcomes |
| intent_transitions | Tracks common task-to-task flows |
| segment_baselines | Provides structural baselines for resolution scoring |
Refresh is handled through the session store's materialized-view refresh path.
Segment events are durable boundaries. History compaction can summarize older events, but segment start/completion records remain part of replay and analytics.
User messages
-> query rewrite
-> segment start/continue/complete
-> tool and skill counters
-> resolution score
-> learning_log
-> skill ranking and intent learning
Task segmentation is the measurement layer that makes the rest of MOA's learning pipeline reliable.