16 changes: 14 additions & 2 deletions DRIFT.md
@@ -77,6 +77,7 @@ When a `critical` drift is detected:
- OpenAI Responses API → `src/responses.ts` (`buildTextResponse`, `buildToolCallResponse`, `buildTextStreamEvents`, `buildToolCallStreamEvents`)
- Anthropic Claude → `src/messages.ts` (`buildClaudeTextResponse`, `buildClaudeToolCallResponse`, `buildClaudeTextStreamEvents`, `buildClaudeToolCallStreamEvents`)
- Google Gemini → `src/gemini.ts` (`buildGeminiTextResponse`, `buildGeminiToolCallResponse`, `buildGeminiTextStreamChunks`, `buildGeminiToolCallStreamChunks`)
- Gemini Interactions → `src/gemini-interactions.ts` (`buildInteractionsTextResponse`, `buildInteractionsToolCallResponse`, `buildInteractionsTextSSEEvents`, `buildInteractionsToolCallSSEEvents`)

2. **Update the builder** — add or modify the field to match the real API shape.
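The update step above boils down to comparing the shape a builder emits against the shape the live API returns. A minimal sketch of such a drift check — `shapeDiff` and the sample payloads are hypothetical illustrations, not part of aimock:

```typescript
// Recursively compare key sets of a mock builder's output against a real
// API response. Reports keys the mock has that the API dropped, and keys
// the API added that the mock lacks.
function shapeDiff(mock: unknown, real: unknown, path = ""): string[] {
  const drifts: string[] = [];
  if (typeof mock !== "object" || mock === null || typeof real !== "object" || real === null) {
    return drifts; // leaf values: only key structure is checked here
  }
  const mockKeys = new Set(Object.keys(mock as Record<string, unknown>));
  const realKeys = new Set(Object.keys(real as Record<string, unknown>));
  for (const k of mockKeys) {
    if (!realKeys.has(k)) drifts.push(`extra in mock: ${path}${k}`);
  }
  for (const k of realKeys) {
    if (!mockKeys.has(k)) {
      drifts.push(`missing in mock: ${path}${k}`);
    } else {
      drifts.push(
        ...shapeDiff(
          (mock as Record<string, unknown>)[k],
          (real as Record<string, unknown>)[k],
          `${path}${k}.`,
        ),
      );
    }
  }
  return drifts;
}

// Example: the real API grew a field the mock builder does not emit yet.
const mockShape = { id: "1", output: { text: "hi" } };
const realShape = { id: "1", output: { text: "hi", finishReason: "stop" } };
console.log(shapeDiff(mockShape, realShape));
// logs ["missing in mock: output.finishReason"]
```

A non-empty result is exactly the signal that the builder needs the "add or modify the field" treatment described above.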

@@ -106,7 +107,18 @@ When a model is deprecated:

## WebSocket Drift Coverage

-In addition to the 19 existing drift tests (16 HTTP response-shape + 3 model deprecation), WebSocket drift tests cover aimock's WS protocols (4 verified + 2 canary = 6 WS tests):
+In addition to the 23 existing drift tests (20 HTTP response-shape + 3 model deprecation), WebSocket drift tests cover aimock's WS protocols (4 verified + 2 canary = 6 WS tests):

### Gemini Interactions API (Beta)

The Gemini Interactions API (`/v1beta/interactions`) is covered by 4 drift tests in `gemini-interactions.drift.ts`:

- Non-streaming text shape
- Streaming text event sequence
- Non-streaming tool call shape
- Streaming tool call event sequence

Like the other Gemini suites, these tests use `describe.skipIf(!GOOGLE_API_KEY)` and are skipped when no key is configured. The Interactions API is in Beta, so response shapes may shift as Google iterates on the endpoint.
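The two streaming checks reduce to comparing the SSE event sequence from the real endpoint against the sequence the mock builder emits. A minimal sketch — the event names below are illustrative placeholders, not the documented Interactions schema:

```typescript
// An SSE event as seen by the drift test, reduced to its event name.
type SSEEvent = { event: string };

// True when both streams emit the same events in the same order.
function sameEventSequence(mock: SSEEvent[], real: SSEEvent[]): boolean {
  if (mock.length !== real.length) return false;
  return mock.every((e, i) => e.event === real[i].event);
}

// Example: mock and real streams agree on the event ordering.
const mockEvents: SSEEvent[] = [
  { event: "interaction.start" },
  { event: "interaction.delta" },
  { event: "interaction.done" },
];
const realEvents: SSEEvent[] = [
  { event: "interaction.start" },
  { event: "interaction.delta" },
  { event: "interaction.done" },
];
console.log(sameEventSequence(mockEvents, realEvents)); // true
```

In the real suite this comparison would sit inside the `describe.skipIf(!GOOGLE_API_KEY)` block so it only runs when a key is available.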

| Protocol | Text | Tool Call | Real Endpoint | Status |
| ------------------- | ---- | --------- | ------------------------------------------------------------------- | ---------- |
@@ -163,4 +175,4 @@ The fix workflow also supports `workflow_dispatch` for manual runs.

## Cost

-~25 API calls per run (16 HTTP response-shape + 3 model listing + 6 WS including canaries) using the cheapest available models (`gpt-4o-mini`, `gpt-4o-mini-realtime-preview`, `claude-haiku-4-5-20251001`, `gemini-2.5-flash`) with 10-100 max tokens each. Under $0.15/week at daily cadence. When Gemini Live text-capable models become available, the 2 canary tests will become full drift tests, increasing real WS connections from 4 to 6.
+~29 API calls per run (20 HTTP response-shape + 3 model listing + 6 WS including canaries) using the cheapest available models (`gpt-4o-mini`, `gpt-4o-mini-realtime-preview`, `claude-haiku-4-5-20251001`, `gemini-2.5-flash`) with 10-100 max tokens each. Under $0.20/week at daily cadence. When Gemini Live text-capable models become available, the 2 canary tests will become full drift tests, increasing real WS connections from 4 to 6.
18 changes: 9 additions & 9 deletions README.md
@@ -35,22 +35,22 @@ await mock.stop();

aimock mocks everything your AI app talks to:

-| Tool           | What it mocks                                                                                           | Docs                                                |
-| -------------- | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------- |
-| **LLMock**     | OpenAI (Chat/Responses/Realtime), Claude, Gemini (REST/Live), Bedrock, Azure, Vertex AI, Ollama, Cohere  | [Providers](https://aimock.copilotkit.dev/docs)     |
-| **MCPMock**    | MCP tools, resources, prompts with session management                                                     | [MCP](https://aimock.copilotkit.dev/mcp-mock)       |
-| **A2AMock**    | Agent-to-agent protocol with SSE streaming                                                                | [A2A](https://aimock.copilotkit.dev/a2a-mock)       |
-| **AGUIMock**   | AG-UI agent-to-UI event streams for frontend testing                                                      | [AG-UI](https://aimock.copilotkit.dev/agui-mock)    |
-| **VectorMock** | Pinecone, Qdrant, ChromaDB compatible endpoints                                                           | [Vector](https://aimock.copilotkit.dev/vector-mock) |
-| **Services**   | Tavily search, Cohere rerank, OpenAI moderation                                                           | [Services](https://aimock.copilotkit.dev/services)  |
+| Tool           | What it mocks                                                                                                        | Docs                                                |
+| -------------- | -------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- |
+| **LLMock**     | OpenAI (Chat/Responses/Realtime), Claude, Gemini (REST/Live/Interactions), Bedrock, Azure, Vertex AI, Ollama, Cohere  | [Providers](https://aimock.copilotkit.dev/docs)     |
+| **MCPMock**    | MCP tools, resources, prompts with session management                                                                  | [MCP](https://aimock.copilotkit.dev/mcp-mock)       |
+| **A2AMock**    | Agent-to-agent protocol with SSE streaming                                                                             | [A2A](https://aimock.copilotkit.dev/a2a-mock)       |
+| **AGUIMock**   | AG-UI agent-to-UI event streams for frontend testing                                                                   | [AG-UI](https://aimock.copilotkit.dev/agui-mock)    |
+| **VectorMock** | Pinecone, Qdrant, ChromaDB compatible endpoints                                                                        | [Vector](https://aimock.copilotkit.dev/vector-mock) |
+| **Services**   | Tavily search, Cohere rerank, OpenAI moderation                                                                        | [Services](https://aimock.copilotkit.dev/services)  |

Run them all on one port with `npx @copilotkit/aimock --config aimock.json`, or use the programmatic API to compose exactly what you need.
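For illustration, a config-driven setup might look like the sketch below — every key name here is an assumption for the sake of the example, not the documented `aimock.json` schema:

```json
{
  "port": 4000,
  "mocks": {
    "llm": { "providers": ["openai", "claude", "gemini"] },
    "mcp": { "tools": [] },
    "vector": { "backends": ["pinecone"] }
  }
}
```

The trade-off is the usual one: the config file keeps CI setups declarative, while the programmatic API lets a test compose only the mocks it touches.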

## Features

- **[Record & Replay](https://aimock.copilotkit.dev/record-replay)** — Proxy real APIs, save as fixtures, replay deterministically forever
- **[Multi-turn Conversations](https://aimock.copilotkit.dev/multi-turn)** — Record and replay multi-turn traces with tool rounds; match distinct turns via `turnIndex`, `hasToolResult`, `toolCallId`, `sequenceIndex`, or custom predicates
-- **[11 LLM Providers](https://aimock.copilotkit.dev/docs)** — OpenAI Chat, OpenAI Responses, OpenAI Realtime, Claude, Gemini, Gemini Live, Azure, Bedrock, Vertex AI, Ollama, Cohere — full streaming support
+- **[12 LLM Providers](https://aimock.copilotkit.dev/docs)** — OpenAI Chat, OpenAI Responses, OpenAI Realtime, Claude, Gemini, Gemini Live, Gemini Interactions, Azure, Bedrock, Vertex AI, Ollama, Cohere — full streaming support
- **Multimedia APIs** — [image generation](https://aimock.copilotkit.dev/images) (DALL-E, Imagen), [text-to-speech](https://aimock.copilotkit.dev/speech), [audio transcription](https://aimock.copilotkit.dev/transcription), [video generation](https://aimock.copilotkit.dev/video)
- **[MCP](https://aimock.copilotkit.dev/mcp-mock) / [A2A](https://aimock.copilotkit.dev/a2a-mock) / [AG-UI](https://aimock.copilotkit.dev/agui-mock) / [Vector](https://aimock.copilotkit.dev/vector-mock)** — Mock every protocol your AI agents use
- **[Chaos Testing](https://aimock.copilotkit.dev/chaos-testing)** — 500 errors, malformed JSON, mid-stream disconnects at any probability
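The multi-turn matching described above can be sketched as a predicate search over recorded turns. The field names follow the README's list (`turnIndex`, `hasToolResult`), but the real aimock matcher API may differ:

```typescript
// A recorded turn: optional match criteria plus the reply to replay.
type Turn = { turnIndex?: number; hasToolResult?: boolean; reply: string };
// The facts extracted from an incoming request.
type TurnRequest = { turnIndex: number; hasToolResult: boolean };

// Return the reply of the first recorded turn whose criteria all hold;
// criteria left undefined act as wildcards.
function matchTurn(turns: Turn[], req: TurnRequest): string | undefined {
  const hit = turns.find(
    (t) =>
      (t.turnIndex === undefined || t.turnIndex === req.turnIndex) &&
      (t.hasToolResult === undefined || t.hasToolResult === req.hasToolResult),
  );
  return hit?.reply;
}

// Example trace: first turn requests a tool, second consumes its result.
const turns: Turn[] = [
  { turnIndex: 0, reply: "call the weather tool" },
  { turnIndex: 1, hasToolResult: true, reply: "it is sunny" },
];
console.log(matchTurn(turns, { turnIndex: 1, hasToolResult: true })); // "it is sunny"
```

Custom predicates generalize this: instead of fixed fields, the matcher would accept a user-supplied function over the request.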
5 changes: 4 additions & 1 deletion docs/docs/index.html
@@ -306,7 +306,10 @@ <h2>The Suite</h2>
<tbody>
<tr>
<td>LLM Providers</td>
-<td>OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere</td>
+<td>
+  OpenAI, Claude, Gemini, Gemini Interactions, Bedrock, Azure, Vertex AI, Ollama,
+  Cohere
+</td>
<td><a href="/chat-completions">Docs &rarr;</a></td>
</tr>
<tr>
8 changes: 8 additions & 0 deletions docs/fixtures/index.html
@@ -547,6 +547,7 @@ <h2>Provider Support Matrix</h2>
<th>OpenAI Responses</th>
<th>Claude</th>
<th>Gemini</th>
<th>Gemini Int.</th>
<th>Vertex AI</th>
<th>Bedrock</th>
<th>Azure</th>
@@ -566,6 +567,7 @@ <h2>Provider Support Matrix</h2>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Tool Calls</td>
@@ -578,6 +580,7 @@ <h2>Provider Support Matrix</h2>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Content + Tool Calls</td>
@@ -590,6 +593,7 @@ <h2>Provider Support Matrix</h2>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Streaming</td>
@@ -598,6 +602,7 @@ <h2>Provider Support Matrix</h2>
<td>SSE</td>
<td>SSE</td>
<td>SSE</td>
<td>SSE</td>
<td>Binary EventStream</td>
<td>SSE</td>
<td>NDJSON</td>
@@ -609,6 +614,7 @@ <h2>Provider Support Matrix</h2>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>&mdash;</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
@@ -626,6 +632,7 @@ <h2>Provider Support Matrix</h2>
<td>&mdash;</td>
<td>&mdash;</td>
<td>&mdash;</td>
<td>&mdash;</td>
</tr>
<tr>
<td>Response Overrides</td>
@@ -634,6 +641,7 @@ <h2>Provider Support Matrix</h2>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>&mdash;</td>
<td>Yes<sup>*</sup></td>
<td>&mdash;</td>