5 changes: 5 additions & 0 deletions .changeset/add-ai-mistral.md
@@ -0,0 +1,5 @@
---
'@tanstack/ai-mistral': minor
---

Add new `@tanstack/ai-mistral` adapter package for Mistral models using the `@mistralai/mistralai` SDK. Supports streaming chat, tool calling, vision input (Pixtral / Mistral Medium / Small), structured output via JSON Schema, and reasoning streams (Magistral), emitted as AG-UI `REASONING_*` events. Includes model metadata for Mistral Large, Medium, Small, Ministral 3B/8B, Codestral, Pixtral, Magistral, and Open Mistral Nemo.
329 changes: 329 additions & 0 deletions docs/adapters/mistral.md
@@ -0,0 +1,329 @@
---
title: Mistral
id: mistral-adapter
order: 7
description: "Use Mistral models with TanStack AI – Mistral Large, Mistral Medium, Pixtral vision models, Magistral reasoning models, and Codestral via @tanstack/ai-mistral."
keywords:
- tanstack ai
- mistral
- mistral large
- pixtral
- magistral
- codestral
- adapter
- llm
---

The Mistral adapter provides access to Mistral's chat models, including Mistral Large, the multimodal Pixtral family, the Magistral reasoning models, and the Codestral code-specialized model.

## Installation

```bash
npm install @tanstack/ai-mistral
```

## Basic Usage

```typescript
import { chat } from "@tanstack/ai";
import { mistralText } from "@tanstack/ai-mistral";

const stream = chat({
  adapter: mistralText("mistral-large-latest"),
  messages: [{ role: "user", content: "Hello!" }],
});
```

## Basic Usage - Custom API Key

```typescript
import { chat } from "@tanstack/ai";
import { createMistralText } from "@tanstack/ai-mistral";

const adapter = createMistralText(
  "mistral-large-latest",
  process.env.MISTRAL_API_KEY!,
);

const stream = chat({
  adapter,
  messages: [{ role: "user", content: "Hello!" }],
});
```

## Configuration

```typescript
import {
  createMistralText,
  type MistralTextConfig,
} from "@tanstack/ai-mistral";

const config: Omit<MistralTextConfig, "apiKey"> = {
  serverURL: "https://api.mistral.ai", // Optional, this is the default
  defaultHeaders: {
    "X-Custom-Header": "value",
  },
};

const adapter = createMistralText(
  "mistral-large-latest",
  process.env.MISTRAL_API_KEY!,
  config,
);
```

## Example: Chat Completion

```typescript
import { chat, toServerSentEventsResponse } from "@tanstack/ai";
import { mistralText } from "@tanstack/ai-mistral";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = chat({
    adapter: mistralText("mistral-large-latest"),
    messages,
  });

  return toServerSentEventsResponse(stream);
}
```

## Example: With Tools

```typescript
import { chat, toolDefinition } from "@tanstack/ai";
import { mistralText } from "@tanstack/ai-mistral";
import { z } from "zod";

const getWeatherDef = toolDefinition({
  name: "get_weather",
  description: "Get the current weather for a location",
  inputSchema: z.object({
    location: z.string(),
  }),
});

const getWeather = getWeatherDef.server(async ({ location }) => {
  return { temperature: 72, conditions: "sunny" };
});

const stream = chat({
  adapter: mistralText("mistral-large-latest"),
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [getWeather],
});
```

## Example: Multimodal (Vision)

Use a vision-capable model (`pixtral-large-latest`, `pixtral-12b-2409`, `mistral-medium-latest`, or `mistral-small-latest`) to send images alongside text:

```typescript
import { chat } from "@tanstack/ai";
import { mistralText } from "@tanstack/ai-mistral";

const stream = chat({
  adapter: mistralText("pixtral-large-latest"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", content: "What's in this image?" },
        {
          type: "image",
          source: {
            type: "url",
            value: "https://example.com/photo.jpg",
          },
        },
      ],
    },
  ],
});
```

For data-URL or base64 images, set `source.type` to `"data"` and provide `mimeType`:

```typescript
{
  type: "image",
  source: {
    type: "data",
    mimeType: "image/png",
    value: base64String,
  },
}
```
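One way to produce `base64String` is to base64-encode the raw image bytes with Node's `Buffer` (a minimal sketch; in practice you would typically read the bytes from a file, e.g. with `fs.readFileSync`, rather than hard-coding them):

```typescript
// Base64-encode raw image bytes for the "data" source type.
// These four bytes are just the PNG magic number, used for illustration.
const imageBytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]);
const base64String = Buffer.from(imageBytes).toString("base64");
console.log(base64String); // "iVBORw=="
```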

See [Multimodal Content](../advanced/multimodal-content) for the full content-part shape.

## Example: Reasoning (Magistral)

Magistral models (`magistral-medium-latest`, `magistral-small-latest`) stream their reasoning as separate events before the final answer. The adapter emits AG-UI `REASONING_*` chunks for the thinking content and `TEXT_MESSAGE_*` chunks for the answer:

```typescript
import { chat } from "@tanstack/ai";
import { mistralText } from "@tanstack/ai-mistral";

const stream = chat({
  adapter: mistralText("magistral-medium-latest"),
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});

for await (const chunk of stream) {
  if (chunk.type === "REASONING_MESSAGE_CONTENT") {
    process.stdout.write(`[thinking] ${chunk.delta}`);
  } else if (chunk.type === "TEXT_MESSAGE_CONTENT") {
    process.stdout.write(chunk.delta);
  }
}
```

Reasoning events are always closed before any text or tool output begins, so consumers see a complete `REASONING_START → REASONING_MESSAGE_START → REASONING_MESSAGE_CONTENT* → REASONING_MESSAGE_END → REASONING_END` sequence first.
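Because the two phases never interleave, a consumer can accumulate them into separate buffers with a simple fold. A minimal sketch, with hand-written chunks standing in for the streamed events (real chunks carry additional fields such as message IDs):

```typescript
// Simplified stand-in for the streamed chunk shape.
type Chunk = { type: string; delta?: string };

// Fold a chunk sequence into separate reasoning and answer buffers.
function collect(chunks: Chunk[]): { reasoning: string; answer: string } {
  let reasoning = "";
  let answer = "";
  for (const chunk of chunks) {
    if (chunk.type === "REASONING_MESSAGE_CONTENT") reasoning += chunk.delta ?? "";
    else if (chunk.type === "TEXT_MESSAGE_CONTENT") answer += chunk.delta ?? "";
  }
  return { reasoning, answer };
}

// Reasoning closes before text begins, so the buffers never interleave.
const sample: Chunk[] = [
  { type: "REASONING_START" },
  { type: "REASONING_MESSAGE_START" },
  { type: "REASONING_MESSAGE_CONTENT", delta: "Rayleigh scattering..." },
  { type: "REASONING_MESSAGE_END" },
  { type: "REASONING_END" },
  { type: "TEXT_MESSAGE_START" },
  { type: "TEXT_MESSAGE_CONTENT", delta: "The sky is blue because..." },
  { type: "TEXT_MESSAGE_END" },
];

console.log(collect(sample));
```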

See [Thinking & Reasoning](../chat/thinking-content) for the cross-provider event spec.

## Example: Structured Output

Generate JSON that conforms to a Zod schema using Mistral's `json_schema` response format:

```typescript
import { generate } from "@tanstack/ai";
import { mistralText } from "@tanstack/ai-mistral";
import { z } from "zod";

const recipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
  steps: z.array(z.string()),
});

const result = await generate({
  adapter: mistralText("mistral-large-latest"),
  messages: [
    { role: "user", content: "Give me a chocolate chip cookie recipe." },
  ],
  outputSchema: recipeSchema,
});

console.log(result.data); // typed as z.infer<typeof recipeSchema>
```

See [Structured Outputs](../chat/structured-outputs) for the full guide.

## Model Options

Mistral exposes provider-specific options via `modelOptions`:

```typescript
const stream = chat({
  adapter: mistralText("mistral-large-latest"),
  messages,
  temperature: 0.7,
  topP: 0.9,
  maxTokens: 1024,
  modelOptions: {
    random_seed: 42,
    stop: ["END"],
    safe_prompt: true,
    frequency_penalty: 0.5,
    presence_penalty: 0.5,
    parallel_tool_calls: true,
    tool_choice: "auto",
  },
});
```

> Pass `temperature`, `topP`, and `maxTokens` at the top level, not inside `modelOptions`.

## Environment Variables

Set your API key in environment variables:

```bash
MISTRAL_API_KEY=...
```

Get a key from the [Mistral Console](https://console.mistral.ai/).

## Supported Models

### Chat

- `mistral-large-latest` – Flagship general-purpose model (128k context)
- `mistral-medium-latest` – Multimodal mid-tier model with vision
- `mistral-small-latest` – Fast, affordable multimodal model with vision
- `ministral-8b-latest` – 8B edge model
- `ministral-3b-latest` – 3B edge model
- `open-mistral-nemo` – Open 12B model

### Code

- `codestral-latest` – Code-specialized model (256k context)

### Vision

- `pixtral-large-latest` – Large vision model
- `pixtral-12b-2409` – 12B vision model

### Reasoning

Reasoning content is streamed as `REASONING_*` events before the final answer.

- `magistral-medium-latest` – Mid-tier reasoning model
- `magistral-small-latest` – Small reasoning model

See [Mistral's model comparison](https://docs.mistral.ai/getting-started/models/compare) for full details.

## API Reference

### `mistralText(model, config?)`

Creates a Mistral text adapter using the `MISTRAL_API_KEY` environment variable.

**Parameters:**

- `model` – The model name (e.g., `'mistral-large-latest'`)
- `config.serverURL?` – Custom base URL (optional)
- `config.defaultHeaders?` – Headers to attach to every request (optional)

**Returns:** A Mistral text adapter instance.

### `createMistralText(model, apiKey, config?)`

Creates a Mistral text adapter with an explicit API key.

**Parameters:**

- `model` – The model name
- `apiKey` – Your Mistral API key
- `config.serverURL?` – Custom base URL (optional)
- `config.defaultHeaders?` – Headers to attach to every request (optional)

**Returns:** A Mistral text adapter instance.

## Limitations

- **Embeddings**: Use the [Mistral SDK](https://github.com/mistralai/client-ts) directly for `mistral-embed`.
- **Image / Audio / Video Generation**: Mistral does not provide these endpoints. Use OpenAI, Gemini, or fal.ai.
- **Text-to-Speech / Transcription**: Not supported. Use OpenAI or ElevenLabs.

## Next Steps

- [Getting Started](../getting-started/quick-start) – Learn the basics
- [Tools Guide](../tools/tools) – Define and call tools
- [Structured Outputs](../chat/structured-outputs) – Generate typed JSON
- [Multimodal Content](../advanced/multimodal-content) – Send images and other modalities
- [Other Adapters](./openai) – Explore other providers

## Provider Tools

Mistral does not currently expose provider-specific tool factories.
Define your own tools with `toolDefinition()` from `@tanstack/ai`.

See [Tools](../tools/tools) for the general tool-definition flow, or
[Provider Tools](../tools/provider-tools) for other providers'
native-tool offerings.
4 changes: 4 additions & 0 deletions docs/config.json
@@ -274,6 +274,10 @@
"label": "Groq",
"to": "adapters/groq"
},
{
"label": "Mistral",
"to": "adapters/mistral"
},
{
"label": "ElevenLabs",
"to": "adapters/elevenlabs"