
Commit 1bdd07c

feat: auto-sync provider models from OpenRouter API (#433)
* feat: add script to fetch models from OpenRouter API
* feat: add script to sync OpenRouter models into provider packages
* fix: harden regex and replacement patterns in sync script
  Use replacer functions in String.replace() to prevent $-character interpretation, and use a non-greedy regex for array matching to handle potential `]` characters in comments.
* feat: add generate:models:fetch and generate:models:sync scripts
* ci: add daily model sync workflow
* fix: include openrouter.models.ts in CI commit
* fix: use actual input modalities, exclude non-chat models, reduce CI noise
  - Use OpenRouter's actual input_modalities instead of blindly copying the reference model's modalities (fixes text-only models claiming image support)
  - Filter modalities to only include types valid for each provider's interface
  - Add a blocklist for non-chat model families (lyria, veo, imagen, sora, dall-e, tts)
  - Swap field order to supports-before-pricing, matching the existing convention
  - CI only creates a PR when package files actually changed (not just openrouter.models.ts)
* ci: apply automated fixes
* fix: fix YAML parsing in sync-models workflow
* fix: address PR review feedback
  - Exclude non-chat model families (lyria, veo, imagen, sora, dall-e, tts) from OPENROUTER_CHAT_MODELS in the convert script
  - Add a 30s fetch timeout via AbortSignal.timeout
  - Fix import ordering (dirname before resolve)
  - Remove the redundant isNonChatModel check in the chatModels filter
* ci: apply automated fixes
* refactor: rename scripts and move changeset creation into sync script
  - generate:models is now the full pipeline (fetch + convert + sync)
  - regenerate:models is the old openrouter-only convert
  - The sync script now creates a changeset file when models are added
  - CI workflow simplified (no longer creates the changeset, just commits)
  - Restored a missing workflow file lost during merge
* chore: sync model metadata from OpenRouter API
* ci: apply automated fixes
* feat: add model age filtering and provider skip patterns
  - Skip deprecated/legacy model families per provider:
    - OpenAI: gpt-3.5-*, gpt-4-*, gpt-4o*, gpt-oss-*, chatgpt-*
    - Google: gemma-* (open-source, not Gemini API models)
  - Skip models created >30 days before the last sync run
  - Track the last run timestamp in scripts/.sync-models-last-run
  - Re-sync with filters: 5 models added (was 47 without filters)
* ci: apply automated fixes
* fix: reuse existing sync-models changeset instead of creating duplicates
* ci: apply automated fixes
* chore: sync model metadata from OpenRouter API
* ci: apply automated fixes

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
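The regex hardening described above can be sketched as follows. This is an illustration of the two techniques named in the commit message (a replacer function to keep `$` sequences literal, and a non-greedy match so a stray `]` in a later comment is not swallowed), not the actual sync script — `injectBlock`, `generated`, and `file` are hypothetical names:

```typescript
// A replacement *string* passed to String.replace() interprets sequences
// like $& and $', so generated code containing `$` can corrupt the output.
// A replacer *function* returns its result verbatim.
const generated = "pricing: { input: '$&cheap' }"

function injectBlock(source: string, block: string): string {
  // Non-greedy [\s\S]*? stops at the first `] as const`, so a `]`
  // appearing later in the file (e.g. in a comment) is not swallowed.
  return source.replace(
    /export const MODELS = \[[\s\S]*?\] as const/,
    () => `export const MODELS = [\n${block}\n] as const`,
  )
}

const file = "export const MODELS = [\n  'old',\n] as const // trailing ] in a comment"
const updated = injectBlock(file, `  ${JSON.stringify(generated)},`)
console.log(updated)
```

With a plain replacement string, the `$&` inside `generated` would expand to the matched text; the replacer function keeps it literal, and the non-greedy regex leaves the trailing comment untouched.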
1 parent ba46ba6 commit 1bdd07c

12 files changed: 4,617 additions & 680 deletions


.changeset/sync-models.md

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+---
+'@tanstack/ai-anthropic': patch
+'@tanstack/ai-gemini': patch
+'@tanstack/ai-grok': patch
+'@tanstack/ai-openai': patch
+'@tanstack/ai-openrouter': patch
+---
+
+Update model metadata from OpenRouter API
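The sync script creates this changeset when models are added. A minimal sketch of that step — the package list and summary text come from this diff, but `buildChangeset` itself is a hypothetical helper, not the script's real API:

```typescript
// Build the contents of a changeset file like .changeset/sync-models.md:
// YAML frontmatter mapping each changed package to a "patch" bump,
// followed by a blank line and the human-readable summary.
function buildChangeset(packages: Array<string>, summary: string): string {
  const frontmatter = packages.map((p) => `'${p}': patch`).join('\n')
  return `---\n${frontmatter}\n---\n\n${summary}\n`
}

console.log(
  buildChangeset(
    ['@tanstack/ai-anthropic', '@tanstack/ai-openai'],
    'Update model metadata from OpenRouter API',
  ),
)
```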

.github/workflows/sync-models.yml

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
+name: Sync Model Metadata
+
+on:
+  schedule:
+    - cron: '0 6 * * *'
+  workflow_dispatch:
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+permissions:
+  contents: write
+  pull-requests: write
+
+jobs:
+  sync:
+    name: Sync Models
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v6.0.2
+        with:
+          fetch-depth: 0
+
+      - name: Setup Tools
+        uses: TanStack/config/.github/setup@main
+
+      - name: Fetch and sync model metadata
+        run: pnpm generate:models
+
+      - name: Check for package changes
+        id: changes
+        run: |
+          if git diff --quiet -- packages/; then
+            echo "changed=false" >> $GITHUB_OUTPUT
+          else
+            echo "changed=true" >> $GITHUB_OUTPUT
+          fi
+
+      - name: Commit and force-push
+        if: steps.changes.outputs.changed == 'true'
+        run: |
+          git config user.name "github-actions[bot]"
+          git config user.email "github-actions[bot]@users.noreply.github.com"
+          git add packages/ scripts/openrouter.models.ts scripts/.sync-models-last-run .changeset/
+          git commit -m "chore: sync model metadata from OpenRouter"
+          git push --force origin HEAD:automated/sync-models
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Create or update PR
+        if: steps.changes.outputs.changed == 'true'
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        shell: bash
+        run: |
+          BRANCH="automated/sync-models"
+          EXISTING_PR=$(gh pr list --head "$BRANCH" --base main --json number --jq '.[0].number' 2>/dev/null || true)
+
+          if [ -z "$EXISTING_PR" ] || [ "$EXISTING_PR" = "null" ]; then
+            BODY=$(cat <<'PRBODY'
+          Automated daily sync of model metadata from the OpenRouter API.
+
+          - Fetches the latest model list from OpenRouter
+          - Converts to the internal adapter format
+          - Syncs provider-specific model metadata for affected packages
+          - Creates a patch changeset for all changed packages
+          PRBODY
+          )
+            gh pr create \
+              --title "chore: sync model metadata from OpenRouter" \
+              --body "$BODY" \
+              --base main \
+              --head "$BRANCH"
+          fi
package.json

Lines changed: 3 additions & 1 deletion
@@ -39,7 +39,9 @@
     "dev": "pnpm run watch",
     "format": "prettier --experimental-cli --ignore-unknown '**/*' --write",
     "generate-docs": "node scripts/generate-docs.ts && pnpm run copy:readme",
-    "generate:models": "tsx scripts/convert-openrouter-models.ts",
+    "generate:models": "pnpm generate:models:fetch && pnpm regenerate:models && tsx scripts/sync-provider-models.ts",
+    "generate:models:fetch": "tsx scripts/fetch-openrouter-models.ts",
+    "regenerate:models": "tsx scripts/convert-openrouter-models.ts",
     "sync-docs-config": "node scripts/sync-docs-config.ts",
     "copy:readme": "cp README.md packages/typescript/ai/README.md && cp README.md packages/typescript/ai-devtools/README.md && cp README.md packages/typescript/preact-ai-devtools/README.md && cp README.md packages/typescript/ai-client/README.md && cp README.md packages/typescript/ai-gemini/README.md && cp README.md packages/typescript/ai-ollama/README.md && cp README.md packages/typescript/ai-openai/README.md && cp README.md packages/typescript/ai-react/README.md && cp README.md packages/typescript/ai-react-ui/README.md && cp README.md packages/typescript/react-ai-devtools/README.md && cp README.md packages/typescript/solid-ai-devtools/README.md",
     "changeset": "changeset",
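Per the commit message, `generate:models:fetch` hits the OpenRouter API with a 30s timeout via AbortSignal.timeout. A minimal sketch of what scripts/fetch-openrouter-models.ts might contain under that assumption (the endpoint is OpenRouter's public models API; everything else about the script's contents is inferred):

```typescript
// Fetch the OpenRouter model list, aborting if the API
// does not respond within 30 seconds.
async function fetchOpenRouterModels(): Promise<unknown> {
  const res = await fetch('https://openrouter.ai/api/v1/models', {
    signal: AbortSignal.timeout(30_000),
  })
  if (!res.ok) throw new Error(`OpenRouter API returned ${res.status}`)
  return res.json()
}
```

AbortSignal.timeout produces a signal that auto-aborts after the given number of milliseconds, so the script fails fast in CI instead of hanging the daily workflow run.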

packages/typescript/ai-anthropic/src/model-meta.ts

Lines changed: 41 additions & 0 deletions
@@ -423,6 +423,36 @@ const CLAUDE_HAIKU_3 = {
     ? TMessageCapabilities
     : unknown */
 
+const CLAUDE_OPUS_4_6_FAST = {
+  name: 'claude-opus-4.6-fast',
+  id: 'claude-opus-4.6-fast',
+  context_window: 1_000_000,
+  max_output_tokens: 128_000,
+  supports: {
+    input: ['text', 'image'],
+    extended_thinking: true,
+    priority_tier: true,
+  },
+  pricing: {
+    input: {
+      normal: 30,
+      cached: 3,
+    },
+    output: {
+      normal: 150,
+    },
+  },
+} as const satisfies ModelMeta<
+  AnthropicContainerOptions &
+    AnthropicContextManagementOptions &
+    AnthropicMCPOptions &
+    AnthropicServiceTierOptions &
+    AnthropicStopSequencesOptions &
+    AnthropicThinkingOptions &
+    AnthropicToolChoiceOptions &
+    AnthropicSamplingOptions
+>
+
 export const ANTHROPIC_MODELS = [
   CLAUDE_OPUS_4_6.id,
   CLAUDE_OPUS_4_5.id,
@@ -435,6 +465,8 @@ export const ANTHROPIC_MODELS = [
   CLAUDE_OPUS_4.id,
   CLAUDE_HAIKU_3_5.id,
   CLAUDE_HAIKU_3.id,
+
+  CLAUDE_OPUS_4_6_FAST.id,
 ] as const
 
 // const ANTHROPIC_IMAGE_MODELS = [] as const
@@ -537,6 +569,14 @@ export type AnthropicChatModelProviderOptionsByName = {
     AnthropicStopSequencesOptions &
     AnthropicToolChoiceOptions &
     AnthropicSamplingOptions
+  [CLAUDE_OPUS_4_6_FAST.id]: AnthropicContainerOptions &
+    AnthropicContextManagementOptions &
+    AnthropicMCPOptions &
+    AnthropicServiceTierOptions &
+    AnthropicStopSequencesOptions &
+    AnthropicThinkingOptions &
+    AnthropicToolChoiceOptions &
+    AnthropicSamplingOptions
 }
 
 /**
@@ -562,4 +602,5 @@ export type AnthropicModelInputModalitiesByName = {
   [CLAUDE_OPUS_4.id]: typeof CLAUDE_OPUS_4.supports.input
   [CLAUDE_HAIKU_3_5.id]: typeof CLAUDE_HAIKU_3_5.supports.input
   [CLAUDE_HAIKU_3.id]: typeof CLAUDE_HAIKU_3.supports.input
+  [CLAUDE_OPUS_4_6_FAST.id]: typeof CLAUDE_OPUS_4_6_FAST.supports.input
 }

packages/typescript/ai-grok/src/model-meta.ts

Lines changed: 43 additions & 0 deletions
@@ -213,6 +213,44 @@ const GROK_2_IMAGE = {
  * Grok Chat Models
  * Based on xAI's available models as of 2025
  */
+const GROK_4_20 = {
+  name: 'grok-4.20',
+  context_window: 2_000_000,
+  supports: {
+    input: ['text', 'image', 'document'],
+    output: ['text'],
+    capabilities: ['reasoning', 'structured_outputs', 'tool_calling'],
+  },
+  pricing: {
+    input: {
+      normal: 2,
+      cached: 0.2,
+    },
+    output: {
+      normal: 6,
+    },
+  },
+} as const satisfies ModelMeta
+
+const GROK_4_20_MULTI_AGENT = {
+  name: 'grok-4.20-multi-agent',
+  context_window: 2_000_000,
+  supports: {
+    input: ['text', 'image', 'document'],
+    output: ['text'],
+    capabilities: ['reasoning', 'structured_outputs', 'tool_calling'],
+  },
+  pricing: {
+    input: {
+      normal: 2,
+      cached: 0.2,
+    },
+    output: {
+      normal: 6,
+    },
+  },
+} as const satisfies ModelMeta
+
 export const GROK_CHAT_MODELS = [
   GROK_4_1_FAST_REASONING.name,
   GROK_4_1_FAST_NON_REASONING.name,
@@ -223,6 +261,9 @@ export const GROK_CHAT_MODELS = [
   GROK_3.name,
   GROK_3_MINI.name,
   GROK_2_VISION.name,
+
+  GROK_4_20.name,
+  GROK_4_20_MULTI_AGENT.name,
 ] as const
 
 /**
@@ -247,6 +288,8 @@ export type GrokModelInputModalitiesByName = {
   [GROK_3.name]: typeof GROK_3.supports.input
   [GROK_3_MINI.name]: typeof GROK_3_MINI.supports.input
   [GROK_2_VISION.name]: typeof GROK_2_VISION.supports.input
+  [GROK_4_20.name]: typeof GROK_4_20.supports.input
+  [GROK_4_20_MULTI_AGENT.name]: typeof GROK_4_20_MULTI_AGENT.supports.input
 }
 
 /**

packages/typescript/ai-openai/src/model-meta.ts

Lines changed: 97 additions & 0 deletions
@@ -1646,6 +1646,86 @@ const TTS_1_HD = {
 > */
 
 // Chat/text completion models (based on endpoints: "chat" or "chat-completions")
+const GPT_5_4_MINI = {
+  name: 'gpt-5.4-mini',
+  context_window: 400_000,
+  max_output_tokens: 128_000,
+  supports: {
+    input: ['image', 'text'],
+    output: ['text'],
+    endpoints: ['chat', 'chat-completions'],
+    features: [
+      'streaming',
+      'function_calling',
+      'structured_outputs',
+      'distillation',
+    ],
+    tools: [
+      'web_search',
+      'file_search',
+      'image_generation',
+      'code_interpreter',
+      'mcp',
+    ],
+  },
+  pricing: {
+    input: {
+      normal: 0.75,
+      cached: 0.075,
+    },
+    output: {
+      normal: 4.5,
+    },
+  },
+} as const satisfies ModelMeta<
+  OpenAIBaseOptions &
+    OpenAIReasoningOptions &
+    OpenAIStructuredOutputOptions &
+    OpenAIToolsOptions &
+    OpenAIStreamingOptions &
+    OpenAIMetadataOptions
+>
+
+const GPT_5_4_NANO = {
+  name: 'gpt-5.4-nano',
+  context_window: 400_000,
+  max_output_tokens: 128_000,
+  supports: {
+    input: ['image', 'text'],
+    output: ['text'],
+    endpoints: ['chat', 'chat-completions'],
+    features: [
+      'streaming',
+      'function_calling',
+      'structured_outputs',
+      'distillation',
+    ],
+    tools: [
+      'web_search',
+      'file_search',
+      'image_generation',
+      'code_interpreter',
+      'mcp',
+    ],
+  },
+  pricing: {
+    input: {
+      normal: 0.2,
+      cached: 0.02,
+    },
+    output: {
+      normal: 1.25,
+    },
+  },
+} as const satisfies ModelMeta<
+  OpenAIBaseOptions &
+    OpenAIReasoningOptions &
+    OpenAIStructuredOutputOptions &
+    OpenAIToolsOptions &
+    OpenAIStreamingOptions &
+    OpenAIMetadataOptions
+>
+
 export const OPENAI_CHAT_MODELS = [
   // Frontier models
   GPT5_2.name,
@@ -1694,6 +1774,9 @@ export const OPENAI_CHAT_MODELS = [
   // Legacy reasoning
   O1.name,
   O1_PRO.name,
+
+  GPT_5_4_MINI.name,
+  GPT_5_4_NANO.name,
 ] as const
 
 export type OpenAIChatModel = (typeof OPENAI_CHAT_MODELS)[number]
@@ -1947,6 +2030,18 @@ export type OpenAIChatModelProviderOptionsByName = {
     OpenAIToolsOptions &
     OpenAIStreamingOptions &
     OpenAIMetadataOptions
+  [GPT_5_4_MINI.name]: OpenAIBaseOptions &
+    OpenAIReasoningOptions &
+    OpenAIStructuredOutputOptions &
+    OpenAIToolsOptions &
+    OpenAIStreamingOptions &
+    OpenAIMetadataOptions
+  [GPT_5_4_NANO.name]: OpenAIBaseOptions &
+    OpenAIReasoningOptions &
+    OpenAIStructuredOutputOptions &
+    OpenAIToolsOptions &
+    OpenAIStreamingOptions &
+    OpenAIMetadataOptions
 }
 
 /**
@@ -2002,4 +2097,6 @@ export type OpenAIModelInputModalitiesByName = {
   [O3_MINI.name]: typeof O3_MINI.supports.input
   [GPT_4O_SEARCH_PREVIEW.name]: typeof GPT_4O_SEARCH_PREVIEW.supports.input
   [GPT_4O_MINI_SEARCH_PREVIEW.name]: typeof GPT_4O_MINI_SEARCH_PREVIEW.supports.input
+  [GPT_5_4_MINI.name]: typeof GPT_5_4_MINI.supports.input
+  [GPT_5_4_NANO.name]: typeof GPT_5_4_NANO.supports.input
 }
