
feat: prepend openrouter model fallbacks#3709

Closed
THVrYXNE wants to merge 1 commit into code-yeongyu:dev from THVrYXNE:dev

Conversation

THVrYXNE commented Apr 28, 2026

Summary

  • Prepends openrouter provider as first fallback for all agents and categories in src/shared/model-requirements.ts
  • Adds reasoningEffort support to FallbackEntry type
  • Preserves all existing fallback chains as secondary options

Model Mapping

| Agent/Category | OpenRouter Model |
| --- | --- |
| sisyphus, prometheus, metis | deepseek/deepseek-v4-pro |
| momus | deepseek/deepseek-v4-pro (xhigh) |
| oracle | moonshotai/kimi-k2.6 |
| librarian, explore | qwen/qwen3.6-flash |
| multimodal-looker | ~google/gemini-flash-latest |
| atlas | xiaomi/mimo-v2.5-pro |
| visual-engineering | ~google/gemini-pro-latest |
| ultrabrain | openai/gpt-5.5 (xhigh) |
| artistry | google/gemini-3.1-pro-preview |
| quick, unspecified-low | qwen/qwen3.6-27b |
| unspecified-high | qwen/qwen3.6-plus |
| writing | ~google/gemini-pro-latest |


Summary by cubic

Prepended openrouter models as the first fallback across all agents and categories for faster, more flexible routing. Added reasoningEffort support and kept existing fallbacks as secondary options.

  • New Features
    • Set first-choice openrouter models per role (examples): sisyphus/prometheus/metis → deepseek/deepseek-v4-pro; librarian/explore → qwen/qwen3.6-flash; oracle → moonshotai/kimi-k2.6; visual-engineering/writing → ~google/gemini-pro-latest; ultrabrain → openai/gpt-5.5 (xhigh).
    • Added reasoningEffort to fallback entries; applied xhigh for momus and ultrabrain.

Written for commit ffd3e9e. Summary will update on new commits.

@github-actions
Contributor

Thank you for your contribution! Before we can merge this PR, we need you to sign our Contributor License Agreement (CLA).

To sign the CLA, please comment on this PR with:

I have read the CLA Document and I hereby sign the CLA

This is a one-time requirement. Once signed, all your future contributions will be automatically accepted.


I have read the CLA Document and I hereby sign the CLA


Sisyphus does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.


cubic-dev-ai (Bot) left a comment


1 issue found across 1 file

Confidence score: 3/5

  • There is a concrete medium-severity risk in src/shared/model-requirements.ts: duplicate requirement keys can silently overwrite earlier fallbackChain entries, which can change effective model selection order at runtime.
  • Because this behavior is deterministic but non-obvious, it increases regression risk for users relying on earlier fallback configuration; this is why the score is a 3 rather than a safer 4-5.
  • Pay close attention to src/shared/model-requirements.ts - duplicate keys can mask dead config and alter fallback precedence.
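A minimal TypeScript sketch of the last-duplicate-wins behavior the review is warning about. The key and chains here are illustrative, not the actual file contents; the sketch uses a spread because tsc itself usually rejects a literal with repeated property names, so a duplicate in practice tends to arrive through merged edits or spreads:

```typescript
type FallbackEntry = { providers: string[]; model: string };
type ModelRequirement = { fallbackChain: FallbackEntry[] };

// Earlier block: openrouter prepended as intended.
const earlier: Record<string, ModelRequirement> = {
  explore: {
    fallbackChain: [{ providers: ["openrouter"], model: "qwen/qwen3.6-flash" }],
  },
};

// Later block reuses the same key.
const duplicate: Record<string, ModelRequirement> = {
  explore: {
    fallbackChain: [{ providers: ["openai"], model: "gpt-5.4-nano" }],
  },
};

// As with a repeated literal key, only the last definition survives:
// the openrouter chain above silently becomes dead config.
const AGENT_MODEL_REQUIREMENTS = { ...earlier, ...duplicate };

const effective = AGENT_MODEL_REQUIREMENTS["explore"].fallbackChain[0].model;
```

This is deterministic but invisible at the call site, which is why the review flags it as a regression risk: the effective fallback order depends on which duplicate comes last in the object.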
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/shared/model-requirements.ts">

<violation number="1" location="src/shared/model-requirements.ts:123">
P2: Duplicate requirement keys silently overwrite earlier fallbackChain blocks, leaving dead config and making the effective model selection order depend on the last duplicate.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@@ -20,6 +20,7 @@ export type ModelRequirement = {
export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {

cubic-dev-ai (Bot) commented Apr 28, 2026


P2: Duplicate requirement keys silently overwrite earlier fallbackChain blocks, leaving dead config and making the effective model selection order depend on the last duplicate.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/shared/model-requirements.ts, line 123:

<comment>Duplicate requirement keys silently overwrite earlier fallbackChain blocks, leaving dead config and making the effective model selection order depend on the last duplicate.</comment>

<file context>
@@ -74,9 +96,33 @@ export const AGENT_MODEL_REQUIREMENTS: Record<string, ModelRequirement> = {
+      { providers: ["openai", "opencode", "vercel"], model: "gpt-5.4-nano" },
+    ],
+  },
+  explore: {
+    fallbackChain: [
+      { providers: ["openrouter"], model: "qwen/qwen3.6-flash" },
</file context>


acamq (Collaborator) commented May 11, 2026

Hi @THVrYXNE, thanks for your contribution!

This PR is being closed because the CLA check has been failing and the PR has been inactive for 7+ days.

The CLA is a requirement for all contributions to this project. To resolve this:

  1. Sign the CLA by following the instructions in the CLA check failure comment on your PR.
  2. Reopen this PR (or open a new one targeting the latest dev branch) once signed.

If you believe this was closed in error, feel free to comment and we'll take another look.

Thanks again for your time and contribution!

@acamq acamq closed this May 11, 2026