Commit ba24257

PicoNVIDIA authored and claude committed
feat(inference): add Nemotron 3 Nano Omni to CLOUD_MODEL_OPTIONS
Adds `nvidia/nemotron-3-nano-omni-reasoning-30b-a3b` to the curated cloud model picker, with a matching test entry. Super 120B remains the default.

Motivation: the multimodal hermes-omni-demo cookbook in brevdev/nemoclaw-demos currently has to run a post-onboard `openshell inference set` to switch the gateway from Super to Omni, because the onboarding wizard only exposes Super. Adding Omni here lets users select it directly during `nemoclaw onboard` and lets cookbooks drop the manual swap step.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1 parent 2f92692 commit ba24257

2 files changed: 2 additions & 0 deletions

src/lib/inference-config.test.ts

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ describe("inference selection config", () => {
   it("exposes the curated cloud model picker options", () => {
     expect(CLOUD_MODEL_OPTIONS.map((option: { id: string }) => option.id)).toEqual([
       "nvidia/nemotron-3-super-120b-a12b",
+      "nvidia/nemotron-3-nano-omni-reasoning-30b-a3b",
       "moonshotai/kimi-k2.5",
       "z-ai/glm5",
       "minimaxai/minimax-m2.5",

src/lib/inference-config.ts

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ export const INFERENCE_ROUTE_URL = "https://inference.local/v1";
 export const DEFAULT_CLOUD_MODEL = "nvidia/nemotron-3-super-120b-a12b";
 export const CLOUD_MODEL_OPTIONS = [
   { id: "nvidia/nemotron-3-super-120b-a12b", label: "Nemotron 3 Super 120B" },
+  { id: "nvidia/nemotron-3-nano-omni-reasoning-30b-a3b", label: "Nemotron 3 Nano Omni 30B" },
   { id: "moonshotai/kimi-k2.5", label: "Kimi K2.5" },
   { id: "z-ai/glm5", label: "GLM-5" },
   { id: "minimaxai/minimax-m2.5", label: "MiniMax M2.5" },
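To illustrate how the picker data above might be consumed, here is a minimal standalone sketch. The option values are copied from the diff; the `resolveCloudModel` helper is hypothetical (it is not part of this commit) and simply shows the usual pattern of resolving a user-picked id against `CLOUD_MODEL_OPTIONS` with a fallback to `DEFAULT_CLOUD_MODEL`.

```typescript
// Config values copied from the diff in this commit.
const DEFAULT_CLOUD_MODEL = "nvidia/nemotron-3-super-120b-a12b";

const CLOUD_MODEL_OPTIONS = [
  { id: "nvidia/nemotron-3-super-120b-a12b", label: "Nemotron 3 Super 120B" },
  { id: "nvidia/nemotron-3-nano-omni-reasoning-30b-a3b", label: "Nemotron 3 Nano Omni 30B" },
  { id: "moonshotai/kimi-k2.5", label: "Kimi K2.5" },
  { id: "z-ai/glm5", label: "GLM-5" },
  { id: "minimaxai/minimax-m2.5", label: "MiniMax M2.5" },
] as const;

// Hypothetical helper (not in the repo): resolve a picked id to its option,
// falling back to the default model when the id is unknown.
function resolveCloudModel(id: string) {
  return (
    CLOUD_MODEL_OPTIONS.find((option) => option.id === id) ??
    CLOUD_MODEL_OPTIONS.find((option) => option.id === DEFAULT_CLOUD_MODEL)!
  );
}

console.log(resolveCloudModel("nvidia/nemotron-3-nano-omni-reasoning-30b-a3b").label);
// → "Nemotron 3 Nano Omni 30B"
console.log(resolveCloudModel("unknown/model").label);
// → "Nemotron 3 Super 120B"
```

With Omni now in the options list, an onboarding wizard built on this config can offer it directly, which is exactly what lets the cookbook drop its post-onboard swap step.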

0 commit comments
