
feat: add KubeStellar Console#413

Open
clubanderson wants to merge 1 commit into InftyAI:main from clubanderson:feat/add-kubestellar-console

Conversation

@clubanderson

Summary

Adds KubeStellar Console to the Inference Platform section.

KubeStellar Console is an open source AI-powered multi-cluster Kubernetes dashboard for managing LLM serving clusters, with:

  • GPU monitoring and LLM inference cluster management
  • Benchmark streaming for inference performance tracking
  • Real-time observability across hybrid edge and cloud environments
  • 20+ CNCF integrations (Argo, Kyverno, Prometheus, Grafana)

It is a CNCF Sandbox project licensed under Apache 2.0.

KubeStellar Console is an AI-powered multi-cluster Kubernetes dashboard
for hybrid edge and cloud with GPU monitoring, LLM inference cluster
management, benchmark streaming, and 20+ CNCF integrations. CNCF Sandbox
project (Apache 2.0).

https://github.com/kubestellar/console
https://console.kubestellar.io
Signed-off-by: Andy Anderson <andy@clubanderson.com>
@InftyAI-Agent added labels needs-triage, needs-priority, do-not-merge/needs-kind on Apr 24, 2026
@kerthcet
Member

KubeStellar seems great, but it seems more like a Kubernetes dashboard?

@clubanderson
Author

Great question @kerthcet! It is a Kubernetes dashboard at its core, but it has a significant LLMOps surface:

  • kc-agent — an MCP server that bridges AI coding agents (Claude, Copilot, Codex) to the Kubernetes API for LLM-driven cluster operations
  • AI Mission Explorer — LLM-powered guided workflows for deploying and troubleshooting LLM infrastructure (vLLM, Ollama, TensorRT, KubeRay, etc.)
  • LLM-d stack monitoring — dedicated dashboard cards for LLM serving stacks (model loading, GPU utilization, inference latency)
  • AI recommendations engine — uses LLMs to analyze cluster state and suggest optimizations
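
As a rough illustration of the kc-agent idea, the sketch below shows how an MCP-style bridge might route a JSON-RPC `tools/call` request from an AI coding agent to a Kubernetes API path. This is a hypothetical, stdlib-only sketch; the tool name `list_pods` and the routing logic are illustrative assumptions, not kc-agent's actual interface.

```python
import json

# Hypothetical sketch: translate an MCP-style "tools/call" JSON-RPC request
# into the Kubernetes API path a bridge like kc-agent might hit.
# Tool names and paths here are assumptions for illustration only.

def handle_tool_call(request: dict) -> dict:
    """Route a JSON-RPC 2.0 'tools/call' request to a Kubernetes API path."""
    if request.get("method") != "tools/call":
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    params = request.get("params", {})
    tool = params.get("name")
    args = params.get("arguments", {})
    if tool == "list_pods":
        ns = args.get("namespace", "default")
        # A real bridge would issue a GET against this path on the
        # cluster's API server and return the pod list to the agent.
        api_path = f"/api/v1/namespaces/{ns}/pods"
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "result": {"api_path": api_path}}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32602, "message": f"unknown tool: {tool}"}}

req = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
       "params": {"name": "list_pods",
                  "arguments": {"namespace": "llm-serving"}}}
print(json.dumps(handle_tool_call(req)))
```

The point of the sketch is the shape of the flow (agent request in, cluster API call out), not the specific tool surface, which would be defined by the actual kc-agent implementation.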

That said, if you feel it's not a strong enough fit for this list, happy to close — no hard feelings!

