The shortest path from an empty Kubernetes cluster to a governed MCP endpoint: install the control plane, registry, broker, and Sentinel stack; deploy one MCP server; grant access; and observe live traffic.
- Go 1.25+ (matches the repository `go.mod` files)
- `make`
- Docker or a Docker-compatible client, with the daemon running and reachable
- `kubectl` on `PATH`, configured for the target cluster
- `curl`, `jq`, and `python3` for documented dev and traffic-generation flows
- A Kubernetes cluster (k3s, kind, minikube, Docker Desktop Kubernetes, EKS — see cluster-readiness.md for distribution-specific prep)
Host bootstrap:
```shell
make deps-install                    # best-effort install for supported macOS/Linux hosts
STRICT_DEPS_CHECK=1 make deps-check
```

`make deps-install` is intentionally best-effort: it can install some packages with Homebrew or apt, but it cannot enable Docker Desktop, create cloud credentials, or configure your kubeconfig. Re-run `STRICT_DEPS_CHECK=1 make deps-check` until the required host tools pass.
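Until `make deps-check` passes cleanly, you can spot-check the required host tools by hand. The loop below is a rough stand-in for illustration, not the real `deps-check` logic:

```shell
# Hand-rolled spot check: verify each required host tool resolves on PATH
# and report anything missing. This approximates, but is not, deps-check.
missing=""
for tool in go make docker kubectl curl jq python3; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all required tools found"
fi
```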
```shell
make deps
make build
```

This produces `./bin/mcp-runtime`.

```shell
./bin/mcp-runtime bootstrap
```

Before setup, confirm the target Kubernetes cluster is ready for registry pushes, image pulls, ingress, storage, and TLS. See cluster-readiness.md for distribution-specific preparation.
`setup` installs MCP Runtime resources into an already-running cluster. It does not configure node DNS, containerd or Docker registry trust, public DNS, TLS issuers, image pull credentials, or storage classes. Fix those prerequisites with your platform tooling before continuing.

`bootstrap` validates kubectl connectivity, CoreDNS, the default StorageClass, the Traefik IngressClass, and the MetalLB namespace. It emits warnings only — fix gaps with your platform tooling, or run `bootstrap --apply --provider k3s` to install the bundled CoreDNS / local-path components on k3s. After setup, run `cluster doctor` to validate the installed MCP Runtime resources, registry pulls, ingress, Sentinel, and operator readiness.
For local contributor work, use a disposable Kind cluster and `setup --test-mode`. This path is for development and CI-style validation: it uses the HTTP ingress overlay, avoids public DNS/TLS, and assumes local Docker can build the runtime images. It does not skip builds: `setup` builds and pushes the operator, gateway proxy, and Sentinel images with `latest` tags to the configured or bundled registry.
Create Kind with the registry mirror MCP Runtime expects for image pulls:
```shell
cat > /tmp/mcp-runtime-kind.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.registry.svc.cluster.local:5000"]
    endpoint = ["http://127.0.0.1:32000"]
EOF
kind create cluster --name mcp-runtime --config /tmp/mcp-runtime-kind.yaml
kubectl config use-context kind-mcp-runtime
```

In test mode, `setup` intentionally emits pod image references under `registry.registry.svc.cluster.local:5000/...` so the image host matches this Kind containerd mirror exactly instead of using a mutable registry Service ClusterIP:port.
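To see why the exact host string matters, consider how the registry host is read out of an image reference, since containerd matches mirror keys against that exact string. The helper below is a hypothetical sketch for illustration, not part of MCP Runtime or containerd:

```shell
# Hypothetical sketch: extract the registry-host portion of an image ref,
# i.e. the string containerd compares against its [registry.mirrors."…"] keys.
image_host() {
  case "$1" in
    */*) host="${1%%/*}"
         case "$host" in
           *.*|*:*|localhost) printf '%s\n' "$host" ;;
           *) printf 'docker.io\n' ;;   # no registry component: default registry
         esac ;;
    *) printf 'docker.io\n' ;;
  esac
}
image_host registry.registry.svc.cluster.local:5000/go-example-mcp:dev
# → registry.registry.svc.cluster.local:5000
image_host busybox:1.36
# → docker.io
```

If the pod image says `registry.registry.svc.cluster.local:5000/...`, only a mirror keyed on that exact host applies.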
Build the CLI, run bootstrap, and install the stack in test mode:
```shell
make deps
make build
./bin/mcp-runtime bootstrap
MCP_SETUP_WAIT_TIMEOUT=900 \
  ./bin/mcp-runtime setup --test-mode \
  --ingress-manifest config/ingress/overlays/http
```

Confirm the install and expose the local dashboard/gateway:
```shell
./bin/mcp-runtime status
./bin/mcp-runtime cluster status
./bin/mcp-runtime registry status
./bin/mcp-runtime sentinel status
./bin/mcp-runtime cluster doctor
kubectl port-forward -n traefik svc/traefik 18080:8000
```

`cluster doctor` is most useful after setup because it validates the installed MCP Runtime components, registry pulls, ingress, Sentinel, and operator readiness. On a fresh cluster before setup, those resources do not exist yet.
Local URLs:

- Dashboard UI: http://localhost:18080/
- API: http://localhost:18080/api
- Demo MCP routes, after applying demo servers: http://localhost:18080/<server-name>/mcp
The MCP Servers tab exposes a copyable connect config. In this local test-mode flow, that config should use the same reachable local origin, for example:
```json
{
  "mcpServers": {
    "go-example-mcp": {
      "type": "http",
      "url": "http://localhost:18080/go-example-mcp/mcp"
    }
  }
}
```

When the platform is installed with `MCP_PLATFORM_DOMAIN=mcpruntime.org` or an explicit `MCP_MCP_INGRESS_HOST`, production connect configs should use the public MCP host instead, for example `https://mcp.mcpruntime.org/go-example-mcp/mcp`.
`setup --test-mode` also seeds development-only email/password logins in the platform identity store:

| Role | Email | Password |
|---|---|---|
| User | test@mcpruntime.org | test@123 |
| Admin | admin@mcpruntime.org | admin@123 |
These credentials are for local Kind/debugging only. They are enabled by the managed `mcp-sentinel-secrets` key `PLATFORM_DEV_LOGIN_ENABLED=true` and can be disabled or overridden by editing the `PLATFORM_DEV_*` keys before rolling the API deployment.
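To confirm the flag on a live cluster, decode it the same way as any secret value. The encoded value below is synthetic stand-in data; against a real cluster you would pipe `kubectl get secret mcp-sentinel-secrets -n mcp-sentinel -o jsonpath='{.data.PLATFORM_DEV_LOGIN_ENABLED}'` into the same decode:

```shell
# Secret data values are base64-encoded; decode the flag the same way you
# would any mcp-sentinel-secrets key. ENCODED stands in for jsonpath output.
ENCODED="$(printf 'true' | base64)"
printf '%s' "$ENCODED" | base64 -d
# → true
```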
After setup --test-mode is complete, you do not need to rerun setup for every
service change. Edit the service under services/, build only that image, push
it to the bundled registry, and update only that Kubernetes Deployment.
Run the targeted service tests before rebuilding the image:
```shell
(cd services/api && go test ./... -count=1)
(cd services/ui && go test ./... -count=1)
node --check services/ui/static/app.js
```

For API/UI changes that cross the browser-to-API proxy boundary, rebuild and roll both services. A common example is the MCP connect config URL: the UI forwards the browser origin to the API, and the API uses that origin to return a local URL such as `http://localhost:18080/<server-name>/mcp`; production hosts map `platform.<domain>` to `mcp.<domain>`.
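That origin-to-URL mapping can be sketched in shell. This is a simplification for illustration, not the API's actual code:

```shell
# Illustrative sketch of the connect-URL derivation: local origins are reused
# as-is, while a production platform.<domain> origin is rewritten to the
# mcp.<domain> host before the server path is appended.
connect_url() {
  origin="$1"; server="$2"
  case "$origin" in
    *://platform.*) origin="$(printf '%s' "$origin" | sed 's|://platform\.|://mcp.|')" ;;
  esac
  printf '%s/%s/mcp\n' "$origin" "$server"
}
connect_url http://localhost:18080 go-example-mcp
# → http://localhost:18080/go-example-mcp/mcp
connect_url https://platform.mcpruntime.org go-example-mcp
# → https://mcp.mcpruntime.org/go-example-mcp/mcp
```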
For the UI, for example:
```shell
SERVICE=ui
IMAGE_REPO=mcp-sentinel-ui
DOCKERFILE=services/ui/Dockerfile
BUILD_CONTEXT=services/ui
DEPLOYMENT=mcp-sentinel-ui
CONTAINER=ui
TAG="${SERVICE}-dev-$(date +%s)"
LOCAL_IMAGE="${IMAGE_REPO}:${TAG}"
REGISTRY=registry.registry.svc.cluster.local:5000

docker build -t "$LOCAL_IMAGE" -f "$DOCKERFILE" "$BUILD_CONTEXT"
./bin/mcp-runtime registry push \
  --image "$LOCAL_IMAGE" \
  --name "$IMAGE_REPO" \
  --registry "$REGISTRY" \
  --namespace registry
kubectl -n mcp-sentinel set image \
  "deployment/$DEPLOYMENT" \
  "$CONTAINER=$REGISTRY/$IMAGE_REPO:$TAG"
kubectl -n mcp-sentinel rollout status "deployment/$DEPLOYMENT" --timeout=90s
```

Keep the Traefik port-forward running and refresh the local URL: http://localhost:18080/. Use a new tag for each build so Kubernetes does not reuse an older `IfNotPresent` image from the node cache.
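The fresh-tag rule is easy to see in isolation: a timestamp suffix changes on every build, so the kubelet's `IfNotPresent` check never matches a previously cached image:

```shell
# Two builds at least a second apart get distinct tags, so `set image`
# always points at a tag the node has not cached yet.
TAG_A="ui-dev-$(date +%s)"
sleep 1
TAG_B="ui-dev-$(date +%s)"
[ "$TAG_A" != "$TAG_B" ] && echo "tags differ: $TAG_A vs $TAG_B"
```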
Use the same commands with the variables below for other Sentinel services:
| Service | Edit path | Image repo | Dockerfile | Build context | Deployment | Container |
|---|---|---|---|---|---|---|
| UI | services/ui | mcp-sentinel-ui | services/ui/Dockerfile | services/ui | mcp-sentinel-ui | ui |
| API | services/api | mcp-sentinel-api | services/api/Dockerfile | . | mcp-sentinel-api | api |
| Ingest | services/ingest | mcp-sentinel-ingest | services/ingest/Dockerfile | services/ingest | mcp-sentinel-ingest | ingest |
| Processor | services/processor | mcp-sentinel-processor | services/processor/Dockerfile | services/processor | mcp-sentinel-processor | processor |
`services/mcp-proxy` is different: it runs as the `mcp-gateway` sidecar inside each MCP server pod. To test proxy changes, build and push `mcp-sentinel-mcp-proxy`, update the operator's `MCP_GATEWAY_PROXY_IMAGE`, then restart the operator and recreate or restart the affected MCP server pods so the sidecar image is injected again.
If pods report `ImagePullBackOff`, run `./bin/mcp-runtime cluster doctor`. For Kind test mode, the usual cause is a cluster created without the `registry.registry.svc.cluster.local:5000` mirror to `127.0.0.1:32000`. If pod events include `http: server gave HTTP response to HTTPS client`, the node's containerd tried HTTPS against the HTTP dev registry. Configure the insecure registry mirror for the exact image host string in the pod image reference (`registry.registry.svc.cluster.local:5000` in the documented Kind flow), or use TLS.

On k3s with the bundled plain-HTTP registry, that exact host may be the registry Service ClusterIP:port such as `10.43.x.x:5000`; add a matching `/etc/rancher/k3s/registries.yaml` mirror and restart k3s. On hosts where `~/.kube/config` is empty or minimal, run `setup` with `--kubeconfig /etc/rancher/k3s/k3s.yaml`.
If setup reached image deployment before the k3s mirror was configured, copy
the registry Internal URL from setup output into registries.yaml, restart
k3s/containerd, then rerun setup. The rerun republishes the latest images;
clear partial runtime namespaces first if StatefulSet storage was interrupted
during the failed run.
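A `registries.yaml` mirror for the bundled plain-HTTP registry might look like the sketch below; the `10.43.0.10:5000` address is illustrative, so substitute the registry Internal URL that setup prints:

```shell
# Illustrative /etc/rancher/k3s/registries.yaml; replace 10.43.0.10:5000 with
# the registry Internal URL reported by setup.
cat > /tmp/registries.yaml <<'EOF'
mirrors:
  "10.43.0.10:5000":
    endpoint:
      - "http://10.43.0.10:5000"
EOF
grep -c endpoint /tmp/registries.yaml
# → 1
```

Copy the file to `/etc/rancher/k3s/registries.yaml` with root privileges and restart k3s before rerunning setup.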
With the port-forward still running, open http://localhost:18080/ to confirm
the platform dashboard loads. Then deploy the bundled Go MCP example through the
same build, push, generate, and deploy path contributors use for server work.
Create a local metadata file that enables gateway policy and Sentinel analytics:
```shell
cat > /tmp/go-example-mcp.yaml <<'EOF'
version: v1
servers:
  - name: go-example-mcp
    description: Go MCP example server with smoke and text transformation tools.
    route: /go-example-mcp/mcp
    publicPathPrefix: go-example-mcp
    port: 8088
    namespace: mcp-servers
    envVars:
      - name: MCP_PATH
        value: /go-example-mcp/mcp
    tools:
      - name: add
        description: Add two numbers.
        requiredTrust: low
      - name: upper
        description: Uppercase the provided message.
        requiredTrust: medium
    auth:
      mode: header
      humanIDHeader: X-MCP-Human-ID
      agentIDHeader: X-MCP-Agent-ID
      sessionIDHeader: X-MCP-Agent-Session
    policy:
      mode: allow-list
      defaultDecision: deny
      policyVersion: v1
    session:
      required: true
    gateway:
      enabled: true
    analytics:
      enabled: true
      ingestURL: http://mcp-sentinel-ingest.mcp-sentinel.svc.cluster.local:8081/events
      apiKeySecretRef:
        name: go-example-mcp-analytics
        key: api-key
EOF
```

Create the analytics secret in the server namespace:

```shell
API_KEY="$(
  kubectl get secret mcp-sentinel-secrets -n mcp-sentinel \
    -o jsonpath='{.data.INGEST_API_KEYS}' | base64 -d | cut -d, -f1
)"
kubectl create secret generic go-example-mcp-analytics \
  -n mcp-servers \
  --from-literal=api-key="$API_KEY" \
  --dry-run=client -o yaml | kubectl apply -f -
```

Build and push the image into the Kind-accessible registry:
```shell
./bin/mcp-runtime server build image go-example-mcp \
  --metadata-file /tmp/go-example-mcp.yaml \
  --dockerfile examples/go-mcp-server/Dockerfile \
  --context examples/go-mcp-server \
  --registry registry.registry.svc.cluster.local:5000 \
  --tag dev
./bin/mcp-runtime registry push \
  --image registry.registry.svc.cluster.local:5000/go-example-mcp:dev
```

Generate and deploy the Kubernetes manifests:

```shell
rm -rf /tmp/go-example-mcp-manifests
./bin/mcp-runtime pipeline generate \
  --file /tmp/go-example-mcp.yaml \
  --output /tmp/go-example-mcp-manifests
./bin/mcp-runtime pipeline deploy --dir /tmp/go-example-mcp-manifests
kubectl rollout status deploy/go-example-mcp -n mcp-servers --timeout=180s
./bin/mcp-runtime server status --namespace mcp-servers
```

In Kind, `server status` may show `PartiallyReady` while the Deployment is ready and traffic works. That usually means Traefik is routing through the local port-forward but has not written `Ingress.status.loadBalancer.ingress[]`; see Local development notes when you want permissive ingress readiness for this setup.
Useful server and policy checks while iterating:
```shell
SERVER=go-example-mcp
NAMESPACE=mcp-servers
CONTAINER=go-example-mcp

kubectl get mcpservers -n "$NAMESPACE"
kubectl get deploy/"$SERVER" svc/"$SERVER" ingress/"$SERVER" -n "$NAMESPACE" -o wide
kubectl get cm -n "$NAMESPACE" "${SERVER}-gateway-policy" -o yaml
kubectl get mcpaccessgrant,mcpagentsession -n "$NAMESPACE" -o wide
kubectl get pods -n "$NAMESPACE" -o wide
kubectl get pods -n "$NAMESPACE" \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .spec.containers[*]}{.name}{","}{end}{"\n"}{end}'
POD="$(
  kubectl get pods -n "$NAMESPACE" -l app="$SERVER" \
    -o jsonpath='{.items[0].metadata.name}'
)"
kubectl describe mcpserver -n "$NAMESPACE" "$SERVER"
kubectl describe pod -n "$NAMESPACE" "$POD"
kubectl logs -n "$NAMESPACE" "$POD" -c "$CONTAINER"
kubectl logs -n "$NAMESPACE" "$POD" -c mcp-gateway
./bin/mcp-runtime server logs "$SERVER" --namespace "$NAMESPACE"
./bin/mcp-runtime server policy inspect "$SERVER" --namespace "$NAMESPACE"
kubectl get cm -n "$NAMESPACE" "${SERVER}-gateway-policy" \
  -o 'go-template={{index .data "policy.json"}}'
kubectl get events -n "$NAMESPACE" --sort-by=.lastTimestamp
```

The governed sidecar container is named `mcp-gateway`; it runs the mcp-proxy image/process and forwards to the app on 127.0.0.1. The bundled Go example image is distroless, so `kubectl exec ... -- /bin/sh` and `/bin/bash` are expected to fail. Use logs/describe first, or attach a debug container when you need a shell in the pod namespace:

```shell
kubectl debug -it -n "$NAMESPACE" "pod/$POD" \
  --target="$CONTAINER" \
  --image=busybox:1.36 -- sh
```

`server policy inspect` prints the rendered `policy.json` from the `${SERVER}-gateway-policy` ConfigMap. If a grant or session exists in Kubernetes but is missing from that output, check the operator logs in the platform block below. If it appears in the rendered policy but calls still fail, check the `mcp-gateway` sidecar logs and allow a few seconds for the sidecar to reload the mounted file.
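When scanning the rendered policy by hand, a `jq` filter is often quicker than reading the whole document. The schema below is a synthetic stand-in (the real field names come from the operator), so adapt the filter to the actual `server policy inspect` output:

```shell
# Synthetic stand-in for a rendered policy.json; the field names here are
# illustrative assumptions, not the operator's real schema.
POLICY='{"sessions":[{"name":"local-session","consentedTrust":"high"}],"grants":[{"name":"go-example-local"}]}'
printf '%s' "$POLICY" | jq -r '.sessions[].name'
# → local-session
```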
Start from the symptom instead of running every command every time:
| Symptom | First checks |
|---|---|
| Pod is not ready or image pulls fail | `kubectl describe pod`, namespace events, `cluster doctor` |
| Grant or session does not affect traffic | `kubectl get mcpaccessgrant,mcpagentsession`, `server policy inspect`, raw policy ConfigMap |
| Policy renders but tool calls are denied | `kubectl logs ... -c mcp-gateway`, request headers, `Mcp-Session-Id` / `X-MCP-Agent-Session` values |
| Requests work but analytics are missing | `sentinel logs ingest`, `sentinel logs processor`, analytics secret and ingest URL |
| Dashboard, API, or MCP route returns 404 | `kubectl get ingress -A`, Sentinel ingress YAML, Traefik logs |
Useful local platform checks:
```shell
./bin/mcp-runtime cluster doctor
./bin/mcp-runtime sentinel status
./bin/mcp-runtime sentinel events
./bin/mcp-runtime sentinel logs api --since 10m
./bin/mcp-runtime sentinel logs ui --since 10m
./bin/mcp-runtime sentinel logs gateway --since 10m
./bin/mcp-runtime sentinel logs ingest --since 10m
./bin/mcp-runtime sentinel logs processor --since 10m
kubectl get pods -n mcp-runtime -o wide
kubectl get pods -n mcp-sentinel -o wide
kubectl rollout status deploy/mcp-sentinel-api -n mcp-sentinel --timeout=90s
kubectl rollout status deploy/mcp-sentinel-ingest -n mcp-sentinel --timeout=90s
kubectl rollout status deploy/mcp-sentinel-processor -n mcp-sentinel --timeout=90s
kubectl rollout status deploy/mcp-sentinel-ui -n mcp-sentinel --timeout=90s
kubectl rollout status deploy/mcp-sentinel-gateway -n mcp-sentinel --timeout=90s
kubectl logs -n mcp-runtime deploy/mcp-runtime-operator-controller-manager --since=10m
kubectl logs -n traefik deploy/traefik --tail=120
kubectl get ingress -A
kubectl get ingress -n mcp-sentinel -o yaml
```

Apply an access grant and session for the local request:
```shell
cat > /tmp/go-example-access.yaml <<'EOF'
apiVersion: mcpruntime.org/v1alpha1
kind: MCPAccessGrant
metadata:
  name: go-example-local
  namespace: mcp-servers
spec:
  serverRef:
    name: go-example-mcp
  subject:
    humanID: local-user
    agentID: local-agent
  maxTrust: high
  policyVersion: v1
  toolRules:
    - name: add
      decision: allow
    - name: upper
      decision: allow
---
apiVersion: mcpruntime.org/v1alpha1
kind: MCPAgentSession
metadata:
  name: local-session
  namespace: mcp-servers
spec:
  serverRef:
    name: go-example-mcp
  subject:
    humanID: local-user
    agentID: local-agent
  consentedTrust: high
  policyVersion: v1
EOF
kubectl apply -f /tmp/go-example-access.yaml
until ./bin/mcp-runtime server policy inspect go-example-mcp --namespace mcp-servers | grep -q local-session; do
  sleep 2
done
# The proxy sidecar reloads rendered policy on a short polling loop, so give the
# gateway a few seconds to observe the new session before the first tool call.
sleep 6
```

Make a local MCP JSON-RPC request through Traefik and the Sentinel gateway:
```shell
BASE=http://localhost:18080/go-example-mcp/mcp
PROTO=2025-06-18
SESSION="$(
  curl -si \
    -H "content-type: application/json" \
    -H "accept: application/json, text/event-stream" \
    -H "Mcp-Protocol-Version: $PROTO" \
    -H "X-MCP-Human-ID: local-user" \
    -H "X-MCP-Agent-ID: local-agent" \
    -H "X-MCP-Agent-Session: local-session" \
    -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}' \
    "$BASE" | awk -F': ' 'tolower($1)=="mcp-session-id"{print $2}' | tr -d '\r'
)"
curl -sS \
  -H "content-type: application/json" \
  -H "accept: application/json, text/event-stream" \
  -H "Mcp-Protocol-Version: $PROTO" \
  -H "Mcp-Session-Id: $SESSION" \
  -H "X-MCP-Human-ID: local-user" \
  -H "X-MCP-Agent-ID: local-agent" \
  -H "X-MCP-Agent-Session: local-session" \
  -d '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  "$BASE" >/dev/null
curl -sS \
  -H "content-type: application/json" \
  -H "accept: application/json, text/event-stream" \
  -H "Mcp-Protocol-Version: $PROTO" \
  -H "Mcp-Session-Id: $SESSION" \
  -H "X-MCP-Human-ID: local-user" \
  -H "X-MCP-Agent-ID: local-agent" \
  -H "X-MCP-Agent-Session: local-session" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"add","arguments":{"a":2,"b":3}}}' \
  "$BASE" | jq .
curl -sS \
  -H "content-type: application/json" \
  -H "accept: application/json, text/event-stream" \
  -H "Mcp-Protocol-Version: $PROTO" \
  -H "Mcp-Session-Id: $SESSION" \
  -H "X-MCP-Human-ID: local-user" \
  -H "X-MCP-Agent-ID: local-agent" \
  -H "X-MCP-Agent-Session: local-session" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"upper","arguments":{"message":"hello world"}}}' \
  "$BASE" | jq .
```

You should see successful `tools/call` responses containing `5` and `HELLO WORLD`. The bundled Go example server also exposes `upper`, `lower`, `echo`, and `slugify`; each of those tools expects a `message` field in `arguments` instead of `input` or `text`. Then verify Sentinel health and query the analytics API:
```shell
./bin/mcp-runtime sentinel status
./bin/mcp-runtime sentinel events
ADMIN_KEY="$(
  kubectl get secret mcp-sentinel-secrets -n mcp-sentinel \
    -o jsonpath='{.data.UI_API_KEY}' | base64 -d
)"
curl -sS -H "x-api-key: $ADMIN_KEY" \
  http://localhost:18080/api/dashboard/summary | jq .
curl -sS -H "x-api-key: $ADMIN_KEY" \
  "http://localhost:18080/api/analytics/usage?limit=10" | jq .
curl -sS -H "x-api-key: $ADMIN_KEY" \
  "http://localhost:18080/api/events/filter?server=go-example-mcp&tool_name=add&limit=5" | jq .
```

`mcp-runtime sentinel events` shows Kubernetes events for the Sentinel namespace. Use `/api/dashboard/summary`, `/api/events`, or `/api/analytics/usage` to verify request analytics. The admin Dashboard tab uses `/api/analytics/usage` for its MCP server, human/agent, tool, and decision rollups.
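To sanity-check rollups offline, the same kind of grouping the Dashboard performs can be reproduced with `jq`. The event shape below is invented for illustration; the real `/api/analytics/usage` schema may differ:

```shell
# Synthetic events payload (field names are assumptions, not the real API
# schema): count tool calls per decision the way the Dashboard rollups do.
EVENTS='[{"tool":"add","decision":"allow"},{"tool":"upper","decision":"allow"},{"tool":"upper","decision":"deny"}]'
printf '%s' "$EVENTS" | jq -r 'group_by(.decision)[] | "\(.[0].decision) \(length)"'
# → allow 2
# → deny 1
```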
To exercise tenant isolation between two MCPServer resources and per-subject
grant enforcement on the same cluster, see
Sentinel → Verifying multi-tenancy.
```shell
./bin/mcp-runtime setup
```

`setup` installs the platform pieces companies need for MCP operations: CRDs, the mcp-runtime and mcp-servers namespaces, the internal Docker registry, ingress wiring, the operator, and the bundled Sentinel stack for gateway policy, analytics, audit, and observability.

Common variants:

```shell
./bin/mcp-runtime setup --with-tls          # cert-manager TLS for the registry
./bin/mcp-runtime setup --without-sentinel  # skip the request-path stack
./bin/mcp-runtime setup --test-mode         # local Kind/dev build+push path
```

For Kind or other local setups where traffic reaches Traefik through `kubectl port-forward` or a NodePort but the ingress controller does not publish `Ingress.status.loadBalancer.ingress[]`, run setup with permissive ingress readiness:
```shell
export MCP_INGRESS_READINESS_MODE=permissive
./bin/mcp-runtime setup --test-mode --ingress-manifest config/ingress/overlays/http
kubectl port-forward -n traefik svc/traefik 18080:8000
```

Then use http://127.0.0.1:18080/<publicPathPrefix>/mcp for local MCP traffic. Keep the default strict readiness mode for production clusters that rely on published load-balancer status.
```shell
./bin/mcp-runtime status
./bin/mcp-runtime cluster status
./bin/mcp-runtime registry status
./bin/mcp-runtime sentinel status
```

```yaml
# payments.yaml
apiVersion: mcpruntime.org/v1alpha1
kind: MCPServer
metadata:
  name: payments
  namespace: mcp-servers
spec:
  image: registry.example.com/payments-mcp
  imageTag: v1.0.0
  port: 8088
  publicPathPrefix: payments
  gateway:
    enabled: true
  analytics:
    enabled: true
```

```shell
./bin/mcp-runtime server apply --file payments.yaml
./bin/mcp-runtime server status
```

Start with the smallest useful MCPServer and add features only when you need them.
- `metadata.name` becomes the server identity inside the platform.
- `metadata.namespace` is usually `mcp-servers`.
- `spec.image` points at the container image the platform should run.
- `spec.imageTag` sets the tag when you do not include one directly in `spec.image`.
- `spec.port` is the port your MCP server process listens on inside the container.
- `spec.publicPathPrefix` controls the public route prefix: `payments` becomes `/payments/mcp`.
- `spec.gateway.enabled` turns on brokered access and policy enforcement.
- `spec.analytics.enabled` turns on audit and analytics emission for governed traffic.
Use this minimal pattern for most first deployments:
```yaml
apiVersion: mcpruntime.org/v1alpha1
kind: MCPServer
metadata:
  name: my-server
  namespace: mcp-servers
spec:
  image: registry.example.com/my-server
  imageTag: v1.0.0
  port: 8088
  publicPathPrefix: my-server
  gateway:
    enabled: true
  analytics:
    enabled: true
```

Common edits:
- Set `spec.ingressHost` if you use host-based routing instead of the default path-based shape.
- Set `spec.servicePort` if you need a Service port other than `80`.
- Add `spec.envVars` or `spec.secretEnvVars` when the server needs configuration or credentials.
- Add `spec.imagePullSecrets` if the image registry requires explicit pull auth.
- Add `spec.tools`, `spec.auth`, `spec.policy`, `spec.session`, or `spec.rollout` when you are ready to describe stricter governance or delivery behavior.
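For instance, the environment and pull-auth edits might look like the fragment below. The `secretEnvVars` key names here are illustrative assumptions; confirm the exact schema in the API reference:

```yaml
spec:
  envVars:
    - name: LOG_LEVEL
      value: debug
  # Field names under secretEnvVars are illustrative; see the API reference.
  secretEnvVars:
    - name: UPSTREAM_TOKEN
      secretName: my-server-secrets
      secretKey: upstream-token
  imagePullSecrets:
    - name: registry-pull-auth
```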
For the full field surface, use the API reference.
Author lightweight metadata YAML, generate CRDs, and deploy:
```shell
./bin/mcp-runtime server build image my-server --tag v1.0.0
./bin/mcp-runtime registry push --image <exact-image-ref-from-build>
./bin/mcp-runtime pipeline generate --dir .mcp --output manifests/
./bin/mcp-runtime pipeline deploy --dir manifests/
```

`<exact-image-ref-from-build>` may be a resolved registry endpoint such as `10.43.109.51:5000/my-server:v1.0.0`.

The server lands at `/{server-name}/mcp` on the configured ingress host, behind the same platform surface you use for future MCP servers.
There are two ways to get a server into the platform:

- Build and push an image, then apply an `MCPServer` manifest directly.
- Build and push an image, then generate and deploy `MCPServer` manifests from `.mcp` metadata.
The end-to-end flow is the same either way:

- Build the image for your server.
- Push that image to the platform registry or another registry the cluster can pull from.
- Apply an `MCPServer` resource that points at the image.
- Let the operator reconcile the runtime objects for that server.
After the manifest is applied, the platform does the following:

- Validates and stores the `MCPServer` resource in Kubernetes.
- Resolves the final image reference using `spec.image`, `spec.imageTag`, and any registry override behavior.
- Creates or updates a `Deployment` for the MCP server.
- Creates or updates a `Service` for in-cluster traffic.
- Creates or updates an `Ingress` so the server is reachable at `/{publicPathPrefix}/mcp` or the configured ingress path.
- If `gateway.enabled` is set, wires traffic through the broker path and renders policy from matching grants and sessions.
- If analytics are enabled, emits audit and traffic events into the Sentinel stack.
- Reports readiness and status through `MCPServer.status`, `mcp-runtime server status`, and the platform UI.
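The image-resolution step can be sketched as follows. The precedence shown (a tag already present in `spec.image` wins over `spec.imageTag`) is an assumption for illustration; the operator's real logic, including registry overrides, lives in the controller:

```shell
# Hypothetical sketch of spec.image / spec.imageTag resolution; not the
# operator's actual code. Checks only the last path component for a tag so
# a registry host:port is not mistaken for one.
resolve_image() {
  image="$1"; tag="$2"
  case "${image##*/}" in
    *:*) printf '%s\n' "$image" ;;           # tag already present in spec.image
    *)   printf '%s:%s\n' "$image" "$tag" ;; # append spec.imageTag
  esac
}
resolve_image registry.example.com/payments-mcp v1.0.0
# → registry.example.com/payments-mcp:v1.0.0
resolve_image registry.example.com/payments-mcp:v2 v1.0.0
# → registry.example.com/payments-mcp:v2
```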
Useful checks after publish:
```shell
./bin/mcp-runtime server status
./bin/mcp-runtime server get payments
./bin/mcp-runtime server policy inspect payments
./bin/mcp-runtime status
```

If the server does not come up, stay in the CLI first:

```shell
./bin/mcp-runtime server get payments
./bin/mcp-runtime server logs payments --follow
./bin/mcp-runtime sentinel logs gateway --follow
./bin/mcp-runtime status
```

```yaml
# grant.yaml
apiVersion: mcpruntime.org/v1alpha1
kind: MCPAccessGrant
metadata:
  name: payments-ops-agent
  namespace: mcp-servers
spec:
  serverRef:
    name: payments
  subject:
    humanID: user-123
    agentID: ops-agent
  maxTrust: high
  toolRules:
    - name: list_invoices
      decision: allow
      requiredTrust: low
    - name: refund_invoice
      decision: allow
      requiredTrust: high
```

```yaml
# session.yaml (MCPAgentSession)
apiVersion: mcpruntime.org/v1alpha1
kind: MCPAgentSession
metadata:
  name: payments-ops-agent-session
  namespace: mcp-servers
spec:
  serverRef:
    name: payments
  subject:
    humanID: user-123
    agentID: ops-agent
  consentedTrust: high
  policyVersion: v1
```

```shell
./bin/mcp-runtime access grant apply --file grant.yaml
./bin/mcp-runtime access session apply --file session.yaml
./bin/mcp-runtime server policy inspect payments
```

```shell
./bin/mcp-runtime sentinel port-forward ui       # Governance + dashboard
./bin/mcp-runtime sentinel port-forward grafana  # Metrics + traces + logs
./bin/mcp-runtime sentinel logs gateway --follow # Tail the proxy
```

```mermaid
flowchart LR
  A[Build CLI<br/>make build] --> B[bootstrap<br/>cluster preflight]
  B --> C[setup<br/>install platform]
  C --> D[Apply MCPServer]
  D --> E[Apply Grant + Session]
  E --> F[Traffic flows<br/>through gateway]
  F --> G[Observe in UI<br/>+ Grafana]
```
- Publish an MCP Server — write manifests or `.mcp` metadata, build, push, deploy, and verify.
- Architecture — how the pieces fit together.
- CLI — full command reference.
- API — every CRD field and HTTP endpoint.
- Sentinel — request-path governance, audit, observability.