45 changes: 45 additions & 0 deletions docs/toolhive/guides-cli/skills-management.mdx
@@ -301,6 +301,50 @@ authenticated before pushing.

:::

## List and remove locally-built skill artifacts

After building skills locally, you can view and manage the artifacts stored in
your local OCI store.

### List locally-built artifacts

```bash
thv skill builds
```

This lists all OCI skill artifacts built locally with `thv skill build`. The
output shows the tag, digest, name, and version of each artifact:

```text
TAG DIGEST NAME VERSION
ghcr.io/my-org/skills/my-skill:v1.0.0 sha256:a1b2c3d4... my-skill 1.0.0
my-skill:latest sha256:e5f6a7b8... my-skill
```

For JSON output:

```bash
thv skill builds --format json
```

### Remove a locally-built artifact

To remove an artifact from the local OCI store:

```bash
thv skill builds remove <TAG>
```

For example:

```bash
thv skill builds remove my-skill:latest
```

This removes the artifact and cleans up its blobs from the local store. If
multiple tags share the same digest, the blobs are retained until all tags
pointing to that digest are removed.
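For example, if two tags were built from the same content, removing the first
tag keeps the shared blobs; they are only cleaned up when the last tag is
removed. A sketch of the sequence, assuming both tags resolve to the same
digest:

```bash
# Both tags point at the same digest
thv skill builds remove my-skill:latest   # tag removed; shared blobs retained
thv skill builds remove my-skill:v1.0.0   # last tag removed; blobs cleaned up
```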

## Next steps

- [Configure your AI client](./client-configuration.mdx) to register clients
@@ -312,6 +356,7 @@ authenticated before pushing.

- [Understanding skills](../concepts/skills.mdx) for a conceptual overview
- [`thv skill` command reference](../reference/cli/thv_skill.md)
- [`thv skill builds` command reference](../reference/cli/thv_skill_builds.md)
- [`thv serve` command reference](../reference/cli/thv_serve.md)
- [Agent Skills specification](https://agentskills.io/specification)

2 changes: 1 addition & 1 deletion docs/toolhive/guides-k8s/auth-k8s.mdx
@@ -187,7 +187,7 @@ Check the MCPOIDCConfig status:

```bash
kubectl get mcpoidc -n toolhive-system
```

The `REFERENCES` column shows which workloads use this config. The `VALID`
column confirms validation passed.
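The output might look like the following (names and values are illustrative;
the exact columns come from the CRD's printer columns):

```text
NAME            REFERENCES   VALID   AGE
keycloak-oidc   2            True    5d
```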

### Benefits of MCPOIDCConfig
30 changes: 21 additions & 9 deletions docs/toolhive/guides-k8s/intro.mdx
@@ -21,9 +21,10 @@ quickly using a local kind cluster. Try it out and
The operator introduces new Custom Resource Definitions (CRDs) into your
Kubernetes cluster. The primary CRDs for MCP server workloads are `MCPServer`,
which represents a single MCP server running in Kubernetes, `MCPRemoteProxy`,
which represents an MCP server hosted outside the cluster that ToolHive proxies,
`VirtualMCPServer`, which aggregates multiple backend MCP servers behind a
single endpoint, and `MCPServerEntry`, which declares a remote MCP server as a
catalog entry without creating any pods or infrastructure.

All ToolHive CRDs are registered under the `toolhive` category, so you can list
every ToolHive resource in your cluster with a single command:
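Because the CRDs share the `toolhive` category, `kubectl` accepts the category
name in place of a resource type, so the listing command is typically:

```bash
kubectl get toolhive --all-namespaces
```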
@@ -67,19 +68,30 @@ or Gateway. To learn how to expose your MCP servers and connect clients, see

## Which resource type should I use?

The operator introduces four resource types for MCP workloads. Choose based on
where your MCP server runs and how many servers you need to manage:

| Resource | Use when |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------- |
| **MCPServer** | Running an MCP server as a container inside your cluster |
| **MCPRemoteProxy** | Connecting to an MCP server hosted outside your cluster (SaaS tools, external APIs, remote endpoints) |
| **VirtualMCPServer** | Aggregating multiple MCPServer and/or MCPRemoteProxy resources behind a single endpoint for a team or application |
| **MCPServerEntry** | Declaring a remote MCP server endpoint as a catalog entry without spawning proxy pods (no infrastructure created) |

Most teams start with `MCPServer` for container-based servers, add
`MCPRemoteProxy` for external SaaS tools, and graduate to `VirtualMCPServer`
when managing five or more servers or needing centralized authentication.
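As a starting point, a minimal `MCPServer` resource looks roughly like this;
the `apiVersion`, image, and field values are illustrative assumptions based on
the operator's conventions, not a verified example:

```yaml
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: my-mcp-server
  namespace: toolhive-system
spec:
  image: ghcr.io/example/my-mcp-server:latest
  transport: streamable-http
```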

:::note

`MCPServerEntry` is a new resource type added in v0.16.0. The CRD is available
and you can create resources, but the reconciliation controller ships in a
future release. Until the controller is available, the operator does not
reconcile `MCPServerEntry` resources, so creating them has no effect.

:::

The operator also provides shared configuration CRDs that you reference from
workload resources:

8 changes: 6 additions & 2 deletions docs/toolhive/guides-vmcp/scaling-and-performance.mdx
@@ -67,8 +67,12 @@ kubectl autoscale deployment vmcp-<VMCP_NAME> -n <NAMESPACE> \
### Session storage for multi-replica deployments

When running multiple replicas, configure Redis session storage so that sessions
are shared across pods and survive pod restarts. Without session storage, a
request routed to a different replica than the one that established the session
will fail, and sessions are lost when pods restart.

With Redis session storage configured, vMCP stores session state in Redis so
that any replica can resume a session, even after a pod restart or failover.

```yaml title="VirtualMCPServer resource"
spec:
  # ...
```
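A sketch of what the Redis-backed session storage stanza might look like; the
field names under `spec` here are assumptions, so check the `VirtualMCPServer`
reference for the actual schema:

```yaml
spec:
  sessionStorage:
    type: redis
    redis:
      address: redis.toolhive-system.svc.cluster.local:6379
```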