Commit 78891a7

ChrisJBurns and claude authored
Update docs for ToolHive v0.15.0 (#663)
* Update docs for ToolHive v0.15.0 release
* Add group_claim_name option to Cedar upstream IDP docs

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 9f015ce commit 78891a7

9 files changed: +373 −26 lines

docs/toolhive/concepts/cedar-policies.mdx

Lines changed: 94 additions & 0 deletions
@@ -236,6 +236,100 @@ permit(principal, action == Action::"call_tool", resource == Tool::"weather");
Then `tools/list` will only show the "weather" tool for that user.

## Optimizer meta-tool enforcement

When the [optimizer](../guides-vmcp/optimizer.mdx) is enabled alongside Cedar
authorization, Cedar policies cover the optimizer's `find_tool` and `call_tool`
meta-tools:

- **`tools/list`**: The meta-tools (`find_tool`, `call_tool`) pass through Cedar
  filtering. Real backend tools are filtered as before.
- **`tools/call` with `call_tool`**: Cedar extracts the inner `tool_name`
  argument and authorizes the actual backend tool before execution. Your
  existing per-tool policies apply transparently.
- **`tools/call` with `find_tool`**: The response is filtered through Cedar so
  clients cannot discover unauthorized tools via search.

You don't need to write separate policies for the meta-tools themselves. Your
existing `call_tool` policies on backend tools are enforced automatically when
the optimizer routes calls.
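For example, a per-tool policy like the one shown earlier on this page for the
"weather" tool still governs the call when it arrives wrapped in the `call_tool`
meta-tool:

```text
permit(principal, action == Action::"call_tool", resource == Tool::"weather");
```

Cedar unwraps the inner `tool_name` argument and evaluates this policy against
the real backend tool, so no meta-tool-specific policy is needed.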
:::warning[Review policies when enabling Cedar with the optimizer]

If you enable Cedar on a deployment that already uses the optimizer, ensure your
backend tool policies are comprehensive. Previously unchecked operations are now
subject to default-deny authorization. Tools that were accessible without
policies before may now be denied.

:::

## Upstream identity provider claims

When using the
[embedded authorization server](./auth-framework.mdx#embedded-authorization-server),
Cedar policies can reference claims from the upstream identity provider token
(for example, GitHub `login` or Okta `groups`). This enables group-based
authorization using your organization's existing identity provider groups.

### Group-based authorization

The Cedar authorizer extracts group membership from upstream tokens using
configurable claim names. By default, it looks for the `groups`, `roles`, and
`cognito:groups` claims. Groups are mapped to `THVGroup` parent entities, so you
can write policies like:

```text
permit(
  principal in THVGroup::"engineering",
  action == Action::"call_tool",
  resource
);
```

This permits any user in the "engineering" group to call any tool.

### Custom group claim names

If your identity provider uses a non-standard claim name for groups (for
example, Auth0 and Okta often use URI-style claims like
`https://example.com/groups`), set the `group_claim_name` option in your Cedar
configuration:

```json
{
  "version": "1.0",
  "type": "cedarv1",
  "cedar": {
    "policies": [
      "permit(principal in THVGroup::\"engineering\", action, resource);"
    ],
    "entities_json": "[]",
    "group_claim_name": "https://example.com/groups"
  }
}
```

When `group_claim_name` is set, it takes priority over the well-known defaults.
When it is empty (the default), ToolHive checks `groups`, `roles`, and
`cognito:groups` in order.
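The selection behavior described above can be sketched as follows. This is an
illustrative model only, not ToolHive's actual implementation; the function name
and claim shapes are assumptions for the sketch:

```python
# Illustrative sketch of the group-claim selection described above.
# NOT ToolHive's actual code; claim layout is an assumption.
WELL_KNOWN_GROUP_CLAIMS = ["groups", "roles", "cognito:groups"]


def extract_groups(claims: dict, group_claim_name: str = "") -> list[str]:
    """Return group names from decoded token claims.

    A configured group_claim_name takes priority over the defaults;
    otherwise the well-known claims are checked in order and the
    first claim present wins.
    """
    if group_claim_name:
        value = claims.get(group_claim_name, [])
        return list(value) if isinstance(value, list) else [value]
    for name in WELL_KNOWN_GROUP_CLAIMS:
        if name in claims:
            value = claims[name]
            return list(value) if isinstance(value, list) else [value]
    return []
```

Note that when `group_claim_name` is set, the well-known claims are not
consulted at all, even if the configured claim is absent from the token.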
### How it works

1. The embedded authorization server authenticates the user with your upstream
   identity provider and issues a ToolHive JWT.
2. The Cedar authorizer reads claims from the upstream token (not just the
   ToolHive-issued JWT).
3. Group claims are extracted and used to build `THVGroup` parent entities for
   the principal.
4. Policies using `principal in THVGroup::"<group>"` evaluate correctly.
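For example, an upstream token whose decoded payload looks like the following
(an illustrative payload; the exact claim set depends on your provider) yields
`THVGroup::"engineering"` and `THVGroup::"platform"` parents for the principal:

```json
{
  "sub": "alice",
  "groups": ["engineering", "platform"]
}
```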
:::note

If the upstream token is opaque (not a JWT), the authorizer denies the request.
There is no silent fallback to ToolHive-issued claims only.

:::

## Policy evaluation and secure defaults

Understanding how Cedar evaluates policies helps you write more effective and

docs/toolhive/guides-k8s/auth-k8s.mdx

Lines changed: 151 additions & 12 deletions
```diff
@@ -33,7 +33,7 @@ You'll need:
 
 ## Choose your authentication approach
 
-There are four main ways to authenticate with MCP servers running in Kubernetes:
+There are several ways to authenticate with MCP servers running in Kubernetes:
 
 ### Approach 1: External identity provider authentication
```

```diff
@@ -44,18 +44,30 @@ providers like Google, GitHub, Microsoft Entra ID, Okta, or Auth0.
 
 <OidcPrerequisites />
 
-### Approach 2: Shared OIDC configuration with ConfigMap
+### Approach 2: Shared OIDC configuration with MCPOIDCConfig
 
 Use this when you want to share the same OIDC configuration across multiple
-MCPServers. This is ideal for managing multiple servers with the same external
-identity provider.
+MCPServers or VirtualMCPServers. The `MCPOIDCConfig` CRD provides a dedicated,
+validated resource for managing shared OIDC settings at the platform level.
 
 **Prerequisites for shared OIDC:**
 
 - External identity provider configured (same as Approach 1)
-- Understanding of Kubernetes ConfigMaps
 
-### Approach 3: Kubernetes service-to-service authentication
+### Approach 3: Shared OIDC configuration with ConfigMap (legacy)
+
+:::warning[Prefer MCPOIDCConfig]
+
+For new deployments, use Approach 2 (`MCPOIDCConfig`) instead of a ConfigMap.
+`MCPOIDCConfig` provides built-in validation, status tracking, and lifecycle
+management.
+
+:::
+
+This approach predates MCPOIDCConfig and remains supported. Use this when you
+want to share OIDC configuration via a ConfigMap.
+
+### Approach 4: Kubernetes service-to-service authentication
 
 Use this when you have client applications running in the same Kubernetes
 cluster that need to call MCP servers. This approach uses Kubernetes service
```
```diff
@@ -66,7 +78,7 @@ account tokens for authentication.
 
 - Client applications running in Kubernetes pods
 - Understanding of Kubernetes service accounts and RBAC
 
-### Approach 4: Embedded authorization server authentication
+### Approach 5: Embedded authorization server authentication
 
 Use this when you want ToolHive to handle the full OAuth flow, including
 redirecting users to an upstream identity provider for authentication. This
```
@@ -82,8 +94,131 @@ For conceptual background, see
- A registered OAuth application/client with your upstream provider
- Client ID and client secret from your upstream provider

## Set up shared OIDC configuration with MCPOIDCConfig

The `MCPOIDCConfig` CRD lets you define OIDC provider settings once and
reference them from multiple MCPServer or VirtualMCPServer resources. Each
server specifies its own `audience` (and optionally `scopes`) to maintain token
isolation.

**Step 1: Create an MCPOIDCConfig resource**

<Tabs groupId="oidc-type">
<TabItem value="inline" label="External IdP" default>

```yaml title="shared-oidc-config.yaml"
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPOIDCConfig
metadata:
  name: production-oidc
  namespace: toolhive-system
spec:
  type: inline
  inline:
    issuer: 'https://auth.example.com'
    clientId: 'your-client-id'
    clientSecretRef:
      name: oidc-secret
      key: client-secret
    jwksUrl: 'https://auth.example.com/.well-known/jwks.json'
```

</TabItem>
<TabItem value="k8s" label="Kubernetes service account">

```yaml title="k8s-oidc-config.yaml"
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPOIDCConfig
metadata:
  name: k8s-sa-oidc
  namespace: toolhive-system
spec:
  type: kubernetesServiceAccount
  kubernetesServiceAccount:
    serviceAccount: mcp-client
    namespace: client-apps
```

</TabItem>
</Tabs>

Apply the resource:

```bash
kubectl apply -f <YOUR_OIDC_CONFIG_FILE>.yaml
```

**Step 2: Reference MCPOIDCConfig from an MCPServer**

Use `oidcConfigRef` instead of inline `oidcConfig`. Each server must set a
unique `audience` to prevent token replay across servers:

```yaml title="mcp-server-shared-oidc.yaml"
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: weather-server
  namespace: toolhive-system
spec:
  image: ghcr.io/stackloklabs/weather-mcp/server
  transport: streamable-http
  proxyPort: 8080
  permissionProfile:
    type: builtin
    name: network
  # highlight-start
  oidcConfigRef:
    name: production-oidc
    audience: weather-server
    scopes:
      - openid
  # highlight-end
```

```bash
kubectl apply -f mcp-server-shared-oidc.yaml
```
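The reason a unique `audience` prevents token replay is ordinary JWT `aud`
validation: a token minted for one server fails the audience check on another.
A minimal hand-rolled illustration of that check (for clarity only; real
deployments must verify signatures, issuer, and expiry with a proper JWT
library):

```python
import base64
import json


def decode_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore base64url padding stripped from the token segment
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def audience_matches(token: str, expected_audience: str) -> bool:
    """Check the token's aud claim against this server's audience.

    NOTE: illustration only -- no signature, issuer, or expiry checks.
    """
    aud = decode_payload(token).get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    return expected_audience in audiences
```

A token carrying `"aud": "weather-server"` passes only on the server configured
with that audience, which is why each MCPServer needs its own value.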
**Step 3: Verify**

Check the MCPOIDCConfig status:

```bash
kubectl get mcpoidc -n toolhive-system
```

The `REFERENCES` column shows which workloads use this config. The `READY`
column confirms validation passed.

### Benefits of MCPOIDCConfig

- **Centralized management**: update provider settings in one place for all
  servers
- **Built-in validation**: CEL rules catch misconfiguration at admission time
- **Status tracking**: see which workloads reference the config and whether it
  is valid
- **Lifecycle management**: deletion is blocked while workloads reference the
  config

:::info[Inline oidcConfig deprecation]

The inline `spec.oidcConfig` field on MCPServer is deprecated and will be
removed in `v1beta1`. Use `oidcConfigRef` to reference a shared MCPOIDCConfig
resource instead. You cannot set both fields on the same MCPServer.

:::

## Set up external identity provider authentication

:::note[Consider MCPOIDCConfig]

For new deployments, consider using `oidcConfigRef` with a shared MCPOIDCConfig
resource instead of inline `oidcConfig`. See
[Set up shared OIDC configuration](#set-up-shared-oidc-configuration-with-mcpoidcconfig)
above.

:::

**Step 1: Create an MCPServer with external OIDC**

Create an `MCPServer` resource configured to accept tokens from your external
```diff
@@ -99,7 +234,7 @@ metadata:
 spec:
   image: ghcr.io/stackloklabs/weather-mcp/server
   transport: sse
-  port: 8080
+  proxyPort: 8080
   permissionProfile:
     type: builtin
     name: network
```
```diff
@@ -185,7 +320,7 @@ metadata:
 spec:
   image: ghcr.io/stackloklabs/weather-mcp/server
   transport: sse
-  port: 8080
+  proxyPort: 8080
   permissionProfile:
     type: builtin
     name: network
```
```diff
@@ -249,7 +384,7 @@ metadata:
 spec:
   image: ghcr.io/stackloklabs/weather-mcp/server
   transport: sse
-  port: 8080
+  proxyPort: 8080
   permissionProfile:
     type: builtin
     name: network
```
```diff
@@ -629,7 +764,11 @@ standard `name` field.
 ## Set up authorization
 
 All authentication approaches can use the same authorization configuration using
-Cedar policies.
+Cedar policies. When using the embedded authorization server (Approach 5), Cedar
+policies can also evaluate upstream identity provider claims such as group
+membership. See
+[Upstream identity provider claims](../concepts/cedar-policies.mdx#upstream-identity-provider-claims)
+for details.
 
 **Step 1: Create authorization configuration**
```
```diff
@@ -678,7 +817,7 @@ metadata:
 spec:
   image: ghcr.io/stackloklabs/weather-mcp/server
   transport: sse
-  port: 8080
+  proxyPort: 8080
   permissionProfile:
     type: builtin
     name: network
```

docs/toolhive/guides-k8s/customize-tools.mdx

Lines changed: 1 addition & 2 deletions
```diff
@@ -16,8 +16,7 @@ descriptions. You reference the configuration from an MCPServer using the
 - toolsOverride: rename tools and/or change their descriptions.
 - Same-namespace only: an MCPServer can reference only MCPToolConfig objects in
   the same namespace.
-- Precedence: toolConfigRef takes precedence over the deprecated spec.tools
-  field on MCPServer.
+- The inline `spec.tools` field has been removed. Use `toolConfigRef` instead.
 
 ## Define a basic tool filter
```
docs/toolhive/guides-k8s/intro.mdx

Lines changed: 17 additions & 0 deletions
@@ -25,6 +25,13 @@ which represents an MCP server running outside the cluster that is proxied by

ToolHive, and `VirtualMCPServer`, which represents a virtual MCP server gateway
that aggregates multiple backend MCP servers.

All ToolHive CRDs are registered under the `toolhive` category, so you can list
every ToolHive resource in your cluster with a single command:

```bash
kubectl get toolhive -n toolhive-system
```

When you create an `MCPServer` resource, the operator automatically:

1. Creates a Deployment to run the MCP server
@@ -73,6 +80,16 @@ Most teams start with `MCPServer` for container-based servers, add

`MCPRemoteProxy` for external SaaS tools, and graduate to `VirtualMCPServer`
when managing five or more servers or needing centralized authentication.

The operator also provides shared configuration CRDs that you reference from
workload resources:

| Resource | Purpose |
| -------- | ------- |
| [**MCPOIDCConfig**](./auth-k8s.mdx#set-up-shared-oidc-configuration-with-mcpoidcconfig) | Shared OIDC authentication settings, referenced via `oidcConfigRef` |
| [**MCPTelemetryConfig**](./telemetry-and-metrics.mdx#shared-telemetry-configuration-recommended) | Shared telemetry/observability settings, referenced via `telemetryConfigRef` |
| [**MCPToolConfig**](./customize-tools.mdx) | Tool filtering and renaming, referenced via `toolConfigRef` |
| [**MCPExternalAuthConfig**](./auth-k8s.mdx#set-up-embedded-authorization-server-authentication) | Token exchange or embedded auth server configuration, referenced via `externalAuthConfigRef` |

## Installation

[Deploy the ToolHive operator](./deploy-operator.mdx) in your Kubernetes
