diff --git a/contrib/production/README.md b/contrib/production/README.md
index fde416722d9..6876eeebc54 100644
--- a/contrib/production/README.md
+++ b/contrib/production/README.md
@@ -8,12 +8,13 @@ This directory contains assets and configuration files for production deployment
 These assets are referenced by the production deployment documentation in `docs/content/setup/production/`.
 
-Each deployment type (dekker, vespucci, comer) has its own subdirectory with complete configuration files and deployment manifests.
+Each deployment type (dekker, vespucci, comer, zheng) has its own subdirectory with complete configuration files and deployment manifests.
 
 ## Deployment Types
 
 - **kcp-dekker**: Self-signed certificates, simple single-cluster deployment
 - **kcp-vespucci**: External certificates with Let's Encrypt, public shard access
 - **kcp-comer**: CDN integration with dual front-proxy configuration
+- **kcp-zheng**: Distributed multi-cluster deployment with high availability
 
 See the corresponding documentation in `docs/content/setup/production/` for detailed deployment instructions.
\ No newline at end of file
diff --git a/contrib/production/kcp-mcp/certificate.yaml b/contrib/production/kcp-mcp/certificate.yaml
new file mode 100644
index 00000000000..7a2b13ead3c
--- /dev/null
+++ b/contrib/production/kcp-mcp/certificate.yaml
@@ -0,0 +1,15 @@
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: mcp-tls
+  namespace: kcp-mcp
+spec:
+  secretName: mcp-tls
+  duration: 2160h # 90 days
+  renewBefore: 360h # 15 days
+  issuerRef:
+    name: letsencrypt-prod
+    kind: ClusterIssuer
+    group: cert-manager.io
+  dnsNames:
+    - mcp.vespucci.example.com
diff --git a/contrib/production/kcp-mcp/values.yaml b/contrib/production/kcp-mcp/values.yaml
new file mode 100644
index 00000000000..ed6d6737176
--- /dev/null
+++ b/contrib/production/kcp-mcp/values.yaml
@@ -0,0 +1,132 @@
+# Kubernetes MCP Server Helm Values for KCP
+# This configuration deploys the MCP server to work with kcp control planes
+# using OIDC bearer token authentication (same provider as kcp front-proxy)
+
+# Replica count for the deployment
+replicaCount: 1
+
+# Container image configuration
+image:
+  registry: ghcr.io
+  repository: mjudeikis/kubernetes-mcp-server
+  version: latest
+  pullPolicy: Always
+
+# Service configuration - expose HTTPS on service port 8443, forwarding to container port 8443
+service:
+  type: LoadBalancer
+  port: 8443
+  targetPort: 8443
+  annotations:
+    service.beta.kubernetes.io/aws-load-balancer-type: nlb
+    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
+    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
+
+# Ingress disabled - using TLS termination at the application level
+ingress:
+  enabled: false
+
+# Service account configuration
+serviceAccount:
+  create: true
+  annotations: {}
+  name: ""
+
+# RBAC configuration - disable default RBAC as we use kcp's RBAC
+rbac:
+  create: false
+
+# Pod security context
+podSecurityContext:
+  seccompProfile:
+    type: RuntimeDefault
+
+# Container security context
+securityContext:
+
allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + runAsNonRoot: true + +# Resource limits and requests +resources: + limits: + cpu: 100m + memory: 128Mi + requests: + cpu: 100m + memory: 128Mi + +# Extra volumes for mounting base kubeconfig and TLS certs +# The kubeconfig provides cluster endpoint/TLS only (no auth credentials) +# Authentication is handled via OIDC bearer tokens +# Note: config is handled by the chart via config.existingConfigMap +extraVolumes: + - name: kubeconfig + secret: + secretName: kcp-mcp-kubeconfig + - name: tls-certs + secret: + secretName: mcp-tls + +# Extra volume mounts +# Mount to different paths to avoid conflicts with chart's default mounts +extraVolumeMounts: + - name: kubeconfig + mountPath: /etc/kcp-mcp/kube + readOnly: true + - name: tls-certs + mountPath: /etc/kcp-mcp/tls + readOnly: true + +# Configuration file path +configFilePath: /etc/kcp-mcp/config/config.toml + +# MCP Server configuration - this becomes TOML via the chart's toToml function +config: + port: "8443" + cluster_provider_strategy: "kcp" + kubeconfig: "/etc/kcp-mcp/kube/kubeconfig" + require_oauth: true + authorization_url: "https://auth.keycloak.example.com/realms/kcp" + oauth_audience: "kcp" + tls_cert: "/etc/kcp-mcp/tls/tls.crt" + tls_key: "/etc/kcp-mcp/tls/tls.key" + toolsets: + - "kcp" + - "core" + disabled_tools: + - "pods_list" + - "pods_list_in_namespace" + - "pods_get" + - "pods_delete" + - "pods_top" + - "pods_exec" + - "pods_log" + - "pods_run" + - "nodes_log" + - "nodes_stats_summary" + - "nodes_top" + +# Liveness and readiness probes - use HTTPS scheme on port 8443 +livenessProbe: + httpGet: + path: /healthz + port: 8443 + scheme: HTTPS + +readinessProbe: + httpGet: + path: /healthz + port: 8443 + scheme: HTTPS + +# Node selector, tolerations, and affinity +nodeSelector: {} +tolerations: [] +affinity: {} + +# Pod annotations and labels +podAnnotations: {} +podLabels: {} diff --git a/contrib/production/oidc-keycloak/README.md 
b/contrib/production/oidc-keycloak/README.md new file mode 100644 index 00000000000..dcf5492ee3e --- /dev/null +++ b/contrib/production/oidc-keycloak/README.md @@ -0,0 +1,93 @@ +# Keycloak OIDC Provider for kcp + +This directory contains example Kubernetes manifests for deploying Keycloak as an OIDC provider for kcp. + +## Prerequisites + +- Kubernetes cluster with kubectl access +- [cert-manager](https://cert-manager.io/) installed +- [CloudNativePG](https://cloudnative-pg.io/) operator installed +- [Keycloak Operator](https://www.keycloak.org/operator/installation) installed + +## Files + +| File | Description | +|------|-------------| +| `postgres-cluster.yaml` | CloudNativePG PostgreSQL cluster for Keycloak backend | +| `postgres-database.yaml` | Database resource for the PostgreSQL cluster | +| `keycloak-db-secret.yaml` | Secret for Keycloak to connect to PostgreSQL | +| `certificate-dns.yaml` | cert-manager Certificate for TLS | +| `keycloak.yaml` | Keycloak CRD deployment | +| `values.yaml.template` | Template showing all configuration options | + +## Deployment + +1. Create the namespace: + ```sh + kubectl create namespace oidc + ``` + +2. Deploy PostgreSQL: + ```sh + kubectl apply -f postgres-cluster.yaml + kubectl apply -f postgres-database.yaml + ``` + +3. Wait for PostgreSQL to be ready: + ```sh + kubectl wait --for=condition=Ready cluster/keycloak-auth -n oidc --timeout=300s + ``` + +4. Create the database secret for Keycloak: + ```sh + kubectl apply -f keycloak-db-secret.yaml + ``` + +5. Create the TLS certificate: + ```sh + kubectl apply -f certificate-dns.yaml + ``` + +6. Wait for the certificate to be issued: + ```sh + kubectl wait --for=condition=Ready certificate/keycloak-tls-cert -n oidc --timeout=300s + ``` + +7. Deploy Keycloak: + ```sh + kubectl apply -f keycloak.yaml + ``` + +8. Wait for Keycloak to be ready: + ```sh + kubectl get keycloaks/keycloak -n oidc -w + ``` + +## Post-Deployment Configuration + +After Keycloak is running: + +1. 
Get the initial admin credentials:
+   ```sh
+   kubectl get secret keycloak-initial-admin -n oidc -o jsonpath='{.data.username}' | base64 -d
+   kubectl get secret keycloak-initial-admin -n oidc -o jsonpath='{.data.password}' | base64 -d
+   ```
+
+2. Access the admin console at your configured hostname (e.g., `https://auth.keycloak.example.com/admin`)
+
+3. Create a realm for kcp (e.g., `kcp`)
+
+4. Create a client for kcp with:
+   - Client ID: `kcp`
+   - Public client: Yes
+   - Valid redirect URIs: `http://localhost:8000`, `http://127.0.0.1:8000/`
+
+## Security Notes
+
+- Change all default passwords before production use
+- Enable MFA for the admin account
+- Review and restrict redirect URIs as needed
+
+## Documentation
+
+For more details, see the [Keycloak setup guide](../../../docs/content/setup/integrations/keycloak.md).
diff --git a/contrib/production/oidc-keycloak/certificate-dns.yaml b/contrib/production/oidc-keycloak/certificate-dns.yaml
new file mode 100644
index 00000000000..b90ae6b35a4
--- /dev/null
+++ b/contrib/production/oidc-keycloak/certificate-dns.yaml
@@ -0,0 +1,17 @@
+---
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: keycloak-tls-cert
+  namespace: oidc
+spec:
+  secretName: keycloak-tls
+  issuerRef:
+    name: letsencrypt-prod
+    kind: ClusterIssuer
+    group: cert-manager.io
+  dnsNames:
+    - auth.keycloak.example.com
+  usages:
+    - digital signature
+    - key encipherment
\ No newline at end of file
diff --git a/contrib/production/oidc-keycloak/keycloak-db-secret.yaml b/contrib/production/oidc-keycloak/keycloak-db-secret.yaml
new file mode 100644
index 00000000000..41e8b91680c
--- /dev/null
+++ b/contrib/production/oidc-keycloak/keycloak-db-secret.yaml
@@ -0,0 +1,10 @@
+---
+apiVersion: v1
+data:
+  username: a2V5Y2xvYWs= # keycloak
+  password: cGFzc3dvcmQ= # password - CHANGE IN PRODUCTION
+kind: Secret
+metadata:
+  namespace: oidc
+  name: keycloak-db-secret
+type: kubernetes.io/basic-auth
diff --git
a/contrib/production/oidc-keycloak/keycloak-service-lb.yaml b/contrib/production/oidc-keycloak/keycloak-service-lb.yaml new file mode 100644 index 00000000000..549c26bb4f2 --- /dev/null +++ b/contrib/production/oidc-keycloak/keycloak-service-lb.yaml @@ -0,0 +1,16 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: keycloak-lb + namespace: oidc +spec: + type: LoadBalancer + selector: + app: keycloak + app.kubernetes.io/instance: keycloak + ports: + - name: https + port: 443 + targetPort: 8443 + protocol: TCP diff --git a/contrib/production/oidc-keycloak/keycloak.yaml b/contrib/production/oidc-keycloak/keycloak.yaml new file mode 100644 index 00000000000..7516afeb720 --- /dev/null +++ b/contrib/production/oidc-keycloak/keycloak.yaml @@ -0,0 +1,25 @@ +--- +apiVersion: k8s.keycloak.org/v2alpha1 +kind: Keycloak +metadata: + name: keycloak + namespace: oidc +spec: + instances: 1 + db: + vendor: postgres + host: keycloak-auth-rw.oidc.svc.cluster.local + usernameSecret: + name: keycloak-db-secret + key: username + passwordSecret: + name: keycloak-db-secret + key: password + http: + tlsSecret: keycloak-tls + hostname: + hostname: auth.keycloak.example.com + proxy: + headers: xforwarded + ingress: + enabled: false # Using LoadBalancer service instead diff --git a/contrib/production/oidc-keycloak/postgres-cluster.yaml b/contrib/production/oidc-keycloak/postgres-cluster.yaml new file mode 100644 index 00000000000..1ac2250bb2e --- /dev/null +++ b/contrib/production/oidc-keycloak/postgres-cluster.yaml @@ -0,0 +1,39 @@ +--- +apiVersion: postgresql.cnpg.io/v1 +kind: Cluster +metadata: + name: keycloak-auth + namespace: oidc +spec: + instances: 1 + bootstrap: + initdb: + database: keycloak + owner: keycloak + secret: + name: keycloak-postgres + enableSuperuserAccess: true + superuserSecret: + name: keycloak-superuser + storage: + size: 10Gi +--- +apiVersion: v1 +data: + username: a2V5Y2xvYWs= # keycloak + password: cGFzc3dvcmQ= # password - CHANGE IN PRODUCTION +kind: Secret 
+metadata: + namespace: oidc + name: keycloak-postgres +type: kubernetes.io/basic-auth +--- +apiVersion: v1 +data: + username: cG9zdGdyZXM= # postgres + password: cGFzc3dvcmQ= # password - CHANGE IN PRODUCTION +kind: Secret +metadata: + namespace: oidc + name: keycloak-superuser +type: kubernetes.io/basic-auth \ No newline at end of file diff --git a/contrib/production/oidc-keycloak/postgres-database.yaml b/contrib/production/oidc-keycloak/postgres-database.yaml new file mode 100644 index 00000000000..70994f3a7a9 --- /dev/null +++ b/contrib/production/oidc-keycloak/postgres-database.yaml @@ -0,0 +1,10 @@ +apiVersion: postgresql.cnpg.io/v1 +kind: Database +metadata: + namespace: oidc + name: db-keycloak +spec: + name: keycloak + owner: keycloak + cluster: + name: keycloak-auth \ No newline at end of file diff --git a/docs/content/setup/.pages b/docs/content/setup/.pages index eed8635d4f7..39e4a9a61b1 100644 --- a/docs/content/setup/.pages +++ b/docs/content/setup/.pages @@ -3,6 +3,6 @@ nav: - quickstart.md - helm.md - kubectl-plugin.md - - integrations.md + - integrations - production \ No newline at end of file diff --git a/docs/content/setup/integrations.md b/docs/content/setup/integrations.md deleted file mode 100644 index 910a42aaa57..00000000000 --- a/docs/content/setup/integrations.md +++ /dev/null @@ -1,177 +0,0 @@ -# Integrations - -kcp integrates with several CNCF projects. This page documents known integrations. Please be aware that we try our best to keep it updated but rely on community contributions for that. - -kcp has some "obvious" integrations e.g. with [Kubernetes](https://kubernetes.io) (since it can be deployed on a Kubernetes cluster) and [Helm](https://helm.sh) (since a Helm chart is maintained as the -primary installation method on Kubernetes). - -The fact that kcp is compatible with the Kubernetes Resource Model (KRM) also means that projects using the Kubernetes API might be compatible. 
The [api-syncagent](https://docs.kcp.io/api-syncagent) -component also allows integration of *any* Kubernetes controller/operator in principle. An example of this can be found in our [KubeCon London workshop](https://docs.kcp.io/contrib/learning/20250401-kubecon-london/workshop/). - -## multicluster-runtime - -kcp integrates with [kubernetes-sigs/multicluster-runtime](https://github.com/kubernetes-sigs/multicluster-runtime) by providing a so-called provider which gives a controller dynamic -access to kcp workspaces. Multiple providers exists for different use cases, see [kcp-dev/multicluster-provider](https://github.com/kcp-dev/multicluster-provider) for a full overview. - -## Dex - -kcp integrates with any OIDC provider, which includes [Dex](https://dexidp.io). To use `kubectl` with it, [kubelogin](https://github.com/int128/kubelogin) is required. - -To integrate them make sure to set up a static client in Dex that is configured similar to: - -```yaml -staticClients: -- id: kcp-kubelogin - name: kcp-kubelogin - secret: - RedirectURIs: - - http://localhost:8000 - - http://localhost:18000 -``` - -Which is then used by [kubelogin](https://github.com/int128/kubelogin) (warning: the secret is shared across all users!). Check its documentation for more details. - -A kubeconfig's `users` configuration would look similar to this: - -```yaml -users: -- name: oidc - user: - exec: - apiVersion: client.authentication.k8s.io/v1beta1 - args: - - oidc-login - - get-token - - --oidc-issuer-url=https:// - - --oidc-client-id=kcp-kubelogin - - --oidc-client-secret= - - --oidc-extra-scope=email,groups - command: kubectl - env: null - interactiveMode: IfAvailable - provideClusterInfo: false -``` - -## OpenFGA - -kcp can integrate with [OpenFGA](https://openfga.dev/) via a shim webhook component that accepts kcp's [authorization webhooks](../concepts/authorization/authorizers.md#webhook-authorizer) and translates -them to OpenFGA queries. - -!!! 
info "Third Party Solutions" - A third-party example of such a webhook would be Platform Mesh's [rebac-authz-webhook](https://github.com/platform-mesh/rebac-authz-webhook). - -## Lima -You can run kcp inside a [Lima](https://github.com/lima-vm/lima)-managed VM, which makes it portable across macOS, Linux, and Windows (via WSL2). This setup gives you a disposable kcp control plane that integrates smoothly with your host kubectl. - -!!! info "Development Use Only" - This is essentially a development environment, where one can start a single instance of kcp for testing or limited-scope use cases. This is in no way intended for production usage. - -Create a Lima template for kcp and save the following as `kcp.yaml`: - ```yaml -minimumLimaVersion: 1.1.0 - -base: template://_images/ubuntu-lts - -mounts: [] - -containerd: - system: false - user: false - -provision: -- mode: system - script: | - #!/bin/bash - set -eux -o pipefail - command -v kcp >/dev/null 2>&1 && exit 0 - - export DEBIAN_FRONTEND=noninteractive - apt-get update - apt-get install -y curl wget - - KCP_VERSION=$(curl -s https://api.github.com/repos/kcp-dev/kcp/releases/latest | grep tag_name | cut -d '"' -f 4) - KCP_VERSION_NO_V=${KCP_VERSION#v} - - wget https://github.com/kcp-dev/kcp/releases/download/${KCP_VERSION}/kcp_${KCP_VERSION_NO_V}_linux_arm64.tar.gz - tar -xzf kcp_${KCP_VERSION_NO_V}_linux_arm64.tar.gz - mv bin/kcp /usr/local/bin/ - chmod +x /usr/local/bin/kcp - rm -f kcp_${KCP_VERSION_NO_V}_linux_arm64.tar.gz - - mkdir -p /var/.kcp/ - sudo chmod 755 /var/.kcp - - cat > /etc/systemd/system/kcp.service << EOF - [Unit] - Description=kcp server - After=network.target - - [Service] - Type=simple - User=root - ExecStart=/usr/local/bin/kcp start --root-directory=/var/.kcp/ --bind-address=127.0.0.1 - Restart=on-failure - StandardOutput=journal - StandardError=journal - - [Install] - WantedBy=multi-user.target - EOF - - systemctl daemon-reload - systemctl enable kcp - systemctl start kcp - -probes: -- 
script: | - #!/bin/bash - set -eux -o pipefail - if ! timeout 120s bash -c "until curl -f -s --cacert /var/.kcp/apiserver.crt https://127.0.0.1:6443/readyz >/dev/null; do sleep 3; done"; then - echo >&2 "kcp is not ready yet" - exit 1 - fi - hint: | - The kcp control plane is not ready yet. - Check the kcp logs with "limactl shell kcp sudo journalctl -f" or "tail -f /var/log/kcp.log" - -copyToHost: -- guest: "/var/.kcp/admin.kubeconfig" - host: "{{ '{{.Dir}}' }}/copied-from-guest/kubeconfig.yaml" - deleteOnStop: true - -message: | - To run `kubectl` on the host (assumes kubectl is installed), run: - ------ - export KUBECONFIG="{{ '{{.Dir}}' }}/copied-from-guest/kubeconfig.yaml" - kubectl get workspaces - ------ - - ``` -Initialize the VM -```sh -limactl create --name=kcp ./kcp.yaml -``` - -Start the VM -```sh -limactl start kcp --vm-type=qemu -``` -!!! info -On macOS, Lima may default to vz (Apple Virtualization), while on Linux it defaults to qemu, and on Windows to wsl2. If you want consistency across environments, you can explicitly pass --vm-type=qemu when starting the VM. - -Export the KCP kubeconfig -```sh -export KUBECONFIG="/Users//.lima/kcp/copied-from-guest/kubeconfig.yaml" -``` - -Verify API resources -```sh -kubectl api-resources | grep kcp -``` -You should see kcp-specific resources such as: -```sh -workspaces ws tenancy.kcp.io/v1alpha1 false Workspace -logicalclusters core.kcp.io/v1alpha1 false LogicalCluster -... 
-```
-
diff --git a/docs/content/setup/integrations/.pages b/docs/content/setup/integrations/.pages
new file mode 100644
index 00000000000..6d039e2fbec
--- /dev/null
+++ b/docs/content/setup/integrations/.pages
@@ -0,0 +1,8 @@
+nav:
+  - index.md
+  - mcp.md
+  - keycloak.md
+  - dex.md
+  - multicluster-runtime.md
+  - openfga.md
+  - lima.md
diff --git a/docs/content/setup/integrations/dex.md b/docs/content/setup/integrations/dex.md
new file mode 100644
index 00000000000..4943dff3765
--- /dev/null
+++ b/docs/content/setup/integrations/dex.md
@@ -0,0 +1,38 @@
+# Dex
+
+kcp integrates with any OIDC provider, which includes [Dex](https://dexidp.io). To use `kubectl` with it, [kubelogin](https://github.com/int128/kubelogin) is required.
+
+To integrate them, set up a static client in Dex configured similar to:
+
+```yaml
+staticClients:
+- id: kcp-kubelogin
+  name: kcp-kubelogin
+  secret: 
+  redirectURIs:
+  - http://localhost:8000
+  - http://localhost:18000
+```
+
+This client is then used by [kubelogin](https://github.com/int128/kubelogin) (warning: the secret is shared across all users!). Check its documentation for more details.
+
+A kubeconfig's `users` configuration would look similar to this:
+
+```yaml
+users:
+- name: oidc
+  user:
+    exec:
+      apiVersion: client.authentication.k8s.io/v1beta1
+      args:
+      - oidc-login
+      - get-token
+      - --oidc-issuer-url=https://
+      - --oidc-client-id=kcp-kubelogin
+      - --oidc-client-secret=
+      - --oidc-extra-scope=email,groups
+      command: kubectl
+      env: null
+      interactiveMode: IfAvailable
+      provideClusterInfo: false
+```
diff --git a/docs/content/setup/integrations/index.md b/docs/content/setup/integrations/index.md
new file mode 100644
index 00000000000..01429b30c4d
--- /dev/null
+++ b/docs/content/setup/integrations/index.md
@@ -0,0 +1,20 @@
+# Integrations
+
+kcp integrates with several CNCF projects. This page documents known integrations.
Please be aware that we try our best to keep it updated but rely on community contributions for that. + +kcp has some "obvious" integrations e.g. with [Kubernetes](https://kubernetes.io) (since it can be deployed on a Kubernetes cluster) and [Helm](https://helm.sh) (since a Helm chart is maintained as the +primary installation method on Kubernetes). + +The fact that kcp is compatible with the Kubernetes Resource Model (KRM) also means that projects using the Kubernetes API might be compatible. The [api-syncagent](https://docs.kcp.io/api-syncagent) +component also allows integration of *any* Kubernetes controller/operator in principle. An example of this can be found in our [KubeCon London workshop](https://docs.kcp.io/contrib/learning/20250401-kubecon-london/workshop/). + +## Available Integrations + +| Integration | Description | +|-------------|-------------| +| [MCP Server](mcp.md) | Model Context Protocol server for AI assistant integration | +| [Keycloak](keycloak.md) | OIDC provider for authentication | +| [Dex](dex.md) | Lightweight OIDC provider | +| [multicluster-runtime](multicluster-runtime.md) | Kubernetes-sigs multicluster-runtime provider | +| [OpenFGA](openfga.md) | Authorization via webhook shim | +| [Lima](lima.md) | Development VM for portable kcp testing | diff --git a/docs/content/setup/integrations/keycloak.md b/docs/content/setup/integrations/keycloak.md new file mode 100644 index 00000000000..c5a13f292f9 --- /dev/null +++ b/docs/content/setup/integrations/keycloak.md @@ -0,0 +1,432 @@ +--- +description: > + Deploy Keycloak as an OIDC provider for kcp authentication. +--- + +# Keycloak + +Keycloak is an open-source identity and access management solution aimed at modern applications and services. It provides features such as single sign-on (SSO), user federation, identity brokering, social login, and more. Keycloak can be integrated with kcp to provide authentication and authorization services for users accessing kcp resources. 
+ +This guide describes how to deploy Keycloak on Kubernetes using the Keycloak Operator for use with kcp. + +## Prerequisites + +Before deploying Keycloak, ensure you have the following: + +- A running Kubernetes cluster +- `kubectl` configured to access your cluster +- [cert-manager](https://cert-manager.io/) installed (for TLS certificates) +- [CloudNativePG](https://cloudnative-pg.io/) operator installed (for PostgreSQL database) + +## Deployment Steps + +### 1. Create the Namespace + +```sh +kubectl create namespace oidc +``` + +### 2. Install the Keycloak Operator + +Install the Keycloak Operator in the `oidc` namespace. By default, the operator only watches the namespace it's deployed in. + +```sh +# Install the Keycloak Operator CRDs and resources in the oidc namespace +kubectl -n oidc apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.0.5/kubernetes/keycloaks.k8s.keycloak.org-v1.yml +kubectl -n oidc apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.0.5/kubernetes/keycloakrealmimports.k8s.keycloak.org-v1.yml +kubectl -n oidc apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.0.5/kubernetes/kubernetes.yml + +# Patch the ClusterRoleBinding to use the oidc namespace +kubectl patch clusterrolebinding keycloak-operator-clusterrole-binding \ + --type='json' \ + -p='[{"op": "replace", "path": "/subjects/0/namespace", "value":"oidc"}]' +``` + +Wait for the operator to be ready: + +```sh +kubectl wait --for=condition=Available deployment/keycloak-operator -n oidc --timeout=120s +``` + +### 3. Deploy PostgreSQL Database + +Keycloak requires a database backend. We use CloudNativePG to provision PostgreSQL. 
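The Secret `data` values used in the manifests that follow are base64-encoded. A quick sketch for producing your own encoded credentials (example values only; substitute real ones):

```shell
# base64-encode credential strings for use in Secret data fields
printf '%s' 'keycloak' | base64   # -> a2V5Y2xvYWs=
printf '%s' 'password' | base64   # -> cGFzc3dvcmQ=
```

Using `printf` rather than `echo` avoids encoding a trailing newline into the value.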
+ +Create the database secrets and cluster: + +```yaml +# postgres-cluster.yaml +--- +apiVersion: postgresql.cnpg.io/v1 +kind: Cluster +metadata: + name: keycloak-auth + namespace: oidc +spec: + instances: 1 + bootstrap: + initdb: + database: keycloak + owner: keycloak + secret: + name: keycloak-postgres + enableSuperuserAccess: true + superuserSecret: + name: keycloak-superuser + storage: + size: 10Gi +--- +apiVersion: v1 +data: + username: a2V5Y2xvYWs= # keycloak + password: +kind: Secret +metadata: + namespace: oidc + name: keycloak-postgres +type: kubernetes.io/basic-auth +--- +apiVersion: v1 +data: + username: cG9zdGdyZXM= # postgres + password: +kind: Secret +metadata: + namespace: oidc + name: keycloak-superuser +type: kubernetes.io/basic-auth +``` + +Apply the configuration: + +```sh +kubectl apply -f postgres-cluster.yaml +``` + +Optionally, create a separate Database resource: + +```yaml +# postgres-database.yaml +apiVersion: postgresql.cnpg.io/v1 +kind: Database +metadata: + namespace: oidc + name: db-keycloak +spec: + name: keycloak + owner: keycloak + cluster: + name: keycloak-auth +``` + +### 4. Create TLS Certificate + +Use cert-manager to provision a TLS certificate for Keycloak: + +```yaml +# certificate-dns.yaml +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: keycloak-tls-cert + namespace: oidc +spec: + secretName: keycloak-tls + issuerRef: + name: letsencrypt-prod + kind: ClusterIssuer + group: cert-manager.io + dnsNames: + - auth.example.com # Replace with your domain + usages: + - digital signature + - key encipherment +``` + +Apply the certificate: + +```sh +kubectl apply -f certificate-dns.yaml +``` + +Wait for the certificate to be issued: + +```sh +kubectl get certificate -n oidc keycloak-tls-cert -w +``` + +### 5. 
Create Database Credentials Secret
+
+Create a secret for Keycloak to access the database:
+
+```sh
+kubectl create secret generic keycloak-db-secret \
+  --namespace oidc \
+  --from-literal=username=keycloak \
+  --from-literal=password=change-me
+```
+
+Note that `--from-literal` takes the plaintext value (kubectl base64-encodes it for you), and the password must match the one set for the `keycloak` database owner in the `keycloak-postgres` secret. Replace `change-me` with a strong password.
+
+### 6. Deploy Keycloak
+
+Create the Keycloak Custom Resource. This example disables the default Ingress and uses a separate LoadBalancer Service:
+
+```yaml
+# keycloak.yaml
+apiVersion: k8s.keycloak.org/v2alpha1
+kind: Keycloak
+metadata:
+  name: keycloak
+  namespace: oidc
+spec:
+  instances: 1
+  db:
+    vendor: postgres
+    host: keycloak-auth-rw.oidc.svc.cluster.local
+    usernameSecret:
+      name: keycloak-db-secret
+      key: username
+    passwordSecret:
+      name: keycloak-db-secret
+      key: password
+  http:
+    tlsSecret: keycloak-tls
+  hostname:
+    hostname: auth.example.com # Replace with your domain
+  proxy:
+    headers: xforwarded
+  ingress:
+    enabled: false # Disable default Ingress, we'll use LoadBalancer
+```
+
+Create a LoadBalancer Service to expose Keycloak:
+
+```yaml
+# keycloak-service-lb.yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: keycloak-lb
+  namespace: oidc
+spec:
+  type: LoadBalancer
+  selector:
+    app: keycloak
+    app.kubernetes.io/instance: keycloak
+  ports:
+    - name: https
+      port: 443
+      targetPort: 8443
+      protocol: TCP
+```
+
+Apply both resources:
+
+```sh
+kubectl apply -f keycloak.yaml
+kubectl apply -f keycloak-service-lb.yaml
+```
+
+Get the LoadBalancer external IP:
+
+```sh
+kubectl get svc keycloak-lb -n oidc -w
+```
+
+### 7. Verify Deployment
+
+Check the status of the Keycloak deployment:
+
+```sh
+kubectl get keycloaks/keycloak -n oidc -o go-template='{% raw %}{{range .status.conditions}}CONDITION: {{.type}}{{"\n"}}  STATUS: {{.status}}{{"\n"}}  MESSAGE: {{.message}}{{"\n"}}{{end}}{% endraw %}'
+```
+
+When ready, you should see:
+
+```
+CONDITION: Ready
+  STATUS: true
+  MESSAGE:
+CONDITION: HasErrors
+  STATUS:
false + MESSAGE: +CONDITION: RollingUpdate + STATUS: false + MESSAGE: +``` + +## Accessing the Admin Console + +The Keycloak Operator generates initial admin credentials stored in a Secret: + +```sh +# Get the admin username +kubectl get secret keycloak-initial-admin -n oidc -o jsonpath='{.data.username}' | base64 --decode + +# Get the admin password +kubectl get secret keycloak-initial-admin -n oidc -o jsonpath='{.data.password}' | base64 --decode +``` + +Access the admin console at `https://auth.example.com/admin`. + +!!! warning "Security" + Change the default admin credentials and enable MFA before using in production. + +## Configuring Keycloak for kcp + +After Keycloak is running, configure it for kcp authentication: + +### 1. Create a Realm + +1. Log in to the Keycloak Admin Console +2. Create a new realm (e.g., `kcp`) +3. Configure the realm settings as needed + +### 2. Create a Client for kcp + +1. Navigate to **Clients** in your realm +2. Click **Create client** +3. Configure the client: + - **Client ID**: `kcp` + - **Client authentication**: Off (for public clients like CLI tools) + - **Valid redirect URIs**: + - `http://localhost:8000` + - `http://127.0.0.1:8000/` + - **Web origins**: `+` + +### 3. Configure Identity Providers (Optional) + +To enable social login (e.g., GitHub): + +1. Navigate to **Identity providers** +2. Add a new provider (e.g., GitHub) +3. 
Configure the provider with your OAuth app credentials + +## Configuring kcp to Use Keycloak + +Configure kcp to use Keycloak as the OIDC provider by setting the following flags: + +```sh +kcp start \ + --oidc-issuer-url=https://auth.example.com/realms/kcp \ + --oidc-client-id=kcp \ + --oidc-username-claim=preferred_username \ + --oidc-groups-claim=groups +``` + +Or via Helm values: + +```yaml +kcp: + oidc: + enabled: true + issuerURL: https://auth.example.com/realms/kcp + clientID: kcp + usernameClaim: preferred_username + groupsClaim: groups +``` + +## Configuring kubectl for OIDC + +To authenticate with kcp using Keycloak, install the [kubelogin](https://github.com/int128/kubelogin) plugin: + +```sh +kubectl krew install oidc-login +``` + +Configure kubectl credentials using the command line: + +```sh +kubectl config set-credentials oidc \ + --exec-api-version=client.authentication.k8s.io/v1beta1 \ + --exec-command=kubectl \ + --exec-arg=oidc-login \ + --exec-arg=get-token \ + --exec-arg=--oidc-issuer-url=https://auth.example.com/realms/kcp \ + --exec-arg=--oidc-client-id=kcp \ + --exec-arg=--oidc-extra-scope=email +``` + +Or manually edit your kubeconfig: + +```yaml +users: +- name: oidc + user: + exec: + apiVersion: client.authentication.k8s.io/v1beta1 + command: kubectl + args: + - oidc-login + - get-token + - --oidc-issuer-url=https://auth.example.com/realms/kcp + - --oidc-client-id=kcp + - --oidc-extra-scope=email +``` + +Then set your context to use the OIDC credentials: + +```sh +kubectl config set-context --current --user=oidc +``` + +## Custom Ingress Configuration + +If the default ingress does not fit your use case, disable it and create your own: + +```yaml +apiVersion: k8s.keycloak.org/v2alpha1 +kind: Keycloak +metadata: + name: keycloak + namespace: oidc +spec: + # ... other configuration + ingress: + enabled: false +``` + +Then create your own Ingress resource pointing to the `keycloak-service` service. 
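Such a custom Ingress could look like this minimal sketch. It assumes an NGINX ingress controller and targets the operator-created `keycloak-service` backend; because Keycloak terminates TLS itself on port 8443, the ingress must speak HTTPS to the backend (the `backend-protocol` annotation). Adjust the host, class, and annotations to your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: oidc
  annotations:
    # Keycloak serves HTTPS on 8443, so the controller must use TLS to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: auth.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak-service
                port:
                  number: 8443
```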
+ +## Troubleshooting + +### Check Keycloak Pods + +```sh +kubectl get pods -n oidc -l app=keycloak +kubectl logs -n oidc -l app=keycloak +``` + +### Check Database Connectivity + +```sh +kubectl get pods -n oidc -l cnpg.io/cluster=keycloak-auth +``` + +### Port Forward for Local Access + +For debugging, you can port-forward to the Keycloak service: + +```sh +kubectl port-forward -n oidc service/keycloak-service 8443:8443 +``` + +Then access Keycloak at `https://localhost:8443`. + +## Reference Files + +Example configuration files are available in the kcp repository: + +- [contrib/production/oidc-keycloak/](https://github.com/kcp-dev/kcp/tree/main/contrib/production/oidc-keycloak) + +## Additional Resources + +- [Keycloak Documentation](https://www.keycloak.org/documentation) +- [Keycloak Operator Guide](https://www.keycloak.org/operator/basic-deployment) +- [CloudNativePG Documentation](https://cloudnative-pg.io/documentation/) diff --git a/docs/content/setup/integrations/lima.md b/docs/content/setup/integrations/lima.md new file mode 100644 index 00000000000..e559fa1ed1f --- /dev/null +++ b/docs/content/setup/integrations/lima.md @@ -0,0 +1,124 @@ +# Lima + +You can run kcp inside a [Lima](https://github.com/lima-vm/lima)-managed VM, which makes it portable across macOS, Linux, and Windows (via WSL2). This setup gives you a disposable kcp control plane that integrates smoothly with your host kubectl. + +!!! info "Development Use Only" + This is essentially a development environment, where one can start a single instance of kcp for testing or limited-scope use cases. This is in no way intended for production usage. 
+
+Create a Lima template for kcp and save the following as `kcp.yaml`:
+
+```yaml
+minimumLimaVersion: 1.1.0
+
+base: template://_images/ubuntu-lts
+
+mounts: []
+
+containerd:
+  system: false
+  user: false
+
+provision:
+- mode: system
+  script: |
+    #!/bin/bash
+    set -eux -o pipefail
+    command -v kcp >/dev/null 2>&1 && exit 0
+
+    export DEBIAN_FRONTEND=noninteractive
+    apt-get update
+    apt-get install -y curl wget
+
+    KCP_VERSION=$(curl -s https://api.github.com/repos/kcp-dev/kcp/releases/latest | grep tag_name | cut -d '"' -f 4)
+    KCP_VERSION_NO_V=${KCP_VERSION#v}
+    # Download the release artifact matching the VM architecture (amd64 or arm64)
+    ARCH=$(dpkg --print-architecture)
+
+    wget https://github.com/kcp-dev/kcp/releases/download/${KCP_VERSION}/kcp_${KCP_VERSION_NO_V}_linux_${ARCH}.tar.gz
+    tar -xzf kcp_${KCP_VERSION_NO_V}_linux_${ARCH}.tar.gz
+    mv bin/kcp /usr/local/bin/
+    chmod +x /usr/local/bin/kcp
+    rm -f kcp_${KCP_VERSION_NO_V}_linux_${ARCH}.tar.gz
+
+    mkdir -p /var/.kcp/
+    chmod 755 /var/.kcp
+
+    cat > /etc/systemd/system/kcp.service << EOF
+    [Unit]
+    Description=kcp server
+    After=network.target
+
+    [Service]
+    Type=simple
+    User=root
+    ExecStart=/usr/local/bin/kcp start --root-directory=/var/.kcp/ --bind-address=127.0.0.1
+    Restart=on-failure
+    StandardOutput=journal
+    StandardError=journal
+
+    [Install]
+    WantedBy=multi-user.target
+    EOF
+
+    systemctl daemon-reload
+    systemctl enable kcp
+    systemctl start kcp
+
+probes:
+- script: |
+    #!/bin/bash
+    set -eux -o pipefail
+    if ! timeout 120s bash -c "until curl -f -s --cacert /var/.kcp/apiserver.crt https://127.0.0.1:6443/readyz >/dev/null; do sleep 3; done"; then
+      echo >&2 "kcp is not ready yet"
+      exit 1
+    fi
+  hint: |
+    The kcp control plane is not ready yet.
+
+    Check the kcp logs with "limactl shell kcp sudo journalctl -u kcp -f"
+
+copyToHost:
+- guest: "/var/.kcp/admin.kubeconfig"
+  host: "{{ '{{.Dir}}' }}/copied-from-guest/kubeconfig.yaml"
+  deleteOnStop: true
+
+message: |
+  To run `kubectl` on the host (assumes kubectl is installed), run:
+  ------
+  export KUBECONFIG="{{ '{{.Dir}}' }}/copied-from-guest/kubeconfig.yaml"
+  kubectl get workspaces
+  ------
+
+```
+
+## Initialize the VM
+
+```sh
+limactl create --name=kcp ./kcp.yaml
+```
+
+## Start the VM
+
+```sh
+limactl start kcp --vm-type=qemu
+```
+
+!!! info
+    On macOS, Lima may default to `vz` (Apple Virtualization), while on Linux it defaults to `qemu`, and on Windows to `wsl2`. If you want consistency across environments, you can explicitly pass `--vm-type=qemu` when starting the VM.
+
+## Export the kcp Kubeconfig
+
+```sh
+export KUBECONFIG="$HOME/.lima/kcp/copied-from-guest/kubeconfig.yaml"
+```
+
+## Verify API Resources
+
+```sh
+kubectl api-resources | grep kcp
+```
+
+You should see kcp-specific resources such as:
+
+```sh
+workspaces        ws    tenancy.kcp.io/v1alpha1    false    Workspace
+logicalclusters         core.kcp.io/v1alpha1       false    LogicalCluster
+...
+```
diff --git a/docs/content/setup/integrations/mcp.md b/docs/content/setup/integrations/mcp.md
new file mode 100644
index 00000000000..014fde8a61a
--- /dev/null
+++ b/docs/content/setup/integrations/mcp.md
@@ -0,0 +1,426 @@
+---
+description: >
+  Set up the Model Context Protocol (MCP) server to enable AI assistants to interact with kcp.
+---
+
+# MCP Server Integration
+
+The [Kubernetes MCP Server](https://github.com/containers/kubernetes-mcp-server) provides a Model Context Protocol interface that enables AI assistants (such as Claude or ChatGPT) to interact with Kubernetes clusters. When configured for kcp, it allows AI assistants to manage workspaces, logical clusters, and other kcp resources.
+
+!!! warning "Known Limitation"
+    The current MCP server integration does not have a clear cluster-inventory or "what clusters do I have access to" capability. We are actively working on better integration to address this limitation.
+
+## Overview
+
+The MCP server acts as a bridge between AI assistants and kcp, translating natural language requests into kcp API operations. It supports:
+
+- Workspace management (create, list, navigate)
+- Logical cluster operations
+- Resource management within workspaces
+- kcp-specific toolsets
+
+## Authentication Modes
+
+The MCP server supports two authentication modes:
+
+| Mode | Description |
+|------|-------------|
+| **OIDC/Bearer Token** (recommended) | Validates JWT tokens from an OIDC provider and uses them for kcp API calls. Each user authenticates with their own identity. |
+| **Kubeconfig** | Uses static credentials from a kubeconfig file. All requests use the same identity. |
+
+This guide focuses on OIDC authentication, which allows the MCP server to use the same identity provider as your kcp deployment.
+
+## How OIDC Authentication Works
+
+When running with OIDC enabled:
+
+1. The MCP server receives an `Authorization: Bearer <token>` header with each request
+2. The server validates the JWT against the OIDC provider
+3. A Kubernetes client is created using that bearer token
+4. All kcp API calls are made with the user's token, preserving their identity and permissions
+
+## Prerequisites
+
+- A running kcp deployment with OIDC authentication (see [Helm installation](helm.md) or [production deployments](production/index.md))
+- Helm 3.x installed
+- Access to create Secrets and ConfigMaps in your cluster
+- The same OIDC provider configured for both kcp and the MCP server
+- **OIDC Provider with Dynamic Client Registration (DCR) support** - The MCP server uses DCR for OAuth flows. Your IdP must:
+  - Support OpenID Connect Dynamic Client Registration
+  - Have the MCP server hostname configured as a trusted host (see [Keycloak configuration](#keycloak-dcr-configuration) below)
+  - Have clients configured with `kcp` as an allowed audience
+
+## Installation
+
+### 1. Create the Base Kubeconfig Secret
+
+The MCP server needs a base kubeconfig that provides the kcp API server endpoint and TLS settings. This kubeconfig does **not** need authentication credentials - the bearer token from OIDC replaces them.
+
+The kcp-operator can generate this kubeconfig automatically using the `Kubeconfig` custom resource. Alternatively, write a minimal kubeconfig without user credentials by hand and store it in the Secret referenced by your Helm values:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- cluster:
+    certificate-authority-data: <base64-encoded kcp CA bundle>
+    server: https://api.your-kcp.example.com:6443
+  name: kcp
+contexts:
+- context:
+    cluster: kcp
+  name: kcp
+current-context: kcp
+```
+
+### 3. Create the TLS Certificate
+
+Create a cert-manager Certificate to generate TLS certificates signed by Let's Encrypt:
+
+```bash
+kubectl apply -f - <