diff --git a/content/en/ecosystem/addons/capsule-proxy/_index.md b/content/en/ecosystem/addons/capsule-proxy/_index.md
deleted file mode 100755
index b78004b..0000000
--- a/content/en/ecosystem/addons/capsule-proxy/_index.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Capsule Proxy
-description: Improve the UX even more with the Capsule Proxy
-weight: 1
-type: docs
----
-
-Capsule Proxy is an add-on for the Capsule Operator that addresses an RBAC limitation of multi-tenancy in Kubernetes: users cannot list only the cluster-scoped resources they own. One solution to this problem would be to grant all users `LIST` permissions for the relevant cluster-scoped resources (e.g. `Namespaces`). However, this would allow users to list all cluster-scoped resources, which is not desirable in a multi-tenant environment and may lead to security issues. Kubernetes RBAC cannot restrict listing to the owned cluster-scoped resources, since there are no ACL-filtered APIs. For example:
-
-```
-$ kubectl get namespaces
-Error from server (Forbidden): namespaces is forbidden:
-User "alice" cannot list resource "namespaces" in API group "" at the cluster scope
-```
-
-The reason, as the error message reports, is that the RBAC `list` action is available only at cluster scope, and it is not granted to users without the appropriate permissions.
-
-To overcome this problem, many Kubernetes distributions introduced mirrored custom resources backed by a custom set of ACL-filtered APIs. However, this radically changes the user experience of Kubernetes by introducing hard customizations that make it painful to move from one distribution to another.
-
-With Capsule, we took a different approach. One of our key goals is to keep the same user experience on all Kubernetes distributions. We want people to use the standard tools they already know and love, and it should just work.
diff --git a/content/en/ecosystem/addons/capsule-proxy/installation.md b/content/en/ecosystem/addons/capsule-proxy/installation.md
deleted file mode 100644
index 2d9e64c..0000000
--- a/content/en/ecosystem/addons/capsule-proxy/installation.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: Installation
-description: >
-  Installation guide for the capsule-proxy
-date: 2017-01-05
-weight: 4
----
-Capsule Proxy is an optional add-on to the main Capsule Operator, so make sure you have a working instance of Capsule before attempting to install it. Use the capsule-proxy only if you want Tenant Owners to list their cluster-scoped resources.
-
-The capsule-proxy can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the API server. Optionally, it can be deployed as a sidecar container in the backend of a dashboard.
-
-Running outside a Kubernetes cluster is also viable, although a valid `KUBECONFIG` file must be provided, using the `KUBECONFIG` environment variable or the default file in `$HOME/.kube/config`.
-
-A Helm Chart is available here.
-
-## Exposure
-
-Depending on your environment, you can expose the capsule-proxy by:
-
- * `Ingress`
- * `NodePort Service`
- * `LoadBalancer Service`
- * `HostPort`
- * `HostNetwork`
-
-Here is how it looks when exposed through an Ingress Controller:
-
-### Distribute CA within the Cluster
-
-The capsule-proxy requires its CA certificate to be distributed to clients. The CA certificate is stored in a Secret named `capsule-proxy` in the `capsule-system` namespace, by default. In most cases, distributing this secret is required for other clients within the cluster (e.g. the Tekton Dashboard). If you are using Ingress or any other endpoints for all the clients, this step is probably not required.
-
-Here's an example of how to distribute the CA certificate to the namespace `tekton-pipelines` by using `kubectl` and `jq`:
-
-```shell
- kubectl get secret capsule-proxy -n capsule-system -o json \
-   | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
-   | kubectl apply -n tekton-pipelines -f -
-```
-
-This can be used for development purposes, but it's not recommended for production environments. Here are solutions for distributing the CA certificate that might be better suited to production environments:
-
- * [Kubernetes Reflector](https://github.com/EmberStack/kubernetes-reflector)
\ No newline at end of file
diff --git a/content/en/ecosystem/addons/capsule-proxy/options.md b/content/en/ecosystem/addons/capsule-proxy/options.md
deleted file mode 100644
index e1950ad..0000000
--- a/content/en/ecosystem/addons/capsule-proxy/options.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Controller Options
-description: >
-  Configure the Capsule Proxy Controller
-date: 2024-02-20
-weight: 10
----
-
-You can customize the Capsule Proxy with the following configuration options.
-
-## Flags
-
-## Feature Gates
-
-Feature Gates are a set of key/value pairs that can be used to enable or disable certain features of the Capsule Proxy. The following feature gates are available:
-
-| **Feature Gate** | **Default Value** | **Description** |
-| :--- | :--- | :--- |
-| `ProxyAllNamespaced` | `false` | `ProxyAllNamespaced` enables proxying of all Namespaced objects. When enabled, it will discover APIs and ensure labels are set for resources in all tenant namespaces, resulting in increased memory usage. However, this feature helps with user experience. |
-| `SkipImpersonationReview` | `false` | `SkipImpersonationReview` skips the impersonation review for all requests containing impersonation headers (user and groups). **DANGER:** Enabling this flag allows any user to impersonate any user or group, essentially bypassing any authorization. Only use this option in trusted environments where authorization/authentication is offloaded to external systems. |
-| `ProxyClusterScoped` | `false` | `ProxyClusterScoped` enables proxying of all cluster-scoped objects for all tenant users. These can be defined via [ProxySettings](/docs/integrations/capsule-proxy/proxysettings/#cluster-resources). |
diff --git a/content/en/ecosystem/addons/capsule-proxy/proxysettings.md b/content/en/ecosystem/addons/capsule-proxy/proxysettings.md
deleted file mode 100644
index a6b00ad..0000000
--- a/content/en/ecosystem/addons/capsule-proxy/proxysettings.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: ProxySettings
-description: >
-  Configure proxy settings for your tenants
-date: 2024-02-20
-weight: 4
----
-
-#### Primitives
-
-> Namespaces are treated specially. Users can list the namespaces they own, but they cannot list all the namespaces in the cluster. You can't define additional selectors.
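-
-To make this concrete, the following is a minimal sketch of granting an owner access to cluster-scoped resources through the proxy (the tenant name *oil* and the owner *alice* are illustrative; the supported kinds and verbs are detailed below):
-
-```yaml
-apiVersion: capsule.clastix.io/v1beta2
-kind: Tenant
-metadata:
-  name: oil
-spec:
-  owners:
-  - kind: User
-    name: alice
-    # proxySettings are evaluated per owner by the capsule-proxy
-    proxySettings:
-    - kind: StorageClasses
-      operations:
-      - List
-```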
-
-Primitives are resource kinds that are strongly tied to tenants.
-
-The proxy setting kind is an enum accepting the supported resources:
-
-| **Enum** | **Description** | **Effective Operations** |
-| --- | --- | --- |
-| `Tenant` | Users are able to `LIST` this tenant | - `LIST` |
-| `StorageClasses` | Perform operations on the [allowed StorageClasses](/docs/tenants/enforcement/#storageclasses) for the tenant | - `LIST` |
-
- * **Nodes**: Based on the [NodeSelector](/docs/tenants/enforcement/#nodeselector) and the Scheduling Expressions, nodes can be listed
- * **[StorageClasses](/docs/tenants/enforcement/#storageclasses)**: Perform actions on the allowed StorageClasses for the tenant
- * **[IngressClasses](/docs/tenants/enforcement/#ingressclasses)**: Perform actions on the allowed IngressClasses for the tenant
- * **[PriorityClasses](/docs/tenants/enforcement/#priorityclasses)**: Perform actions on the allowed PriorityClasses for the tenant
- * **[RuntimeClasses](/docs/tenants/enforcement/#runtimeclasses)**: Perform actions on the allowed RuntimeClasses for the tenant
- * **[PersistentVolumes](/docs/tenants/enforcement/#persistentvolumes)**: Perform actions on the PersistentVolumes owned by the tenant
-
-```go
-GatewayClassesProxy ProxyServiceKind = "GatewayClasses"
-TenantProxy         ProxyServiceKind = "Tenant"
-```
-
-Each resource kind can be granted with several verbs, such as:
-
- * `List`
- * `Update`
- * `Delete`
-
-#### Cluster Resources
-
-This approach is for more generic cluster-scoped resources.
-
-TBD
-
-## Proxy Settings
-
-## Tenants
-
-The Capsule Proxy is a multi-tenant application: each tenant is identified by the `tenantId` in the URL, a unique identifier the Capsule Proxy uses to identify the tenant.
diff --git a/content/en/ecosystem/addons/fluxcd/_index.md b/content/en/ecosystem/addons/fluxcd/_index.md
deleted file mode 100644
index 5b194ed..0000000
--- a/content/en/ecosystem/addons/fluxcd/_index.md
+++ /dev/null
@@ -1,573 +0,0 @@
----
-title: How to operate Tenants GitOps with Flux
-weight: 10
-description: How to operate Tenants the GitOps way with Flux and Capsule together
-type: single
----
-
-# Multi-tenancy the GitOps way
-
-This document guides you through managing Tenant resources the GitOps way, with Flux configured with the [multi-tenancy lockdown](https://fluxcd.io/docs/installation/#multi-tenancy-lockdown).
-
-The proposed approach consists of making Flux reconcile Tenant resources as Tenant Owners, while still providing Namespace-as-a-Service to Tenants.
-
-This means that Tenants can operate and declare multiple Namespaces in their own Git repositories without escaping the policies enforced by Capsule.
-
-## Quickstart
-
-### Install
-
-In order to make it work, you can install the FluxCD addon via Helm:
-
-```shell
-helm install -n capsule-system capsule-addon-fluxcd \
-  oci://ghcr.io/projectcapsule/charts/capsule-addon-fluxcd
-```
-
-### Configure Tenants
-
-In order to make Flux controllers reconcile Tenant resources impersonating a Tenant Owner, a Tenant Owner defined as a `ServiceAccount` is required.
-
-To be recognized by the addon that will automate the required configurations, the `ServiceAccount` needs the `capsule.addon.fluxcd/enabled=true` annotation.
-
-Assuming a configured *oil* `Tenant`, the following Tenant Owner `ServiceAccount` must be declared:
-
-```yml
----
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: oil-system
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: gitops-reconciler
-  namespace: oil-system
-  annotations:
-    capsule.addon.fluxcd/enabled: "true"
-```
-
-Set it as a valid *oil* `Tenant` owner, and make Capsule recognize its `Group`:
-
-```yml
----
-apiVersion: capsule.clastix.io/v1beta2
-kind: Tenant
-metadata:
-  name: oil
-spec:
-  additionalRoleBindings:
-  - clusterRoleName: cluster-admin
-    subjects:
-    - name: gitops-reconciler
-      kind: ServiceAccount
-      namespace: oil-system
-  owners:
-  - name: system:serviceaccount:oil-system:gitops-reconciler
-    kind: ServiceAccount
----
-apiVersion: capsule.clastix.io/v1beta2
-kind: CapsuleConfiguration
-metadata:
-  name: default
-spec:
-  userGroups:
-  - capsule.clastix.io
-  - system:serviceaccounts:oil-system
-```
-
-The addon will automate:
-* RBAC configuration for the `Tenant` owner `ServiceAccount`
-* `Tenant` owner `ServiceAccount` token generation
-* generation of the `Tenant` owner `kubeconfig` needed to send Flux reconciliation requests through the Capsule proxy
-* distribution of that `kubeconfig` across all Tenant `Namespace`s.
-
-The last automation is needed so that the `kubeconfig` can be set on `Kustomization`s/`HelmRelease`s across all of the `Tenant`'s `Namespace`s.
-
-More details on this are available in the deep-dive section.
-
-### How to use
-
-Consider a `Tenant` named *oil* that has a dedicated Git repository containing oil's configurations.
-
-As a platform administrator, you want to provide the *oil* `Tenant` with Namespace-as-a-Service and a GitOps experience, allowing the tenant to version its configurations in a Git repository.
-
-As Tenant owner, you can configure Flux [reconciliation](https://fluxcd.io/flux/concepts/#reconciliation) resources to be applied as the Tenant owner:
-
-```yml
----
-apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
-kind: Kustomization
-metadata:
-  name: oil-apps
-  namespace: oil-system
-spec:
-  serviceAccountName: gitops-reconciler
-  kubeConfig:
-    secretRef:
-      name: gitops-reconciler-kubeconfig
-      key: kubeconfig
-  sourceRef:
-    kind: GitRepository
-    name: oil
----
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: GitRepository
-metadata:
-  name: oil
-  namespace: oil-system
-spec:
-  url: https://github.com/oil/oil-apps
-```
-
-Let's analyze the setup field by field:
-- the `GitRepository` and the `Kustomization` are in a Tenant system `Namespace`
-- the `Kustomization` refers to a `ServiceAccount` to be impersonated when reconciling the resources the `Kustomization` refers to: this ServiceAccount is an *oil* **Tenant owner**
-- the `Kustomization` also refers to a `kubeConfig` to be used when reconciling the resources the `Kustomization` refers to: this is needed to make requests through the **Capsule proxy** in order to operate on cluster-wide resources as a Tenant
-
-The *oil* tenant can also declare new `Namespace`s, thanks to the segregation provided by Capsule.
-
-> Note: explicitly setting the service account name can be avoided when it is set as the default Service Account name at Flux's [kustomize-controller level](https://fluxcd.io/flux/installation/configuration/multitenancy/#how-to-configure-flux-multi-tenancy) via the `default-service-account` flag.
-
-More information is available in the [addon repository](https://github.com/projectcapsule/capsule-addon-fluxcd).
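-
-At this point you can verify the setup from the platform side. A quick sketch, assuming the *oil* configuration above (the label shown is the one Capsule applies to tenant namespaces):
-
-```shell
-# Check that the tenant's reconciliation resources are ready
-flux get kustomizations -n oil-system
-
-# List the namespaces attributed to the oil tenant
-kubectl get namespaces -l capsule.clastix.io/tenant=oil
-```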
-
-## Deep dive
-
-### Flux and multi-tenancy
-
-Flux v2 released a [set of features](https://fluxcd.io/blog/2022/05/may-2022-security-announcement/#whats-next-for-flux) that further increased security for multi-tenancy scenarios.
-
-These features enable you to:
-- disable cross-Namespace references of Source CRs from Reconciliation CRs and Notification CRs. This way, tenants in particular can't access resources outside their space. This can be achieved with the `--no-cross-namespace-refs=true` option of the kustomize, helm, notification, image-reflector, and image-automation controllers.
-- set a default `ServiceAccount` impersonation for Reconciliation CRs. This is supposed to be an unprivileged SA that reconciles just the tenant's desired state. It is enforced when one is not otherwise explicitly specified in the Reconciliation CR spec. This can be enforced with the `--default-service-account=` option of the helm and kustomize controllers.
-
-  > For this responsibility we identify a Tenant GitOps Reconciler identity, which is a ServiceAccount and is also the tenant owner (more on tenants and owners later on, with Capsule).
-
-- disallow remote bases for Kustomizations. This is not strictly required, but it decreases the risk of referencing Kustomizations which aren't part of the controlled GitOps pipelines. In a multi-tenant scenario this is important too. They can be disabled with the `--no-remote-bases=true` option of the kustomize controller.
-
-Where required, to ensure privileged Reconciliation resources have the privileges needed to be reconciled, we can explicitly set a privileged `ServiceAccount`.
-
-In any case, the `ServiceAccount` is required to be in the same `Namespace` as the `Kustomization`, so unprivileged spaces should not have privileged `ServiceAccount`s available.
-
-For example, for the root `Kustomization`:
-
-```yaml
-apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
-kind: Kustomization
-metadata:
-  name: flux-system
-  namespace: flux-system
-spec:
-  serviceAccountName: kustomize-controller # It has cluster-admin permissions
-  path: ./clusters/staging
-  sourceRef:
-    kind: GitRepository
-    name: flux-system
-```
-
-The cluster admin is supposed to apply this Kustomization during the cluster bootstrap, which will also reconcile Flux itself.
-All the remaining Reconciliation resources can be children of this Kustomization.
-
-![bootstrap](./assets/kustomization-hierarchy-root-tenants.png)
-
-### Namespace-as-a-Service
-
-Tenants could have their own set of Namespaces to operate on, but these would have to be prepared by higher-level roles, like platform admins: the declarations would be part of the platform space.
-The platform admins would be responsible for tenant administration, and each change (e.g. a new tenant Namespace) would be a request that has to pass through approval.
-
-![no-naas](./assets/flux-tenants-reconciliation.png)
-
-What if we would like to provide tenants the ability to also manage their own space the GitOps way? Enter Capsule.
-
-![naas](./assets/flux-tenants-capsule-reconciliation.png)
-
-## Manual setup
-
-> Legend:
-> - Privileged space: group of Namespaces which are not part of any Tenant.
-> - Privileged identity: identity that won't pass through Capsule tenant access control.
-> - Unprivileged space: group of Namespaces which are part of a Tenant.
-> - Unprivileged identity: identity that would pass through Capsule tenant access control.
-> - Tenant GitOps Reconciler: a machine Tenant Owner expected to reconcile the Tenant desired state.
-
-### Capsule
-
-Capsule provides a Custom Resource `Tenant` and the ability to set its owners through `spec.owners` as references to:
-- `User`
-- `Group`
-- `ServiceAccount`
-
-#### Tenant and Tenant Owner
-
-As we would like to let a machine reconcile each Tenant's state, we'll need a `ServiceAccount` as a Tenant Owner:
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: gitops-reconciler
-  namespace: my-tenant
----
-apiVersion: capsule.clastix.io/v1beta2
-kind: Tenant
-metadata:
-  name: my-tenant
-spec:
-  owners:
-  - name: system:serviceaccount:my-tenant:gitops-reconciler # the Tenant GitOps Reconciler
-    kind: ServiceAccount
-```
-
-From now on, we'll refer to it as the **Tenant GitOps Reconciler**.
-
-#### Tenant Groups
-
-We also need to state that Capsule should enforce tenant access control for requests coming from tenants. We can do that by specifying, in the `CapsuleConfiguration`, one of the `Group`s bound by default by Kubernetes to the Tenant GitOps Reconciler `ServiceAccount`:
-
-```yaml
-apiVersion: capsule.clastix.io/v1beta2
-kind: CapsuleConfiguration
-metadata:
-  name: default
-spec:
-  userGroups:
-  - system:serviceaccounts:my-tenant
-```
-
-Other privileged requests, e.g. reconciliation requests coming from Flux's privileged `ServiceAccount`s like `flux-system/kustomize-controller`, will bypass Capsule.
-
-### Flux
-
-Flux lets you specify the identity with which Reconciliation resources are reconciled, through:
-- `ServiceAccount` impersonation
-- `kubeconfig`
-
-#### ServiceAccount
-
-As Flux by default reconciles those resources with Flux's `cluster-admin` Service Accounts, we set at controller level the **default `ServiceAccount` impersonation** to the unprivileged **Tenant GitOps Reconciler**:
-
-```yaml
-apiVersion: kustomize.config.k8s.io/v1beta1
-kind: Kustomization
-resources:
-- flux-controllers.yaml
-patches:
-  - patch: |
-      - op: add
-        path: /spec/template/spec/containers/0/args/0
-        value: --default-service-account=gitops-reconciler # the Tenant GitOps Reconciler
-    target:
-      kind: Deployment
-      name: "(kustomize-controller|helm-controller)"
-```
-
-This way, tenants can't make Flux apply their Reconciliation resources with Flux's privileged Service Accounts by simply not specifying a `spec.serviceAccountName` on them.
-
-At the same time, at resource level in the privileged space we can still specify a privileged ServiceAccount, and its reconciliation requests won't pass through Capsule validation:
-
-```yaml
-apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
-kind: Kustomization
-metadata:
-  name: flux-system
-  namespace: flux-system
-spec:
-  serviceAccountName: kustomize-controller
-  path: ./clusters/staging
-  sourceRef:
-    kind: GitRepository
-    name: flux-system
-```
-
-#### Kubeconfig
-
-We also need to specify, on the Tenant's Reconciliation resources, the `Secret` with a **`kubeconfig`** configured to use the **Capsule Proxy** as the API server, in order to provide the Tenant GitOps Reconciler the ability to list cluster-level resources.
-The `kubeconfig` also specifies the Tenant GitOps Reconciler SA token as the token.
-
-For example:
-
-```yaml
-apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
-kind: Kustomization
-metadata:
-  name: my-app
-  namespace: my-tenant
-spec:
-  kubeConfig:
-    secretRef:
-      name: gitops-reconciler-kubeconfig
-      key: kubeconfig
-  sourceRef:
-    kind: GitRepository
-    name: my-tenant
-  path: ./staging
-```
-
-> We'll see how to prepare the related `Secret` (i.e. *gitops-reconciler-kubeconfig*) later on.
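-
-For orientation, the kubeconfig stored in that Secret looks roughly like the following sketch (the server address assumes Capsule Proxy is exposed at its default in-cluster Service; the CA data and token are placeholders):
-
-```yaml
-apiVersion: v1
-kind: Config
-clusters:
-- name: capsule-proxy
-  cluster:
-    server: https://capsule-proxy.capsule-system.svc:9001
-    certificate-authority-data: <base64-encoded CA of the Capsule Proxy certificate>
-contexts:
-- name: default
-  context:
-    cluster: capsule-proxy
-    user: gitops-reconciler
-current-context: default
-users:
-- name: gitops-reconciler
-  user:
-    token: <Tenant GitOps Reconciler ServiceAccount token>
-```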
-
-Each request made with this kubeconfig will be done impersonating the user of the default impersonation SA, which is the same identity the token specified in the kubeconfig belongs to.
-To deepen on this, please go to [#Insights](#insights).
-
-## The recipe
-
-### How to set up Tenants GitOps-ready
-
-Given that [Capsule](https://github.com/projectcapsule/capsule) and [Capsule Proxy](https://github.com/clastix/capsule-proxy) are installed, and [Flux v2](https://github.com/fluxcd/flux2) is configured with the [multi-tenancy lockdown](https://fluxcd.io/docs/installation/#multi-tenancy-lockdown) features through the patch below:
-
-```yaml
-apiVersion: kustomize.config.k8s.io/v1beta1
-kind: Kustomization
-resources:
-- flux-components.yaml
-patches:
-  - patch: |
-      - op: add
-        path: /spec/template/spec/containers/0/args/0
-        value: --no-cross-namespace-refs=true
-    target:
-      kind: Deployment
-      name: "(kustomize-controller|helm-controller|notification-controller|image-reflector-controller|image-automation-controller)"
-  - patch: |
-      - op: add
-        path: /spec/template/spec/containers/0/args/-
-        value: --no-remote-bases=true
-    target:
-      kind: Deployment
-      name: "kustomize-controller"
-  - patch: |
-      - op: add
-        path: /spec/template/spec/containers/0/args/0
-        value: --default-service-account=gitops-reconciler # The Tenant GitOps Reconciler
-    target:
-      kind: Deployment
-      name: "(kustomize-controller|helm-controller)"
-  - patch: |
-      - op: add
-        path: /spec/serviceAccountName
-        value: kustomize-controller
-    target:
-      kind: Kustomization
-      name: "flux-system"
-```
-
-this is the required set of resources to set up a Tenant:
-- `Namespace`: the Tenant GitOps Reconciler "home". This is not part of the Tenant, to avoid a chicken-and-egg problem:
-  ```yaml
-  apiVersion: v1
-  kind: Namespace
-  metadata:
-    name: my-tenant
-  ```
-- `ServiceAccount` of the Tenant GitOps Reconciler, in the above `Namespace`:
-  ```yaml
-  apiVersion: v1
-  kind: ServiceAccount
-  metadata:
-    name: gitops-reconciler
-    namespace: my-tenant
-  ```
-- `Tenant` resource with the above Tenant GitOps Reconciler's SA as Tenant Owner, with:
-- Additional binding to the *cluster-admin* `ClusterRole` for the Tenant's `Namespace`s and the `Namespace` of the Tenant GitOps Reconciler's `ServiceAccount`.
-  By default Capsule binds only the `admin` ClusterRole, which has no privileges over Custom Resources, while *cluster-admin* has. This is needed to operate on Flux CRs:
-  ```yaml
-  apiVersion: capsule.clastix.io/v1beta2
-  kind: Tenant
-  metadata:
-    name: my-tenant
-  spec:
-    additionalRoleBindings:
-    - clusterRoleName: cluster-admin
-      subjects:
-      - name: gitops-reconciler
-        kind: ServiceAccount
-        namespace: my-tenant
-    owners:
-    - name: system:serviceaccount:my-tenant:gitops-reconciler
-      kind: ServiceAccount
-  ```
-- Additional binding to the *cluster-admin* `ClusterRole` for the home `Namespace` of the Tenant GitOps Reconciler's `ServiceAccount`, so that the Tenant GitOps Reconciler can create Flux CRs in the tenant home Namespace and use the Reconciliation resource's `spec.targetNamespace` to place resources in `Tenant` `Namespace`s:
-  ```yaml
-  apiVersion: rbac.authorization.k8s.io/v1
-  kind: RoleBinding
-  metadata:
-    name: gitops-reconciler
-    namespace: my-tenant
-  roleRef:
-    apiGroup: rbac.authorization.k8s.io
-    kind: ClusterRole
-    name: cluster-admin
-  subjects:
-  - kind: ServiceAccount
-    name: gitops-reconciler
-    namespace: my-tenant
-  ```
-- Additional `Group` in the `CapsuleConfiguration` to make Tenant GitOps Reconciler requests pass through Capsule admission (group `system:serviceaccounts:<namespace>`):
-  ```yaml
-  apiVersion: capsule.clastix.io/v1beta2
-  kind: CapsuleConfiguration
-  metadata:
-    name: default
-  spec:
-    userGroups:
-    - system:serviceaccounts:my-tenant
-  ```
-- Additional `ClusterRole` with related `ClusterRoleBinding` that allows the Tenant GitOps Reconciler to impersonate its own `User` (e.g. `system:serviceaccount:my-tenant:gitops-reconciler`):
-  ```yaml
-  apiVersion: rbac.authorization.k8s.io/v1
-  kind: ClusterRole
-  metadata:
-    name: my-tenant-gitops-reconciler-impersonator
-  rules:
-  - apiGroups: [""]
-    resources: ["users"]
-    verbs: ["impersonate"]
-    resourceNames: ["system:serviceaccount:my-tenant:gitops-reconciler"]
-  ---
-  apiVersion: rbac.authorization.k8s.io/v1
-  kind: ClusterRoleBinding
-  metadata:
-    name: my-tenant-gitops-reconciler-impersonate
-  roleRef:
-    apiGroup: rbac.authorization.k8s.io
-    kind: ClusterRole
-    name: my-tenant-gitops-reconciler-impersonator
-  subjects:
-  - name: gitops-reconciler
-    kind: ServiceAccount
-    namespace: my-tenant
-  ```
-- `Secret` with `kubeconfig` for the Tenant GitOps Reconciler, with Capsule Proxy as `kubeconfig.server` and the SA token as `kubeconfig.token` (see the sketch after this list).
-  > This is supported only with Service Account static tokens.
-- Flux Source and Reconciliation resources that refer to the Tenant desired state. This typically points to a specific path inside a dedicated Git repository, where the tenant's root configuration resides:
-  ```yaml
-  apiVersion: source.toolkit.fluxcd.io/v1beta2
-  kind: GitRepository
-  metadata:
-    name: my-tenant
-    namespace: my-tenant
-  spec:
-    url: https://github.com/my-tenant/all.git # Git repository URL
-    ref:
-      branch: main # Git reference
-  ---
-  apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
-  kind: Kustomization
-  metadata:
-    name: my-tenant
-    namespace: my-tenant
-  spec:
-    kubeConfig:
-      secretRef:
-        name: gitops-reconciler-kubeconfig
-        key: kubeconfig
-    sourceRef:
-      kind: GitRepository
-      name: my-tenant
-    path: config # Path to config from GitRepository Source
-  ```
-  This `Kustomization` can in turn refer to further `Kustomization` resources, creating a tenant configuration hierarchy.
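-
-A note on the Service Account static token mentioned in the `Secret` item above: since Kubernetes v1.24, token Secrets are no longer generated automatically for ServiceAccounts, so one may need to be created explicitly. A sketch:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: gitops-reconciler-token
-  namespace: my-tenant
-  annotations:
-    # Kubernetes populates this Secret with a long-lived token for the named SA
-    kubernetes.io/service-account.name: gitops-reconciler
-type: kubernetes.io/service-account-token
-```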
-
-#### Generate the Capsule Proxy kubeconfig Secret
-
-You need to create a `Secret` in the Tenant GitOps Reconciler home `Namespace`, containing the `kubeconfig` that specifies:
-- `server`: the Capsule Proxy `Service` URL with the related CA certificate for TLS
-- `token`: the token of the Tenant GitOps Reconciler
-
-With the required privileges to create `Secret`s in the target `Namespace`, you can generate it with the `proxy-kubeconfig-generator` utility:
-
-```sh
-$ go install github.com/maxgio92/proxy-kubeconfig-generator@latest
-$ proxy-kubeconfig-generator \
-  --kubeconfig-secret-key kubeconfig \
-  --namespace my-tenant \
-  --server 'https://capsule-proxy.capsule-system.svc:9001' \
-  --server-tls-secret-namespace capsule-system \
-  --server-tls-secret-name capsule-proxy \
-  --serviceaccount gitops-reconciler
-```
-
-### How a Tenant can declare its state
-
-Considering the example above, a Tenant *my-tenant* could place further Reconciliation resources in its own repository (i.e. `https://github.com/my-tenant/all`), on branch `main` at path `/config`, like:
-
-```yaml
-apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
-kind: Kustomization
-metadata:
-  name: my-apps
-  namespace: my-tenant
-spec:
-  kubeConfig:
-    secretRef:
-      name: gitops-reconciler-kubeconfig
-      key: kubeconfig
-  sourceRef:
-    kind: GitRepository
-    name: my-tenant
-  path: config/apps
-```
-
-This refers to the same Source but a different path (i.e. `config/apps`) that could contain the tenant's application manifests.
-
-The same is valid for `HelmRelease`s, which instead refer to a `HelmRepository` Source.
-
-The reconciliation requests will pass through Capsule Proxy as the Tenant GitOps Reconciler, with impersonation. Then, as the identity group of the requests matches the Capsule groups, they will be validated by Capsule, and finally RBAC will provide boundaries to the Tenant GitOps Reconciler privileges.
-
-> If `spec.kubeConfig` is not specified, the Flux privileged `ServiceAccount` will impersonate the default unprivileged Tenant GitOps Reconciler `ServiceAccount`, as configured with the `--default-service-account` option of the kustomize and helm controllers, but list requests on cluster-level resources like `Namespace`s will fail.
-
-## Full setup
-
-To get a glimpse of a full setup you can follow the [flux2-capsule-multi-tenancy](https://github.com/clastix/flux2-capsule-multi-tenancy.git) repository.
-For simplicity, the system and tenant declarations are in the same repository, but on dedicated git branches.
-
-It's a fork of [flux2-multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy.git), but with the Capsule integration we just saw.
-
-## Insights
-
-### Why a ServiceAccount impersonates its own User
-
-As stated just above, you might be wondering why a user would make a request impersonating itself (i.e. the Tenant GitOps Reconciler ServiceAccount User).
-
-This is because we need to make tenant reconciliation requests go through Capsule Proxy, and we want to protect against the risk of privilege escalation through a bypass of impersonation.
-
-### Threats
-
-##### Bypass unprivileged impersonation
-
-We can't make impersonation optional: each tenant is allowed to omit both the kubeconfig and the impersonation SA on a Reconciliation resource, and in any case that kubeconfig could contain arbitrary privileged credentials, so Flux would otherwise use its privileged ServiceAccount to reconcile tenant resources.
-
-That way, a tenant would be capable of managing the cluster the GitOps way, as if it were a cluster admin.
-
-Furthermore, let's see if there are other vulnerabilities we are able to protect against.
-
-##### Impersonate a privileged SA
-
-What if a tenant tries to escalate by using one of the Flux controllers' privileged `ServiceAccount`s?
-
-As `spec.serviceAccountName` of a Reconciliation resource cannot reference Service Accounts across namespaces, tenants are able to let Flux apply their own resources only with ServiceAccounts that reside in their own Namespaces. That is, the Namespace of the ServiceAccount and the Namespace of the Reconciliation resource must match.
-
-Neither could a tenant create the Reconciliation resource where a privileged ServiceAccount is present (like *flux-system*), as the Namespace has to be owned by the Tenant: Capsule would block those Reconciliation resource creation requests.
-
-##### Create and impersonate a privileged SA
-
-Then, what if a tenant tries to escalate by creating a privileged `ServiceAccount` inside one of its own `Namespace`s?
-
-A tenant could create a `ServiceAccount` in an owned `Namespace`, but it can bind a ClusterRole neither at cluster level nor in a non-owned Namespace, as that wouldn't be permitted by Capsule admission controllers.
-
-##### Change ownership of privileged Namespaces (e.g. flux-system)
-
-A tenant could try to use a privileged `ServiceAccount` by changing the ownership of a privileged Namespace, so that it could create a Reconciliation resource there using the privileged SA.
-This is not permitted, as a tenant can't patch Namespaces it has not created: Capsule request validation would not pass.
-
-For other protections against threats in this multi-tenancy scenario, please see the Capsule [Multi-Tenancy Benchmark](/docs/general/mtb).
-
-## References
-- https://fluxcd.io/docs/installation/#multi-tenancy-lockdown
-- https://fluxcd.io/blog/2022/05/may-2022-security-announcement/
-- https://github.com/clastix/capsule-proxy/issues/218
-- https://github.com/projectcapsule/capsule/issues/528
-- https://github.com/clastix/flux2-capsule-multi-tenancy
-- https://github.com/fluxcd/flux2-multi-tenancy
-- https://fluxcd.io/docs/guides/repository-structure/
diff --git a/content/en/ecosystem/integrations/rancher.md b/content/en/ecosystem/integrations/rancher.md
index c3fa763..ed3a780 100644
--- a/content/en/ecosystem/integrations/rancher.md
+++ b/content/en/ecosystem/integrations/rancher.md
@@ -29,10 +29,380 @@ Capsule allows tenants isolation and resources control in a declarative way, whi
 
 You can read in detail how the integration works and how to configure it, in the following guides.
 
-How to integrate Rancher Projects with Capsule Tenants
+ * [How to integrate Rancher Projects with Capsule Tenants](#tenants-and-projects)
 
 How to enable cluster-wide resources and Rancher shell access.
 
+![capsule rancher addon](/images/content/capsule-rancher-addon.drawio.png)
+
 ## Tenants and Projects
 
+This guide explains how to set up the integration between Capsule and Rancher Projects.
+
+It then explains how, for the tenant user, access to Kubernetes resources is transparent.
+
+### Pre-requisites
+
+- An authentication provider in Rancher, e.g. an OIDC identity provider
+- A *Tenant Member* `Cluster Role` in Rancher
+
+#### Configure an identity provider for Kubernetes
+
+You can follow [this general guide](/docs/operating/authentication/#oidc) to configure OIDC authentication for Kubernetes.
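+
+For reference, the flags involved on the `kube-apiserver` typically look like the following sketch (the issuer URL and client ID are illustrative placeholders for your identity provider):
+
+```shell
+kube-apiserver \
+  --oidc-issuer-url=https://keycloak.example.com/realms/rancher \
+  --oidc-client-id=kubernetes \
+  --oidc-username-claim=preferred_username \
+  --oidc-groups-claim=groups
+```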
+
+For a Keycloak-specific setup you can check [this resources list](./oidc-keycloak.md).
+
+#### Known issues
+
+##### Keycloak new URLs without `/auth` make Rancher crash
+
+- [rancher/rancher#38480](https://github.com/rancher/rancher/issues/38480)
+- [rancher/rancher#38683](https://github.com/rancher/rancher/issues/38683)
+
+#### Create the Tenant Member Cluster Role
+
+A custom Rancher `Cluster Role` is needed to allow Tenant users to read cluster-scoped resources, as Rancher doesn't provide a built-in Cluster Role with this tailored set of privileges.
+
+When logged in to the Rancher UI as an administrator, from the Users & Authentication page, create a Cluster Role named *Tenant Member* with the following privileges:
+
+- `get`, `list`, `watch` operations over `IngressClasses` resources.
+- `get`, `list`, `watch` operations over `StorageClasses` resources.
+- `get`, `list`, `watch` operations over `PriorityClasses` resources.
+- `get`, `list`, `watch` operations over `Nodes` resources.
+- `get`, `list`, `watch` operations over `RuntimeClasses` resources.
+
+### Configuration (administration)
+
+#### Tenant onboarding
+
+When onboarding tenants, the administrator needs to create the following, in order to bind the `Project` to the `Tenant`:
+
+- In Rancher, create a `Project`.
+- In the target Kubernetes cluster, create a `Tenant`, with the following specification:
+  ```yaml
+  kind: Tenant
+  ...
+  spec:
+    namespaceOptions:
+      additionalMetadata:
+        annotations:
+          field.cattle.io/projectId: ${CLUSTER_ID}:${PROJECT_ID}
+        labels:
+          field.cattle.io/projectId: ${PROJECT_ID}
+  ```
+  where `$CLUSTER_ID` and `$PROJECT_ID` can be retrieved, assuming a valid `$CLUSTER_NAME`, as:
+
+  ```shell
+  CLUSTER_NAME=foo
+  CLUSTER_ID=$(kubectl get cluster -n fleet-default ${CLUSTER_NAME} -o jsonpath='{.status.clusterName}')
+  PROJECT_IDS=$(kubectl get projects -n $CLUSTER_ID -o jsonpath="{.items[*].metadata.name}")
+  for project_id in $PROJECT_IDS; do echo "${project_id}"; done
+  ```
+
+  More on declarative `Project`s [here](https://github.com/rancher/rancher/issues/35631).
+- In the identity provider, create a user with the [correct OIDC claim](https://capsule.clastix.io/docs/guides/oidc-auth) of the Tenant.
+- In Rancher, add the new user to the `Project` with the *Read-only* `Role`.
+- In Rancher, add the new user to the `Cluster` with the *Tenant Member* `Cluster Role`.
+
+#### Create the Tenant Member Project Role
+
+A custom `Project Role` is needed to grant Tenant users a minimal set of privileges plus the ability to create and delete `Namespace`s.
+
+Create a Project Role named *Tenant Member* that inherits the privileges from the following Roles:
+- *read-only*
+- *create-ns*
+
+#### Usage
+
+When the administrative configuration tasks have been completed, the tenant users are ready to use the Kubernetes cluster transparently.
+
+For example, they can create Namespaces in self-service mode, which would otherwise be impossible with the sole use of Rancher Projects.
+
+#### Namespace creation
+
+From the tenant user's perspective, both the CLI and the UI are valid interfaces to communicate with.
+
+#### From CLI
+
+- The tenant user logs in to the OIDC provider via `kubectl`.
+- The tenant user creates a Namespace, as a valid OIDC-discoverable user.
+
+The `Namespace` is now part of both the Tenant and the Project.
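+
+For example (a sketch; the namespace name is illustrative):
+
+```shell
+kubectl create namespace oil-development
+```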
+
+> As administrator, you can verify with:
+>
+> ```shell
+> kubectl get tenant ${TENANT_NAME} -o jsonpath='{.status}'
+> kubectl get namespace -l field.cattle.io/projectId=${PROJECT_ID}
+> ```
+
+#### From UI
+
+- The tenant user logs in to Rancher, with a valid OIDC-discoverable user (in a valid Tenant group).
+- The tenant user creates a valid Namespace.
+
+The `Namespace` is now part of both the Tenant and the Project.
+
+> As administrator, you can verify with:
+>
+> ```shell
+> kubectl get tenant ${TENANT_NAME} -o jsonpath='{.status}'
+> kubectl get namespace -l field.cattle.io/projectId=${PROJECT_ID}
+> ```
+
+### Additional administration
+
+#### Project monitoring
+
+Before proceeding, it is recommended to read the official Rancher documentation about [Project Monitors](https://ranchermanager.docs.rancher.com/v2.6/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/project-monitors).
+
+In summary, the setup is composed of a cluster-level Prometheus and the Prometheus Federator, via which the single Project-level Prometheus instances federate.
+
+#### Network isolation
+
+Before proceeding, it is recommended to read the official Capsule documentation about [`NetworkPolicy` at `Tenant` level](/docs/tenants/enforcement/#networkpolicies).
+
+##### Network isolation and Project Monitor
+
+As Rancher's Project Monitor deploys the Prometheus stack in a `Namespace` that is part of **neither** the `Project` **nor** the `Tenant` `Namespace`s, it is important to apply the label selectors in the `NetworkPolicy` `ingress` rules to the `Namespace` created by Project Monitor.
+
+That Project monitoring `Namespace` will be named `cattle-project--monitoring`.
+
+For example, if the `NetworkPolicy` is configured to allow all ingress traffic from `Namespace`s with the label `capsule.clastix.io/tenant=foo`, this label is to be applied to the Project monitoring `Namespace` too.
+
+Then, a `NetworkPolicy` can be applied at `Tenant` level with Capsule `GlobalTenantResource`s. For example, a minimal policy can be applied for the *oil* `Tenant`:
+
+```yaml
+apiVersion: capsule.clastix.io/v1beta2
+kind: GlobalTenantResource
+metadata:
+  name: oil-networkpolicies
+spec:
+  tenantSelector:
+    matchLabels:
+      capsule.clastix.io/tenant: oil
+  resyncPeriod: 360s
+  pruningOnDelete: true
+  resources:
+    - namespaceSelector:
+        matchLabels:
+          capsule.clastix.io/tenant: oil
+      rawItems:
+        - apiVersion: networking.k8s.io/v1
+          kind: NetworkPolicy
+          metadata:
+            name: oil-minimal
+          spec:
+            podSelector: {}
+            policyTypes:
+              - Ingress
+              - Egress
+            ingress:
+              # Intra-Tenant
+              - from:
+                  - namespaceSelector:
+                      matchLabels:
+                        capsule.clastix.io/tenant: oil
+              # Rancher Project Monitor stack
+              - from:
+                  - namespaceSelector:
+                      matchLabels:
+                        role: monitoring
+              # Kubernetes nodes
+              - from:
+                  - ipBlock:
+                      cidr: 192.168.1.0/24
+            egress:
+              # Kubernetes DNS server
+              - to:
+                  - namespaceSelector: {}
+                    podSelector:
+                      matchLabels:
+                        k8s-app: kube-dns
+                ports:
+                  - port: 53
+                    protocol: UDP
+              # Intra-Tenant
+              - to:
+                  - namespaceSelector:
+                      matchLabels:
+                        capsule.clastix.io/tenant: oil
+              # Kubernetes API server
+              - to:
+                  - ipBlock:
+                      cidr: 10.43.0.1/32
+                ports:
+                  - port: 443
+```
+
+## Capsule Proxy and Rancher Projects
+
+This guide explains how to set up the integration between Capsule Proxy and Rancher Projects.
+
+It then explains how, for the tenant user, access to Kubernetes cluster-wide resources is transparent.
+
+### Rancher Shell and Capsule
+
+In order to integrate the Rancher Shell with Capsule, the Kubernetes API requests made from the shell need to be routed via Capsule Proxy.
+
+The [capsule-rancher-addon](https://github.com/clastix/capsule-addon-rancher/tree/master/charts/capsule-rancher-addon) allows the integration transparently.
+
+#### Install the Capsule addon
+
+Add the Clastix Helm repository `https://clastix.github.io/charts`.
+
+After updating the cache with Clastix's Helm repository, a Helm chart named `capsule-rancher-addon` is available.
+
+Install it, paying attention to the following Helm values:
+
+* `proxy.caSecretKey`: the `Secret` key that contains the CA certificate used to sign the Capsule Proxy TLS certificate (it should be `"ca.crt"` when Capsule Proxy has been configured with certificates generated with Cert Manager).
+* `proxy.servicePort`: the port configured for the Capsule Proxy Kubernetes `Service` (`443` in this setup).
+* `proxy.serviceURL`: the name of the Capsule Proxy `Service` (by default `"capsule-proxy.capsule-system.svc"` when installed in the *capsule-system* `Namespace`).
+
+### Rancher Cluster Agent
+
+In both the CLI and dashboard use cases, the [Cluster Agent](https://ranchermanager.docs.rancher.com/v2.5/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/about-rancher-agents) is responsible for the two-way communication between Rancher and the downstream cluster.
+
+In a standard setup, the Cluster Agent communicates with the API server. In this setup, it will communicate with Capsule Proxy to ensure filtering of cluster-scoped resources for Tenants.
+
+The Cluster Agent accepts as arguments:
+- the `KUBERNETES_SERVICE_HOST` environment variable
+- the `KUBERNETES_SERVICE_PORT` environment variable
+
+which will be set, at cluster import time, to the values of the Capsule Proxy `Service`. For example:
+- `KUBERNETES_SERVICE_HOST=capsule-proxy.capsule-system.svc`
+- (optional) `KUBERNETES_SERVICE_PORT=9001`. You can skip it by installing Capsule Proxy with the Helm value `service.port=443`.
+
+The expected CA is the one whose certificate is inside the `kube-root-ca.crt` `ConfigMap` in the same `Namespace` as the Cluster Agent (*cattle-system*).
+
+### Capsule Proxy
+
+[Capsule Proxy](docs/proxy/) needs to provide an x509 certificate whose root CA is trusted by the Cluster Agent.
+The goal can be achieved either by using the Kubernetes CA to sign its certificate, or by using a dedicated root CA.
+
+#### With the Kubernetes root CA
+
+> Note: this can be achieved when the Kubernetes root CA keypair is accessible. For example, this is likely to be possible with an on-premises setup, but not with managed Kubernetes services.
+
+With this approach, Cert Manager will sign certificates with the Kubernetes root CA, which needs to be provided as a `Secret`:
+
+```shell
+kubectl create secret tls -n capsule-system kubernetes-ca-key-pair --cert=/path/to/ca.crt --key=/path/to/ca.key
+```
+
+When installing Capsule Proxy with the Helm chart, you need to specify that Capsule Proxy `Certificate`s are generated by Cert Manager with an external `ClusterIssuer`:
+- `certManager.externalCA.enabled=true`
+- `certManager.externalCA.secretName=kubernetes-ca-key-pair`
+- `certManager.generateCertificates=true`
+
+and disable the job for generating the certificates without Cert Manager:
+- `options.generateCertificates=false`
+
+#### Enable tenant users to access cluster resources
+
+In order to allow tenant users to list cluster-scoped resources, like `Node`s, Tenants need to be configured with proper `proxySettings`, for example:
+
+```yaml
+apiVersion: capsule.clastix.io/v1beta2
+kind: Tenant
+metadata:
+  name: oil
+spec:
+  owners:
+  - kind: User
+    name: alice
+    proxySettings:
+    - kind: Nodes
+      operations:
+      - List
+[...]
+```
+
+Also, in order to assign or filter nodes per Tenant, labels are needed on the nodes so they can be selected:
+
+```shell
+kubectl label node worker-01 capsule.clastix.io/tenant=oil
+```
+
+and a node selector at Tenant level:
+
+```yaml
+apiVersion: capsule.clastix.io/v1beta2
+kind: Tenant
+metadata:
+  name: oil
+spec:
+  nodeSelector:
+    capsule.clastix.io/tenant: oil
+[...]
+```
+
+The final manifest is:
+
+```yaml
+apiVersion: capsule.clastix.io/v1beta2
+kind: Tenant
+metadata:
+  name: oil
+spec:
+  owners:
+  - kind: User
+    name: alice
+    proxySettings:
+    - kind: Nodes
+      operations:
+      - List
+  nodeSelector:
+    capsule.clastix.io/tenant: oil
+```
+
+The same applies to:
+- `Nodes`
+- `StorageClasses`
+- `IngressClasses`
+- `PriorityClasses`
+
+More on this in the [official documentation](https://capsule.clastix.io/docs/general/proxy#tenant-owner-authorization).
+
+## Configure OIDC authentication with Keycloak
+
+### Pre-requisites
+
+- Keycloak realm for Rancher
+- Rancher OIDC authentication provider
+
+### Keycloak realm for Rancher
+
+These instructions are specific to a setup made with Keycloak as an OIDC identity provider.
+
+#### Mappers
+
+- Add to userinfo: Group Membership type, claim name `groups`
+- Add to userinfo: Audience type, claim name `client audience`
+- Add to userinfo: full group path, Group Membership type, claim name `full_group_path`
+
+More on this in the [official guide](/docs/operating/authentication/#oidc).
+
+### Rancher OIDC authentication provider
+
+Configure an OIDC authentication provider, with a Client whose issuer and return URLs are specific to the Keycloak setup.
+
+> Use the old, Rancher-standard paths with the `/auth` subpath (see the known issues above).
+>
+> Add custom paths; remove the `/auth` subpath in the return and issuer URLs.
+
+### Configuration
+
+#### Configure Tenant users
+1. In Rancher, configure OIDC authentication with Keycloak for use [with Rancher](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc).
+1. In Keycloak, create a Group in the rancher Realm: *capsule.clastix.io*.
+1. In Keycloak, create a User in the rancher Realm, member of the *capsule.clastix.io* Group.
+1. In the Kubernetes target cluster, update the `CapsuleConfiguration` by adding the `"keycloakoidc_group://capsule.clastix.io"` Kubernetes `Group`.
+1. Log in to Rancher with Keycloak with the new user.
+1. In Rancher, as an administrator, set a custom role for the user with `get` on the Cluster.
+1. In Rancher, as an administrator, add the Rancher user ID of the just-logged-in user as Owner of a `Tenant`.
+1. (optional) Configure `proxySettings` for the `Tenant` to enable tenant users to access cluster-wide resources.
diff --git a/data/addons.yaml b/data/addons.yaml
index 21e8945..217a505 100644
--- a/data/addons.yaml
+++ b/data/addons.yaml
@@ -13,6 +13,24 @@ addons:
       #layoutColor: "#0000000"
       #descriptionColor: "#000000"
 
+  - name: "Rancher"
+    logo: "https://www.rancher.com/assets/img/logos/rancher-logo-cow-blue.svg"
+    tags:
+      - "community"
+      - "ux"
+    links:
+      - link: "/ecosystem/integrations/rancher/"
+        icon: "fa fa-book"
+      - link: "https://github.com/clastix/capsule-addon-rancher"
+        icon: "fab fa-github"
+    description: "Integrate Capsule with Rancher to manage Capsule Tenants and their resources with Rancher Projects."
+    size: 50%
+    background: "#00264d"
+    #layoutColor: "#0000000"
+    #descriptionColor: "#000000"
+
   - name: "ArgoCD"
     logo: "https://github.com/peak-scale/capsule-argo-addon/blob/main/docs/images/capsule-argo.png?raw=true"
     tags:
diff --git a/data/resources.yaml b/data/resources.yaml
index 6c5d226..9111e59 100644
--- a/data/resources.yaml
+++ b/data/resources.yaml
@@ -9,6 +9,10 @@ resources:
     date: "2025-02-10"
     thumbnail: "/images/content/multi-tenant-spectrum.png"
     type: "article"
+  - title: "Taming the Kube tenancy kraken (Capsule with Rancher)"
+    youtube: "dEVeWXUNbxQ"
+    date: "2024-12-12"
+    type: "video"
   - title: "Painless Multi-Tenant Kafka on Kubernetes with Istio at ASML - Thomas Reichel & Dominique Chanet"
     youtube: "qMkV5qeOnfg"
     date: "2024-10-07"
diff --git a/static/images/content/capsule-rancher-addon.drawio.png b/static/images/content/capsule-rancher-addon.drawio.png
new file mode 100644
index 0000000..8e61efc
Binary files /dev/null and b/static/images/content/capsule-rancher-addon.drawio.png differ