13 changes: 9 additions & 4 deletions config/_default/hugo.yaml
Original file line number Diff line number Diff line change
@@ -37,7 +37,7 @@ params:
github_project_repo: 'https://github.com/projectcapsule/capsule'
github_branch: main
gcs_engine_id: ''
offlineSearch: false
offlineSearch: true
prism_syntax_highlighting: false
showLineNumbers: false
version_menu_pagelinks: true
@@ -159,17 +159,22 @@ menu:
pre: <i class='fas fa-users'></i>
weight: 13

- name: Live-Demo
url: https://killercoda.com/peak-scale-test/course/playgrounds/capsule
pre: <i class='fas fa-terminal'></i>
weight: 14

- name: Resources
url: /resources/
pre: <i class='fa-brands fa-readme'></i>
weight: 14
weight: 15

- name: Support
url: /support/
pre: <i class='fas fa-briefcase'></i>
weight: 15
weight: 16

- name: Project
url: /project/
pre: <i class='fas fa-flask'></i>
weight: 16
weight: 17
7 changes: 4 additions & 3 deletions content/en/_index.md
@@ -4,20 +4,21 @@ title: Project Capsule

{{< blocks/cover title="Capsule" image_anchor="top" height="full" >}}
# A multi-tenancy and policy-based framework for Kubernetes { class="text-center" }

### Developed in 🇮🇹 / 🇨🇭 / 🇧🇬 { class="text-center mb-4" }
<div class="mt-5 mx-auto">
<a class="btn btn-lg btn-primary me-3 mb-4" href="/docs/overview">
Learn More <i class="fas fa-arrow-alt-circle-right ms-2"></i>
</a>

<a class="btn btn-lg btn-primary me-3 mb-4" href="https://killercoda.com/peak-scale-test/course/playgrounds/capsule">
Demo <i class="fas fa-arrow-alt-circle-right ms-2"></i>
Live-Demo <i class="fas fa-arrow-alt-circle-right ms-2"></i>
</a>
</div>

{{< blocks/link-down color="info" >}}
{{< /blocks/cover >}}


<a href="/adopters">
{{< blocks/section color="white" type="row" >}}

{{< adopters-slider >}}
60 changes: 50 additions & 10 deletions content/en/docs/operating/authentication.md
@@ -4,7 +4,7 @@ description: Integrate Capsule with Authentication of your Kubernetes cluster
weight: 5
---

Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of authentication are supported. The only requirement to use Capsule is to assign tenant users to the group defined by userGroups option in the CapsuleConfiguration, which defaults to capsule.clastix.io.
Capsule does not care about the authentication strategy used in the cluster: all Kubernetes authentication methods are supported. The only requirement to use Capsule is to assign tenant users to the group defined by the `userGroups` option in the `CapsuleConfiguration`, which defaults to `projectcapsule.dev`.
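
For example, the group can be adjusted in the `CapsuleConfiguration`. A minimal sketch (verify the `apiVersion` against the CRDs installed in your cluster):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  # Users in any of these groups are treated as tenant users by Capsule
  userGroups:
  - projectcapsule.dev
```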

## OIDC

@@ -15,16 +15,17 @@ In the following guide, we'll use [Keycloak](https://www.keycloak.org/) an Open
Configure Keycloak as OIDC server:

* Add a realm called caas, or use any existing realm instead
* Add a group capsule.clastix.io
* Add a user alice assigned to group capsule.clastix.io
* Add an OIDC client called kubernetes
* Add a group `projectcapsule.dev`
* Add a user alice assigned to group `projectcapsule.dev`
* Add an OIDC client called `kubernetes` (Public)
* Add an OIDC client called `kubernetes-auth` (Confidential (Client Secret))

For the `kubernetes` client, create protocol mappers called `groups` and `audience`.
If everything is done correctly, you should now be able to authenticate in Keycloak and see user groups in JWT tokens. Use the following snippet to authenticate in Keycloak as the alice user:

```bash
$ KEYCLOAK=sso.clastix.io
$ REALM=caas
$ REALM=kubernetes-auth
$ OIDC_ISSUER=${KEYCLOAK}/realms/${REALM}

$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/token \
@@ -54,7 +55,7 @@ To introspect the `ID_TOKEN` token run:
```bash
$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/introspect \
-d token=${ID_TOKEN} \
--user ${OIDC_CLIENT_ID}:${OIDC_CLIENT_SECRET} | jq
--user kubernetes-auth:${OIDC_CLIENT_SECRET} | jq
```

The result will be like the following:
@@ -82,6 +83,45 @@ The result will be like the following:

Configuring Kubernetes for OIDC Authentication requires adding several parameters to the API Server. Please refer to the [documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) for details and examples. Most likely, your kube-apiserver.yaml manifest will look like the following:

#### Authentication Configuration (Recommended)

The configuration file approach allows you to configure multiple JWT authenticators, each with a unique `issuer.url` and `issuer.discoveryURL`. The configuration file also allows you to specify CEL expressions to map claims to user attributes, and to validate claims and user information.

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://${OIDC_ISSUER}
    audiences:
    - kubernetes
    - kubernetes-auth
    audienceMatchPolicy: MatchAny
    certificateAuthority: <PEM encoded CA certificates>
  claimMappings:
    username:
      claim: 'email'
      prefix: ""
    groups:
      claim: 'groups'
      prefix: ""
```
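
As mentioned, CEL expressions can also validate claims before a token is accepted. A minimal sketch, assuming the group name from this guide, adds `claimValidationRules` to a `jwt` authenticator entry:

```yaml
jwt:
- issuer:
    url: https://${OIDC_ISSUER}
  claimValidationRules:
  # Reject tokens whose groups claim does not include the Capsule user group
  - expression: "claims.groups.exists(g, g == 'projectcapsule.dev')"
    message: "user must be a member of the projectcapsule.dev group"
```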

This file must be present and consistent across all kube-apiserver instances in the cluster. Add the following flag to the kube-apiserver manifest:

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --authentication-configuration-file=/etc/kubernetes/authentication/authentication.yaml
```

[Read More](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-authentication-configuration)

#### OIDC Flags (Legacy)

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-
```
@@ -112,7 +152,7 @@ nodes:
    extraArgs:
      oidc-issuer-url: https://${OIDC_ISSUER}
      oidc-username-claim: preferred_username
      oidc-client-id: kubernetes
      oidc-username-prefix: "keycloak:"
      oidc-groups-claim: groups
      oidc-groups-prefix: "keycloak:"
@@ -133,7 +173,7 @@ One way to use OIDC authentication is the use of a kubectl plugin. The [Kubelogi
```shell
kubectl oidc-login setup \
--oidc-issuer-url=https://${OIDC_ISSUER} \
--oidc-client-id=${OIDC_CLIENT_ID} \
--oidc-client-id=kubernetes-auth \
--oidc-client-secret=${OIDC_CLIENT_SECRET}
```

Expand All @@ -146,7 +186,7 @@ $ kubectl config set-credentials oidc \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=https://${OIDC_ISSUER} \
--auth-provider-arg=idp-certificate-authority=/path/to/ca.crt \
--auth-provider-arg=client-id=${OIDC_CLIENT_ID} \
--auth-provider-arg=client-id=kubernetes-auth \
--auth-provider-arg=client-secret=${OIDC_CLIENT_SECRET} \
--auth-provider-arg=refresh-token=${REFRESH_TOKEN} \
--auth-provider-arg=id-token=${ID_TOKEN} \
5 changes: 5 additions & 0 deletions content/en/docs/operating/best-practices/_index.md
@@ -0,0 +1,5 @@
---
title: Best Practices
weight: 2
description: Best Practices when running Capsule in production
---
@@ -1,6 +1,6 @@
---
title: Architecture
weight: 10
weight: 1
description: Architecture references and considerations
---

@@ -14,6 +14,39 @@ In Capsule, we introduce a new persona called the `Tenant Owner`. The goal is to

Capsule provides robust tools to strictly enforce tenant boundaries, ensuring that each tenant operates within its defined limits. This separation of duties promotes both security and efficient resource management.

### Key Decisions

Introducing a new separation of duties can lead to a significant paradigm shift. This has technical implications and may also impact your organizational structure. Therefore, when designing a multi-tenant platform pattern, carefully consider the following aspects. As a **Cluster Administrator**, ask yourself:

* 🔑 **How much ownership can be delegated to Tenant Owners (Platform Users)?**

The answer to this question may be influenced by the following aspects:

* **Are the Cluster Administrators willing to grant permissions to Tenant Owners?**
  * _You might have a problem with know-how, and your organisation is probably not yet pushing Kubernetes itself enough as a key strategic platform. The key here is enabling Platform Users through good UX and know-how transfers._

* **Who is responsible for the deployed workloads within the Tenants?**
  * _If Platform Administrators are still handling this, a true “shift left” has not yet been achieved._

* **Who gets paged during a production outage within a Tenant’s application?**
  * _You’ll need robust monitoring that enables Tenant Owners to clearly understand and manage what’s happening inside their own tenant._

* **Are your customers technically capable of working directly with the Kubernetes API?**
  * _If not, you may need to build a more user-friendly platform with better UX — for example, a multi-tenant ArgoCD setup, or UI layers like Headlamp._


## Layouts

Let's discuss different tenant layouts that could be used. These are just approaches we have seen; you might also find a combination of these that fits your use case.

### Tenant As A Service

With this approach you essentially just provide your customers with a Tenant on your cluster. The rest is their responsibility. This amounts to a shared responsibility model, and it works when the Tenant Owners are responsible for everything they provision within their Tenant's namespaces.

![Tenant as a Service Layout](/images/content/architecture/layout-taas.drawio.png)



## Scheduling

Workload distribution across your compute infrastructure can be approached in various ways, depending on your specific priorities. Regardless of the use case, it's essential to preserve maximum flexibility for your platform administrators. This means ensuring that:
@@ -32,7 +65,7 @@ Strong tenant isolation, ensuring that any noisy neighbor effects remain confine

### Shared

With this approach you share the nodes amongst all Tenants, therefor giving you more potential for optimizing resources on a node level. It's a common pattern to separate the controllers needed to power your distro (operators) form the actual workload. This ensures smooth operations for the clust
With this approach you share the nodes amongst all Tenants, therefore giving you more potential for optimizing resources on a node level. It's a common pattern to separate the controllers needed to power your distribution (operators) from the actual workload. This ensures smooth operations for the cluster.

**Overview**:

@@ -43,7 +76,8 @@ With this approach you share the nodes amongst all Tenants, therefor giving you

![Shared Nodepool](/images/content/scheduling-shared.drawio.png)

There's some further aspects you must think about with shared approaches:

We provide the concept of [ResourcePools](/docs/resourcepools/) to manage resources across namespaces. There are some further aspects you must think about with shared approaches:

* [PriorityClasses](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/)
34 changes: 34 additions & 0 deletions content/en/docs/operating/best-practices/general-advice.md
@@ -0,0 +1,34 @@
---
title: General Advice
weight: 2
description: General advice to consider before deciding on a Kubernetes distribution
---

This is general advice you should consider before deciding on a Kubernetes distribution. These points are partly relevant for multi-tenancy with Capsule.

### Authentication

User authentication for the platform should be handled via a central OIDC-compatible identity provider system (e.g., Keycloak, Azure AD, Okta, or any other OIDC-compliant provider).
The rationale is that other central platform components — such as ArgoCD, Grafana, Headlamp, or Harbor — should also integrate with the same authentication mechanism. This enables a unified login experience and reduces administrative complexity in managing users and permissions.

[Capsule relies on native Kubernetes RBAC](/docs/operating/authentication/), so it's important to consider how the Kubernetes API handles user authentication.

### OCI Pull-Cache

By default, Kubernetes clusters pull images directly from upstream registries like `docker.io`, `quay.io`, `ghcr.io`, or `gcr.io`. In production environments, this can lead to issues — especially because Docker Hub enforces rate limits that may cause image pull failures with just a few nodes or frequent deployments (e.g., when pods are rescheduled).

To ensure availability, performance, and control over container images, it's essential to provide an on-premise OCI mirror.
This mirror should be configured via the CRI (Container Runtime Interface) by defining it as a mirror endpoint in `registries.conf` for default registries (e.g., `docker.io`).
This way, all nodes automatically benefit from caching without requiring developers to change image URLs.
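
A minimal sketch of such a mirror entry, in the `registries.conf` format used by CRI-O and Podman (the mirror hostname is a placeholder for your on-premise registry):

```toml
# /etc/containers/registries.conf on each node
[[registry]]
prefix = "docker.io"
location = "docker.io"

# Pulls for docker.io are first attempted against the local mirror,
# falling back to the upstream registry if the mirror is unavailable.
[[registry.mirror]]
location = "registry-cache.internal.example:5000"
```

Note that containerd uses its own `hosts.toml` format for the same purpose.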

### Secrets Management

In more complex environments with multiple clusters and applications, managing secrets manually via YAML or Helm is no longer practical.
Instead, a centralized secrets management system should be established — such as Vault, AWS Secrets Manager, Azure Key Vault, or the CNCF project [OpenBao](https://openbao.org/) (a community fork of Vault).

To integrate these external secret stores with Kubernetes, the [External Secrets Operator (ESO)](https://external-secrets.io/latest/) is a recommended solution. It automatically syncs defined secrets from external sources as Kubernetes secrets, and supports dynamic rotation, access control, and auditing.
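
As an illustration, a sketch of an `ExternalSecret` syncing a database password from a pre-existing `SecretStore` named `vault-backend` (all names and paths here are hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: my-tenant-ns
spec:
  refreshInterval: 1h            # re-sync (and pick up rotations) hourly
  secretStoreRef:
    name: vault-backend          # assumed, pre-existing SecretStore
    kind: SecretStore
  target:
    name: database-credentials   # the Kubernetes Secret that gets created
  data:
  - secretKey: password          # key inside the resulting Secret
    remoteRef:
      key: tenants/my-tenant/db  # path in the external store
      property: password
```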

If no external secret store is available, there should at least be a secure way to store sensitive data in Git.
In our ecosystem, we provide a solution based on SOPS (Secrets OPerationS) for this use case.

[👉 Demonstration](https://killercoda.com/peakscale/course/playgrounds/sops-secrets)
9 changes: 9 additions & 0 deletions content/en/docs/operating/best-practices/images.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,9 @@
---
title: Container Images
weight: 5
description: Multi-Tenant Container Images considerations
---

> [Until this issue is resolved (might be in Kubernetes 1.34)](https://github.com/kubernetes/enhancements/issues/2535), it's recommended to use the [ImagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy) `Always` for private registries on shared nodes. This ensures that images already pulled to the node cannot be reused by workloads that lack pull credentials for them.
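
For example (the image name and registry are placeholders), the policy is set per container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    # Placeholder image on a private registry
    image: registry.internal.example/team-a/app:1.2.3
    # Force the kubelet to contact the registry (and re-check credentials)
    # on every pod start, instead of trusting the node-local image cache.
    imagePullPolicy: Always
```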