10 changes: 5 additions & 5 deletions content/en/docs/operating/architecture.md
Introducing a new separation of duties can lead to a significant paradigm shift.

The answer to this question may be influenced by the following aspects:

* **Are the Cluster Administrators willing to grant permissions to Tenant Owners**?
* _You might have a problem with know-how and probably your organisation is not yet pushing Kubernetes itself enough as a key strategic platform. The key here is enabling Platform Users through good UX and know-how transfers_

* **Who is responsible for the deployed workloads within the Tenants?**
* _If Platform Administrators are still handling this, a true “shift left” has not yet been achieved._
Any entity which needs to interact with tenants and their namespaces must be def…
**Every Tenant Owner must be a [Capsule User](#capsule-users)**


They manage the namespaces within their tenants and perform administrative tasks confined to their tenant boundaries. This delegation allows teams to operate more autonomously while still adhering to organizational policies. Tenant Owners can be used to shift responsibility of one tenant towards this user group, promoting them to the SPOC of all namespaces within the tenant.

Tenant Owners can:

Capsule provides robust tools to strictly enforce tenant boundaries, ensuring th…

## Layouts

Let's discuss different Tenant Layouts which could be used. These are just approaches we have seen, however you might also find a combination of these which fits your use-case.

### Tenant As A Service

With this approach you essentially just provide your customers with a Tenant on your cluster; the rest is their responsibility. This amounts to a shared responsibility model, where the Tenant Owners are responsible for everything they provision within their Tenant's namespaces.

![Resourcepool Dashboard](/images/content/architecture/layout-taas.drawio.png)

2 changes: 1 addition & 1 deletion content/en/docs/operating/best-practices/networking.md

### Deny Namespace Metadata

In the above example we allow traffic from namespaces with the label `company.com/system: "true"`. This is meant for Kubernetes Operators to e.g. scrape the workloads within a tenant. However, without further enforcement any namespace can set this label and thereby gain access to any tenant namespace. To prevent this, we must restrict who can declare this label on namespaces.

We can deny such labels on a per-tenant basis. In this scenario every tenant should disallow the use of these labels on namespaces:
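As a sketch, this could be expressed via the Tenant's `namespaceOptions.forbiddenLabels` field — verify the exact schema with `kubectl explain tenant.spec.namespaceOptions` on your Capsule version:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar
spec:
  owners:
    - kind: User
      name: alice
  namespaceOptions:
    forbiddenLabels:
      # Tenant Owners cannot set this label on their namespaces
      denied:
        - company.com/system
```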

8 changes: 4 additions & 4 deletions content/en/docs/operating/setup/configuration.md
description: >
Understand the Capsule configuration options and how to use them.
---

The configuration of the Capsule controller is done via its dedicated configuration Custom Resource. You can explore the configuration options and how to use them:

## CapsuleConfiguration

The configuration for Capsule is done via its dedicated configuration Custom Resource. You can explore the configuration options with `kubectl explain`:

```shell
kubectl explain capsuleConfiguration.spec
```

### `nodeMetadata`
Allows setting forbidden metadata for the worker nodes that could be patched by a Tenant. This applies only if the Tenant has an active NodeSelector and the Owners have the right to patch their nodes.

```yaml
manager:
```
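A hedged sketch of what such values could look like — the nesting below `manager` and the exact field names are assumptions; verify with `kubectl explain capsuleConfiguration.spec.nodeMetadata`:

```yaml
manager:
  options:
    nodeMetadata:
      # Tenants may not add or change these label/annotation keys on their nodes
      forbiddenLabels:
        denied:
          - company.com/system
        deniedRegex: company.com/.*
      forbiddenAnnotations:
        denied:
          - company.com/defaults
```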

### `allowServiceAccountPromotion`

ServiceAccounts within tenant namespaces can be promoted to owners of the given tenant by labeling the ServiceAccount; once labeled, they are considered owners. This can only be done by other owners of the tenant. However, ServiceAccounts which have been promoted to owner cannot promote further ServiceAccounts.

[Read More](/docs/tenants/permissions/#serviceaccount-promotion)
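As a sketch, promotion would look roughly like the following — the label key here is purely illustrative, not the actual Capsule label; the real key is documented on the linked permissions page:

```shell
# Run as an existing tenant owner (not a promoted ServiceAccount).
# NOTE: the label key below is a placeholder for illustration only.
kubectl label serviceaccount pipeline-bot -n solar-prod \
  example.com/tenant-owner=solar
```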

14 changes: 7 additions & 7 deletions content/en/docs/resourcepools/_index.md
This allows users to essentially schedule anything in the namespace:

```shell
NAME AGE REQUEST LIMIT
capsule-pool-example 2m47s
```

To prevent this, you might consider using the [DefaultsZero option](#defaultszero). This option can also be combined with setting other defaults, not part of the `.spec.quota.hard`. Here we are additionally restricting the creation of `persistentvolumeclaims`:
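A sketch of how this could be declared — the `config`/`defaults` field paths below are assumptions drawn from the option names in this document; verify against `kubectl explain resourcepool.spec`:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: ResourcePool
metadata:
  name: capsule-pool-example
spec:
  config:
    defaultsZero: true          # assumed path for the DefaultsZero option
  defaults:
    persistentvolumeclaims: "0" # no PVCs unless explicitly claimed
  quota:
    hard:
      requests.cpu: "4"
      limits.cpu: "8"
```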
Options that can be defined on a per-`ResourcePool` basis and influence the gene…

#### OrderedQueue

When `ResourcePoolClaims` are allocated to a pool, they are placed in a queue. The pool attempts to allocate claims in the order of their [creation timestamps](#priority). However, even if a claim was created earlier, if it requests more resources than are currently available, it will remain in the queue. Meanwhile, a lower-priority claim that fits within the available resources may still be allocated—despite its lower priority.

Enabling this option enforces strict ordering: claims cannot be skipped, even if they block other claims from being fulfilled due to resource exhaustion. The `CreationTimestamp` is strictly respected, meaning that once a claim is queued, no subsequent claim can bypass it—even if it requires fewer resources.
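The two behaviors can be illustrated with a toy allocator — hypothetical pseudologic, not Capsule's actual implementation:

```python
def allocate(claims, available, ordered_queue=False):
    """Allocate claims, FIFO by creation order, against a shared pool.

    With ordered_queue=True a claim that does not fit blocks everything
    behind it; otherwise later, smaller claims may still be served.
    """
    granted = []
    for name, request in claims:  # claims are sorted by CreationTimestamp
        if request <= available:
            available -= request
            granted.append(name)
        elif ordered_queue:
            break  # strict ordering: no claim may bypass this one
        # else: leave this claim queued and keep trying the rest
    return granted


claims = [("big", 8), ("small", 2)]  # "big" has the older timestamp
print(allocate(claims, available=4))                      # ['small']
print(allocate(claims, available=4, ordered_queue=True))  # []
```

With the default behavior the younger but smaller claim is served; with strict ordering the pool stays blocked until the older claim fits or is removed.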


## ResourcePoolClaims

`ResourcePoolClaims` declare claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, its requested resources are added on top of the `ResourceQuota` from the `ResourcePool` in the corresponding namespace where the `ResourcePoolClaim` was declared. The declaration of a `ResourcePoolClaim` is very simple:

```yaml
apiVersion: capsule.clastix.io/v1beta2
```

`kubectl annotate resourcepoolclaim skip-the-line -n solar-prod projectcapsule.d…`

#### Immutable

Once a `ResourcePoolClaim` has successfully claimed resources from a `ResourcePool`, the claim is immutable. This means that the claim cannot be modified or deleted until the resources have been released back to the `ResourcePool`. This means a `ResourcePoolClaim` cannot be expanded or shrunk without [releasing](#release).

### Queue

Success 🍀

This part should provide you with a bit of back story as to why this implementation was done the way it currently is. Let's start.

Since the beginning of Capsule we have been struggling with a concurrency problem regarding `ResourceQuotas`; this was detected early on in [Issue 49](https://github.com/projectcapsule/capsule/issues/49). Let's quickly recap what the problem really is with the current `ResourceQuota`-centric approach.

With the current `ResourceQuota` with `Scope: Tenant` we encounter the problem that resource quotas spread across multiple namespaces referring to one tenant quota can be overprovisioned if an operation is executed in parallel (e.g. the total is `services/count: 3`; in each namespace you could then create 3 services, leading to a possible overprovisioning of `hard * amount-namespaces`). The problem in this approach is that we are not doing anything with webhooks; therefore we rely on the speed of the controller, where this entire construct becomes a matter of luck and race conditions.
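The race can be modeled in a few lines — a toy check-then-act model, not Capsule code:

```python
# Toy model: two namespaces in the same tenant create a Service "at once".
# Each admission decision is made against a stale snapshot of tenant usage.
hard_limit = 3   # tenant-wide services/count
usage = 2        # services that already exist across the tenant

snapshot_a = usage  # namespace A's quota is evaluated ...
snapshot_b = usage  # ... and namespace B's, before A's usage is written back

admit_a = snapshot_a < hard_limit  # True
admit_b = snapshot_b < hard_limit  # True, but should have been rejected

usage += int(admit_a) + int(admit_b)
print(usage)  # 4 -> the tenant-wide limit of 3 is overprovisioned
```

Both creations pass the stale check, so the tenant ends up above its hard limit; only an admission webhook evaluating the live total could close this window.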

So, there needs to be change. But times have also changed and we have listened to our users, so the new approach to `ResourceQuotas` should:

2 changes: 1 addition & 1 deletion content/en/docs/tenants/permissions.md
To explain these entries, let's inspect one of them:

* `kind`: It can be [User](#users), [Group](#groups) or [ServiceAccount](#serviceaccounts)
* `name`: The reference name of the user, group or ServiceAccount we want to bind
* `clusterRoles`: ClusterRoles which are bound to the owner in each namespace of the tenant. By default, Capsule assigns the `admin` and `capsule-namespace-deleter` roles to each owner, but you can customize them as explained in the [Owner Roles](#owner-roles) section.
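Putting these fields together, a minimal owners entry could look like this (the names are illustrative):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar
spec:
  owners:
    - kind: User
      name: alice
      clusterRoles:
        - admin
        - capsule-namespace-deleter
```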

With this information available you
