content/en/docs/operating/architecture.md (+5 −5)
@@ -12,8 +12,8 @@ Introducing a new separation of duties can lead to a significant paradigm shift.
The answer to this question may be influenced by the following aspects:
- ***Are the Cluster Adminsitrators willing to grant permissions to Tenant Owners**?
- *_You might have a problem with know-how and probably your organisation is not yet pushing Kubernetes itself enough as a key strategic plattform. The key here is enabling Plattform Users through good UX and know-how transfers_
+ ***Are the Cluster Administrators willing to grant permissions to Tenant Owners**?
+ *_You might have a problem with know-how and probably your organisation is not yet pushing Kubernetes itself enough as a key strategic platform. The key here is enabling Platform Users through good UX and know-how transfers_
***Who is responsible for the deployed workloads within the Tenants?**?
*_If Platform Administrators are still handling this, a true “shift left” has not yet been achieved._
@@ -58,7 +58,7 @@ Any entity which needs to interact with tenants and their namespaces must be def
**Every Tenant Owner must be a [Capsule User](#capsule-users)**
- They manage the namespaces within their tenants and perform administrative tasks confined to their tenant boundaries. This delegation allows teams to operate more autonomously while still adhering to organizational policies. Tenant Owners can be used to shift reposnsability of one tenant towards this user group. promoting them to the SPOC of all namespaces within the tenant.
+ They manage the namespaces within their tenants and perform administrative tasks confined to their tenant boundaries. This delegation allows teams to operate more autonomously while still adhering to organizational policies. Tenant Owners can be used to shift responsibility of one tenant towards this user group, promoting them to the SPOC of all namespaces within the tenant.
- Let's dicuss different Tenant Layouts which could be used. These are just approaches we have seen, however you might also find a combination of these which fits your use-case.
+ Let's discuss different Tenant Layouts which could be used. These are just approaches we have seen, however you might also find a combination of these which fits your use-case.
### Tenant As A Service
- With this approach you essentially just provide your Customers with the Tenant on your cluster. The rest is their responsability. This concludes to a shared responsibility model. This can be achieved when also the Tenant Owners are responsible for everything they are provisiong within their Tenant's namespaces.
+ With this approach you essentially just provide your Customers with the Tenant on your cluster. The rest is their responsibility. This concludes to a shared responsibility model. This can be achieved when also the Tenant Owners are responsible for everything they are provisioning within their Tenant's namespaces.

content/en/docs/operating/best-practices/networking.md (+1 −1)
@@ -68,7 +68,7 @@ spec:
### Deny Namespace Metadata
- In the above example we allow traffic from namespaces with the label `company.com/system: "true"`. This is meant for Kubernetes Operators to eg. scrape the workloads within a tenant. However without further enforcement any namespace can set this label and therefor gain access to any tenant namespace. To prevent this, we must restrict, who can declare this label on namespaces.
+ In the above example we allow traffic from namespaces with the label `company.com/system: "true"`. This is meant for Kubernetes Operators to eg. scrape the workloads within a tenant. However without further enforcement any namespace can set this label and thereby gain access to any tenant namespace. To prevent this, we must restrict who can declare this label on namespaces.
We can deny such labels on tenant basis. So in this scenario every tenant should disallow the use of these labels on namespaces:
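
For orientation, a sketch of what that per-tenant restriction might look like, assuming the `namespaceOptions.forbiddenLabels` field of the Tenant resource; the page's own example sits just below this hunk, and the tenant name here is a placeholder:

```yaml
# Sketch only: assumes spec.namespaceOptions.forbiddenLabels exists on the Tenant CRD;
# verify field names against the Capsule Tenant reference before relying on it.
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar                     # placeholder tenant name
spec:
  namespaceOptions:
    forbiddenLabels:
      denied:
        - company.com/system      # the label used in the NetworkPolicy example above
```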

content/en/docs/operating/setup/configuration.md (+4 −4)
@@ -5,11 +5,11 @@ description: >
Understand the Capsule configuration options and how to use them.
---
- The configuration for the capsule controller is done via it's dedicated configration Custom Resource. You can explain the configuration options and how to use them:
+ The configuration for the capsule controller is done via its dedicated configuration Custom Resource. You can explain the configuration options and how to use them:
## CapsuleConfiguration
- The configuration for Capsule is done via it's dedicated configration Custom Resource. You can explain the configuration options and how to use them:
+ The configuration for Capsule is done via its dedicated configuration Custom Resource. You can explain the configuration options and how to use them:
```shell
kubectl explain capsuleConfiguration.spec
@@ -74,7 +74,7 @@ manager:
```
### `nodeMetadata`
- Allows to set the forbidden metadata for the worker nodes that could be patched by a Tenant. This applies only if the Tenant has an active NodeSelector, and the Owner have right to patch their nodes.
+ Allows to set the forbidden metadata for the worker nodes that could be patched by a Tenant. This applies only if the Tenant has an active NodeSelector, and the Owners have the right to patch their nodes.
```yaml
manager:
@@ -108,7 +108,7 @@ manager:
### `allowServiceAccountPromotion`
- ServiceAccounts within tenant namespaces can be promoted to owners of the given tenant this can be achieved by labeling the serviceaccount and then they are considered owners. This can only be done by other owners of the tenant. However ServiceAccounts which have been promoted to owner can not promote further serviceAccounts.
+ ServiceAccounts within tenant namespaces can be promoted to owners of the given tenant; this can be achieved by labeling the ServiceAccount and then they are considered owners. This can only be done by other owners of the tenant. However ServiceAccounts which have been promoted to owner cannot promote further ServiceAccounts.
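
As a rough sketch of how this could be switched on through the same `manager` values block used in the examples above; the exact key nesting is an assumption, so check the chart's values reference before using it:

```yaml
# Sketch only: assumed nesting under manager.options; verify against the
# Capsule Helm chart / CapsuleConfiguration reference.
manager:
  options:
    allowServiceAccountPromotion: true
```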

content/en/docs/resourcepools/_index.md (+7 −7)
@@ -196,8 +196,8 @@ spec:
This allows users to essentially schedule anything in the namespace:
```shell
- NAME AGE REQUEST LIMIT
- capsule-pool-exmaple 2m47s
+ NAME AGE REQUEST LIMIT
+ capsule-pool-example 2m47s
```
To prevent this, you might consider using the [DefaultsZero option](#defaultszero). This option can also be combined with setting other defaults, not part of the `.spec.quota.hard`. Here we are additionally restricting the creation of `persistentvolumeclaims`:
@@ -238,7 +238,7 @@ Options that can be defined on a per-`ResourcePool` basis and influence the gene
#### OrderedQueue
- When `ResourecePoolClaims` are allocated to a pool, they are placed in a queue. The pool attempts to allocate claims in the order of their [creation timestamps](#priority). However, even if a claim was created earlier, if it requests more resources than are currently available, it will remain in the queue. Meanwhile, a lower-priority claim that fits within the available resources may still be allocated—despite its lower priority.
+ When `ResourcePoolClaims` are allocated to a pool, they are placed in a queue. The pool attempts to allocate claims in the order of their [creation timestamps](#priority). However, even if a claim was created earlier, if it requests more resources than are currently available, it will remain in the queue. Meanwhile, a lower-priority claim that fits within the available resources may still be allocated—despite its lower priority.
Enabling this option enforces strict ordering: claims cannot be skipped, even if they block other claims from being fulfilled due to resource exhaustion. The `CreationTimestamp` is strictly respected, meaning that once a claim is queued, no subsequent claim can bypass it—even if it requires fewer resources.
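
As a small illustration of the difference, assume a pool with 2 CPU left, claim A (created first) requesting 4 CPU and claim B (created later) requesting 1 CPU: without `OrderedQueue`, B can be bound while A keeps waiting; with `OrderedQueue`, B stays queued behind A until enough capacity is released for A to bind first.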
@@ -292,7 +292,7 @@ spec:
## ResourcePoolClaims
- `ResourcePoolClaims` declared claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, it's requested resources are stacked to the `ResourceQuota` from the `ResourcePool` in the corresponding namespaces, where the `ResourcePoolClaim` was declared. So the declaration of a `ResourcePoolClaim` is very simple:
+ `ResourcePoolClaims` declared claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, its requested resources are stacked to the `ResourceQuota` from the `ResourcePool` in the corresponding namespaces, where the `ResourcePoolClaim` was declared. So the declaration of a `ResourcePoolClaim` is very simple:
- Once a `ResourcePoolClaim` has successfully claimed resources from a `ResourcePool`, the claim is immutable. This means that the claim cannot be modified or deleted until the resources have been released back to the `ResourcePool`. This means `ResourcePoolClaim` can not be expanded or shrunk, without [releasing](#release).
+ Once a `ResourcePoolClaim` has successfully claimed resources from a `ResourcePool`, the claim is immutable. This means that the claim cannot be modified or deleted until the resources have been released back to the `ResourcePool`. This means a `ResourcePoolClaim` cannot be expanded or shrunk without [releasing](#release).
### Queue
@@ -648,9 +648,9 @@ Success 🍀
This part should provide you with a little bit of back story, as to why this implementation was done the way it currently is. Let's start.
- Since the begining of capsule we are struggeling with a concurrency probelm regarding `ResourcesQuotas`, this was already early detected in [Issue 49](https://github.com/projectcapsule/capsule/issues/49). Let's quickly recap what really the problem is with the current `ResourceQuota` centric approach.
+ Since the beginning of Capsule we are struggling with a concurrency problem regarding `ResourceQuotas`; this was already early detected in [Issue 49](https://github.com/projectcapsule/capsule/issues/49). Let's quickly recap what really the problem is with the current `ResourceQuota` centric approach.
- With the current `ResourceQuota` with `Scope: Tenant` we encounter the problem, that resourcequotas spread across multiple namespaces refering to one tenant quota can be overprovisioned, if an operation is executed in parallel (eg. total is `services/count: 3`, in each namespace you could then create 3 services, leading to a possible overprovision of hard `* amount-namespaces`). The Problem in this approach is, that we are not doing anything with Webhooks, therefor we rely on the speed of the controller, where this entire construct becomes a matter of luck and racing conditions.
+ With the current `ResourceQuota` with `Scope: Tenant` we encounter the problem that resource quotas spread across multiple namespaces referring to one tenant quota can be overprovisioned if an operation is executed in parallel (eg. total is `services/count: 3`, in each namespace you could then create 3 services, leading to a possible overprovision of hard `* amount-namespaces`). The problem in this approach is that we are not doing anything with Webhooks; therefore we rely on the speed of the controller, where this entire construct becomes a matter of luck and racing conditions.
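
To put numbers on that race: with a tenant-wide hard limit of `services/count: 3` spread over, say, 4 namespaces, each namespace-scoped `ResourceQuota` can still admit 3 services before the controller has reconciled the tenant total, so concurrent requests can slip through up to 3 × 4 = 12 services against an intended cap of 3.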
So, there needs to be change. But times have also changed and we have listened to our users, so the new approach to `ResourceQuotas` should:

content/en/docs/tenants/permissions.md (+1 −1)
@@ -42,7 +42,7 @@ To explain these entries, let's inspect one of them:
* `kind`: It can be [User](#users), [Group](#groups) or [ServiceAccount](#serviceaccounts)
* `name`: Is the reference name of the user, group or serviceaccount we want to bind
- * `clusterRoles`: ClusterRoles which are bound for each namespace of teh tenant to the owner. By default, Capsule assigns `admin` and `capsule-namespace-deleter` roles to each owner, but you can customize them as explained in [Owner Roles](#owner-roles) section.
+ * `clusterRoles`: ClusterRoles which are bound for each namespace of the tenant to the owner. By default, Capsule assigns `admin` and `capsule-namespace-deleter` roles to each owner, but you can customize them as explained in [Owner Roles](#owner-roles) section.
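
To see `kind`, `name` and `clusterRoles` together, a minimal owners entry might look like the following sketch; tenant and user names are placeholders:

```yaml
# Minimal sketch of a Tenant owners entry; names are placeholders.
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar
spec:
  owners:
    - kind: User
      name: alice
      clusterRoles:               # the defaults named above; customizable via Owner Roles
        - admin
        - capsule-namespace-deleter
```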