content/en/docs/resourcepools/_index.md (5 additions, 5 deletions)
@@ -292,7 +292,7 @@ spec:

## ResourcePoolClaims

-`ResourcePoolClaims` declared claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, it's requested resources are stacked to the `ResourceQuota` from the `ResourcePool` in the correspinding namespaces, where the `ResourcePoolClaim` was declared. So the declaration of a `ResourcePoolClaim` is very simple:
+`ResourcePoolClaims` declare claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, its requested resources are stacked onto the `ResourceQuota` of the `ResourcePool` in the corresponding namespace where the `ResourcePoolClaim` was declared. Declaring a `ResourcePoolClaim` is very simple:

```yaml
apiVersion: capsule.clastix.io/v1beta2
@@ -308,7 +308,7 @@ spec:

```

-`ResourcePoolClaims`are decoupled from the lifecycle of `ResourcePools`. If a `ResourcePool` is deleted where a `ResourcePoolClaim` was bound to, the `ResourcePoolClaim` becomes unassigned, but is not deleted.
+`ResourcePoolClaims` are decoupled from the lifecycle of `ResourcePools`. If the `ResourcePool` that a `ResourcePoolClaim` was bound to is deleted, the `ResourcePoolClaim` becomes unassigned, but is not deleted.

### Allocation

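For reference while reading these hunks, a minimal sketch of a complete `ResourcePoolClaim` manifest follows; the `spec.pool` and `spec.claim` field names and all values are assumptions for illustration, not taken from this diff.

```yaml
# Hypothetical ResourcePoolClaim: once bound, its requested resources are
# stacked onto the pool's ResourceQuota in this namespace.
# Field names under spec and all quantities are assumed, not from the diff.
apiVersion: capsule.clastix.io/v1beta2
kind: ResourcePoolClaim
metadata:
  name: scaling
  namespace: solar-prod
spec:
  pool: shared
  claim:
    limits.cpu: "1"
    limits.memory: 1Gi
```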
@@ -554,7 +554,7 @@ spec:
    limits.memory: 384Mi
```

-The same can be done for the `capsule-migration-1` `ResourceQuota`.
+The same can be done for the `capsule-migration-1` `ResourceQuota`.

```yaml
---
@@ -666,7 +666,7 @@ So, there needs to be change. But times have also changed and we have listened t

**Our initial idea for a redesign was simple**: What if we just intercepted operations on the `resourcequota/status` subresource and calculated the offsets (essentially what can still fit) in an admission webhook? If another operation had taken place in the meantime, the client operation would hit a conflict and be rejected at admission until it retried. Makes sense, right?

-Here we have the problem, that even if we would block resourcequota status updates and wait until the actual quantity was added to the total, the resources have already been scheduled. The reason for that, is that the status for resourcequotas is **eventually** consistent, but what really matters at that moment is the hard spec (see this response from a maintainer [kubernetes/kubernetes#123434 (comment)](https://github.com/kubernetes/kubernetes/issues/123434#issuecomment-1964920277)). So essentially no matter the status, you can always provision as much resources, as the `.spec.hard` of a `ResourceQuota` indicates. This makes perfect sense, if your `ResourceQuota` is acting in a single namespace. However in our scenario, we have the same `ResourceQuota` in n-namespaces. So the overprovisioning problem still persists.
+The problem here is that even if we blocked ResourceQuota status updates and waited until the actual quantity was added to the total, the resources would already have been scheduled. The reason is that the status of ResourceQuotas is **eventually** consistent, while what really matters at that moment is the hard spec (see this response from a maintainer: [kubernetes/kubernetes#123434 (comment)](https://github.com/kubernetes/kubernetes/issues/123434#issuecomment-1964920277)). So essentially, no matter the status, you can always provision as many resources as the `.spec.hard` of a `ResourceQuota` allows. This makes perfect sense if your `ResourceQuota` acts in a single namespace. However, in our scenario we have the same `ResourceQuota` in n namespaces, so the overprovisioning problem still persists.

**Thinking of other ways**: So the next idea was essentially to increase `ResourceQuota.spec.hard` based on the workloads that are added to a namespace (essentially a reversed approach). The workflow would look something like this:
@@ -684,7 +684,7 @@ But there's some problems with this approach as well:
* If you e.g. schedule a pod and the quota is `count/0`, there is no admission call on the ResourceQuota, which would be the easiest hook. So we would need another way to know that something new is requesting resources. For example, [Rancher](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces#4-optional-add-resource-quotas) works around this problem with namespaced `DefaultLimits`. But this is not the agile approach we would like to offer.
* The only indication that I know of is an Event (`ResourceQuota Denied`) regarding quota overprovisioning, which we can intercept at admission.

-If you eg update the resource quota that a pod now has space, it takes some time until that's registered and actually scheduled (just tested it for pods). I guess the timing depends on the kube-controller-manager flag `--concurrent-resource-quota-syncs` and/or `--resource-quota-sync-period
+If you e.g. update the ResourceQuota so that a pod now has space, it takes some time until that is registered and the pod is actually scheduled (just tested it for pods). The timing presumably depends on the kube-controller-manager flags `--concurrent-resource-quota-syncs` and/or `--resource-quota-sync-period`.

So it's really difficult to increase quotas by the resources that are actually requested; especially the process of admitting new resources is where performance would take a heavy hit.
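To make the spec-versus-status distinction concrete, the sketch below shows a plain Kubernetes `ResourceQuota` (names and quantities are hypothetical): admission is enforced against `.spec.hard`, while `.status.used` only catches up eventually, so replicating the same hard spec into n namespaces lets each namespace consume the full amount independently.

```yaml
# Illustrative only: admission checks new pods against .spec.hard of this
# object; .status.used is reconciled asynchronously by the quota controller.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: capsule-example
  namespace: solar-production
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "10"
```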
content/en/docs/tenants/enforcement.md (17 additions, 17 deletions)
@@ -18,7 +18,7 @@ The cluster admin can "taint" the namespaces created by tenant owners with addit
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
-  name: oil
+  name: solar
spec:
  owners:
  - name: alice
@@ -64,7 +64,7 @@ Assigns additional labels and annotations to all namespaces created in the `sola
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
-  name: oil
+  name: solar
spec:
  owners:
  - name: alice
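The hunk above shows only the owner stanza; a hedged sketch of how additional labels and annotations could be attached to the tenant's namespaces follows, assuming a `namespaceOptions.additionalMetadata` layout and illustrative values not confirmed by this diff.

```yaml
# Hypothetical sketch: extra metadata stamped onto every namespace created
# in the solar tenant (field path and values assumed for illustration).
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar
spec:
  owners:
  - name: alice
    kind: User
  namespaceOptions:
    additionalMetadata:
      labels:
        customer: solar
      annotations:
        storagelocationtype: s3
```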
@@ -123,7 +123,7 @@ spec:
    denied:
    - foo.acme.net
    - bar.acme.net
-    deniedRegex: .*.acme.net
+    deniedRegex: .*.acme.net
  forbiddenLabels:
    denied:
    - foo.acme.net
@@ -152,7 +152,7 @@ Bill, the cluster admin, can deny Tenant Owners to add or modify specific labels
apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
-  name: default
+  name: default
spec:
  nodeMetadata:
    forbiddenAnnotations:
@@ -208,8 +208,8 @@ metadata:
spec:
  ports:
  - protocol: TCP
-    port: 80
-    targetPort: 8080
+    port: 80
+    targetPort: 8080
  selector:
    run: nginx
  type: ClusterIP
@@ -416,15 +416,15 @@ To prevent misuses of Pod Priority Class, Bill, the cluster admin, can enforce t
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
-  name: oil
+  name: solar
spec:
  owners:
  - name: alice
    kind: User
  priorityClasses:
    matchLabels:
      env: "production"
-```
+```

With the said Tenant specification, Alice can create a Pod resource if `spec.priorityClassName` is equal to:

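For context, a plain Kubernetes `PriorityClass` that the `matchLabels` selector above would match might look like the following sketch (class name, value, and description are hypothetical).

```yaml
# Illustrative PriorityClass carrying the env: production label that the
# tenant's priorityClasses.matchLabels selector matches (name/value assumed).
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tenant-high-priority
  labels:
    env: "production"
value: 1000
globalDefault: false
description: "Priority class selectable by the solar tenant"
```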
@@ -502,7 +502,7 @@ If a Pod is going to use a non-allowed Runtime Class, it will be rejected by the

### NodeSelector

-Bill, the cluster admin, can dedicate a pool of worker nodes to the oil tenant, to isolate the tenant applications from other noisy neighbors.
+Bill, the cluster admin, can dedicate a pool of worker nodes to the solar tenant, to isolate the tenant applications from other noisy neighbors.

These nodes are labeled by Bill as `pool=renewable`

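As background for this and the following hunk, the enforcement builds on the upstream PodNodeSelector admission plugin, which reads a namespace annotation such as the one sketched below (namespace name assumed) and forces that selector onto every Pod created in the namespace.

```yaml
# Illustrative only: the scheduler.alpha.kubernetes.io/node-selector annotation
# is consumed by the PodNodeSelector admission plugin; namespace name assumed.
apiVersion: v1
kind: Namespace
metadata:
  name: solar-production
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: pool=renewable
```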
@@ -557,7 +557,7 @@ spec:
```

Any attempt of Alice to change the selector on the pods will result in an error from the PodNodeSelector Admission Controller plugin.
-
+
```bash
kubectl auth can-i edit ns -n solar-production
no
@@ -911,7 +911,7 @@ metadata:
spec:
  ingressClassName: legacy
  rules:
-  - host: oil.acmecorp.com
+  - host: solar.acmecorp.com
    http:
      paths:
      - backend:
@@ -980,7 +980,7 @@ Kubernetes network policies control network traffic between namespaces and betwe
To meet this requirement, Bill needs to define network policies that deny pods belonging to Alice's namespaces access to pods in namespaces belonging to other tenants, e.g. Bob's tenant `water`, or in system namespaces, e.g. `kube-system`.

> Keep in mind that, because of how the NetworkPolicy API works, users can still add a policy which contradicts what the Tenant has set, allowing them to circumvent the initial limitation set by the tenant admin. Two options can be put in place to mitigate this potential privilege escalation: 1. providing a restricted role rather than the default admin one; 2. using Calico's GlobalNetworkPolicy or Cilium's CiliumClusterwideNetworkPolicy, which are defined at the cluster level, thus creating an order of packet filtering.
-
+
Also, Bill can make sure pods belonging to a tenant namespace cannot access other network infrastructure like cluster nodes, load balancers, and virtual machines running other services.

Bill can set network policies in the tenant manifest, according to the requirements:
@@ -1004,12 +1004,12 @@ spec:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
-            - 192.168.0.0/16
+            - 192.168.0.0/16
      ingress:
        - from:
          - namespaceSelector:
              matchLabels:
-                capsule.clastix.io/tenant: oil
+                capsule.clastix.io/tenant: water
          - podSelector: {}
          - ipBlock:
              cidr: 192.168.0.0/16
@@ -1202,7 +1202,7 @@ With the said Tenant specification, Alice can create a Persistent Volume Claims

Capsule assures that all Persistent Volume Claims created by Alice will use only one of the valid storage classes. Assume the StorageClass `ceph-rbd` has the label `env: production`:

-```bash
+```bash
kubectl apply -f - << EOF
kind: PersistentVolumeClaim
apiVersion: v1
@@ -1233,7 +1233,7 @@ It's possible to assign each tenant a StorageClass which will be used, if no val
content/en/docs/tenants/permissions.md (8 additions, 8 deletions)
@@ -11,7 +11,7 @@ Capsule introduces the principal, that tenants must have owners. The owner of a

### Group Scope

-Capsule selects users, which are eligable to be considered for tenancy by their group. The define the group of users that can be considered for tenancy, you can use the `userGroups` option in the CapsuleConfiguration.
+Capsule selects the users that are eligible to be considered for tenancy by their group. To define the group of users that can be considered for tenancy, you can use the `userGroups` option in the CapsuleConfiguration.

Another commonly used example: if you want to promote ServiceAccounts to tenant owners, their group must be present:

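A minimal sketch of such a configuration follows; the group names are assumptions borrowed from elsewhere in this diff (`projectcapsule.dev` from Alice's certificate, `system:serviceaccounts:tenant-system` for ServiceAccounts in the `tenant-system` namespace) rather than part of this hunk.

```yaml
# Hypothetical CapsuleConfiguration: only users in these groups are
# considered for tenant ownership (group names assumed for illustration).
apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
  - projectcapsule.dev
  - system:serviceaccounts:tenant-system
```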
@@ -38,7 +38,7 @@ Learn how to assign ownership to users, groups and serviceaccounts.

To keep things simple, we assume that Bill just creates a client certificate for authentication using an X.509 Certificate Signing Request, so Alice's certificate has `"/CN=alice/O=projectcapsule.dev"`.

-**Bill** creates a new tenant oil in the CaaS management portal according to the tenant's profile:
+**Bill** creates a new tenant solar in the CaaS management portal according to the tenant's profile:

```yaml
apiVersion: capsule.clastix.io/v1beta2
@@ -113,7 +113,7 @@ spec:
    kind: User
```

-However, it's more likely that Bill assigns the ownership of the solar tenant to a group of users instead of a single one, especially if you use [OIDC AUthentication](/docs/guides/authentication#oidc). Bill creates a new group account solar-users in the Acme Corp. identity management system and then he assigns Alice and Bob identities to the solar-users group.
+However, it's more likely that Bill assigns the ownership of the solar tenant to a group of users instead of a single one, especially if you use [OIDC Authentication](/docs/guides/authentication#oidc). Bill creates a new group account solar-users in the Acme Corp. identity management system and then assigns Alice's and Bob's identities to the solar-users group.

```yaml
apiVersion: capsule.clastix.io/v1beta2
@@ -126,7 +126,7 @@ spec:
    kind: Group
```

-With the configuration above, any user belonging to the `solar-users` group will be the owner of the oil tenant with the same permissions of Alice. For example, Bob can log in with his credentials and issue
+With the configuration above, any user belonging to the `solar-users` group will be the owner of the solar tenant with the same permissions as Alice. For example, Bob can log in with his credentials and issue

```bash
kubectl auth can-i create namespaces
@@ -137,7 +137,7 @@ All the groups you want to promot to Tenant Owners must be part of the Group Sco

#### ServiceAccounts

-You can use the Group subject to grant serviceaccounts the ownership of a tenant. For example, you can create a group of serviceaccounts and assign it to the tenant:
+You can use the Group subject to grant ServiceAccounts the ownership of a tenant. For example, you can create a group of ServiceAccounts and assign it to the tenant:

```yaml
apiVersion: capsule.clastix.io/v1beta2
@@ -150,7 +150,7 @@ spec:
    kind: ServiceAccount
```

-Bill can create a Service Account called robot, for example, in the `tenant-system` namespace and leave it to act as Tenant Owner of the oil tenant
+Bill can create a ServiceAccount called robot, for example, in the `tenant-system` namespace and leave it to act as Tenant Owner of the solar tenant

In some cases, the cluster admin needs to narrow the range of permissions assigned to tenant owners by assigning a Cluster Role with fewer permissions than above. Capsule supports the dynamic assignment of any ClusterRole resource for each Tenant Owner.

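A sketch of such an owner entry follows; the `system:serviceaccount:<namespace>:<name>` owner name is the standard Kubernetes ServiceAccount username format and is assumed here rather than taken from this hunk.

```yaml
# Hypothetical owner declaration for the robot ServiceAccount living in the
# tenant-system namespace (names assumed for illustration).
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar
spec:
  owners:
  - name: system:serviceaccount:tenant-system:robot
    kind: ServiceAccount
```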
@@ -438,7 +438,7 @@ spec:
EOF
```

-As you can see the subjects is a classic [rolebinding subject](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects). This way you grant permissions to the subject user **Joe**, who only can list and watch servicemonitors in the solar tenant namespaces, but has no other permissions.
+As you can see, the subjects are classic [RoleBinding subjects](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects). This way you grant permissions to the subject user **Joe**, who can only list and watch ServiceMonitors in the solar tenant namespaces, but has no other permissions.
content/en/docs/tenants/quotas.md (3 additions, 3 deletions)
@@ -13,7 +13,7 @@ With help of Capsule, Bill, the cluster admin, can set and enforce resources quo
This feature will be deprecated in a future release of Capsule. Instead, use [Resource Pools](/docs/resourcepools/) to handle any cases around distributed ResourceQuotas.
{{% /alert %}}

-With help of Capsule, Bill, the cluster admin, can set and enforce resources quota and limits for Alice's tenant.Set resources quota for each namespace in the Alice's tenant by defining them in the tenant spec:
+With the help of Capsule, Bill, the cluster admin, can set and enforce resource quotas and limits for Alice's tenant. Set resource quotas for each namespace in Alice's tenant by defining them in the tenant spec:

```yaml
apiVersion: capsule.clastix.io/v1beta2
@@ -120,7 +120,7 @@ spec:

### Tenant Scope

-By setting enforcement at tenant level, i.e. `spec.resourceQuotas`.scope=Tenant, Capsule aggregates resources usage for all namespaces in the tenant and adjusts all the `ResourceQuota` usage as aggregate. In such case, Alice can check the used resources at the tenant level by inspecting the annotations in ResourceQuota object of any namespace in the tenant:
+By setting enforcement at the tenant level, i.e. `spec.resourceQuotas.scope=Tenant`, Capsule aggregates resource usage across all namespaces in the tenant and adjusts all the `ResourceQuota` usage as an aggregate. In that case, Alice can check the resources used at the tenant level by inspecting the annotations on the ResourceQuota object of any namespace in the tenant:

```bash
kubectl -n solar-production get resourcequotas capsule-solar-1 -o yaml
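For reviewers unfamiliar with the feature, a minimal sketch of a tenant-scoped quota follows; the `resourceQuotas.items` layout and all quantities are assumptions for illustration, not taken from this diff.

```yaml
# Hypothetical tenant-scoped quota: with scope Tenant, usage is aggregated
# across every namespace of the solar tenant (layout and values assumed).
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: solar
spec:
  owners:
  - name: alice
    kind: User
  resourceQuotas:
    scope: Tenant
    items:
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        pods: "10"
```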
@@ -263,7 +263,7 @@ spec:

The Additional Role Binding referring to the Cluster Role mysql-namespace-admin is required to let Alice [manage their Custom Resource instances](/docs/tenants/permissions/#custom-resources).

-The pattern for the quota.resources.capsule.clastix.io annotation is the following:
+The pattern for the `quota.resources.capsule.clastix.io` annotation is the following: