
Commit 33d8170

Merge pull request #32 from santhakali/fix-tenant-permissions-page-typos
Fix: Typos in the Tenant Permissions page documentation
2 parents 57bcdf8 + 5cee67f commit 33d8170

4 files changed

Lines changed: 33 additions & 33 deletions


content/en/docs/resourcepools/_index.md

Lines changed: 5 additions & 5 deletions
@@ -292,7 +292,7 @@ spec:
 
 ## ResourcePoolClaims
 
-`ResourcePoolClaims` declared claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, it's requested resources are stacked to the `ResourceQuota` from the `ResourcePool` in the correspinding namespaces, where the `ResourcePoolClaim` was declared. So the declaration of a `ResourcePoolClaim` is very simple:
+`ResourcePoolClaims` declared claims of resources from a single `ResourcePool`. When a `ResourcePoolClaim` is successfully bound to a `ResourcePool`, it's requested resources are stacked to the `ResourceQuota` from the `ResourcePool` in the corresponding namespaces, where the `ResourcePoolClaim` was declared. So the declaration of a `ResourcePoolClaim` is very simple:
 
 ```yaml
 apiVersion: capsule.clastix.io/v1beta2
@@ -308,7 +308,7 @@ spec:
 ```
 
-`ResourcePoolClaims` are decoupled from the lifecycle of `ResourcePools`. If a `ResourcePool` is deleted where a `ResourcePoolClaim` was bound to, the `ResourcePoolClaim` becomes unassigned, but is not deleted.
+`ResourcePoolClaims` are decoupled from the lifecycle of `ResourcePools`. If a `ResourcePool` is deleted where a `ResourcePoolClaim` was bound to, the `ResourcePoolClaim` becomes unassigned, but is not deleted.
 
 ### Allocation
 
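
For readers skimming these two hunks: the YAML example that the changed paragraph introduces is only partially visible between them. A minimal sketch of such a claim could look like the following; the `pool` and `claim` field names under `spec`, the object names, and the resource values are assumptions for illustration, so check the ResourcePools reference for the exact schema.

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: ResourcePoolClaim
metadata:
  name: web-claim              # hypothetical claim name
  namespace: solar-production  # namespace in which the claim is declared
spec:
  pool: shared-pool            # assumed field: the ResourcePool to claim from
  claim:                       # assumed field: resources stacked onto the pool's ResourceQuota
    limits.cpu: "1"
    limits.memory: 1Gi
```
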
@@ -554,7 +554,7 @@ spec:
     limits.memory: 384Mi
 ```
 
-The same can be done for the `capsule-migration-1` `ResourceQuota`.
+The same can be done for the `capsule-migration-1` `ResourceQuota`.
 
 ```yaml
 ---
@@ -666,7 +666,7 @@ So, there needs to be change. But times have also changed and we have listened t
 
 **Our initial Idea for a redesign was simple**: What if we just intercepted operations on the `resourcequota/status` subresource and calculate the offsets (or essentially what still can fit) on a Admission-Webhook. If another operation would have taken place the client operation would have thrown a conflict and rejected the admission, until it retries. Makes sense, right?
 
-Here we have the problem, that even if we would block resourcequota status updates and wait until the actual quantity was added to the total, the resources have already been scheduled. The reason for that, is that the status for resourcequotas is **eventually** consistent, but what really matters at that moment is the hard spec (see this response from a maintainer [kubernetes/kubernetes#123434 (comment)](https://github.com/kubernetes/kubernetes/issues/123434#issuecomment-1964920277)). So essentially no matter the status, you can always provision as much resources, as the `.spec.hard` of a `ResourceQuota` indicates. This makes perfect sense, if your `ResourceQuota` is acting in a single namespace. However in our scenario, we have the same `ResourceQuota` in n-namespaces. So the overprovisioning problem still persists.
+Here we have the problem, that even if we would block resourcequota status updates and wait until the actual quantity was added to the total, the resources have already been scheduled. The reason for that, is that the status for resourcequotas is **eventually** consistent, but what really matters at that moment is the hard spec (see this response from a maintainer [kubernetes/kubernetes#123434 (comment)](https://github.com/kubernetes/kubernetes/issues/123434#issuecomment-1964920277)). So essentially no matter the status, you can always provision as much resources, as the `.spec.hard` of a `ResourceQuota` indicates. This makes perfect sense, if your `ResourceQuota` is acting in a single namespace. However in our scenario, we have the same `ResourceQuota` in n-namespaces. So the overprovisioning problem still persists.
 
 
 **Thinking of other ways**: So the next idea was essentially increasing the `ResourceQuota.spec.hard` based on the workloads which are added to a namespaces (essentially a reversed approach). The workflow for this would look like something like this:
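
To make the overprovisioning point in this hunk concrete: a plain `ResourceQuota` only enforces its own `.spec.hard`, so when the same quota is replicated into several tenant namespaces, each copy independently admits up to that hard limit. A minimal sketch with hypothetical namespace names:

```yaml
# The same ResourceQuota replicated into two tenant namespaces.
# Each copy admits up to 4 CPUs on its own, so the tenant can end up with
# 8 CPUs scheduled even though the intended shared budget was 4.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: capsule-solar
  namespace: solar-production
spec:
  hard:
    limits.cpu: "4"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: capsule-solar
  namespace: solar-staging
spec:
  hard:
    limits.cpu: "4"
```
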
@@ -684,7 +684,7 @@ But there's some problems with this approach as well:
 * if you eg. schedule a pod and the quota is `count/0` there's no admission call on the resourcequota, which would be the easiest. So we would need to find a way to know, there's something new requesting resources. For example [Rancher](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces#4-optional-add-resource-quotas) works around this problem with namespaced `DefaultLimits`. But this is not the agile approach we would like to offer.
 * The only indication that I know of is that we get an Event, which we can intercept with admission (`ResourceQuota Denied`), regarding quotaoverprovision.
 
-If you eg update the resource quota that a pod now has space, it takes some time until that's registered and actually scheduled (just tested it for pods). I guess the timing depends on the kube-controller-manager flag `--concurrent-resource-quota-syncs` and/or `--resource-quota-sync-period
+If you eg update the resource quota that a pod now has space, it takes some time until that's registered and actually scheduled (just tested it for pods). I guess the timing depends on the kube-controller-manager flag `--concurrent-resource-quota-syncs` and/or `--resource-quota-sync-period
 
 So it's really really difficult to increase quotas by the resources which are actually requested, especially the adding new resources process is where the performance would take a heavy hit.
 
content/en/docs/tenants/enforcement.md

Lines changed: 17 additions & 17 deletions
@@ -18,7 +18,7 @@ The cluster admin can "taint" the namespaces created by tenant owners with addit
 apiVersion: capsule.clastix.io/v1beta2
 kind: Tenant
 metadata:
-  name: oil
+  name: solar
 spec:
   owners:
   - name: alice
@@ -64,7 +64,7 @@ Assigns additional labels and annotations to all namespaces created in the `sola
 apiVersion: capsule.clastix.io/v1beta2
 kind: Tenant
 metadata:
-  name: oil
+  name: solar
 spec:
   owners:
   - name: alice
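
The two hunks above only show the head of the tenant manifest; the labels and annotations being assigned follow outside the diff context. A sketch of what that block can look like, assuming Capsule's `namespaceOptions.additionalMetadata` field, with placeholder label and annotation values:

```yaml
spec:
  namespaceOptions:
    additionalMetadata:                    # assumed field for the "additional labels and annotations"
      labels:
        capsule.clastix.io/backup: "true"  # placeholder label applied to every tenant namespace
      annotations:
        storagelocationtype: s3            # placeholder annotation
```
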
@@ -123,7 +123,7 @@ spec:
     denied:
     - foo.acme.net
     - bar.acme.net
-    deniedRegex: .*.acme.net
+    deniedRegex: .*.acme.net
   forbiddenLabels:
     denied:
     - foo.acme.net
@@ -152,7 +152,7 @@ Bill, the cluster admin, can deny Tenant Owners to add or modify specific labels
 apiVersion: capsule.clastix.io/v1beta2
 kind: CapsuleConfiguration
 metadata:
-  name: default
+  name: default
 spec:
   nodeMetadata:
     forbiddenAnnotations:
@@ -208,8 +208,8 @@ metadata:
 spec:
   ports:
   - protocol: TCP
-    port: 80
-    targetPort: 8080
+    port: 80
+    targetPort: 8080
   selector:
     run: nginx
   type: ClusterIP
@@ -416,15 +416,15 @@ To prevent misuses of Pod Priority Class, Bill, the cluster admin, can enforce t
 apiVersion: capsule.clastix.io/v1beta2
 kind: Tenant
 metadata:
-  name: oil
+  name: solar
 spec:
   owners:
   - name: alice
     kind: User
   priorityClasses:
     matchLabels:
       env: "production"
-```
+```
 
 With the said Tenant specification, Alice can create a Pod resource if `spec.priorityClassName` equals to:
 
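
As a side note to this hunk: the tenant above only admits priority classes labelled `env: production`. A matching `PriorityClass` is a standard `scheduling.k8s.io/v1` object; the name and value below are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: production-workloads    # hypothetical name Alice could set in spec.priorityClassName
  labels:
    env: production             # matches the tenant's priorityClasses.matchLabels
value: 100000
globalDefault: false
description: "Priority class admitted for the solar tenant"
```
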
@@ -502,7 +502,7 @@ If a Pod is going to use a non-allowed Runtime Class, it will be rejected by the
 
 ### NodeSelector
 
-Bill, the cluster admin, can dedicate a pool of worker nodes to the oil tenant, to isolate the tenant applications from other noisy neighbors.
+Bill, the cluster admin, can dedicate a pool of worker nodes to the solar tenant, to isolate the tenant applications from other noisy neighbors.
 
 These nodes are labeled by Bill as `pool=renewable`
 
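
The selector the changed sentence refers to is set on the tenant itself; the full snippet follows later in the file, outside this hunk. A short sketch, assuming Capsule's `nodeSelector` tenant field:

```yaml
spec:
  nodeSelector:
    pool: renewable   # matches the label Bill put on the dedicated worker nodes
```
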
@@ -557,7 +557,7 @@ spec:
 ```
 
 Any attempt of Alice to change the selector on the pods will result in an error from the PodNodeSelector Admission Controller plugin.
-
+
 ```bash
 kubectl auth can-i edit ns -n solar-production
 no
@@ -911,7 +911,7 @@ metadata:
 spec:
   ingressClassName: legacy
   rules:
-  - host: oil.acmecorp.com
+  - host: solar.acmecorp.com
     http:
       paths:
         - backend:
@@ -980,7 +980,7 @@ Kubernetes network policies control network traffic between namespaces and betwe
 To meet this requirement, Bill needs to define network policies that deny pods belonging to Alice's namespaces to access pods in namespaces belonging to other tenants, e.g. Bob's tenant `water`, or in system namespaces, e.g. `kube-system`.
 
 > Keep in mind, that because of how the NetworkPolicies API works, the users can still add a policy which contradicts what the Tenant has set, resulting in users being able to circumvent the initial limitation set by the tenant admin. Two options can be put in place to mitigate this potential privilege escalation: 1. providing a restricted role rather than the default admin one 2. using Calico's GlobalNetworkPolicy, or Cilium's CiliumClusterwideNetworkPolicy which are defined at the cluster-level, thus creating an order of packet filtering.
-
+
 Also, Bill can make sure pods belonging to a tenant namespace cannot access other network infrastructures like cluster nodes, load balancers, and virtual machines running other services.
 
 Bill can set network policies in the tenant manifest, according to the requirements:
@@ -1004,12 +1004,12 @@ spec:
         - ipBlock:
             cidr: 0.0.0.0/0
             except:
-              - 192.168.0.0/16
+              - 192.168.0.0/16
       ingress:
       - from:
         - namespaceSelector:
             matchLabels:
-              capsule.clastix.io/tenant: oil
+              capsule.clastix.io/tenant: water
         - podSelector: {}
         - ipBlock:
             cidr: 192.168.0.0/16
@@ -1202,7 +1202,7 @@ With the said Tenant specification, Alice can create a Persistent Volume Claims
 
 Capsule assures that all Persistent Volume Claims created by Alice will use only one of the valid storage classes. Assume the StorageClass `ceph-rbd` has the label `env: production`:
 
-```bash
+```bash
 kubectl apply -f - << EOF
 kind: PersistentVolumeClaim
 apiVersion: v1
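
For the storage-class hunk above: the `ceph-rbd` StorageClass with the `env: production` label that the text assumes could look like the sketch below; the provisioner is a placeholder, since any provisioner works for the label matching described here:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
  labels:
    env: production            # label the tenant's storage class selector matches on
provisioner: rbd.csi.ceph.com  # placeholder provisioner
reclaimPolicy: Delete
```
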
@@ -1233,7 +1233,7 @@ It's possible to assign each tenant a StorageClass which will be used, if no val
 apiVersion: capsule.clastix.io/v1beta2
 kind: Tenant
 metadata:
-  name: oil
+  name: solar
 spec:
   owners:
   - name: alice
@@ -1416,4 +1416,4 @@ spec:
   - name: alice
     kind: User
   preventDeletion: true
-```
+```

content/en/docs/tenants/permissions.md

Lines changed: 8 additions & 8 deletions
@@ -11,7 +11,7 @@ Capsule introduces the principal, that tenants must have owners. The owner of a
 
 ### Group Scope
 
-Capsule selects users, which are eligable to be considered for tenancy by their group. The define the group of users that can be considered for tenancy, you can use the `userGroups` option in the CapsuleConfiguration.
+Capsule selects users, which are eligable to be considered for tenancy by their group. To define the group of users that can be considered for tenancy, you can use the `userGroups` option in the CapsuleConfiguration.
 
 Another commonly used example if you want to promote serviceaccount to tenant-owners, their group must be present:
 
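
To anchor the corrected sentence above: the `userGroups` option lives in the CapsuleConfiguration. A minimal sketch, assuming the default configuration name and using the ServiceAccount group mentioned in the surrounding text as an example entry:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
  - projectcapsule.dev                    # group carried by Alice's certificate
  - system:serviceaccounts:tenant-system  # makes ServiceAccounts in this namespace eligible as owners
```
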
@@ -38,7 +38,7 @@ Learn how to assign ownership to users, groups and serviceaccounts.
 
 To keep things simple, we assume that Bill just creates a client certificate for authentication using X.509 Certificate Signing Request, so Alice's certificate has `"/CN=alice/O=projectcapsule.dev"`.
 
-**Bill** creates a new tenant oil in the CaaS management portal according to the tenant's profile:
+**Bill** creates a new tenant solar in the CaaS management portal according to the tenant's profile:
 
 ```yaml
 apiVersion: capsule.clastix.io/v1beta2
@@ -113,7 +113,7 @@ spec:
     kind: User
 ```
 
-However, it's more likely that Bill assigns the ownership of the solar tenant to a group of users instead of a single one, especially if you use [OIDC AUthentication](/docs/guides/authentication#oidc). Bill creates a new group account solar-users in the Acme Corp. identity management system and then he assigns Alice and Bob identities to the solar-users group.
+However, it's more likely that Bill assigns the ownership of the solar tenant to a group of users instead of a single one, especially if you use [OIDC Authentication](/docs/guides/authentication#oidc). Bill creates a new group account solar-users in the Acme Corp. identity management system and then he assigns Alice and Bob identities to the solar-users group.
 
 ```yaml
 apiVersion: capsule.clastix.io/v1beta2
@@ -126,7 +126,7 @@ spec:
     kind: Group
 ```
 
-With the configuration above, any user belonging to the `solar-users` group will be the owner of the oil tenant with the same permissions of Alice. For example, Bob can log in with his credentials and issue
+With the configuration above, any user belonging to the `solar-users` group will be the owner of the solar tenant with the same permissions of Alice. For example, Bob can log in with his credentials and issue
 
 ```bash
 kubectl auth can-i create namespaces
@@ -137,7 +137,7 @@ All the groups you want to promot to Tenant Owners must be part of the Group Sco
 
 #### ServiceAccounts
 
-You can use the Group subject to grant serviceaccounts the ownership of a tenant. For example, you can create a group of serviceaccounts and assign it to the tenant:
+You can use the Group subject to grant ServiceAccounts the ownership of a tenant. For example, you can create a group of ServiceAccounts and assign it to the tenant:
 
 ```yaml
 apiVersion: capsule.clastix.io/v1beta2
@@ -150,7 +150,7 @@ spec:
     kind: ServiceAccount
 ```
 
-Bill can create a Service Account called robot, for example, in the `tenant-system` namespace and leave it to act as Tenant Owner of the oil tenant
+Bill can create a ServiceAccount called robot, for example, in the `tenant-system` namespace and leave it to act as Tenant Owner of the solar tenant
 
 ```bash
 kubectl --as system:serviceaccount:tenant-system:robot --as-group projectcapsule.dev auth can-i create namespaces
@@ -274,7 +274,7 @@ items:
 kind: List
 metadata:
   resourceVersion: ""
-```
+```
 
 In some cases, the cluster admin needs to narrow the range of permissions assigned to tenant owners by assigning a Cluster Role with less permissions than above. Capsule supports the dynamic assignment of any ClusterRole resources for each Tenant Owner.
 
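
For the narrowing of permissions described in the context above, Capsule lets each owner entry reference specific ClusterRoles. A sketch, assuming a `clusterRoles` list on the owner item and a hypothetical `namespace-viewer` ClusterRole:

```yaml
spec:
  owners:
  - name: alice
    kind: User
    clusterRoles:        # assumed field: ClusterRoles bound to this owner in each tenant namespace
    - namespace-viewer   # hypothetical ClusterRole with fewer permissions than the default admin
```
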
@@ -438,7 +438,7 @@ spec:
 EOF
 ```
 
-As you can see the subjects is a classic [rolebinding subject](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects). This way you grant permissions to the subject user **Joe**, who only can list and watch servicemonitors in the solar tenant namespaces, but has no other permissions.
+As you can see the subjects is a classic [rolebinding subject](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects). This way you grant permissions to the subject user **Joe**, who only can list and watch servicemonitors in the solar tenant namespaces, but has no other permissions.
 
 ### Custom Resources
 
content/en/docs/tenants/quotas.md

Lines changed: 3 additions & 3 deletions
@@ -13,7 +13,7 @@ With help of Capsule, Bill, the cluster admin, can set and enforce resources quo
 This feature will be deprecated in a future release of Capsule. Instead use [Resource Pools](/docs/resourcepools/) to handle any cases around distributed ResourceQuotas
 {{% /alert %}}
 
-With help of Capsule, Bill, the cluster admin, can set and enforce resources quota and limits for Alice's tenant.Set resources quota for each namespace in the Alice's tenant by defining them in the tenant spec:
+With help of Capsule, Bill, the cluster admin, can set and enforce resources quota and limits for Alice's tenant. Set resources quota for each namespace in the Alice's tenant by defining them in the tenant spec:
 
 ```yaml
 apiVersion: capsule.clastix.io/v1beta2
@@ -120,7 +120,7 @@ spec:
 
 ### Tenant Scope
 
-By setting enforcement at tenant level, i.e. `spec.resourceQuotas`.scope=Tenant, Capsule aggregates resources usage for all namespaces in the tenant and adjusts all the `ResourceQuota` usage as aggregate. In such case, Alice can check the used resources at the tenant level by inspecting the annotations in ResourceQuota object of any namespace in the tenant:
+By setting enforcement at tenant level, i.e. `spec.resourceQuotas.scope=Tenant`, Capsule aggregates resources usage for all namespaces in the tenant and adjusts all the `ResourceQuota` usage as aggregate. In such case, Alice can check the used resources at the tenant level by inspecting the annotations in ResourceQuota object of any namespace in the tenant:
 
 ```bash
 kubectl -n solar-production get resourcequotas capsule-solar-1 -o yaml
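
For context on the corrected `spec.resourceQuotas.scope=Tenant` reference, a minimal tenant-level quota sketch; the field layout (`scope`, `items`, `hard`) and the amounts are assumptions for illustration:

```yaml
spec:
  resourceQuotas:
    scope: Tenant           # aggregate usage across all namespaces of the tenant
    items:
    - hard:
        limits.cpu: "8"     # placeholder budget shared by the whole tenant
        limits.memory: 16Gi
```
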
@@ -263,7 +263,7 @@ spec:
 
 The Additional Role Binding referring to the Cluster Role mysql-namespace-admin is required to let Alice [manage their Custom Resource instances](/docs/tenants/permissions/#custom-resources).
 
-The pattern for the quota.resources.capsule.clastix.io annotation is the following:
+The pattern for the quota.resources.capsule.clastix.io annotation is the following:
 
 * `quota.resources.capsule.clastix.io/${PLURAL_NAME}.${API_GROUP}_${API_VERSION}`
 
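
To make the annotation pattern above concrete: for a hypothetical `mysqls` custom resource in the `databases.example.com/v1` API group, the key would expand as shown below; the CRD, the placement on the Tenant metadata, and the limit value are illustrative assumptions.

```yaml
metadata:
  annotations:
    quota.resources.capsule.clastix.io/mysqls.databases.example.com_v1: "3"
```
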